https://www.myjoyonline.com/emmanuel-koranteng-asomani-assessing-the-nexus-between-artificial-intelligence-and-misinformation/-------https://www.myjoyonline.com/emmanuel-koranteng-asomani-assessing-the-nexus-between-artificial-intelligence-and-misinformation/
AI technology microchip background digital transformation concept

Generative AI tools and platforms are currently taking the world by storm. New generative AI tools like ChatGPT; OpenAI’s prototype artificial intelligence chatbot, Microsoft’s BingGPT, and Bard from Google, have been trending online and have impressed everyone from ardent technophiles to steadfast doubters.

They have been programmed to write articles, create realistic images and objects that look like photographed by humans amongst others.

Generative AI tools and platforms are software applications that use artificial intelligence to generate new content, such as text, images, videos, and audio. These tools work by analyzing large amounts of data and learning patterns from existing data. Once the AI has learned these patterns, it can generate new content that is similar to the original data. For instance, there are some AI tools that can complete tasks that require human-like reasoning and decision-making abilities.

Undoubtedly, these AI tools are very promising in solving some real-world problems, from writing legal briefs, explaining complex problems, making medical diagnoses, to authoring screenplays, just to name a few.

Nonetheless, their increasing sophistication also raises concerns about their potential misuse, particularly in spreading misinformation.; if they don’t know something, they make it up and are also capable of conjecturing tales of misinformation. Microsoft and OpenAI call their products (BingGPT and ChatGPT respectively) “public tests” because they are aware of how flawed they are to some extent. For instance, ChatGPT cannot always distinguish fact from fiction and is prone to making up answers.

Already, it is becoming increasingly clear that it is difficult for users, regulators, or even platforms like Facebook and Twitter to know who is generating misinformation. While regulating complicated, fast-developing global technologies can be difficult for lawmakers, there remains a role for policy that balances accountability and consumer protection. Considering the rate of proliferation of inaccurate or erratic responses from generative AI bots, experts have warned that matters could worsen if something is not done real fast. Experts like Professor Gary Marcus of New York University, who has become a leading voice of A.I. skepticism, says he is deeply worried about the direction current AI research is headed. “Because such systems contain literally no mechanisms for checking the truth of what they say,” Marcus writes, “they can easily be automated to generate misinformation at unprecedented scale.”

According to experts, the biggest generative AI misinformation threat is the flow of misinformation into AI models where generative AI will be subject to “injection attacks” wherein unscrupulous users teach lies to the programs, and thereby spread false narratives at an alarming rate.

In dealing with AI misinformation, tech firms are trying to get ahead of regulatory action by developing their own tools to detect falsehoods – and are using feedback to train the algorithms in real time. For instance, OpenAI, developers of ChatGPT, have developed today a free web-based tool designed to help educators and others figure out if a particular chunk of text was written by a human or a machine. Also, Google has cautioned web publishers, it will use extra caution when elevating health, civic or financial information in search results.

More than anything else we can ever imagine; misinformation poses a serious threat to the stability of our society. Both small and major economies have been affected by this menace. The only hope for humanity at a time when they are unable to know in whom to put their faith in, is to invest in “technology for good.”

DISCLAIMER: The Views, Comments, Opinions, Contributions and Statements made by Readers and Contributors on this platform do not necessarily represent the views or policy of Multimedia Group Limited.


DISCLAIMER: The Views, Comments, Opinions, Contributions and Statements made by Readers and Contributors on this platform do not necessarily represent the views or policy of Multimedia Group Limited.