AI’s influence on information: from conspiracy debunking to ethical concerns

Artificial Intelligence (AI), curse and delight: a study recently conducted by the Toulouse School of Economics (France) shows how a conversation with an AI-driven chatbot can significantly reduce misinformation, persuading a significant number of people who believe in unscientific theories to change their minds. So can we really rely on AI to fight disinformation?

Artificial Intelligence, while fraught with its own set of problems, may have the potential to positively affect the flow of information. Recent research out of the Toulouse School of Economics in France illustrates how exposure to an AI-driven chatbot can substantially reduce misinformation and help break belief in unfounded, unscientific theories.

AI is powerful but potentially dangerous, a veritable double-edged sword. Its growing presence across a range of domains has brought mixed effects, some with serious negative consequences, and it carries real risks if poorly managed or misused.
Indeed, the World Economic Forum’s Global Risks Report 2024 ranked AI-driven misinformation among the most severe threats currently facing the world.

Yet there is hope on the other side: the same technology has proved invaluable across fields, from predicting accidents to making early diagnoses and even identifying food fraud.

Recent research in Science also highlighted AI’s beneficial role in the information ecosystem. The authors recruited more than 2,000 people who believed in conspiracy theories and found that a brief, personalized discussion with an AI chatbot can reduce misinformed beliefs by an average of roughly 20%. The effect was by no means transient: it persisted for at least two months and held across a wide array of conspiracy beliefs.

“These findings run counter to the conventional wisdom on conspiracy beliefs,” the researchers write, “and demonstrate that even deeply entrenched views can be changed when presented with persuasive evidence.”

Human users of AI remain, ultimately, human

Another recent study, led by George Mason University in the U.S., took a different tack: can people accept robots that lie? The researchers published their findings in Frontiers in Robotics and AI; nearly 500 participants rated and explained various types of robot deception.

“I wanted to investigate one of the least studied aspects of robot ethics,” said Andres Rosero, lead author of the work. “In this way, I could contribute to the understanding of distrust in emerging technologies and in their creators.”

The study presented three real-world robot deception scenarios. The first involves a robotic caregiver that tells a woman suffering from Alzheimer’s disease that her long-dead husband will be returning home shortly. The second involves a house-cleaning robot that covertly records a visitor. The third takes place in a retail setting, where a robot pretends to be in pain while moving furniture so that a human coworker will take its place.

Reactions varied: the first, termed external deception, was widely accepted; the second, hidden deception, was sharply criticized, as was the third, superficial deception, though some participants justified the hidden recording on possible safety grounds.

In the researchers’ view, we should be wary of any technology that conceals its true capabilities, since such concealment can amount to manipulating users. At the same time, the study reminds us that we are, ultimately, the “filters” of technology.

Sources: Science / EurekAlert / Frontiers in Robotics and AI
