Participants in a new study found tweets written by an AI language model more convincing than those written by humans. The study compared content generated by OpenAI's GPT-3 with human-written content on science topics such as vaccines and climate change. Participants had a harder time recognizing disinformation when it came from GPT-3, and they were more likely to trust its output over human-written text, regardless of accuracy. The findings highlight the power of AI language models to either inform or mislead the public, and underscore the need for critical thinking skills to counter misinformation campaigns.