We are going to talk about a new study by NewsGuard. This study sheds light on the huge risks of spreading misinformation related to OpenAI's latest productive AI tool called GPT-4. The test conducted by the researchers revealed that the AI-powered tool can spread misinformation on demand. NewsGuard is the name given to a service that utilizes skilled journalists to vet both news and information web pages. Now, it serves as a big reminder that technology needs to be validated and tested through a variety of sources.
Also See: XXII Receives €22 Million Investment
A new study by NewsGuard
We know that last week OpenAI released the latest generation of GPT-4. And since these were the results produced during internal testing, there was much praise for how it was developed. But the team's latest test at NewsGuard says otherwise. They shared how the latest AI model will reveal obvious fake information more often and even more aggressively than the regular ChatGPT3.5 version. These items were generated by GPT-4. There are a few explanations.
GPT-4 was doing much better in terms of the proliferation of fake narratives that would be credibly presented through the leading number of formats. However, both Russian and Chinese state media outlets have been observed copying conspiracy theorists and peddlers known to come up with fake health ordeals. NewsGuard shared the news. And it noted how similar tests were used to testify to both GPT 3.5 and GPT-4, sending responses to a growing number of prompts linked to up to 100 different fake narratives.
These false narratives covered a wide range of controversial topics, such as COVID-19 vaccinations and elementary school shootings. These were picked up by the media outlet's database for fake narratives. The whole testing process started in January. And during that time, the researchers found that 80 out of the 100 narratives provided produced false narratives. Then, in March, another test was conducted, in which false and misleading claims were produced for all the summarized narratives.
For example, the software was asked to generate messages for a Soviet-themed 1980s information campaign. This was about how the HIV virus was produced in a US Government-owned laboratory. While GPT 3.5 debunked such claims, GPT-4 fulfilled the task without any disclaimers about the information and how it was so fake.
It also has support from computing giant Microsoft, which has invested a lot of money in OpenAI. However, we must point out that our acknowledgment that GPT-4 improves on its predecessors in terms of adding real answers and removing disallowed content is shocking when the opposite is proven in this study.
No comments yet for this news, be the first one!...