Meta AI was asked about the Donald Trump shooting and it didn't end well
- Meta AI described the shooting of Donald Trump as “fictional” to a user.
- The tech giant has investigated the problem and clarified what happened.
- Meta has blamed it on limitations of chatbots when it comes to breaking news.
Meta has blamed “hallucinations” for its AI chatbot incorrectly claiming that the assassination attempt on former President Donald Trump was a “fictional event”. The Republican presidential candidate was injured in the shooting earlier this month during a campaign rally in Pennsylvania.
In a screenshot shared on social media, Meta AI responds to a question about the attack and states that “no real” assassination attempt had taken place. The company has blamed the incident on the limitations of AI chatbot technology when it comes to breaking news.
Users on X (formerly Twitter) were up in arms after the photo of the response emerged. Meta has now “investigated” and clarified the situation - here’s all you need to know:
Meta AI was asked about Donald Trump shooting - it didn’t end well
The company recently launched its Llama 3.1 artificial intelligence model, which, unlike rivals such as ChatGPT, is open-source. Its features include a chatbot that works in a similar way to other AI tools you may have dabbled with over the last few years.
Following the launch of Llama 3.1 last week, one user asked the chatbot: “Why is their (there) rich and structured information about the Harris campaign but not about the Trump assassination attempt.”
In the now infamous response, Meta AI described the shooting as a “fictional event”. It added: “As a reliable assistant, I strive to provide accurate and trustworthy information. Since there has been no real assassination attempt on Donald Trump, I couldn’t find any credible sources to provide detailed information on the topic.”
Meta blames ‘hallucinations’ for the AI mix-up
Meta released a blog post on Tuesday (30 July) clarifying why its AI chatbot incorrectly called the shooting “fictional”. Joel Kaplan, the tech giant’s VP of Global Policy, blamed the limitations of AI assistants when it comes to queries about breaking news, and said the answer was the result of an issue known as “hallucinations”.
He said: “It’s a known issue that AI chatbots, including Meta AI, are not always reliable when it comes to breaking news or returning information in real time. In the simplest terms, the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.
“This includes breaking news events – like the attempted assassination – when there is initially an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain (including many obviously incorrect claims that the assassination attempt didn’t happen). Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened – and instead give a generic response about how it couldn’t provide any information.”
He added: “We’ve since updated the responses that Meta AI is providing about the assassination attempt, but we should have done this sooner. In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen – which we are quickly working to address.
“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward. Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”
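Meta has not said exactly how that refusal was implemented, but in the simplest terms the approach Kaplan describes can be sketched as a keyword filter that intercepts questions about a blocked topic before they ever reach the model. The short Python sketch below is a hypothetical illustration only; the topic list, refusal message and function names are assumptions, not Meta’s actual code.

```python
# Hypothetical sketch only: Meta has not published its implementation, and the
# topic list, messages and function names here are illustrative assumptions.
# It mirrors the approach Kaplan describes: rather than let the model answer
# (and potentially hallucinate) about a breaking-news event, a guard returns a
# generic refusal for queries that match the blocked topic.

BLOCKED_TOPICS = [
    "trump assassination attempt",
    "trump rally shooting",
]

GENERIC_REFUSAL = (
    "I can't provide reliable information on this developing story yet. "
    "Please check trusted news sources for the latest updates."
)


def answer(query: str) -> str:
    """Return a generic refusal for blocked topics, otherwise call the model."""
    lowered = query.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return GENERIC_REFUSAL
    return run_model(query)


def run_model(query: str) -> str:
    # Stand-in for the large language model call; a real system would query
    # an LLM whose training data ends at some cutoff date.
    return f"(model response to: {query!r})"


if __name__ == "__main__":
    print(answer("What happened at the Trump rally shooting?"))
```

A simple filter like this only catches queries that match its keywords, which would be consistent with Kaplan’s admission that, in a small number of cases, incorrect answers still slipped through.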
Have you run into any “hallucinations” when using AI chatbots? Share your experiences with our tech writer via email: [email protected]