AI chatbots are now omnipresent across industries. Whether you use them for image and content generation or to answer queries, plenty of work has already benefitted from AI help, and there is a lot more on the way.
That, however, does not negate the risks that come with AI-generated content. Companies such as OpenAI and Anthropic have long urged caution around generative AI output and advised users to fact-check it. To tackle this, Microsoft has quietly rolled out a tool that can help users verify and correct AI-generated information.
Microsoft introduces AI verification tool 'Correction'
Christened ‘Correction’, Microsoft’s new feature was introduced as part of its Azure AI Content Safety API – a suite of safety tools designed to detect digital vulnerabilities. Correction can be used with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4o.
In an official blog post, Microsoft explained how the tool will help identify, verify and rectify AI ‘hallucinations’, alongside blocking malicious prompts. Once enabled, Correction scans the generated content and flags inaccuracies by comparing it with the source material.
How will the new Microsoft tool work?
According to the company, Correction first flags erroneous text and compares it with the original source. For instance, the tool can verify and correct a summary of a historical battle by checking it against the available transcripts. Microsoft also says the tool will be able to correct hallucinations in real time, before a user even sees them.
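To make that workflow concrete, here is a minimal, hypothetical sketch of how an application might call a groundedness-check endpoint of the kind Microsoft describes, passing the AI-generated text and its source for comparison. The endpoint path, API version and field names below are assumptions for illustration and may not match the production API.

```python
# Hypothetical sketch: asking Azure AI Content Safety to check (and correct)
# a model's answer against a grounding source. Endpoint path, API version
# and field names are assumptions and may differ from the real service.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"                             # placeholder

def check_groundedness(generated_text: str, source_text: str) -> dict:
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness"
    params = {"api-version": "2024-09-15-preview"}  # assumed preview version
    payload = {
        "domain": "Generic",
        "task": "Summarization",
        "text": generated_text,             # the AI-generated summary to verify
        "groundingSources": [source_text],  # the original transcript/source
        "correction": True,                 # assumed flag to request a rewritten, grounded version
    }
    headers = {"Ocp-Apim-Subscription-Key": API_KEY}
    response = requests.post(url, params=params, json=payload, headers=headers)
    response.raise_for_status()
    # Expected to report whether ungrounded text was found and, if requested,
    # a corrected version of the flagged passages.
    return response.json()

# Example: verify a summary of a historical battle against its source transcript
result = check_groundedness(
    generated_text="The battle ended in 1854 with a decisive naval victory.",
    source_text="...the battle concluded in 1855 after the siege was lifted...",
)
print(result)
```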
However, several experts believe it may not be possible to do away with AI hallucinations entirely. Os Keyes, a PhD candidate at the University of Washington, told TechCrunch that “trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water”.
AI hallucinations essentially occur because these models are statistical systems that identify patterns in sequences of words. They predict which words come next based on the many examples they were trained on. Since AI models cannot form independent thoughts or conclusions, they rely entirely on their training data. This means the results can lack accuracy, as the model has no inherent understanding of facts, only of patterns.
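To illustrate the point, here is a toy sketch (not a real language model, and the training snippet is made up) showing how a purely statistical next-word predictor produces fluent text whether or not it happens to be factually right.

```python
# Toy illustration: a model that only knows word-sequence statistics will
# happily emit a plausible continuation, because it scores patterns, not facts.
from collections import Counter, defaultdict
import random

training_text = (
    "the battle ended in victory . the battle ended in defeat . "
    "the battle ended in victory ."
).split()

# Count which word tends to follow each pair of words (a simple trigram model)
next_word_counts = defaultdict(Counter)
for a, b, c in zip(training_text, training_text[1:], training_text[2:]):
    next_word_counts[(a, b)][c] += 1

def predict_next(context):
    counts = next_word_counts[context]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# The model picks whichever continuation is statistically common in its data,
# regardless of what actually happened in any specific battle.
print(predict_next(("ended", "in")))  # may print "victory" or "defeat"
```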
While it is too early to judge how successful the tool will be, there is no denying it could minimise the risk of errors in the long run. That said, it looks like it will be a while before we can take AI-generated content at face value.