Why Amazon Is Betting on ‘Automated Reasoning’ to Reduce AI’s Hallucinations

AI adoption has risen steadily across nearly every walk of life. Companies worldwide now use AI to solve many, if not most, of their problems, and advances in the field arrive almost daily. AI has also sparked a technological race between countries, and it remains a looming threat to human jobs. Yet amid all this disruption, one concern keeps surfacing in AI-generated responses: hallucinations. So what is an AI hallucination, and how is Amazon tackling it?

An AI hallucination is an instance in which a model’s response to a prompt is incorrect, nonsensical, or outright fabricated, yet presented as fact. There can be multiple causes. Insufficient training data is a common one, largely because training is extremely expensive and therefore isn’t always carried out to its full potential. Biased data and processing errors are among the other reasons hallucinations occur.

The problem of AI hallucination rose to prominence around two years ago and has dogged AI chatbots ever since. To tackle it, Amazon is turning to ‘automated reasoning’. Automated reasoning is a field of computer science that uses mathematical logic to prove what a system will and will not do. The same logic can be used to systematically encode knowledge within AI systems, and it can answer questions about programs, mathematical formulas, and more.

Amazon is therefore betting on automated reasoning and formal logic to ensure that AI responses align with factually correct answers. The method is rooted in symbolic AI: it verifies factual accuracy through chains of logic, which makes it more reliable.

Symbolic AI, despite its modern applications, has roots going back more than 2,000 years, traceable to the work of Socrates and Plato. It uses mathematical logic to encode knowledge inside AI systems in a structured manner, then applies rule-based decision-making to reach conclusions. Amazon trusts this marriage of mathematics and AI because a conclusion that follows from formal rules is provably correct, leaving little room for ambiguity.
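The rule-based decision-making described above can be sketched in a few lines. The facts, rules, and policy names below are hypothetical illustrations, not Amazon’s implementation; real automated-reasoning systems rely on formal solvers rather than this toy forward-chaining loop.

```python
# Minimal sketch of rule-based symbolic reasoning (illustrative only).
# Knowledge is encoded as facts plus rules of the form premises -> conclusion,
# and new conclusions are derived by forward chaining.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical policy knowledge base
rules = [
    ({"account_verified", "balance_positive"}, "withdrawal_allowed"),
    ({"withdrawal_allowed", "amount_within_limit"}, "approve"),
]
facts = {"account_verified", "balance_positive", "amount_within_limit"}

derived = forward_chain(facts, rules)
print("approve" in derived)  # True: the conclusion follows from the rules
```

Because every derived fact is justified by an explicit chain of rules, an answer that cannot be derived is rejected rather than guessed, which is the property that makes this style of checking attractive for catching hallucinations.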

Amazon serves critical applications in fields such as finance, healthcare, and retail, and it wants to strengthen its AI services in those domains and build trust in them. The bet on automated reasoning is part of Amazon Web Services’ larger strategy to make its AI offerings more efficient and less error-prone.

If Amazon succeeds, and the rollout goes as smoothly as hoped, it could secure AI deals worth millions of dollars with businesses, according to analysts cited by the Wall Street Journal. The approach resembles the way reasoning models work through problems step by step, except that here the models’ own answers are put to the test for correctness.

To bring this plan to fruition, Amazon has hired experts who have worked in automated reasoning for the past decade. Their first application area is cybersecurity, where Amazon Web Services has already found success: its use of automated reasoning to verify cryptographic code for business customers produced provably correct results, and that accuracy earned customers’ trust as they poured their data and applications into Amazon’s cloud.

However, the path to winning customers’ trust hasn’t exactly been an easy one for Amazon Web Services. Because of AI’s unreliability, i.e. the ‘hallucinations’, key businesses have been hesitant to adopt Amazon’s approach to AI. Customers must also define policies that serve as the source of truth against which AI responses are checked.

Although Amazon is investing heavily in automated reasoning, and the method shows promise, it has its limitations. Automated reasoning is just one component of a multi-pronged approach to eliminating hallucinations. Other components include ‘retrieval-augmented generation’, or RAG, and ‘fine-tuning’. RAG connects AI models to external data sources, whereas fine-tuning customizes a large language model with private or company data.
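As a rough illustration of RAG’s retrieval step, the toy function below ranks documents by word overlap with a query. The documents and query are made up for this sketch, and production systems use vector embeddings and semantic search rather than this naive matching.

```python
# Toy sketch of the "retrieval" half of retrieval-augmented generation.
# The retrieved text would then be passed to the model as grounding context.

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

docs = [
    "AWS uses automated reasoning to verify cryptographic code.",
    "Fine-tuning adapts a language model to private company data.",
]
context = retrieve("How does fine-tuning customize a language model?", docs)
print(context[0])  # the fine-tuning document ranks highest
```

Grounding the model in retrieved text reduces hallucination differently than automated reasoning does: RAG supplies evidence before generation, while logic-based checks verify the answer after it.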

Amazon is not alone: Google and Microsoft also offer tools and services that minimize hallucinations. Google’s Vertex AI works toward hallucination mitigation, though it doesn’t use automated reasoning. Amazon Web Services, for its part, will likely still have to evolve and adapt, combining automated reasoning with tools like RAG and fine-tuning, before its AI responses approach full accuracy.

Author: SEO Team