GenAI (generative AI) does not produce "results" or "hallucinations" but "approximations". Even the correct answers are approximations of understanding, not understanding itself. GenAI behaves the same way when it gets the answer right as when it gets it wrong; it does not distinguish between the two. Nor can it fact-check its own output or reliably reproduce the same answer.
Limitations of the RAG Approach
RAG (Retrieval-Augmented Generation), often presented as the answer to "hallucinations", is a band-aid rather than a cure. Adding retrieved context through RAG does not reduce hallucinations in the GenAI model itself; it only raises the semantic relevance of the material the model is shown. The fundamental way the model generates text is left unchanged.
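To make that mechanism concrete, here is a minimal sketch of a RAG flow under simplified assumptions: the bag-of-words "embedding", the cosine scoring, and all function names are illustrative placeholders, not any specific vendor's API. The point it illustrates is that retrieval only changes what the model is shown in the prompt, not how the model generates from it.

```python
# Minimal RAG sketch: retrieve the most relevant document, then prepend it
# to the prompt. The toy "embedding" (word counts) stands in for a real
# embedding model; nothing here verifies that the generated answer is correct.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words counts (a real system would use a neural embedding model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    # Rank documents by semantic similarity to the query and keep the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # The retrieved context increases relevance, but the language model still
    # produces an approximation when it answers this prompt.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The annual report states revenue grew 12 percent in 2023.",
    "The cafeteria menu changes every Monday.",
]
print(build_prompt("How much did revenue grow in 2023?", docs))
```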
Shifting Responsibility to the End-User
In contrast to other AI solutions, where the model developers are responsible for the correctness of the models, this responsibility shifts to the end-user with GenAI. The paradox is that reaching, say, 95% accuracy becomes a greater challenge when the model itself only delivers around 70%: because the model gives no signal about which answers are wrong, the user must verify every answer to close the gap.
Success Criteria for Using GenAI
To succeed in adopting GenAI, it is important to have a fundamental understanding of the technology's strengths and weaknesses. There are many use cases for those who learn to understand and utilize language models, but perhaps not for tasks that are business-critical, that require accuracy, traceability, and transparency, or that depend on finding facts or producing reliable summaries of reports and searches.
Potential and Ethical Challenges
There are divided opinions on the potential for future breakthroughs in GenAI technology. Continuous research and development can lead to more accurate and reliable models that reduce hallucinations and inaccurate approximations. At the same time, it is important to be aware of the ethical challenges associated with the use of GenAI, such as the risk of reinforcing existing biases and lack of transparency in decision-making processes.
A recent report from Google DeepMind highlights the need for ethical, safe, and socially robust development of generative AI systems, especially in critical applications that affect our safety, dignity, and equality. Such AI agents can radically change work, education, communication, and our perception of ourselves. Responsible development and regulation of GenAI will be crucial to ensuring that the technology is used in a safe and ethical manner.
Commercial Sustainability
It will be interesting to follow how the five major technology companies, which deliver essentially identical products with the same strengths and weaknesses, trained on much of the same data, will find customers willing to pay enough to cover the costs of developing and operating the models. Reports indicate that the willingness to invest in language models has declined since the peak year of 2021.
Learning Curve and Co-intelligence
For my part, I have spent a lot of time understanding the technology and learning how to use it to get good support and genuine benefit in some relevant areas. For it to become true "co-intelligence", I still have a way to go.