After some bad press in various industries, many professionals have become wary of implementing AI in their workflows. This is understandable; however, we believe the benefits far outweigh the drawbacks, and with the right knowledge and tools the risks of AI can be eradicated.
Large Language Models are built using neural networks, a concept inspired by the human brain. Artificial intelligence is the simulation of human behaviour by machines, so it is natural that AI will, on occasion, be wrong, much like humans. The rules that govern neural networks in LLMs are based on probabilities, and as a result the output will vary. Parameters can be adjusted to encourage models to be more creative, which increases the likelihood of fictitious output, or hallucination. General-purpose models are tuned to stay close to the truth, but the nature of probability guarantees there will be instances where unlikely words make it to the output stage and, together, constitute a hallucination. Models may also produce fictitious output when the desired information is not represented in the training corpus. For these reasons, generative use cases are the most likely to suffer from hallucination.
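To make the role of that creativity parameter concrete, here is a minimal Python sketch of temperature-scaled sampling. The vocabulary and scores are invented for illustration; this shows the general mechanism, not any particular model's implementation.

```python
# Minimal sketch of how the "temperature" parameter reshapes a model's
# next-token probabilities. The logits and vocabulary are made up.
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into probabilities.

    Higher temperature flattens the distribution, giving unlikely
    (possibly fictitious) tokens a better chance of being sampled;
    lower temperature concentrates probability on the top token.
    """
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x - max(scaled)) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and scores for the prompt
# "The capital of Australia is ...".
vocab = ["Canberra", "Sydney", "Melbourne"]
logits = [4.0, 2.5, 1.5]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, temperature=t)
    print(f"temperature={t}: " +
          ", ".join(f"{w}={p:.2f}" for w, p in zip(vocab, probs)))
```

At a low temperature almost all of the probability sits on the most likely word; at a high temperature the distribution flattens, so a less likely word is sampled more often, which is exactly how a more "creative" setting raises the chance of a hallucination.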
Jylo uses AI for both generative and extractive use cases. During our document review process, extractive AI identifies information across thousands of documents and highlights the source, ready for instant human verification. Generative AI is used for chat functions. All team members can access chat and view the output from the AI, increasing the likelihood of spotting an error. However, it is the data extracted during document review that provides the most value to organisations, and it can be checked for validity without interrupting the workflow or carrying out a full human review.
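As a rough illustration of why a highlighted source makes verification so quick, the sketch below shows one possible shape for an extraction record. The field names and example values are hypothetical; this is not Jylo's actual data model.

```python
# Hypothetical sketch of an extractive result that carries its source
# location, so a reviewer can verify it without re-reading the documents.
from dataclasses import dataclass

@dataclass
class Extraction:
    field: str        # the piece of information being extracted
    value: str        # what the AI pulled out of the document
    document: str     # which document it came from
    page: int         # page of the highlighted passage
    snippet: str      # the exact source text shown to the reviewer

# Example record a reviewer could check against the highlighted source.
record = Extraction(
    field="Termination notice period",
    value="90 days",
    document="supplier_agreement_2021.pdf",
    page=12,
    snippet="either party may terminate on not less than 90 days' written notice",
)

# Verification is a simple human check: does `value` follow from `snippet`?
print(f"{record.field}: {record.value} "
      f"(source: {record.document}, p.{record.page})")
```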