Can Mira Network solve the "hallucination" problem of AI large models?
Written by: Haotian
Everyone knows that the biggest obstacle to deploying large AI models in vertical scenarios such as finance, healthcare, and law is the "hallucination" problem: model outputs cannot reach the precision these real-world applications demand. How can this be solved? @Mira_Network recently launched a public testnet offering one answer. Let me explain what's going on.
First, the "hallucinations" that everyone experiences with large-model AI tools stem mainly from two causes:
First, LLM training data is incomplete. However large the corpus, it still misses information from niche and specialized fields, and in those gaps the model tends to "creatively fill in," producing factual errors.
Second, LLMs work by "probabilistic sampling": they reproduce statistical patterns and correlations found in the training data rather than truly "understanding" it. The randomness of sampling, plus inconsistencies between training and inference, biases the model on questions that demand high factual precision.
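To make the "probabilistic sampling" point concrete, here is a minimal, self-contained sketch of temperature-based sampling over token scores. It is a toy, not tied to any real model, but it shows why two runs on the same input can pick different continuations:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores via softmax.

    Higher temperature flattens the distribution, raising the chance
    that a low-probability (and potentially wrong) token is chosen.
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    m = max(scaled)                                  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                                 # one random draw decides the token
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy scores for three candidate tokens; repeated draws over the
# SAME input do not always return the same token.
logits = [2.0, 1.5, 0.3]
rng = random.Random(0)
samples = [sample_next_token(logits, temperature=1.5, rng=rng) for _ in range(10)]
```

Because the output is a random draw rather than a lookup of a verified fact, no single sample is guaranteed correct, which is exactly the gap a verification layer targets.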
How can this be fixed? A paper published on Cornell University's arXiv platform proposes improving the reliability of LLM outputs by having multiple models cross-validate them.
In simple terms: the primary model generates a result first, then several verifier models run a "majority vote" on the claim, filtering out the hallucinations the primary model produced.
In a series of tests, this method raised the accuracy of AI outputs to 95.6%.
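The voting scheme described above can be sketched in a few lines. This is an illustrative toy, not Mira's or the paper's actual protocol; the verifier callables stand in for real LLM judges:

```python
from collections import Counter

def verify_by_majority(claim, verifier_models, threshold=0.5):
    """Ask independent verifier models to judge a claim ("valid"/"invalid")
    and accept it only if more than `threshold` of them agree it is valid.

    `verifier_models` is a list of callables claim -> "valid"/"invalid";
    these names are placeholders, not a real API.
    """
    votes = [model(claim) for model in verifier_models]
    tally = Counter(votes)                        # e.g. {"valid": 2, "invalid": 1}
    accepted = tally["valid"] / len(votes) > threshold
    return accepted, dict(tally)

# Toy verifiers standing in for real model judgments:
always_valid = lambda claim: "valid"
always_invalid = lambda claim: "invalid"

ok, tally = verify_by_majority(
    "BTC issuance halves roughly every four years",
    [always_valid, always_valid, always_invalid],
)
# ok is True: 2 of 3 verifiers voted "valid"
```

The design choice worth noting is that verifiers must be independent of the primary model; if they share the same blind spots, the vote adds little.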
That, in turn, requires a distributed verification platform to manage the collaboration between the primary model and the verifier models. Mira Network is exactly such a middleware network, purpose-built for verifying LLM outputs, sitting as a trustworthy verification layer between users and the underlying AI models.
With this verification layer in place, services such as privacy protection, accuracy guarantees, scalable design, and standardized API interfaces can be delivered in one integration. By reducing LLM hallucinations, it widens the range of niche application scenarios AI can actually serve, and it is also an example of a crypto-style distributed verification network being put to work on LLM deployment.
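In code terms, the layer's role between users and models might look like the hypothetical flow below. Every name and the approval threshold are assumptions for illustration; nothing here reflects an actual Mira endpoint:

```python
def answer_with_verification(question, generate, verifiers, min_ratio=0.66):
    """Generate an answer with a primary model, then release it only if
    enough independent verifiers approve (return True for) the draft.

    `generate` and the entries of `verifiers` are placeholders for real
    model calls in this sketch.
    """
    draft = generate(question)
    votes = [verify(question, draft) for verify in verifiers]
    approval = sum(votes) / len(votes)
    if approval >= min_ratio:
        return {"answer": draft, "verified": True, "approval": approval}
    # Below threshold: withhold the answer instead of risking a hallucination.
    return {"answer": None, "verified": False, "approval": approval}

# Toy stand-ins: a generator that always answers "42" and three verifiers.
result = answer_with_verification(
    "What is 6 * 7?",
    generate=lambda q: "42",
    verifiers=[lambda q, a: True, lambda q, a: True, lambda q, a: False],
)
```

The point of the middleware framing is that applications call one gated endpoint instead of wiring up the primary model and each verifier themselves.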
As evidence, Mira Network has shared several cases from finance, education, and the blockchain ecosystem:
After integrating Mira, the Gigabrain trading platform adds a verification layer that checks the accuracy of market analysis and predictions and filters out unreliable suggestions, improving the precision of AI trading signals and making LLMs more dependable in DeFi scenarios;
Learnrite uses Mira to verify AI-generated standardized exam questions, letting educational institutions use AI-generated content at scale without compromising the accuracy of test content, thus maintaining strict educational standards;
The Kernel blockchain project integrates Mira's LLM consensus mechanism into the BNB ecosystem as a decentralized verification network (DVN), giving AI computations executed on-chain a baseline of accuracy and security.
That's the gist of it.
What Mira Network provides is, in essence, a middleware consensus network, and it is certainly not the only way to make AI applications more reliable: improving the training data itself, combining LLMs with multimodal models, and privacy-preserving cryptographic techniques such as ZKP, FHE, and TEE are all viable paths. Compared with those, though, Mira's solution stands out for how quickly it can be deployed and how directly its effect shows up.