Can Mira Network solve the "hallucination" problem of AI large models?
Written by: Haotian
As everyone knows, the biggest obstacle to applying large AI models in vertical scenarios such as finance, healthcare, and law is the “hallucination” problem: the models’ outputs cannot reach the accuracy those applications demand. How can this be solved? @Mira_Network recently launched a public testnet that offers one set of answers. Let me explain what’s going on:
First of all, the “hallucinations” that everyone encounters in large-model tools stem mainly from two causes:
First, the training data for LLMs is not comprehensive enough. However large the existing corpus is, it still cannot cover niche or highly specialized fields, so the model tends to “creatively complete” the gaps with plausible-sounding content, which produces factual errors.
Second, LLMs fundamentally rely on “probabilistic sampling”: they reproduce statistical patterns and correlations found in the training data rather than truly “understanding” it. The randomness of sampling, plus inconsistencies between training and inference, biases the model on questions that demand high factual precision.
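To make the sampling point concrete, here is a minimal, self-contained sketch (not Mira’s or any vendor’s code; the logits are made-up numbers) showing how temperature sampling over the very same distribution yields different tokens on different runs:

```python
import numpy as np

rng = np.random.default_rng()

def sample_token(logits, temperature=0.8):
    """Sample one token index from softmax(logits / temperature)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Made-up next-token logits for a factual question: index 0 is the
# correct fact, indices 1 and 2 are plausible-sounding errors.
logits = [2.0, 1.5, 1.0]
print([sample_token(logits) for _ in range(10)])
# e.g. [0, 0, 1, 0, 2, 0, 0, 1, 0, 0] -- wrong tokens still get
# sampled, which is one mechanical source of "hallucination"
```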
How can this problem be solved? A paper published on arXiv (the preprint platform run by Cornell University) describes verifying the reliability of LLM outputs through multiple models.
In simple terms: first let the main model generate a result, then have an ensemble of validation models run a “majority vote” on it, thereby reducing the hallucinations the model produces.
In a series of tests, this method reportedly raised the accuracy of AI output to 95.6%.
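As an illustration only, here is a minimal Python sketch of that majority-voting idea. The `generate_answer` and `validate` functions are hypothetical stand-ins for the main model and the validator models, and the 90% validator accuracy is an arbitrary assumption used to simulate noise:

```python
import random
from collections import Counter

def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for the main model's generation step.
    return "Paris is the capital of France."

def validate(claim: str, validator_id: int) -> str:
    # Hypothetical stand-in for an independent validator model.
    # Each validator is right 90% of the time, to simulate noise.
    return "valid" if random.random() < 0.9 else "invalid"

def verify_by_majority(prompt: str, num_validators: int = 5):
    answer = generate_answer(prompt)
    votes = Counter(validate(answer, i) for i in range(num_validators))
    accepted = votes["valid"] > num_validators // 2  # strict majority
    return answer, accepted, dict(votes)

print(verify_by_majority("What is the capital of France?"))
```

With independent validators that are each right most of the time, a strict majority is wrong far less often than any single validator, which is the intuition behind the accuracy gain.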
This obviously calls for a distributed verification platform to manage and verify the collaborative interplay between the main model and the validator models, and Mira Network is exactly such a middleware network: it specializes in verifying AI LLM outputs, building a trustworthy verification layer between users and the underlying models.
On top of this verification layer, integrated services such as privacy protection, accuracy guarantees, scalable design, and standardized API interfaces become possible. By reducing LLM hallucinations, the layer broadens the range of niche scenarios where AI can actually land, and it also demonstrates how a Crypto distributed verification network can be put to work in deploying AI LLM projects.
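For a sense of what a standardized verification API might look like from the client side, here is a hedged sketch. The endpoint URL, request fields, and response shape are illustrative assumptions, not Mira’s documented interface:

```python
import json
import urllib.request

def verify_claim(claim: str,
                 api_url: str = "https://verify.example.com/v1/verify"):
    """POST a claim to a (hypothetical) verification-layer endpoint."""
    payload = json.dumps({"claim": claim, "min_validators": 3}).encode()
    req = urllib.request.Request(
        api_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed response shape: {"verdict": "valid", "votes": {...}}
        return json.load(resp)

# result = verify_claim("The Merge moved Ethereum to proof of stake.")
```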
For example, Mira Network has shared several cases from finance, education, and the blockchain ecosystem that serve as evidence:
After Gigabrain integrated Mira, the system gained a verification layer over its market analysis and predictions that filters out unreliable suggestions, improving the accuracy of AI trading signals and making LLMs more dependable in DeFi scenarios (a minimal gating sketch follows the cases below).
Learnrite uses Mira to verify AI-generated standardized exam questions, letting educational institutions deploy AI-generated content at scale without compromising the accuracy of test content, thereby maintaining strict educational standards.
The blockchain project Kernel integrates Mira’s LLM consensus mechanism into the BNB ecosystem, creating a decentralized verification network (DVN) that guarantees a degree of accuracy and security for AI computations executed on-chain.
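Returning to the Gigabrain case, here is the promised minimal sketch of consensus-gated trading signals. The `Signal` structure and the 0.8 agreement threshold are illustrative assumptions, not anything from Mira’s or Gigabrain’s actual integration:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    asset: str
    action: str                 # "buy" / "sell" / "hold"
    validator_agreement: float  # fraction of validators endorsing it

def filter_signals(signals, min_agreement: float = 0.8):
    # Pass through only signals that cleared the verification layer.
    return [s for s in signals if s.validator_agreement >= min_agreement]

signals = [
    Signal("ETH", "buy", 0.92),
    Signal("SOL", "sell", 0.55),   # dropped: weak validator consensus
]
print(filter_signals(signals))     # only the ETH signal survives
```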
So much for the cases.
To be fair, Mira Network’s middleware consensus service is certainly not the only way to strengthen AI applications: improving the training data, combining multimodal large models, and applying cryptographic techniques such as ZKP, FHE, and TEE are all alternative paths. Compared with these, though, Mira’s solution has the virtue of landing quickly and showing results immediately.