Multi-Model Consensus + Decentralized Verification: How Does the Mira Network Build an AI Trust Layer to Combat Hallucinations and Biases?
The Mira network launched its public testnet yesterday. The project attempts to build a trust layer for AI. So why does AI need a trust layer, and how does Mira address the problem?
When people discuss AI, they tend to focus on how powerful its capabilities are and pay far less attention to its "hallucinations" and biases. What is an AI "hallucination"? In simple terms, it means the AI sometimes makes things up and delivers nonsense with a straight face. For example, if you ask an AI why the moon is pink, it may earnestly offer a series of plausible-sounding explanations.
AI hallucinations and biases are partly rooted in the way current AI systems are built. Generative AI produces content by predicting the "most probable" continuation, which makes the output coherent and plausible but does nothing to verify whether it is true. Moreover, the training data itself may contain errors, biases, or even fabricated content, all of which leak into the model's output. In other words, the AI learns the patterns of human language rather than the facts themselves.
In short, a probabilistic generation mechanism combined with data-driven training makes AI hallucinations all but inevitable.
If biased or hallucinated output only touches general knowledge or entertainment, the consequences are not immediate. But when it appears in high-stakes, rigor-demanding fields such as healthcare, law, aviation, or finance, it can cause serious harm. How to address hallucinations and biases is therefore one of the core problems in the evolution of AI. Some teams adopt retrieval-augmented generation (grounding the model in real-time databases so that verified facts take priority), while others introduce human feedback, using manual labeling and human supervision to correct model errors.
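To make the retrieval-augmented generation idea above concrete, here is a minimal, self-contained Python sketch. The tiny in-memory knowledge base, the keyword retriever, and the prompt wording are purely illustrative assumptions; they are not any specific vendor's API and are unrelated to Mira itself.

```python
# A minimal sketch of the RAG idea: retrieve verified facts first, then force the
# model to answer only from them instead of free-associating.

KNOWLEDGE_BASE = {
    "moon color": "The Moon appears white to pale grey; it is not pink.",
    "moon distance": "The Moon orbits Earth at roughly 384,000 km on average.",
}

def retrieve(question: str) -> list[str]:
    """Toy retriever: return facts whose key words appear in the question."""
    q = question.lower()
    return [fact for key, fact in KNOWLEDGE_BASE.items()
            if any(word in q for word in key.split())]

def build_grounded_prompt(question: str) -> str:
    """Build a prompt that instructs the model to answer only from retrieved facts."""
    facts = retrieve(question) or ["(no verified facts found)"]
    facts_block = "\n- ".join(facts)
    return (
        "Answer using only the facts below; if they are not enough, say you do not know.\n"
        f"Facts:\n- {facts_block}\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # The resulting prompt would then be sent to whichever LLM the application uses.
    print(build_grounded_prompt("Why is the moon pink?"))
```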
The Mira project is also tackling AI bias and hallucinations: it aims to build a trust layer for AI that reduces both and makes AI outputs more reliable. So, at the level of the overall framework, how does Mira reduce bias and hallucinations and ultimately arrive at trustworthy AI?
At its core, Mira verifies AI outputs through the consensus of multiple AI models. In other words, Mira is itself a verification network: it uses the agreement of several independent AI models to validate whether an output is reliable. Just as important, that consensus is reached in a decentralized way.
Decentralized consensus verification is therefore the key to the Mira network. It takes a proven strength of the crypto field, decentralized consensus, and combines it with multi-model collaboration, using a collective verification model to reduce bias and hallucinations.
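As an illustration of what multi-model consensus verification can look like in code, here is a minimal Python sketch. The verifier functions and the two-thirds quorum are assumptions made for the example; the article does not describe Mira's actual verifier interface or threshold.

```python
# Several independent models each judge the same claim; a verdict is accepted only
# if enough of them agree, so a single hallucinating model cannot decide the outcome.
from collections import Counter
from typing import Callable, List

Verifier = Callable[[str], str]  # takes a claim, returns "valid" or "invalid"

def verify_by_consensus(claim: str, verifiers: List[Verifier], quorum: float = 2 / 3) -> str:
    """Collect each model's verdict and accept one only if it reaches the quorum."""
    verdicts = Counter(v(claim) for v in verifiers)
    verdict, count = verdicts.most_common(1)[0]
    return verdict if count / len(verifiers) >= quorum else "no consensus"

if __name__ == "__main__":
    # Three toy "models": two judge the claim false, one hallucinates agreement.
    model_a = lambda _claim: "invalid"
    model_b = lambda _claim: "invalid"
    model_c = lambda _claim: "valid"
    print(verify_by_consensus("The moon is pink.", [model_a, model_b, model_c]))
    # -> "invalid": the majority overrules the single hallucinating model
```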
In terms of verification architecture, the Mira protocol converts complex content into independently verifiable statements. Verifying these statements requires the participation of node operators, and cryptoeconomic incentives and penalties are used to keep those operators honest. With multiple AI models and decentralized node operators both involved, the reliability of the verification results is reinforced.
Mira's network architecture consists of content transformation, distributed verification, and a consensus mechanism, which together make the verification reliable. Content transformation is an important part of this pipeline: the network first decomposes candidate content (generally submitted by clients) into separate verifiable statements (so that every model can evaluate them in the same context). The system then distributes these statements to nodes, which judge the validity of each statement; the results are aggregated to reach consensus, and the consensus is returned to the client. In addition, to protect client privacy, the statements are split into pairs and handed to different nodes in a randomly sharded manner, so that no single node sees enough of the content to leak it during verification.
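The pipeline described above (transform content into claims, shard the claims across nodes, aggregate verdicts for the client) can be sketched roughly as follows. The sentence-level claim splitting, the random sharding scheme, and the per-claim majority vote are simplifying assumptions for illustration, not Mira's actual implementation.

```python
# Decompose content into claims, assign each claim to a random subset of nodes
# (so no node sees the full content), then aggregate the returned verdicts.
import random
from collections import Counter

def decompose_into_claims(content: str) -> list[str]:
    """Naive stand-in for content transformation: one claim per sentence."""
    return [s.strip() for s in content.split(".") if s.strip()]

def shard_claims(claims, node_ids, nodes_per_claim=3):
    """Assign each claim to a random subset of nodes to limit what any node sees."""
    return {claim: random.sample(node_ids, nodes_per_claim) for claim in claims}

def aggregate(verdicts_by_claim):
    """Majority vote per claim; the aggregated result is what goes back to the client."""
    return {claim: Counter(verdicts).most_common(1)[0][0]
            for claim, verdicts in verdicts_by_claim.items()}

if __name__ == "__main__":
    content = "The moon is pink. The moon orbits the Earth."
    claims = decompose_into_claims(content)
    assignment = shard_claims(claims, node_ids=list(range(10)))
    # Pretend each assigned node returned a verdict (normally from its validator model).
    verdicts = {c: ["invalid", "invalid", "valid"] if "pink" in c else ["valid"] * 3
                for c in claims}
    print(assignment)
    print(aggregate(verdicts))
```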
Node operators are responsible for running validator models, processing claims, and submitting verification results. Why are they willing to participate in claim verification? Because they can earn rewards, and the rewards come from the value created for clients. The purpose of the Mira network is to reduce AI error rates (hallucinations and biases); once it does, it generates value, for example by cutting error rates in fields like healthcare, law, aviation, and finance, and clients are willing to pay for that. Of course, the sustainability and scale of those payments depend on whether the network can keep delivering this value. To prevent node operators from gaming the system with random responses, nodes that consistently deviate from consensus have their staked tokens slashed. In short, it is this interplay of economic incentives and penalties that keeps node operators verifying honestly.
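Here is a toy sketch of that incentive game: nodes stake tokens, earn rewards when their verdicts match consensus, and get slashed when they deviate too often, which is what makes random guessing unprofitable. The reward amount, slash fraction, and miss threshold are invented for illustration; Mira's actual parameters are not disclosed in this article.

```python
# Stake-weighted honesty: matching consensus earns rewards, repeated deviation burns stake.
from dataclasses import dataclass

@dataclass
class Node:
    stake: float          # tokens locked by the operator
    reward: float = 0.0   # accumulated verification rewards
    misses: int = 0       # times the node deviated from consensus

def settle_round(votes, consensus, reward_per_claim=1.0, slash_fraction=0.1, miss_threshold=3):
    """Pay nodes whose verdict matched consensus; slash nodes that deviate repeatedly."""
    for node, verdict in votes:
        if verdict == consensus:
            node.reward += reward_per_claim
        else:
            node.misses += 1
            if node.misses >= miss_threshold:
                node.stake -= node.stake * slash_fraction  # penalty on staked tokens

if __name__ == "__main__":
    honest, lazy = Node(stake=100.0), Node(stake=100.0)
    # Five rounds in which consensus is "invalid" but the lazy node always guesses "valid".
    for _ in range(5):
        settle_round([(honest, "invalid"), (lazy, "valid")], consensus="invalid")
    print(honest.reward, honest.stake)  # 5.0 100.0: rewarded, stake intact
    print(lazy.reward, lazy.stake)      # 0.0 ~72.9: no rewards, slashed three times
```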
Overall, Mira offers a new approach to AI reliability: a decentralized consensus verification network built on multiple AI models that makes clients' AI services more reliable, reduces bias and hallucinations, and meets clients' demand for higher accuracy and precision. While delivering that value to clients, it also generates returns for the participants of the Mira network. In one sentence, Mira is trying to build a trust layer for AI, which in turn helps push AI into deeper, more serious applications.
Currently, the AI agent frameworks Mira collaborates with include ai16z, ARC, and others. The Mira network's public testnet launched yesterday. Users can participate in it through Klok, an LLM chat application built on Mira. The Klok app lets users experience verified AI outputs (and compare them with unverified outputs) while earning Mira points. What those points will eventually be used for has not been disclosed yet.