Dolphin is a decentralized AI inference network that combines AI with DePIN. It aims to build open AI infrastructure by making use of idle GPU resources around the world. As large language models (LLMs) and AI Agents continue to demand more computing power, the high cost and resource concentration of traditional centralized cloud platforms have become increasingly apparent. Dolphin seeks to lower the barrier to AI inference through distributed GPU collaboration, while improving network openness and censorship resistance.
Within today’s Web3 AI infrastructure sector, Dolphin spans several overlapping categories: AI, DePIN, and distributed inference. Its core product, Dolphin Network, aggregates GPU resources from around the world into a distributed inference service for AI models. GPU owners contribute computing power during idle periods to process AI requests and earn token rewards, while developers gain access to inference capacity without relying entirely on traditional cloud computing platforms. Crypto-economic mechanisms coordinate the relationship between nodes and users.

Dolphin is not positioned as a conventional AI chat application. It is closer to a foundational layer of AI infrastructure. The project aims to help developers access AI inference capabilities with fewer barriers, while reducing dependence on any single centralized cloud platform. Its long-term goals include open model deployment, a distributed inference marketplace, and a more autonomous AI infrastructure ecosystem.
At the token level, POD is both the ticker used on trading platforms and the core token of the project ecosystem. It is mainly used for inference payments, node incentives, and the network’s economic cycle.
The core logic of Dolphin Network is to distribute AI inference tasks to decentralized GPU nodes for processing. When a developer or application submits an inference request, the network automatically splits the task and sends it to available nodes, then uses verification mechanisms to confirm that the result is valid.
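The dispatch-and-verify flow described above can be sketched in a few lines. This is an illustrative toy, not Dolphin’s actual protocol: the replication factor, the majority rule, and all function names are assumptions made for the example.

```python
import hashlib
import random

REPLICATION = 3  # assumed: each request is sent to several nodes for cross-checking

def run_inference(node_id: int, prompt: str) -> str:
    # Stand-in for a real model call on a GPU node; deterministic here,
    # so that honest nodes produce identical outputs for the same prompt.
    return hashlib.sha256(f"model-output:{prompt}".encode()).hexdigest()

def dispatch(prompt: str, nodes: list[int]) -> str:
    # Send the request to a random subset of available nodes,
    # then accept the majority answer as the verified result.
    chosen = random.sample(nodes, REPLICATION)
    answers = [run_inference(n, prompt) for n in chosen]
    best = max(set(answers), key=answers.count)
    if answers.count(best) * 2 <= len(answers):
        raise RuntimeError("no majority among replicas; request must be retried")
    return best

result = dispatch("hello", nodes=list(range(10)))
```

Replicating every request is the simplest way to validate results, at the cost of redundant computation; real networks typically combine it with cheaper spot-checking.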
GPU owners can run nodes while their devices are idle and take part in inference tasks within the network. After completing tasks, nodes can receive POD rewards, which can help offset GPU costs or be used in future ecosystem activities.
To prevent malicious nodes from submitting incorrect results, Dolphin uses mechanisms such as random sampling verification, cryptographic checks, and economic staking to maintain network trust. This design is similar in some ways to the validation logic used in traditional blockchain networks, but the object being verified shifts from transaction data to AI inference results.
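The staking side of that trust model can be illustrated with a minimal spot-check sketch. The node names, stake amounts, and slashing fraction below are assumptions for the example, not Dolphin’s documented parameters.

```python
# Assumed initial stakes for two hypothetical nodes.
stakes = {"node_a": 100.0, "node_b": 100.0}

def spot_check(node: str, reported: str, recomputed: str,
               slash_fraction: float = 0.5) -> bool:
    """Re-run a randomly sampled task; slash part of the node's stake on mismatch."""
    if reported != recomputed:
        stakes[node] -= stakes[node] * slash_fraction
        return False
    return True

spot_check("node_a", "42", "42")     # honest result: stake untouched
spot_check("node_b", "wrong", "42")  # bad result: half the stake is slashed
```

Because only a random sample of tasks is re-checked, cheating must be rare enough that the expected slashing loss outweighs the saved computation.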
POD is the core utility token within the Dolphin network. Its functions cover several areas, including AI inference payments, node rewards, staking, and governance.
At the AI service layer, developers can use POD to pay for model inference fees. At the network layer, GPU nodes earn POD incentives by contributing computing power. In some mechanisms, nodes may also need to stake tokens to participate in network validation, which helps strengthen system security.
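The payment-and-reward flow at these two layers amounts to a simple token ledger. The sketch below is a toy: the account names and the full pass-through of the fee to the serving node are assumptions, not documented POD tokenomics.

```python
# Toy POD ledger: a developer pays an inference fee, and the GPU node
# that served the request earns the reward.
balances = {"developer": 50.0, "gpu_node": 0.0}

def pay_for_inference(payer: str, node: str, fee: float) -> None:
    if balances[payer] < fee:
        raise ValueError("insufficient POD balance")
    balances[payer] -= fee
    balances[node] += fee  # assumed: the full fee passes through to the node

pay_for_inference("developer", "gpu_node", 1.5)
```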
The design logic of POD is similar to that of many DePIN projects. It uses token incentives to drive the growth of real infrastructure supply. As more GPU nodes join the network, Dolphin’s overall inference capacity can expand as well, creating a circular relationship between AI infrastructure and the token economy.
DePIN, or Decentralized Physical Infrastructure Network, refers to Web3 networks that use token incentives to coordinate real world infrastructure resources. Common DePIN projects include decentralized storage, wireless networks, and GPU networks.
Dolphin’s core resource is GPU computing power, so it essentially belongs to the AI DePIN sector. By incentivizing GPU owners to share idle resources, the project turns originally scattered hardware into a unified AI inference network.
Compared with traditional cloud platforms, the DePIN model places more emphasis on openness and resource sharing. For example, ordinary gamers or GPU users can also participate in the network without having to build large data centers. This model is seen as a way to reduce the centralization of AI infrastructure and improve global computing resource utilization.
Dolphin’s use cases are mainly centered on AI inference and open AI services.
At the AI model level, developers can use Dolphin to deploy open source large models and perform distributed inference through the network. The project also supports certain chatbot and AI Agent scenarios, such as open AI assistants and automated inference applications.
Because the project emphasizes openness and controllability, Dolphin is also discussed in the context of censorship-resistant AI models and autonomous AI systems. Some models deployed on Dolphin emphasize that users can customize system rules, model behavior, and data-handling policies, rather than relying entirely on the default policies of centralized AI service providers.
Dolphin and Render are both Web3 projects that use distributed GPU resources to build infrastructure, so they are often compared with each other.
However, Dolphin and Render have different core objectives. Render is more focused on GPU rendering and digital content generation, while Dolphin focuses on building a decentralized AI inference network. The two differ clearly in task type, resource scheduling, target users, and network structure.
| Comparison Dimension | Dolphin | Render |
|---|---|---|
| Core Positioning | Decentralized AI inference network | Decentralized GPU rendering network |
| Main Uses | AI inference, AI Agents, LLM services | 3D rendering, visual content generation |
| Core Resource | AI inference computing power | Graphics rendering computing power |
| Target Users | AI developers, AI applications | Designers, animation teams, creators |
| Network Direction | AI DePIN | GPU Render DePIN |
| Typical Scenarios | AI APIs, inference services, model deployment | Blender, OctaneRender, animation rendering |
| Open Model Support | Emphasizes open AI models | Open AI models are not the main focus |
The most fundamental difference between Dolphin and traditional AI platforms lies in infrastructure and control.
Traditional AI services usually rely on centralized data centers, where a single platform controls the models, system rules, APIs, and data access permissions. Developers must follow platform restrictions and bear the risk of changes to models or pricing made by the platform.
Dolphin attempts to reduce this centralized dependence through a distributed GPU network. GPU nodes are provided collectively by users around the world, allowing developers to use more open models and inference environments while retaining greater control over their data.
However, this open model also means Dolphin must deal with issues such as node stability, result verification, network latency, and infrastructure coordination. For that reason, decentralized AI networks are still in an early stage of exploration.
Dolphin’s core strengths lie in its open GPU network and decentralized AI inference capability. Compared with traditional centralized AI platforms, its model could, in theory, improve GPU utilization and reduce some AI service costs.
Open AI networks also offer stronger censorship resistance. Developers can deploy models more freely and control system behavior and data policies.
At the same time, Dolphin faces several practical challenges. For example, the performance of distributed GPU nodes can vary significantly, which may affect inference stability. AI inference result verification remains complex, and the regulatory environment for open AI models also carries uncertainty.
Dolphin (POD) is a decentralized AI inference project that combines AI, DePIN, and distributed GPU networks. Its goal is to build open AI infrastructure and use token incentives to encourage GPU owners around the world to collaborate within the network.
As the computing demands of AI models continue to grow, the resource concentration of traditional centralized AI cloud platforms is receiving more attention. The AI DePIN model represented by Dolphin attempts to use Web3 incentive mechanisms and an open network structure to offer a new infrastructure path for AI inference.
Dolphin belongs to both the AI and DePIN sectors. Its core function is to provide AI inference capabilities through a distributed GPU network.
GPU owners can run nodes while their devices are idle, participate in AI inference tasks, and earn token rewards.
Traditional AI platforms rely on centralized data centers, while Dolphin provides AI inference services through a distributed GPU network, with greater emphasis on openness and resource sharing.
Yes. Some models deployed on Dolphin emphasize openness and controllability, allowing users to customize system rules and model behavior.