The AI Agent framework, as a key piece of the industry's development, may hold dual potential: driving the adoption of the technology and the maturation of the ecosystem. The most hotly discussed frameworks in the market include Eliza, Rig, Swarms, and ZerePy. These frameworks attract developers and build reputation through their GitHub repositories. By issuing tokens as 'libraries', these frameworks, like light, exhibit the characteristics of both waves and particles: the Agent framework carries significant externalities alongside Memecoin traits. This article focuses on interpreting this 'wave-particle duality' of frameworks, and on why the Agent framework can become the final piece of the puzzle.
The externalities brought by the Agent framework can leave seeds that sprout after the bubble recedes.
Since the birth of GOAT, the Agent narrative's impact on the market has kept growing, like a kung fu master whose left fist is 'Memecoin' and whose right palm is 'industry hope': one move or the other will defeat you. In practice, the application scenarios of AI Agents are not strictly delineated, and the boundaries between platforms, frameworks, and specific applications are blurred, but they can still be roughly classified according to the development preferences of their tokens or protocols into the following categories:
Launchpad: asset launch platforms, such as Virtuals Protocol and clanker on the Base chain, and Dasha on the Solana chain.
AI Agent application: positioned between Agent and Memecoin, excelling in memory configuration, for example GOAT and aixbt. These applications generally produce one-way output and accept very limited input.
AI Agent engine: griffain on the Solana chain and Spectre AI on the Base chain. griffain can evolve from read-write mode to read-write-action mode; Spectre AI is a RAG engine for on-chain search.
AI Agent framework: for framework platforms, the Agent itself is the asset, so the Agent framework is the asset-issuance platform, the Launchpad, for Agents. Representative projects currently include ai16z, Zerebro, ARC, and the recently discussed Swarms.
Other niche directions: the all-in-one Agent Simmi; the AgentFi protocol Mode; the falsification-focused Agent Seraph; the real-time API Agent Creator.Bid.
Looking further at the Agent framework, it clearly has sufficient externalities. Unlike major public chains and protocols, where developers can only choose among different development-language environments and where the industry's total developer count has not grown in step with market-value growth, a GitHub repo is common ground where Web2 and Web3 developers build consensus. A developer community built there is more attractive and influential to Web2 developers than any 'plug and play' package developed by a single protocol alone.
The four frameworks discussed in this article are all open source: ai16z's Eliza framework has received 6,200 stars; Zerebro's ZerePy framework, 191 stars; ARC's Rig framework, 1,700 stars; and Swarms' Swarms framework, 2,100 stars. Currently, the Eliza framework is used in a wide range of Agent applications and has the broadest coverage. ZerePy's development maturity is lower; its main development direction is on X, and it does not yet support local LLMs or integrated memory. Rig has the highest relative development difficulty, but it gives developers the greatest freedom for performance optimization. Swarms has no use cases beyond the team's launch of mcs, but it can integrate the other frameworks, which leaves large room for imagination.
In addition, separating the Agent engine from the framework in the classification above may cause confusion, but I believe the two differ. First, why call it an engine? Comparing it to a real-world search engine is fairly apt. Unlike homogeneous Agent applications, the Agent engine's performance sits above them, yet it is fully encapsulated and adjusted through API interfaces, like a black box. Users can experience the engine's capabilities by forking it, but they cannot grasp the whole picture or customize it freely the way they can with a base framework. Each user's engine is like a mirror generated from a well-trained Agent; the user interacts with that mirror.

The essence of a framework, by contrast, is adaptation to the chain: whatever Agent the framework serves, the ultimate goal is integration with the corresponding chain. How to define the data-interaction method, how to define data verification, how to define block size, and how to balance consensus against performance are what a framework must consider. The engine only needs to fine-tune a model and set the one-directional relationship between data interaction and memory. Performance is the engine's only evaluation criterion; for a framework, it is not.
Evaluating the Agent framework from the perspective of 'wave-particle duality' may be the precondition for keeping the right direction.
In the lifecycle of a single Agent input-output execution, three parts are needed: first, the underlying model determines the depth and manner of thinking; then memory serves as the customization layer, modifying the base model's output; finally, the output operation is completed on different clients.
Source: @SuhailKakar
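The three-part lifecycle above (base model, memory adjustment, client output) can be sketched as follows. All names here are illustrative and not tied to any specific framework's API:

```python
from dataclasses import dataclass

# Conceptual sketch of the model -> memory -> client lifecycle described
# above; every class and method name is a hypothetical stand-in.

class Memory:
    """Stores past exchanges and rewrites a draft reply using them."""
    def __init__(self):
        self.history = []

    def adjust(self, draft: str) -> str:
        # Trivial "customization": tag the reply with the turn count.
        adjusted = f"[turn {len(self.history) + 1}] {draft}"
        self.history.append(adjusted)
        return adjusted

@dataclass
class Client:
    name: str
    def send(self, text: str) -> str:
        return f"{self.name}:{text}"

class Agent:
    def __init__(self, model, memory, clients):
        self.model, self.memory, self.clients = model, memory, clients

    def run(self, prompt: str) -> dict:
        draft = self.model(prompt)                 # 1. base model thinks
        final = self.memory.adjust(draft)          # 2. memory modifies the output
        return {c.name: c.send(final) for c in self.clients}  # 3. per-client delivery

echo_model = lambda p: p.upper()                   # stand-in for an LLM call
agent = Agent(echo_model, Memory(), [Client("X"), Client("TG")])
print(agent.run("gm"))  # {'X': 'X:[turn 1] GM', 'TG': 'TG:[turn 1] GM'}
```

The point of the sketch is the separation of concerns: swapping the model, the memory policy, or the client list changes one component without touching the other two.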
To verify that the Agent framework exhibits 'wave-particle duality': the 'wave' stands for its 'Memecoin' characteristics, that is, community culture and developer activity, emphasizing the Agent's appeal and ability to spread; the 'particle' stands for its 'industry expectation' characteristics, that is, underlying performance, practical use cases, and technical depth. I will illustrate both aspects through the development tutorials of three of the frameworks:
The fast, modular Eliza framework
1. Set up the environment
Source: @SuhailKakar
2. Install Eliza
Source: @SuhailKakar
3. Configuration file
Source: @SuhailKakar
4. Set the Agent's personality
Source: @SuhailKakar
The Eliza framework is relatively easy to get started with. It is based on TypeScript, a language familiar to most Web and Web3 developers, and it is concise, without excessive abstraction, so developers can easily add the functionality they need. Step 3 shows that Eliza can be seen as a multi-client integrator: it supports platforms such as Discord, Telegram, and X, as well as multiple large language models, taking input from those social platforms and producing output through an LLM, with built-in memory management that lets developers of any skill level quickly deploy an AI Agent.
Thanks to its simple framework and rich interfaces, Eliza greatly lowers the barrier to entry and achieves a relatively unified interface standard.
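Eliza defines an Agent's personality declaratively in JSON character files. As a rough illustration of the idea, with field names that approximate rather than reproduce Eliza's actual schema, such a definition and its compilation into a system prompt might look like:

```python
# Illustrative only: these keys approximate what an agent "personality"
# definition typically contains; consult the Eliza docs for the real schema.
character = {
    "name": "DegenResearcher",
    "bio": "On-chain analyst posting concise market takes.",
    "style": {"tone": "dry", "max_words": 40},
    "clients": ["x", "telegram"],
    "model_provider": "openai",
}

def render_system_prompt(c: dict) -> str:
    """Compile the declarative personality into a system prompt for the LLM."""
    return (f"You are {c['name']}. {c['bio']} "
            f"Write in a {c['style']['tone']} tone, at most {c['style']['max_words']} words.")

print(render_system_prompt(character))
```

The declarative form is what keeps the entry barrier low: changing an Agent's voice is a data edit, not a code change.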
The one-click ZerePy framework
1. Fork ZerePy's repository
2. Configure X and GPT
3. Set the Agent's personality
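ZerePy's one-click feel comes from driving the whole agent from a single configuration: fork the repository, fill in credentials and a personality, and run. A hedged sketch of that pattern, with keys that are illustrative rather than ZerePy's real schema:

```python
import json

# Illustrative config-driven agent bootstrap; the keys below are
# hypothetical, not ZerePy's actual configuration schema.
RAW = """
{
  "name": "zippy",
  "loop_delay": 60,
  "connections": {"x": {"enabled": true}, "openai": {"model": "gpt-4o"}},
  "tasks": [{"type": "post", "weight": 2}, {"type": "reply", "weight": 1}]
}
"""

def load_agent(raw: str) -> dict:
    """Parse the config and validate the minimum an agent loop needs."""
    cfg = json.loads(raw)
    for key in ("name", "connections", "tasks"):
        if key not in cfg:
            raise ValueError(f"missing config key: {key}")
    return cfg

agent_cfg = load_agent(RAW)
print(agent_cfg["name"], len(agent_cfg["tasks"]))  # zippy 2
```

Everything the agent does (which platforms it connects to, which model it calls, how it splits its time across tasks) lives in one file, which is what makes iteration fast.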
The performance-optimized Rig framework
Taking the construction of a RAG (Retrieval-Augmented Generation) Agent as an example:
1. Configure the environment and the OpenAI key
2. Set up the OpenAI client and use chunking for PDF processing
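Chunking splits a long document into overlapping windows small enough to embed and retrieve individually. A minimal sketch of the technique, independent of Rig's actual Rust API:

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into windows of `size` characters, each overlapping the
    previous one by `overlap` characters, so a sentence cut at a boundary
    still appears whole in at least one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = ("word " * 100).strip()          # a 499-character stand-in document
pieces = chunk(doc, size=120, overlap=30)
print(len(pieces), len(pieces[0]))     # 6 120
```

The overlap is the design choice that matters: it trades a little storage for the guarantee that no fact is lost at a window boundary.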
3. Set the document structure and embeddings
4. Create the vector store and the RAG Agent
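The last two steps, embedding documents into a vector store and wiring retrieval into the prompt, can be sketched end to end with a toy embedding. Everything below is illustrative, not Rig's API; a real system would call an embedding model instead of counting words:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real RAG stack would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.items = []                       # (embedding, original text)

    def add(self, text: str):
        self.items.append((embed(text), text))

    def top_k(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def rag_answer(store: VectorStore, question: str, llm) -> str:
    context = "\n".join(store.top_k(question))              # retrieval step
    return llm(f"Context:\n{context}\n\nQuestion: {question}")  # augmented generation

store = VectorStore()
for doc in ["rig is written in rust", "eliza is written in typescript",
            "swarms orchestrates many agents"]:
    store.add(doc)

echo_llm = lambda prompt: prompt.splitlines()[1]  # stand-in LLM: echoes top context line
print(rag_answer(store, "which language is rig written in?", echo_llm))
# rig is written in rust
```

This is the whole RAG loop in miniature: embed, rank by similarity, prepend the best matches to the prompt, and let the model answer from that context.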
Rig (ARC) is a Rust-based framework for building AI systems, aimed at LLM workflow engines and at lower-level performance-optimization problems. In other words, ARC is an AI-engine 'toolbox' that provides background support such as AI invocation, performance optimization, data storage, and exception handling.
Rig solves the 'invocation' problem: it helps developers choose LLMs better, optimize prompts, manage tokens more effectively, and handle concurrency, resource management, latency reduction, and so on. Its emphasis is on how to 'use' the pieces well as the AI LLM model and the AI Agent system collaborate.
Rig is an open-source Rust library designed to simplify the development of LLM-driven applications, including RAG Agents. Because Rig is more open, it places higher demands on developers, requiring a better understanding of both Rust and Agents. The tutorial here is the most basic configuration process for a RAG Agent, which enhances an LLM by combining it with external knowledge retrieval. Other demos on the official website show that Rig has the following features:
LLM Interface Unification: consistent APIs across different LLM providers, simplifying integration.
Abstract Workflow: pre-built modular components let Rig take on the design of complex AI systems.
Integrated Vector Storage: built-in support for vector stores, providing efficient performance for search agents such as RAG Agents.
Embedding Flexibility: an easy-to-use API for handling embeddings, reducing the difficulty of semantic understanding when developing search-style agents such as RAG Agents.
Compared with Eliza, then, Rig gives developers extra room for performance optimization, helping them debug the invocation of LLMs and Agents and optimize their collaboration. Rig is driven by Rust's performance, leveraging its zero-cost abstractions and memory safety to deliver high-performance, low-latency LLM operations, and it offers more freedom at the underlying level.
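The 'LLM interface unification' feature, one consistent call shape in front of interchangeable providers, can be sketched as follows. The provider classes and the `complete` signature are illustrative, not Rig's actual types:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """One call shape in front of many providers (illustrative, not Rig's API)."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class FakeOpenAI(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"openai::{prompt}"

class FakeAnthropic(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"anthropic::{prompt}"

def ask(provider: LLMProvider, prompt: str) -> str:
    # Caller code never changes when the provider is swapped.
    return provider.complete(prompt)

for p in (FakeOpenAI(), FakeAnthropic()):
    print(ask(p, "gm"))  # openai::gm, then anthropic::gm
```

The abstraction is what makes 'choose the LLM' a configuration decision rather than a rewrite.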
The Swarms framework for decomposing and recombining workflows
Swarms aims to provide an enterprise-grade, production-ready multi-agent orchestration framework. The official website offers dozens of workflows and parallel or serial agent architectures; a small selection is introduced here.
Sequential Workflow
The Sequential Swarm architecture processes tasks in a linear sequence. Each agent completes its task before passing the result to the next agent in the chain. This architecture ensures orderly processing and is very useful when tasks have dependencies.
Use case:
Each step in the workflow depends on the previous step, such as assembly lines or sequential data processing.
Scenarios that need to strictly follow the operation sequence.
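The sequential pattern reduces to function composition over agents: each agent's output becomes the next agent's input. A minimal sketch, with stand-in agent behaviors:

```python
from typing import Callable

Agent = Callable[[str], str]

def sequential_workflow(agents: list[Agent], task: str) -> str:
    """Run agents in order; each receives the previous agent's output."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

# Stand-in agents forming an assembly line: draft -> edit -> layout.
draft  = lambda t: f"draft({t})"
edit   = lambda t: f"edit({t})"
layout = lambda t: f"layout({t})"

print(sequential_workflow([draft, edit, layout], "report"))
# layout(edit(draft(report)))
```

Because each stage depends on the previous one, any failure stops the chain at a well-defined point, which is exactly why this shape suits tasks with strict ordering.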
Hierarchical architecture:
This architecture implements top-down control: a higher-level agent coordinates tasks among lower-level agents, which execute in parallel and then feed their results back for final aggregation. It is particularly useful for highly parallelizable tasks.
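The split, fan-out, and aggregate loop can be sketched as follows; the split and aggregate strategies here are stand-ins for whatever the coordinating agent actually does:

```python
from concurrent.futures import ThreadPoolExecutor

def hierarchical(coordinator, workers, task: str) -> str:
    """Coordinator splits the task, workers run in parallel, coordinator aggregates."""
    subtasks = coordinator.split(task)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda pair: pair[0](pair[1]),
                                zip(workers, subtasks)))
    return coordinator.aggregate(results)

class Coordinator:
    def split(self, task: str) -> list[str]:
        return [f"{task}/part{i}" for i in range(3)]
    def aggregate(self, results: list[str]) -> str:
        # Sort so the final answer is deterministic regardless of thread timing.
        return " + ".join(sorted(results))

workers = [lambda t: f"done({t})"] * 3
print(hierarchical(Coordinator(), workers, "audit"))
# done(audit/part0) + done(audit/part1) + done(audit/part2)
```

The coordinator is the only component that knows the whole task; the workers stay simple and interchangeable.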
Spreadsheet-style architecture:
A large-scale group architecture for managing multiple agents working simultaneously. It can manage thousands of agents at the same time, each running on its own thread. It is an ideal choice for supervising large-scale agent outputs.
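Running each agent on its own thread, as described above, can be sketched with a thread pool, scaled down to dozens of agents rather than thousands:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def run_swarm(n_agents: int, task: str) -> list[str]:
    """Give every agent its own thread and collect all outputs in order."""
    def agent(i: int) -> str:
        # Each agent reports which thread it ran on.
        return f"agent-{i}@{threading.current_thread().name}: {task}"
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        return list(pool.map(agent, range(n_agents)))

outputs = run_swarm(32, "scan mempool")
print(len(outputs), outputs[0].split("@")[0])  # 32 agent-0
```

`pool.map` preserves input order, so supervising large-scale output stays tractable: output i always belongs to agent i, no matter which thread finished first.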
Swarms is not only an Agent framework itself; it is also compatible with the Eliza, ZerePy, and Rig frameworks above, slotting them in as modules so that each Agent performs at its best in the workflow or architecture suited to its problem. Both Swarms' conception and its developer-community progress are on track.
Eliza: The easiest to use, suitable for beginners and rapid prototyping development, especially suitable for AI interaction on social media platforms. The framework is simple and easy to integrate and modify, suitable for scenarios that do not require excessive performance optimization.
ZerePy: A one-click deployment AI Agent application suitable for rapid development of Web3 and social platforms. It is suitable for lightweight AI applications with a simple framework, flexible configuration, and suitable for rapid construction and iteration.
Rig: Focuses on performance optimization, especially excelling in high-concurrency and high-performance tasks, suitable for developers who require detailed control and optimization. The framework is more complex, requiring a certain level of Rust knowledge, and is suitable for more experienced developers.
Swarms: Suitable for enterprise applications, supporting multi-agent collaboration and complex task management. The framework is flexible, supports large-scale parallel processing, and provides a variety of architectural configurations, but due to its complexity, it may require a stronger technical background for effective application.
Overall, Eliza and ZerePy have advantages in ease of use and rapid development, while Rig and Swarms are more suitable for professional developers or enterprise applications that require high performance and large-scale processing.
This is why the Agent framework bears the mark of 'industry hope'. The frameworks above are still at an early stage, and the most urgent task is to seize first-mover advantage and build an active developer community. Whether a framework's own performance lags behind popular Web2 applications is not the main contradiction; only a continuous influx of developers lets a framework win in the end, because the Web3 industry always needs to attract market attention. However strong a framework's performance and fundamentals, if it is hard to get started with and nobody cares, that is putting the cart before the horse. Given that a framework can attract developers, one with a more mature and complete token-economic model will stand out.
The Agent framework's 'Memecoin' side is also easy to understand: the framework tokens above lack reasonable token-economic design, have no use cases or only very limited ones, and lack both validated business models and effective token flywheels. The frameworks are just frameworks, not organically integrated with their tokens. Beyond FOMO, token-price growth has little fundamental support and no moat deep enough to ensure stable, sustained value growth. Meanwhile, the frameworks themselves remain relatively rough, and their actual value does not match their current market capitalization, hence the strong 'Memecoin' characteristics.
It is important to note that the 'wave-particle duality' of the Agent framework is not a shortcoming, and it should not be crudely read as a glass half full, neither pure Memecoin nor real token utility. As I wrote in my previous article, lightweight Agents are wrapped in an ambiguous Memecoin veil; community culture and fundamentals will no longer be in contradiction, and a new asset-development path is gradually emerging. Despite the early bubble and uncertainty around Agent frameworks, their potential to attract developers and drive applications cannot be ignored. In the future, a framework with a sound token-economic model and a strong developer ecosystem may become a key pillar of this track.
Is the AI Agent framework the final piece of the puzzle? How to interpret the "wave-particle duality" of the framework?
Author: Kevin, Researcher at BlockBooster