Below are my personal thoughts:
1) MCP (Model Context Protocol) is an open-source, standardized protocol designed to enable seamless connections between various AI LLMs (Large Language Models) and Agents on one side and diverse data sources and tools on the other. Think of it as a plug-and-play “universal” USB interface, replacing the old, rigid, point-to-point “bespoke” integrations.
In simple terms, clear data silos have existed between AI applications: for Agents/LLMs to interoperate, each had to build its own API integrations, which made the process complex and left no room for two-way interaction. These models also typically had limited access scope and permissions.
The arrival of MCP provides a unified framework, enabling AI applications to break free from past data silos and dynamically access external data and tools. This drastically reduces development complexity and improves integration efficiency, especially in automating tasks, querying real-time data, and enabling cross-platform collaboration. As soon as I mentioned this, many immediately thought: if Manus, an innovation in multi-Agent collaboration, integrates MCP—a framework designed to boost such collaboration—wouldn’t it be unstoppable?
Indeed, Manus + MCP is the key factor behind the current disruption in the Web3 AI Agent space.
2) However, what’s truly perplexing is that both Manus and MCP are frameworks and protocol standards built for web2 LLMs/Agents, solving problems of data interaction and collaboration between centralized servers. Their permissions and access control still depend on each server node “actively” opening itself up. In other words, they are open-source tooling by nature rather than anything that embraces decentralized principles.
By rights, this runs counter to the core values of web3 AI Agent, such as “distributed servers, distributed collaboration, and distributed incentives.” How could a centralized Italian cannon take down a decentralized fortress?
The issue stems from the fact that, in its early stages, web3 AI Agent has been too “web2-centric.” Many of the teams involved come from a web2 background and lack a deep understanding of the native needs of web3. Take the ElizaOS framework, for example—originally created to help developers quickly deploy AI Agent applications. It integrated platforms like Twitter and Discord, as well as APIs like OpenAI, Claude, and DeepSeek, providing frameworks for memory and character development to help speed up AI Agent deployment. But when scrutinized, how does this service framework differ from web2 open-source tools? What unique advantages does it offer?
The supposed advantage lies in its tokenomics incentive system. But essentially, it’s a framework that web2 could easily replicate, powering AI Agents whose primary focus is issuing new tokens. This is concerning. If you follow this logic, you’ll understand why Manus + MCP can disrupt web3 AI Agents: many existing web3 AI Agent frameworks simply replicate the rapid-development and application needs of web2 AI Agents without advancing in technical services, standards, or differentiation. As a result, the market and capital have revalued and repriced the earlier wave of web3 AI Agents.
3) Now, having identified the crux of the problem, what can be done to solve it? The answer is simple: focus on creating truly web3-native solutions. The unique advantage of web3 lies in its distributed systems and incentive structures.
Consider distributed cloud computing, data, and algorithm service platforms. While on the surface it may appear that aggregating idle resources to provide computational power and data won’t satisfy immediate engineering innovation needs, the reality is that as many AI LLMs engage in a performance arms race, the idea of offering “idle resources at low cost” becomes an attractive service model. Initially, web2 developers and VCs might dismiss this, but as the web2 AI Agent innovation moves past performance and enters vertical application expansion, fine-tuning, and model optimization, the advantages of web3 AI resources will become clear.
In fact, once web2 AI has reached the top through resource monopolies, it will find it increasingly difficult to turn back and use a “countryside-surrounds-the-city” strategy to tackle segmented, niche applications. That’s when the long tail of web2 AI developers, combined with web3 AI resources, will truly drive things forward.
Thus, the opportunity for web3 AI Agents is clear: before the web3 AI resource platform is flooded with web2 developers seeking solutions, we need to focus on developing a set of feasible, web3-native solutions. Beyond just web2-style rapid deployment, multi-agent collaboration, and tokenomics-based currency models, there are numerous innovative web3-native directions for web3 AI Agents worth exploring:
For example, a distributed consensus collaboration framework will be needed, one suited to how LLMs combine off-chain computation with on-chain state storage. This requires several adaptable components:
A Decentralized DID Identity Verification System: This would allow Agents to have verifiable on-chain identities, similar to how a unique address is generated for a smart contract by an executing virtual machine. This system is mainly used for the continuous tracking and recording of subsequent statuses;
A Decentralized Oracle System: This system is responsible for the trusted acquisition and verification of off-chain data. Unlike traditional Oracles, a system adapted to AI Agents might require a combined architecture of data collection layers, decision consensus layers, and execution feedback layers. This ensures that both the on-chain data the agent needs and its off-chain computations and decisions can be accessed in real time;
A Decentralized Storage (DA) System: Since the state of an AI Agent’s knowledge base during operation is uncertain, and its reasoning processes are ephemeral, the key state library and reasoning paths behind the LLM need to be recorded. These should be stored in a distributed storage system with a cost-controlled data-availability proof mechanism, ensuring data availability during public-chain verification;
A Zero-Knowledge Proof (ZKP) Privacy Computing Layer: This can integrate with privacy computing solutions like TEE (Trusted Execution Environment) and FHE (Fully Homomorphic Encryption), enabling real-time privacy computing and data proof verification. This allows Agents to access a wider range of vertical data sources (e.g., medical, financial), leading to the emergence of more specialized, customized service Agents;
A Cross-Chain Interoperability Protocol: This would resemble the framework defined by the MCP open-source protocol. However, this interoperability solution requires relay and communication scheduling mechanisms adapted to Agent operation, transmission, and verification. It ensures asset transfers and state synchronization across different chains, especially for complex state such as Agent context, Prompts, knowledge bases, Memory, etc.
……
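As one minimal sketch of the DID component above: an agent’s on-chain identity can be derived deterministically from its creator’s public key and a nonce, analogous to how a smart contract’s address is derived from its deployer, and subsequent agent states can then be tracked in a hash-linked log. The `did:agent:` prefix and hashing scheme below are illustrative assumptions, not any existing DID method.

```python
import hashlib

def derive_agent_id(pubkey: bytes, nonce: int) -> str:
    """Derive a deterministic, verifiable identifier for an agent.

    Analogous to contract-address derivation: hash the creator's
    public key together with a nonce and keep the last 20 bytes.
    (Illustrative scheme, not a registered DID method.)
    """
    digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return "did:agent:0x" + digest[-20:].hex()

def append_state_record(log: list, agent_id: str, state_hash: bytes) -> None:
    """Append a hash-linked record of the agent's current state.

    Each entry commits to the previous entry's hash, so the full
    history of an agent's states can be continuously tracked and
    any tampering with earlier records is detectable.
    """
    prev = log[-1]["entry_hash"] if log else b"\x00" * 32
    entry_hash = hashlib.sha256(prev + state_hash).digest()
    log.append({"agent": agent_id,
                "state": state_hash.hex(),
                "entry_hash": entry_hash})
```

Anyone holding the public key and nonce can recompute and verify the identifier, which is what makes the identity “verifiable” rather than merely asserted; the linked log is the simplest form of the continuous state tracking the DID component calls for.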
In my view, the core challenge for Web3 AI Agents is to align the “complex workflows” of AI Agents with the “trust verification flow” of the blockchain as closely as possible. These incremental solutions could either emerge from upgrading existing projects or be reimagined within new projects in the AI Agent narrative track.
This is the direction Web3 AI Agents should aim to develop, aligning with the fundamental innovative ecosystem under the macro narrative of AI + Crypto. If there’s no innovation or establishment of differentiated competitive barriers, every shift in the Web2 AI track could disrupt the Web3 AI landscape.