AI Layer1 Research Report: Searching for On-Chain DeAI's Fertile Ground
Overview
In recent years, leading technology companies such as OpenAI, Anthropic, Google, and Meta have driven the rapid development of large language models (LLMs). LLMs have demonstrated unprecedented capabilities across industries, greatly expanded what seems possible, and in some scenarios even shown the potential to replace human labor. Yet the core of this technology is firmly controlled by a few centralized tech giants: with substantial capital and control over expensive computing resources, they have erected barriers that are difficult for the vast majority of developers and innovation teams to overcome.
At the same time, in the early stages of AI's rapid evolution, public attention tends to focus on the breakthroughs and conveniences the technology brings, while core issues such as privacy protection, transparency, and security receive comparatively little attention. In the long run, these issues will profoundly affect the healthy development of the AI industry and its social acceptance. If they are not properly addressed, the debate over whether AI is a force "for good" or "for evil" will grow increasingly prominent, and profit-driven centralized giants often lack sufficient motivation to confront these challenges proactively.
Blockchain technology, with its decentralized, transparent, and censorship-resistant characteristics, offers new possibilities for the sustainable development of the AI industry. Numerous "Web3 AI" applications have already emerged on mainstream blockchains, but a closer analysis reveals that these projects still face many problems. On one hand, their degree of decentralization is limited: key links and infrastructure still rely on centralized cloud services, making it difficult to support a truly open ecosystem. On the other hand, compared with AI products in the Web2 world, on-chain AI remains limited in model capability, data utilization, and application scenarios, and its innovation still lacks depth and breadth.
To truly realize the vision of decentralized AI, enabling the blockchain to securely, efficiently, and democratically support large-scale AI applications while competing in performance with centralized solutions, we need to design a Layer 1 blockchain specifically tailored for AI. This will provide a solid foundation for open innovation, democratic governance, and data security in AI, promoting the prosperous development of the decentralized AI ecosystem.
Core Features of AI Layer 1
AI Layer 1, as a blockchain specifically tailored for AI applications, is designed with its underlying architecture and performance closely aligned with the needs of AI tasks, aiming to efficiently support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should possess the following core capabilities:
Efficient Incentives and Decentralized Consensus Mechanism
The core of AI Layer 1 lies in building an open network for sharing resources such as computing power and storage. Unlike traditional blockchain nodes that focus mainly on ledger bookkeeping, AI Layer 1 nodes must undertake more complex tasks: they not only provide computing power for AI model training and inference, but also contribute storage, data, bandwidth, and other diverse resources, breaking the monopoly of centralized giants over AI infrastructure. This places higher demands on the underlying consensus and incentive mechanisms: AI Layer 1 must accurately assess, incentivize, and verify nodes' actual contributions to inference, training, and other AI tasks, ensuring network security and efficient resource allocation. Only then can the network's stability and prosperity be guaranteed and overall computing costs effectively reduced.
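As a minimal illustration of contribution-weighted incentives (a sketch under assumed resource categories and weights, not any specific project's mechanism), an epoch's reward split might be computed as follows:

```python
from dataclasses import dataclass

# Hypothetical per-epoch contribution report submitted by a node.
@dataclass
class Contribution:
    node_id: str
    compute_units: float   # verified compute spent on training/inference
    storage_gb: float      # data/model shards served
    bandwidth_gb: float    # traffic relayed to other nodes

# Assumed relative weights; a real protocol would set these by governance.
WEIGHTS = {"compute": 0.6, "storage": 0.25, "bandwidth": 0.15}

def score(c: Contribution) -> float:
    """Collapse heterogeneous resources into a single reward score."""
    return (WEIGHTS["compute"] * c.compute_units
            + WEIGHTS["storage"] * c.storage_gb
            + WEIGHTS["bandwidth"] * c.bandwidth_gb)

def distribute(epoch_reward: float, reports: list[Contribution]) -> dict[str, float]:
    """Split an epoch's token emission pro rata by verified contribution."""
    total = sum(score(c) for c in reports) or 1.0
    return {c.node_id: epoch_reward * score(c) / total for c in reports}
```

The key design question such a scheme must answer is how the contribution reports themselves get verified, which is exactly what distinguishes AI Layer 1 consensus from plain ledger bookkeeping.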
Exceptional High Performance and Heterogeneous Task Support
AI tasks, especially LLM training and inference, place extremely high demands on computational performance and parallel processing. Moreover, an on-chain AI ecosystem often needs to support diverse, heterogeneous task types spanning different model architectures, data processing, inference, and storage scenarios. AI Layer 1 must deeply optimize its underlying architecture for high throughput, low latency, and elastic parallelism, and natively support heterogeneous computing resources, so that all kinds of AI tasks run efficiently and the network can evolve smoothly from "single-type tasks" to a "complex and diverse ecosystem".
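A toy sketch of what heterogeneous task routing could look like (the task kinds and node classes below are hypothetical, chosen only to show the idea of matching each task type to nodes able to serve it):

```python
from enum import Enum, auto

class TaskKind(Enum):
    TRAINING = auto()
    INFERENCE = auto()
    DATA_PREP = auto()
    STORAGE = auto()

# Hypothetical capability registry: which node classes can serve which tasks.
NODE_CAPABILITIES = {
    "gpu_node": {TaskKind.TRAINING, TaskKind.INFERENCE},
    "cpu_node": {TaskKind.DATA_PREP},
    "archive_node": {TaskKind.STORAGE},
}

def eligible_nodes(kind: TaskKind) -> list[str]:
    """Route a heterogeneous task only to node classes that can execute it."""
    return [n for n, caps in NODE_CAPABILITIES.items() if kind in caps]
```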
Verifiability and Trustworthy Output Guarantees
AI Layer 1 must not only prevent security risks such as model misbehavior and data tampering, but also ensure, at the mechanism level, that AI outputs are verifiable and aligned. By integrating cutting-edge technologies such as Trusted Execution Environments (TEE), Zero-Knowledge Proofs (ZK), and Multi-Party Computation (MPC), the platform lets every model inference, training run, and data-processing step be independently verified, ensuring the fairness and transparency of the AI system. This verifiability also helps users understand the logic and basis behind AI outputs, achieving "what you get is what you wish for" and strengthening users' trust in AI products.
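As a minimal sketch of one building block (only the hash-commitment half; a real system would pair it with a TEE attestation or a ZK proof, which are out of scope here):

```python
import hashlib
import json

def commit_inference(model_id: str, input_data: str, output: str) -> str:
    """Hash-commit an inference record so third parties can audit it later."""
    record = json.dumps(
        {"model": model_id, "input": input_data, "output": output},
        sort_keys=True,
    )
    return hashlib.sha256(record.encode()).hexdigest()

def verify_inference(commitment: str, model_id: str,
                     input_data: str, output: str) -> bool:
    """Recompute the hash and compare it against the on-chain commitment."""
    return commit_inference(model_id, input_data, output) == commitment
```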
Data Privacy Protection
AI applications often involve sensitive user data; in sectors such as finance, healthcare, and social networking, privacy protection is especially critical. While preserving verifiability, AI Layer 1 should employ encrypted data processing, privacy-preserving computation protocols, and data-permission management to secure data throughout inference, training, and storage, effectively preventing leakage and misuse and easing users' concerns about data security.
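For intuition only, the simplest form of "data is encrypted before it leaves the user" can be shown with ordinary symmetric encryption (using the third-party `cryptography` package; real privacy-preserving inference would instead rely on MPC, FHE, or TEEs so the plaintext is never exposed outside a controlled environment):

```python
from cryptography.fernet import Fernet

# Illustrative only: client-side encryption before data is shared.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive medical record")

# Only a party holding `key` (e.g., an enclave granted the right permission)
# can recover the plaintext for training or inference.
plaintext = f.decrypt(ciphertext)
assert plaintext == b"sensitive medical record"
```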
Strong Ecosystem Capacity and Developer Support
As AI-native Layer 1 infrastructure, the platform needs not only technological leadership but also comprehensive development tools, integrated SDKs, operational support, and incentives for ecosystem participants such as developers, node operators, and AI service providers. By continuously improving usability and developer experience, it can foster diverse AI-native applications and sustain a prosperous decentralized AI ecosystem.
Based on the above background and expectations, this article will provide a detailed introduction to six representative AI Layer 1 projects, including Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G, systematically sorting out the latest developments in the field, analyzing the current status of project development, and discussing future trends.
Sentient: Building a Loyal Open Source Decentralized AI Model
Project Overview
Sentient is an open-source protocol platform building an AI Layer 1 blockchain (starting as a Layer 2 and later migrating to Layer 1). By combining an AI pipeline with blockchain technology, it aims to construct a decentralized artificial intelligence economy. Its core goal is to use the "OML" framework (Open, Monetizable, Loyal) to address model ownership, invocation tracking, and value distribution in the centralized LLM market, giving AI models an on-chain ownership structure, transparent invocation, and shared value. Sentient's vision is to enable anyone to build, collaborate on, own, and monetize AI products, thereby driving a fair and open AI Agent network ecosystem.
The Sentient Foundation team brings together top academics, blockchain entrepreneurs, and engineers from around the world, dedicated to building a community-driven, open-source, verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who are responsible for AI safety and privacy, while Polygon co-founder Sandeep Nailwal leads blockchain strategy and ecosystem development. Team members have backgrounds spanning companies such as Meta, Coinbase, and Polygon, and top universities such as Princeton and the Indian Institutes of Technology, covering AI/ML, NLP, computer vision, and related fields.
As the second entrepreneurial venture of Polygon co-founder Sandeep Nailwal, Sentient launched with abundant resources, connections, and market recognition, giving the project strong backing. In mid-2024, Sentient closed an $85 million seed round led by Founders Fund, Pantera, and Framework Ventures, with participation from dozens of well-known VCs including Delphi, Hashkey, and Spartan.
Design Architecture and Application Layer
Infrastructure Layer
Core Architecture
The core architecture of Sentient consists of two parts: AI Pipeline and on-chain system.
The AI pipeline is the foundation for developing and training "Loyal AI" artifacts and comprises two core processes.
The blockchain system provides transparency and decentralized control for the protocol, ensuring ownership of AI artifacts, usage tracking, revenue distribution, and fair governance. Its architecture is divided into four layers.
![Biteye and PANews jointly released AI Layer1 research report: Seeking on-chain DeAI's fertile ground](https://img-cdn.gateio.im/webp-social/moments-a70b0aca9250ab65193d0094fa9b5641.webp)
OML Model Framework
The OML framework (Open, Monetizable, Loyal) proposed by Sentient is the project's core concept, aiming to give open-source AI models clear ownership protection and economic incentives. It combines on-chain technology with AI-native cryptography so that models remain open, monetizable, and loyal to their owners.
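To make the "ownership plus monetization" idea concrete, a hypothetical on-chain record for an OML-style model artifact might look like the sketch below; the field names and royalty scheme are illustrative assumptions, not Sentient's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelArtifact:
    """Hypothetical on-chain record tying a model to its owner and revenue."""
    model_id: str
    owner: str                  # address holding the ownership rights
    royalty_bps: int            # royalty in basis points per paid invocation
    usage_log: list = field(default_factory=list)

    def record_call(self, caller: str, fee: float) -> float:
        """Track an authorized invocation and compute the owner's share."""
        owner_share = fee * self.royalty_bps / 10_000
        self.usage_log.append((caller, fee, owner_share))
        return owner_share

# Example: a 5% royalty (500 bps) on a 2.0-token inference fee.
artifact = ModelArtifact("model_x", "0xOwner", royalty_bps=500)
assert artifact.record_call("0xCaller", fee=2.0) == 0.1
```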
AI-native Cryptography
AI-native cryptography leverages the continuity, low-dimensional manifold structure, and differentiability of AI models to build a lightweight security mechanism that is "verifiable but non-removable". Its core technique is model fingerprinting: embedding hidden query-response pairs into the model itself during training (detailed below).
This approach enables "behavior-based authorized invocation + ownership verification" without the cost of re-encrypting the model.
![Biteye and PANews jointly released AI Layer1 research report: Seeking on-chain DeAI's fertile ground](https://img-cdn.gateio.im/webp-social/moments-cf5f43c63b7ab154e2201c8d3531be8c.webp)
Model Ownership Confirmation and Secure Execution Framework
Sentient currently adopts Melange mixed security, which combines fingerprint-based ownership confirmation, TEE execution, and on-chain contract profit sharing. The fingerprint method, implemented as the main line in OML 1.0, embodies an "Optimistic Security" philosophy: compliance is assumed by default, and violations can be detected and punished after the fact.
The fingerprinting mechanism is OML's key implementation. During training, it embeds specific "question-answer" pairs that act as unique signatures. Through these signatures, a model owner can verify ownership (see the sketch below) and deter unauthorized copying and commercialization. The mechanism not only protects model developers' rights but also provides traceable on-chain records of model usage.
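A minimal sketch of how fingerprint probes could support both ownership checks and optimistic punishment; the probe strings, stake amounts, and threshold below are all hypothetical and this is not OML 1.0's actual implementation:

```python
# Secret (query, answer) pairs planted during fine-tuning; only the owner
# knows them. Matching answers later serve as evidence of provenance.
SECRET_PAIRS = [
    ("zx-probe-0017", "sentinel-alpha"),
    ("zx-probe-0031", "sentinel-gamma"),
]

# Hosts post stake before serving models ("optimistic security": honest
# behavior is assumed, but proven violations are punished).
STAKES = {"host_a": 1000.0}

def derives_from_owned_model(query_model, threshold: float = 0.8) -> bool:
    """`query_model` is any callable mapping a prompt to a reply. If most
    secret probes return the planted answers, the served model almost
    certainly derives from the fingerprinted checkpoint."""
    hits = sum(query_model(q) == a for q, a in SECRET_PAIRS)
    return hits / len(SECRET_PAIRS) >= threshold

def challenge(host: str, query_model) -> bool:
    """Slash an unauthorized host's stake when the fingerprint check passes."""
    if derives_from_owned_model(query_model):
        STAKES[host] = 0.0  # confiscate the stake as punishment
        return True
    return False
```

The "non-removable" property comes from the fingerprint being woven into the model's behavior rather than attached as metadata: stripping it would require retraining, while verifying it needs only a handful of queries.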
In addition, Sentient has launched the Enclave TEE computing framework, which uses Trusted Execution Environments (such as AWS Nitro Enclaves) to ensure that models respond only to authorized requests, preventing unauthorized access and use. Although TEEs depend on hardware vendors and carry certain security risks, their high performance and low latency make them the core technology for model deployment today.
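To make "respond only to authorized requests" concrete, here is a generic request-gating sketch using plain HMAC signing; the key provisioning, caller names, and server shape are assumptions for illustration, not the Enclave framework's actual API:

```python
import hashlib
import hmac

# Keys provisioned to authorized callers (hypothetical; a real enclave would
# receive these through an attested provisioning channel).
AUTHORIZED_KEYS = {"caller_1": b"provisioned-secret"}

def is_authorized(caller: str, payload: bytes, signature: str) -> bool:
    """Accept a request only if it is signed with the caller's provisioned key."""
    key = AUTHORIZED_KEYS.get(caller)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def serve(caller: str, payload: bytes, signature: str) -> str:
    """Gate every inference behind the authorization check."""
    if not is_authorized(caller, payload, signature):
        raise PermissionError("unauthorized model invocation")
    return "model output"  # placeholder for the enclave-protected inference
```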