Google proposes continuous evaluation engineering approach to address the challenge of evaluating AI agents in deployment environments
ME News update, April 4 (UTC+8). GoogleCloudTech recently published a post arguing that, in production environments, relying on manual chat sessions and subjective impressions ("vibe checks") to evaluate AI agents is unreliable and can lead to disaster. Because generative AI is probabilistic, even small changes to prompts or model weights can cause a significant drop in performance. To address this, the post proposes an engineering approach built around continuous evaluation (CE). It distinguishes two modes of AI engineering: exploration mode (the laboratory) and defense mode (the factory). Exploration mode focuses on discovering a model's potential through a handful of examples and vibe checks; defense mode focuses on stability, using dataset-based evaluations, strict gating, and automated metrics to ensure the system meets its service level objectives (SLOs). The post warns that many teams stay in exploration mode for too long. It also walks through a distributed multi-agent system (a course-creator system) built on Cloud Run and the Agent2Agent protocol, illustrating defense-mode practice for reliable, scalable, production-grade AI deployments, emphasizing separation of concerns and specialized agents (such as researchers, judges, content builders, and coordinators). (Source: InfoQ)
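The gating idea described in the post can be pictured as a small check that runs on every change: score the agent against a fixed evaluation dataset with an automated metric, and block the release if the aggregate result falls below the service level objective. The sketch below is a minimal, hypothetical illustration in Python; the dataset, the exact-match metric, and the SLO threshold are assumptions for illustration, not Google's implementation.

```python
# Minimal sketch of a continuous-evaluation (CE) gate, assuming a fixed
# evaluation dataset and a toy automated metric. Hypothetical names;
# not Google's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str  # reference answer used by the automated metric

def exact_match(output: str, expected: str) -> float:
    """Toy automated metric: 1.0 if the agent output matches the reference."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def run_ce_gate(agent: Callable[[str], str],
                dataset: list[EvalCase],
                slo_pass_rate: float = 0.95) -> bool:
    """Score the agent on the whole dataset and gate the release on an SLO.

    Returns True only if the aggregate pass rate meets the SLO threshold,
    so a small prompt or model-weight change that degrades quality blocks
    the deploy instead of slipping into production unnoticed.
    """
    scores = [exact_match(agent(case.prompt), case.expected) for case in dataset]
    pass_rate = sum(scores) / len(scores)
    print(f"CE gate: pass rate {pass_rate:.2%} (SLO >= {slo_pass_rate:.0%})")
    return pass_rate >= slo_pass_rate

if __name__ == "__main__":
    dataset = [EvalCase("2 + 2 = ?", "4"), EvalCase("Capital of France?", "Paris")]
    agent = lambda prompt: "4" if "2 + 2" in prompt else "Paris"
    if not run_ce_gate(agent, dataset):
        raise SystemExit("Release blocked: evaluation below SLO")
```

In a real pipeline this check would run automatically on every prompt or model change, which is the "strict gating" the post contrasts with exploration-mode vibe checks.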
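The separation-of-concerns pattern attributed to the course-creator example can also be sketched without Cloud Run or the Agent2Agent protocol: a coordinator delegates narrow tasks to specialized agents (a researcher, a content builder), and a judge agent gates the result before it is accepted, mirroring defense-mode evaluation. The roles and messages below are plain-Python stand-ins invented for illustration, not the actual system.

```python
# Illustrative sketch of specialized agents behind a coordinator; plain
# Python stand-ins, not the Agent2Agent protocol or Cloud Run services.
from dataclasses import dataclass

@dataclass
class Lesson:
    topic: str
    outline: str
    content: str

def researcher(topic: str) -> str:
    """Narrow role: gather an outline for the topic."""
    return f"Outline for {topic}: intro, core concepts, exercises"

def content_builder(topic: str, outline: str) -> Lesson:
    """Narrow role: turn the outline into lesson content."""
    return Lesson(topic, outline, f"Lesson on {topic} following: {outline}")

def judge(lesson: Lesson) -> bool:
    """Narrow role: automated check gating the output (defense mode)."""
    return lesson.topic in lesson.content and len(lesson.content) > 20

def coordinator(topic: str) -> Lesson:
    """Orchestrates the specialized agents and rejects work the judge fails."""
    outline = researcher(topic)
    lesson = content_builder(topic, outline)
    if not judge(lesson):
        raise ValueError(f"Judge rejected lesson for topic: {topic}")
    return lesson

if __name__ == "__main__":
    print(coordinator("continuous evaluation").content)
```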