Just saw something interesting about Mem0's latest research—they're making serious waves in how AI actually remembers things over long conversations.
So here's the deal: Mem0 just dropped their findings on the LOCOMO benchmark, and the numbers are pretty wild. Their long-term memory algorithm hits 26% higher accuracy than OpenAI's built-in memory setup. That's not a small gap. But what really caught my attention is the efficiency side: they're cutting P95 inference latency by 91% and slashing token consumption by 90%. We're talking about tackling the classic AI problem where these systems just... forget things when conversations get long.
The approach is clever too. Instead of just throwing more context at the problem like most people do, Mem0 uses this two-stage system. First, they extract the actual facts from your latest messages, summaries, and history. Then they compare that against a vector database and update accordingly—add new stuff, update conflicts, delete irrelevant data. Keeps everything clean and consistent. They even built an enhanced version called Mem0ᵍ that uses graph structures to map out complex relationships between entities across multiple sessions.
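To make the two-stage idea concrete, here's a minimal sketch, not Mem0's actual implementation: stage one extracts candidate facts from the latest turn, and stage two compares each fact against stored memories and decides whether to ADD a new entry or UPDATE an overlapping one. The `MemoryStore` class, the Jaccard similarity stand-in, and the 0.5 threshold are all illustrative assumptions; a real system would embed facts and query a vector database instead.

```python
# Illustrative sketch (NOT Mem0's code) of the second stage:
# reconcile extracted facts against stored memories via ADD / UPDATE.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    memories: dict = field(default_factory=dict)  # id -> fact text
    _next_id: int = 0

    @staticmethod
    def _similarity(a: str, b: str) -> float:
        # Toy stand-in for vector similarity: Jaccard overlap of word sets.
        # A production system would use embeddings + a vector index.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def update(self, facts: list[str], threshold: float = 0.5) -> list[str]:
        """Apply an ADD or UPDATE operation for each extracted fact."""
        ops = []
        for fact in facts:
            match = max(
                self.memories.items(),
                key=lambda kv: self._similarity(fact, kv[1]),
                default=None,
            )
            if match and self._similarity(fact, match[1]) >= threshold:
                # Overlapping/conflicting memory found -> overwrite it.
                self.memories[match[0]] = fact
                ops.append(f"UPDATE {match[0]}")
            else:
                # Nothing similar enough stored -> add a new memory.
                self.memories[self._next_id] = fact
                ops.append(f"ADD {self._next_id}")
                self._next_id += 1
        return ops

store = MemoryStore()
store.update(["user lives in Paris"])       # first fact -> ADD
store.update(["user lives in Berlin now"])  # overlaps stored fact -> UPDATE
```

The point of the sketch is the control flow: new facts never just pile up; each one is reconciled against what's already stored, which is what keeps the memory compact and consistent. A fuller version would also handle the DELETE case for memories invalidated by new information.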
What really matters though? Speed. In production, Mem0 handles the whole cycle—pulling memory, generating response, everything—in 0.71 seconds. Traditional methods are still stuck at nearly 10 seconds. That's the kind of difference that actually matters for real-world applications.
The research was accepted at ECAI, and the code is open-sourced on GitHub, so people can actually dig into how it works. This feels like one of those incremental but important steps toward making AI agents less forgetful. Worth keeping an eye on if you're following the memory and reasoning side of AI development.