Claude Code Faces Major Trust Crisis After Update, Cognitive Depth Drops 67%
According to monitoring by 1M AI News, Anthropic's AI programming tool Claude Code is facing a severe reputation crisis. An AI director at AMD filed an issue report on the official GitHub repository accusing Claude Code of systematic capability degradation since February of this year, based on a quantitative analysis of tens of thousands of conversation logs. The report claims that cognitive depth has plummeted by 67% and that the model's behavior has deviated markedly. It quickly sparked discussion in the developer community and put Anthropic in the spotlight.

Analysis of 6,852 conversation logs shows median cognitive depth down 67%, research effort before code modification down roughly 70%, and 173 instances of negative behaviors such as evasion and premature termination recorded within 17 days. The AMD AI director stated that "Claude can no longer be trusted to perform complex engineering tasks," and said their team has switched to other providers.

Anthropic team member Boris responded that the issues stem from the introduction of the "adaptive thinking" mechanism on February 9 and the lowering of the default thinking level from high to medium on March 3, asserting that the model's core capabilities have not degraded and suggesting that users manually raise the effort level to restore performance. Many developers countered that even with the setting at its highest level, the model's tendency to rush through tasks persists, arguing that the official explanation fails to address the root of the problem. The report has drawn strong reactions in the developer community, with many users saying they have canceled their subscriptions and switched to alternatives such as OpenAI Codex.
Meanwhile, the degradation has driven a catastrophic increase in API costs: with query volume unchanged, monthly costs skyrocketed from $345 to $42,121, roughly a 122-fold increase. The analysis also noted that the previously launched "hide thinking content" feature had objectively obscured this degradation as it unfolded, further deepening users' distrust.