After Anthropic's AI model Claude Opus 4.5 outperformed top human candidates, the company redesigned its take-home coding test for job seekers. Tristan Hume, who leads Anthropic's performance optimization team, found that without on-site proctoring the team could no longer distinguish the strongest candidates' work from AI-assisted submissions. The new test centers on a novel hardware optimization problem designed to stump existing AI tools. Hume has also released the old version of the test and invited anyone who can beat Claude Opus 4.5's performance on it to contact the company.
