OpenAI launches pilot program for safety researchers to support independent AI safety and alignment research
ME News update, April 7 (UTC+8) — OpenAI recently announced the launch of a "Safety Researcher" pilot program aimed at supporting independent safety and alignment research and cultivating the next generation of talent. The program is open to external researchers, engineers, and practitioners, encouraging them to conduct rigorous, high-impact research on safety and alignment issues in advanced AI systems. It runs from September 14, 2026 to February 5, 2027. Applicants must focus on safety issues critical to current and future systems; priority research areas include safety evaluation, ethics, robustness, scalable mitigations, secure privacy-preserving methods, agent oversight, and high-severity abuse, among others. Researchers will work closely with OpenAI mentors, either at Constellation in Berkeley or remotely. By the end of the program, substantive research outputs are required, such as papers, benchmarks, or datasets. The program provides a monthly stipend, computing resources, and ongoing guidance. Applications are now open, with a deadline of May 3; review results will be announced by July 25. (Source: InfoQ)