OpenAI launches pilot program for safety researchers to support independent AI safety and alignment research


ME News update, April 7 (UTC+8) — OpenAI recently announced a "Safety Researcher" pilot program to support independent safety and alignment research and to cultivate the next generation of talent. The program invites external researchers, engineers, and practitioners to conduct rigorous, high-impact research on safety and alignment issues in advanced AI systems, and runs from September 14, 2026 to February 5, 2027. Applicants must focus on safety issues critical to current and future systems; priority research areas include safety evaluation, ethics, robustness, scalable mitigations, secure privacy-preserving methods, agent oversight, and high-severity abuse domains, among others. Researchers will work closely with OpenAI mentors, either at Constellation in Berkeley or remotely. By the end of the program, participants are expected to deliver substantive research outputs such as papers, benchmarks, or datasets. The program provides a monthly stipend, computing resources, and ongoing mentorship. Applications are now open, with a deadline of May 3; review results will be announced by July 25. (Source: InfoQ)
