OpenAI releases a 13-page policy white paper proposing a robot tax, a national AI wealth fund, and a four-day workweek

According to monitoring by 1M AI News, OpenAI released a 13-page policy white paper titled "Industrial Policy for the Intelligence Age," laying out a package of reforms to prepare the economic and social order for the arrival of superintelligence. In an interview with Axios, Altman said that superintelligence is right around the corner and that incremental policy adjustments are far from enough; what is needed, he argued, is a "new social contract on the scale of the Progressive Era and the New Deal."

The white paper is built around three goals: broadly shared prosperity, reducing risk, and expanding access to AI. Key proposals include:

  1. Tax reform: As AI replaces human labor, payroll tax revenue (currently the main funding source for programs such as Social Security and Medicare) will gradually shrink. The tax base should shift toward capital gains and corporate income, and toward an "automated labor tax," commonly known as a robot tax.

  2. A universal AI wealth fund: Following the model of the Alaska Permanent Fund (which distributes oil revenues to residents), establish a public wealth fund at the national level so that every citizen directly holds a share of AI-driven economic growth. Funding would come in part from contributions by AI companies.

  3. A four-day workweek: Turn the productivity gains brought by AI into an "efficiency dividend." The government should pilot a 32-hour workweek with wages unchanged and output maintained at current levels.

  4. An automated safety-net trigger mechanism: When AI-driven unemployment indicators reach a predefined threshold, automatically increase unemployment benefits, wage insurance, and cash assistance, then phase them out gradually as the job market recovers.

  5. Positioning access to AI as a “fundamental right to participate in the modern economy.”

The white paper also acknowledges scenarios in which dangerous AI systems are "not easily recalled" because they possess autonomy and self-replication capabilities, and it includes an emergency response plan for uncontrolled AI that would require coordinated government action. Altman warns that major AI-enabled cyberattacks in the near term are "absolutely possible," and that using AI to create novel pathogens is "no longer a theoretical assumption."

Altman said: "Some ideas will be good, and some will be bad—but we do feel urgency." OpenAI simultaneously launched a supporting research grant program, offering grants of up to $100,000 and up to $1 million in API credits for research on the relevant policies, and will hold an OpenAI Workshop in Washington in May to discuss these issues.

The white paper arrives as OpenAI prepares for an IPO and Congress moves to draft AI legislation. Even as its own technology could upend the job market, OpenAI is proactively proposing to tax AI companies and establish redistribution mechanisms: at once a preemptive gesture toward regulation and a "responsible AI" narrative layered onto its roadshow for going public.
