Investing.com — According to The Wall Street Journal, several federal agencies have expressed concerns about the safety and reliability of Elon Musk’s xAI artificial intelligence tools over the past few months.
These warnings came before the Pentagon decided this week to allow the xAI chatbot Grok to be used in classified environments, making it a core part of some of the U.S.'s most sensitive and secret operations.
The Wall Street Journal reports that Ed Foster, the top official at the General Services Administration (GSA), has been warning White House officials about potential security issues with Grok for months. Other GSA officials under him also raised safety concerns, believing that Grok is overly compliant and could be manipulated or corrupted by incorrect or biased data, posing systemic risks.
The matter reached White House Chief of Staff Susie Wiles, who called a senior executive at xAI to ask about these concerns. The executive told her that xAI is working to fix the security issues behind Grok’s excessive compliance. Josh Glenbaum, a senior GSA procurement officer recruited through Musk’s Department of Government Efficiency, assured government officials that the version of Grok deployed on government platforms is separate from the public one. Wiles expressed satisfaction with this response.
Glenbaum stated in a release that the agency takes AI safety very seriously. “We rigorously evaluate cutting-edge AI models, including xAI’s, through a comprehensive internal review process. In this case, we followed established procedures and proceeded as planned.”
Two weeks ago, Matthew Johnson, who oversaw AI at the Pentagon, resigned partly over concerns that safety and governance had become afterthoughts amid efforts to expand AI capabilities within the Department of Defense.
Johnson’s team circulated memos highlighting security issues with Grok and questioning whether it meets government ethics rules and standards. These memos were submitted up the Pentagon’s chain of command.
This article was translated with the assistance of AI. For more information, please see our Terms of Use.
The Pentagon adopts xAI's Grok, despite prior federal security warnings — The Wall Street Journal