A group of current and former employees from major AI companies like OpenAI, Google DeepMind and Anthropic has signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential harms of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
The letter comes just a few weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risk losing their vested equity in the company. Following the report, OpenAI CEO Sam Altman said that he had been “genuinely embarrassed” by the provision and claimed it has been removed from recent exit documentation, though it is unclear whether it remains in force for some employees.
The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter — which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell — expresses grave concerns over the lack of effective government oversight of AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, the exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.
“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”
OpenAI, Google and Anthropic did not immediately respond to requests for comment from Engadget. In a statement sent to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record providing the most capable and safest AI systems” and that it believes in its “scientific approach to addressing risk.” It added: “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”
The signatories are calling on AI companies to commit to four key principles:
- Refraining from retaliating against employees who voice safety concerns
- Supporting an anonymous system for whistleblowers to alert the public and regulators about risks
- Allowing a culture of open criticism
- And avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out
The letter comes amid growing scrutiny of OpenAI’s practices, including the disbandment of its “superalignment” safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company’s prioritization of “shiny products” over safety.