
OpenAI has recently faced significant safety concerns and internal upheaval. Reports indicate that OpenAI employees have raised alarms over rushed launches and inadequate safety protocols, questioning the company's commitment to AI safety. These concerns were compounded by the dissolution of the Superalignment safety team following the departures of co-founder Ilya Sutskever and safety researcher Jan Leike, the latter of whom cited the company's shift in focus from safety to product launches.
In response to these issues, OpenAI has formed a new Safety and Security Committee led by board members including CEO Sam Altman. This committee is tasked with making critical safety and security decisions for all OpenAI projects, with the aim of restoring confidence in the company's safety protocols.
Additionally, OpenAI has introduced a five-tier system for internally tracking its progress toward artificial general intelligence (AGI), reflecting its ongoing efforts to manage and monitor the development of its AI systems.
Senator Angus King, along with colleagues, has raised concerns about OpenAI's safety and secrecy practices. The senators sent a letter to OpenAI CEO Sam Altman following whistleblower reports about the company's prioritization of product development over safety, insufficient cybersecurity measures, and retaliation against former employees. They demand information on OpenAI's commitment to safety standards, its use of non-disparagement agreements, and its cybersecurity protocols, and they seek answers by August 13, 2024, to ensure the company's alignment with its public promises on AI safety.
OpenAI is currently under scrutiny after whistleblowers filed a complaint with the U.S. Securities and Exchange Commission (SEC). The complaint alleges that OpenAI used restrictive nondisclosure agreements (NDAs) to prevent employees from voicing concerns about the safety risks associated with its AI technology. The whistleblowers claim that these agreements could stifle important disclosures about potential harms, such as the misuse of AI to create bioweapons or enable cyberattacks.
OpenAI has responded by stating that its whistleblower policy is designed to protect employees' rights to make protected disclosures. The company also emphasized that it has revised its departure processes to remove nondisparagement clauses and stressed the importance of rigorous debate about AI safety.
The SEC has been asked to investigate these practices and ensure that OpenAI’s NDAs do not violate federal laws, including protections established for whistleblowers under the Dodd-Frank Act. The outcome of this investigation could have broader implications for the AI industry, highlighting the tension between corporate confidentiality and public safety.
OpenAI has promised the US AI Safety Institute early access to its upcoming AI model amid criticism that it prioritizes advanced technology over safety. This partnership aims to address safety concerns and includes committing 20% of OpenAI's computing resources to safety research. OpenAI's move aligns with its support for the Future of AI Innovation Act, which could establish AI safety standards. The company is also increasing its influence on federal AI policy, reflected in its heightened lobbying efforts and CEO Sam Altman's role on a DHS advisory board.
On April 26, 2024, the advisory board convened more than 20 executives from the technology and critical infrastructure sectors, along with civil rights leaders.
The primary focus of this gathering was to discuss strategies to enhance the resilience of critical infrastructure against emerging threats while ensuring the protection of civil liberties and privacy rights. The meeting aimed to foster collaboration between the tech industry, infrastructure operators, and civil rights advocates to address challenges in cybersecurity and data protection.
References:
https://www.techedt.com/safety-concerns-continue-to-plague-openai
https://www.stripes.com/theaters/us/2024-07-13/openai-prohibited-staff-safety-risks-14475707.html
https://ground.news/article/openai-blocked-staff-from-airing-security-concerns-whistleblowers_5d24ea
