More than a dozen current and former employees of OpenAI, Google DeepMind, and Anthropic published an open letter on Tuesday warning of the serious risks posed by the rapid, ongoing development of AI technology without an effective oversight framework in place.
The group contends that the technology could be misused to entrench existing inequalities, manipulate information, and spread disinformation, and that humans could ultimately lose control of autonomous AI systems, with consequences potentially as severe as human extinction.
The signatories believe these risks can be mitigated through the combined efforts of the scientific community, legislators, and the public. However, they worry that AI companies face strong financial incentives to resist effective oversight and cannot be relied upon to police the technology's development impartially.
Since the release of ChatGPT in November 2022, generative AI has taken the computing world by storm. Hyperscalers such as Google Cloud, Amazon AWS, Oracle, and Microsoft Azure are leading the way in what is expected to become a trillion-dollar industry by 2032. A recent McKinsey study found that, as of March 2024, nearly 75% of organizations surveyed had adopted AI in at least one capacity. Meanwhile, in its annual Work Trend Index survey, Microsoft found that 75% of office workers already use AI at work.
However, as Daniel Kokotajlo, a former OpenAI employee, told The Washington Post, “They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for a technology this powerful and this poorly understood.” AI firms such as OpenAI and Stability AI have repeatedly run afoul of U.S. copyright law, for instance, and publicly available chatbots can routinely be coaxed into repeating hate speech, conspiracy theories, and misinformation.
The signatories argue that these companies hold substantial non-public information about their products' capabilities and limitations, including the potential for harm their models pose and the actual effectiveness of their safeguards. They point out that only a portion of this information reaches government agencies through weak sharing obligations, and none of it is accessible to the general public.
“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the group stated, arguing that the industry's widespread use of confidentiality agreements and the weak implementation of existing whistleblower protections prevent employees from raising these concerns.
The group has called on AI companies to stop entering into and enforcing non-disparagement agreements, to establish an anonymous process through which employees can raise concerns with the company's board of directors and with government regulators, and to refrain from retaliating against public whistleblowers when internal channels prove insufficient.