AI Doesn’t Need to Move Fast and Break Things

Artificial intelligence is having a moment. We’ve gone from AI tools that play chess and Go to ones that solve novel problems in competitive programming, prove math theorems, and predict protein structures. AI is also being used to talk to animals, create a kiss for a movie, and even act as a girlfriend. While those are all exciting developments, what has taken the internet by storm is a type of artificial intelligence known as generative AI.

Broadly speaking, generative AI refers to algorithms that can create new content. And the company at the forefront of generative AI is OpenAI. They have created DALL-E 2, a text-to-image generator, as well as GPT and ChatGPT, large language models that generate text in response to user prompts. You have probably already seen some examples of the amazing images and cool writing samples these tools can produce.

OpenAI has attempted to use its corporate structure to incentivize the safe development of artificial general intelligence (AGI). While there is debate about whether and when AGI will ever be developed, as part of its ongoing work the AI lab will continue to research and develop single-purpose, open-source AI tools. To fund that work, OpenAI created a separate corporate entity that can accept investment, called OpenAI Limited Partnership. That entity is controlled entirely by the board of directors of the non-profit, and all investors’ returns are capped at 100x the original investment. These decisions were made to insulate the corporate entity from pressure to pursue profitability at all costs, so it could focus on its overall mission of developing safe AGI.

While these internal controls are good, OpenAI has said little about what the larger regulatory environment for AI should look like. The closest insight we have is a paper from former OpenAI policy director Jack Clark and University of Toronto professor Gillian K. Hadfield calling for a “global regulatory market.” In the paper’s scenario, instead of governments creating regulatory systems themselves, they would only specify the desired outcomes and authorize “private regulators” to create competing systems for companies to participate in. Those systems would include not only contractual commitments, but also the installation of monitoring hardware, mandatory risk assessments, and the ability to collect fines directly.

OpenAI’s argument is that by outsourcing regulation to private entities, the market would compete to find the regulator that delivers the specified outcomes at the lowest cost to companies. The paper concedes that this approach only works if governments oversee the private regulators and ensure they resist capture by the industries they regulate. Industry calling for self-regulation is nothing new. The tired arguments that governments move too slowly, are less innovative, and impose more cost than necessary have been repackaged for the AI age. And, frankly, given the progress AI has made in a few short years, and the coming onslaught of new products, self-regulation will be insufficient at best. What is more likely is that these self-regulatory bodies will give companies cover for practices that are actually harmful.

With OpenAI and Microsoft taking a decisive lead in deploying artificial intelligence, the New York Times and the Washington Post have reported that Alphabet and Meta are feeling the heat. They are likely worried about losing their dominant status to new players. Until now, both companies have been cautious about what products they release publicly. But if the public ignores ChatGPT’s and DALL-E’s shortcomings (like providing incorrect information or perpetuating bias), that could change. We may be back to the era of “move fast and break things.”

While most of the reporting centers on Meta, Alphabet, and Microsoft’s foray into AI, many startups are working in this space as well. And while it is unclear how long the AI space will stay competitive, a competitive tech ecosystem generally produces more innovation. However, competition and innovation without guardrails can cause serious harm. I’m not talking about an AI catastrophe, like a superintelligent AI eradicating the human race, but about more mundane harms: incentivizing even more harmful data collection on unsuspecting people, entrenching bias in opaque systems, and making it harder to decipher both what is real on the internet and whether these AI tools actually work.

Luckily, we will not be caught quite as flat-footed as we were during the first era of “move fast and break things.” First, these systems do not and should not get the protection of Section 230, which shields platforms from liability for content created by their users; generative AI systems create content themselves. That exposure to liability should discourage truly reckless and dangerous generative AI products from entering the market. The Biden administration has also released a blueprint for an AI Bill of Rights, and the National Institute of Standards and Technology has released the AI Risk Management Framework. Both are designed as frameworks (one for the government and one for companies) for assessing risk and implementing mitigations. However, frameworks and common-law protections are not a regulatory system. The European Union has recognized the need for a general AI Act and is currently in the process of writing one.

American lawmakers, on the other hand, constantly hear from AI proponents that the United States is on the brink of losing the AI battle with China, and that AI research and development therefore should not be hampered by “innovation killing” regulation. This is a false choice. Smart regulation that encourages building AI systems that are safe, effective, and uphold democratic values will make us more competitive, not less. And if the AI optimists are correct, this technology has the potential to cause upheaval not just for certain industries, but for the nature of work as a whole. Congress needs to start developing a regulatory framework now. If it doesn’t, the best case scenario is that a friendly government like the European Union does the work for us. The worst case scenario is that AI concentrates wealth and power in a few hands, while the rest of us are left to suffer the harms.