AI Policy and the Uncanny Valley Freakout

We have been debating, on and off, the issues around artificial intelligence and AI governance for some time now. Here at Public Knowledge, we published our first white paper on the subject in 2018. But the last few months have seen an explosion of interest and a sudden consensus that powerful AI tools require some sort of regulation. Hardly a day goes by without a new editorial calling for regulation of AI, a high-profile story on the potential threat of AI to jobs (ranging from creative jobs such as Hollywood writers or musicians to boring lawyers), a story on new AI threats to consumers, or even one on how AI poses an existential threat to our democracy. A recent Senate hearing produced a rare bipartisan consensus on the need for new laws and federal regulation to mitigate the threats posed by AI technology. In response, technology giants such as Google and Microsoft have published new proposed codes of conduct and regulatory regimes that include not merely the usual calls for self-regulation, but actually invite government regulation as well.

Or, to quote my colleague Sara Collins, “AI is having a moment.”

Policy and the Uncanny Valley Freakout.

As everyone knows, what triggered this sudden, massive interest in regulating AI after years of low-level discussion was the public release of several AI tools with natural language interfaces, such as ChatGPT and DALL-E. The new generation of AI tools mimics human behaviors and responses at an entirely new level of believability. We have grown accustomed to phone trees and robocalls with poorly imitated human voices, and we have laughed at AI translation and writing programs that produced incomprehensible results. Suddenly (from a public perspective) we seem on the cusp of AIs that can persuasively mimic human activities. Even more alarming, these AIs are not limited to dull and repetitive tasks or to creating an overlay on a human model (such as deepfakes or the de-aging technology used by Hollywood). These AIs now appear capable (at least to some degree) of mimicking the kind of creative activities that most of us have felt distinguished human beings from AIs, at a level nearly indistinguishable (at least to the casual observer) from the work of actual human beings. In a blink, the prospect of AI tools capable of replacing human writers and artists went from “someday, maybe” to “if not today, then tomorrow.”

The result has been what I can best describe as an “uncanny valley freakout” — or, to give in to the Washington, D.C. love of three-letter acronyms, a “UVF.” For those not familiar with the term, the uncanny valley refers to the emotional response to things that look almost-but-not-quite human. Things that look entirely different from human beings elicit one kind of response; our fellow human beings elicit another. But something close enough to human that it fits into neither category falls into the “uncanny valley” between the two and prompts a reaction ranging from unease to outright revulsion, depending on the person and the circumstances.

Mix this uncanny valley response with the standard Silicon Valley and media hype about how this technology is totally reality-altering, and we have a cultural moment in which all the science fiction scenarios, ranging from the destruction of the human race to AIs eliminating our jobs to robots manipulating our emotions, seem a lot less like fantastic fiction and much more plausible. It did not take long for the discussion AI researchers would like to have about how these tools can improve our lives to shift to the myriad ways people could abuse these new tools or the damage they might do to society at large. Hence the current rush to develop policies to govern things AI-related — from the data and methods used to train AIs to the application of AI tools in society at large. Or, put more bluntly, after years of society at large shrugging off warnings that we needed to think seriously about managing AI development and application, we are now having a full uncanny valley freakout.

In some ways, this new energy to debate regulating AIs is a good and healthy thing. As noted above, many in the computer research community and public advocacy community have warned about the potential consequences of unregulated AI for a decade now, if not longer. The modern internet has taught us the danger of relying on techno-utopianism and self-regulation. Even so, freakouts do not generally produce good policy outcomes.  

In particular, because the current energy in the AI governance debate comes from our uncanny valley freakout, the regulatory proposals focus on AI tools that train on analysis of human activity and produce human-mimicking outputs. Thus, as my colleague Nicholas Garcia has observed, one of the first reactions we see is a clamor for more copyright protection for training data. Broader regulatory proposals, like OpenAI’s, give in to this uncanny valley freakout by aiming primarily at aligning, restraining, or forestalling Artificial General Intelligence or superintelligence. Regulatory proposals like these assume that future AIs will require the vast datasets and computational resources of the most (in)famous generative AIs of today, and therefore call for content licensing, extensive safety and testing obligations, or other modes of restrictive oversight that presume vast networks maintained by multibillion-dollar companies.

But many of the AI technologies driving the UVF also have myriad valuable, specialized uses that neither rely on human-created data nor produce human-mimicking outputs. For the sake of discussion, I will refer to these AIs as “insanely boring technical” (IBT) AIs. Again, it is important to recognize that we are not necessarily talking about a difference in underlying technology, but a difference in the training data and the outputs. These IBT AIs do not necessarily require the same vast resources as AIs designed to replicate human beings. They do not train on creative human outputs such as text or art. As a result, regulatory regimes designed solely for AI with human-mimicking outputs risk either crushing the development of these potentially valuable IBT AIs or missing the different, but still serious, risks these systems pose. For example, we do not want unsupervised AI tools mimicking human doctors, but we do want AI tools that analyze cancer tumors so we can develop new and more effective treatments.

Allowing the UVF to drive AI policy creates two significant dangers when it comes to IBT AI. First, we risk losing the enormous potential benefits of AI tools that produce these insanely boring but tremendously valuable outputs by trapping them in a regulatory regime that imposes unrealistic and unnecessary burdens given the specialized applications in question. Second, we cannot assume that simply because these specialized “boring” applications do not raise the same concerns, they require no regulatory oversight. We need a more nuanced approach. Or, as Professor Mark McCarthy recently wrote, we need to focus less on the dramatic but highly unlikely AI apocalypse scenarios and more on the real potential benefits and potential problems of the new generation of powerful AI tools.

Some Examples To Illustrate the Different Issues Between UVFs and IBTs.  

I will provide three examples of what could be described as IBT AIs (though we find them exciting and maybe you will, too): applications that rely on inputs, and produce outputs, not associated with human creativity and that are not designed to mimic human behavior. These examples illustrate how such applications may raise problems similar to those of UVF AI, such as privacy, fairness, or accuracy concerns, while still requiring very different regulatory regimes.

Enhancing Wireless Network Efficiency. 

The demand for wireless services continues to rise exponentially, and virtually all projections show it continuing to do so. Since we cannot simply grow more spectrum (and clearing spectrum for licensed, unlicensed, or other types of shared uses takes years), we need to improve the efficiency of how we use the spectrum we have. As discussed here and here, embedding deep neural networks in wireless networks can improve the accuracy of predictions about spectrum allocation and general resource management, dramatically increasing the number of devices that can share wireless networks — especially when combined with self-configuring, software-based virtual radio networks such as O-RAN. For mobile networks, these neural networks can learn how variations in temperature, humidity, sunlight, and other environmental factors create tiny changes in the behavior of wireless “reflection paths”; exploiting these changes — in aggregate, across millions of mobile phones — can produce huge increases in wireless capacity. In the heart of the network itself, vendors have touted AI tools that dramatically increase energy efficiency or optimize routing and network performance.
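To make the “insanely boring” flavor of this concrete, here is a minimal, purely illustrative sketch of the kind of prediction task involved: a small model that estimates near-term channel quality from environmental readings so a scheduler can decide how aggressively to reuse spectrum. The feature set, the synthetic data, and the quality numbers are all hypothetical assumptions for illustration, not any vendor's actual system.

```python
# Illustrative sketch only (hypothetical features, synthetic data): predict
# short-term channel quality from environmental readings so a scheduler can
# pack more devices into the same spectrum.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical features: temperature (C), humidity (%), solar irradiance (W/m^2), hour of day.
n = 5000
X = np.column_stack([
    rng.normal(20, 8, n),       # temperature
    rng.uniform(10, 95, n),     # humidity
    rng.uniform(0, 1000, n),    # solar irradiance
    rng.integers(0, 24, n),     # hour of day
])

# Synthetic "ground truth": signal quality (dB) degrades slightly with heat and humidity.
y = 30 - 0.2 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 1.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# A scheduler could use predictions like these to decide, interval by interval,
# how aggressively to reuse a channel.
print("Predicted channel quality (synthetic dB):", np.round(model.predict(X_test[:5]), 1))
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```

Notice that nothing in a pipeline like this touches human-created creative works; the training data is sensor telemetry, which is exactly why the copyright-centered proposals discussed next fit it so poorly.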

This use of AI clearly does not raise any issues of copyright for either the training sets or the outputs. Regulations built on the assumption that anyone training an AI must pay royalties for its training data or its outputs would severely hinder the development of these network tools. Nothing in these networks raises concerns about discrimination or replacing human jobs. Nor do these networks necessarily require the same scale of concentrated resources as human-mimicking AIs, and regulations that require these AI tools to be trained and deployed in specific ways, based on inapplicable assumptions, would severely undermine, if not entirely eliminate, their utility.

At the same time, these uses do potentially raise privacy concerns. These networks are tied to human activity, whether through mobile phones carried by people, networks of devices linked to various sorts of human activity, or even home use patterns. These AIs may also raise cybersecurity questions, or even national security issues if they can predict use of classified government networks based on activity patterns in adjacent federal spectrum. Pattern analysis used to enhance network efficiency can also be used by bad actors to determine how best to disrupt networks.

History shows that these problems are often relatively straightforward to prevent in the design phase, but incredibly difficult to correct after the fact. Solutions designed for UVF AI will map poorly, if at all, onto network AIs built to improve the performance of device and inventory networks or to increase wireless capacity. But without some consideration of necessary safeguards, we invite developers to track highly sensitive geolocation information, or create opportunities for malicious actors to work out how best to disrupt network traffic.

Medical Diagnostics and Treatment Development.

The use of AI for medical practice and research has been of interest for years, and it is one of the most positive uses for IBT AI. The New England Journal of Medicine, one of the premier medical journals in the United States and the world, has announced its plan to launch the New England Journal of Medicine AI “to identify and evaluate state-of-the-art applications of artificial intelligence to clinical medicine.” IBM touts the use of its AI products for medical research, drug development, and individualized treatments tailored to a patient’s personal medical condition. Some uses — such as replacing the radiologists who analyze medical images or using chatbots to diagnose patients — move us into the uncanny valley and will require regulation designed to ensure human oversight and accountability. But we also have a wealth of insanely boring and technical AI applications that we want to see developed. Importantly, we want barriers to entry low enough that universities and medical researchers can develop these specialized IBTs. As the current proposals show, licensing regimes and regulations designed for general-purpose AI tools that mimic human behavior and interact with the public will shut out all but the largest companies.

But, again, these medical IBT AIs have their own set of issues that require careful oversight. Obviously, AIs trained on patient information raise privacy concerns in addition to concerns about fundamental fairness and representation. Using hospital records and patient histories to train medical AIs introduces questions of classism, as these records are largely available only for patients sufficiently well off to have medical insurance. As a result, datasets will miss potentially crucial differences in treatment based on gender, ethnicity, or life history. Since a huge potential advantage of using AIs for medical purposes is to allow individualized treatment based on correlating precisely such factors, using AIs in this situation threatens to aggravate an existing and persistent problem in medical research and treatment.
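One concrete, if simplified, illustration of what that oversight might require in practice is a representation audit run before training: compare who appears in the training records against the population the model is meant to serve. The column names, group labels, and reference shares below are hypothetical; the point is only that this kind of check is cheap at design time and nearly impossible to retrofit later.

```python
# A minimal, hypothetical sketch of a pre-training representation audit for a
# clinical dataset drawn from insured-patient billing records.
import pandas as pd

# Hypothetical training records (real audits would use millions of rows).
train = pd.DataFrame({
    "patient_id": range(8),
    "group": ["A", "A", "A", "A", "A", "B", "B", "C"],
})

# Hypothetical shares of each group in the population the model will serve.
reference_share = pd.Series({"A": 0.50, "B": 0.30, "C": 0.20})

observed_share = train["group"].value_counts(normalize=True)
coverage = (observed_share / reference_share).reindex(reference_share.index).fillna(0.0)

# Flag any group represented at less than 80% of its population share.
for group, ratio in coverage.items():
    status = "OK" if ratio >= 0.8 else "UNDER-REPRESENTED"
    print(f"group {group}: {ratio:.2f}x expected share -> {status}")
```

A regulatory regime built around chatbots would never think to require a check like this; a regime attentive to IBT AI in medicine almost certainly should.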

The key point is that AI oversight in medicine can’t be driven by what is, or is not, in the uncanny valley. Right now, imitative generative AI systems seem the most risky, in large part thanks to the UVF. But some of the most viscerally unsettling technologies, like a chatbot receptionist that intakes patients, could be the safest with proper oversight and accountability, while some of the most boring and technical might pose serious risks of invisible discrimination. We need rules and regulations that account for different use cases, and their different potentials and risks. Oversight must be based on clear-eyed understanding, rather than allowing concerns about one set of technologies to constrain the potential of another. 

Environmental Studies and Earth Science.

Advances in sensor technology allow us to collect increasing amounts of data about our planet and near space. This can help us identify everything from the impact of solar fluctuations to effective techniques for managing environmental resources to offset global climate change. Again, we see this research carried out by government agencies and university consortia rather than giant corporations with far greater resources. As time goes on, we will increasingly see this kind of research from environmental start-ups. Given the increasing urgency of our global climate crisis and resource management, these tools offer enormous benefits to mankind. Even small gains in predicting the likelihood of dangerous weather phenomena or the likely pathways of wildfires can save lives.

Here the chief problems are likely accuracy and access to data. Can actors with ideological agendas bias the outcomes? What rules will we have for access to the necessary underlying data? What confidence can we have in AI tools whose outputs may mean the difference between life and death for entire communities, or whose outputs influence policy on a global scale? The value of earth science AIs is that they can help us make sense of vast and complex systems. But by the same token, how can we confidently rely on these systems — or prevent them from being corrupted and misused as sources of disinformation?
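One modest, illustrative answer to the confidence question is to make model disagreement visible. The sketch below (synthetic data, hypothetical inputs, not any agency's actual forecasting system) trains a small bootstrap ensemble and flags forecasts where the members diverge, so a human reviews them before anyone acts on them.

```python
# Illustrative sketch: surface forecast uncertainty by training an ensemble on
# bootstrapped samples and treating member disagreement as a review flag.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

rng = np.random.default_rng(1)

# Hypothetical inputs: wind speed, fuel dryness, slope. Target: fire spread rate.
n = 2000
X = rng.uniform(0, 1, size=(n, 3))
y = 5 * X[:, 0] + 3 * X[:, 1] * X[:, 2] + rng.normal(0, 0.3, n)

# Bootstrap ensemble: each member sees a different resample of the data.
ensemble = []
for seed in range(10):
    Xb, yb = resample(X, y, random_state=seed)
    ensemble.append(GradientBoostingRegressor(random_state=seed).fit(Xb, yb))

# Score three new scenarios; report the ensemble mean and its spread.
X_new = rng.uniform(0, 1, size=(3, 3))
preds = np.array([m.predict(X_new) for m in ensemble])
mean, spread = preds.mean(axis=0), preds.std(axis=0)

for i, (mu, sd) in enumerate(zip(mean, spread)):
    flag = "needs human review" if sd > 0.2 else "high agreement"
    print(f"scenario {i}: predicted spread {mu:.2f} +/- {sd:.2f} ({flag})")
```

Surfacing disagreement does not answer the data access or manipulation questions, but it is the kind of accuracy safeguard an expert overseer could reasonably require.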

An Agency Watchdog Rather Than a Legislative Cure.

As we have urged in the context of digital platforms, this is precisely the kind of situation that calls out for an expert administrative agency. The need for flexibility makes drafting legislation designed to consider all possible uses virtually impossible. We can, through targeted legislation, address targeted problems — and should not hesitate to do so when appropriate. But the rise of AI ultimately requires a broader solution. 

We will need humans to balance the enormous potential benefits of AI tools in technical fields against the potential risks. We will need humans to respond to the cleverness of other humans in finding unforeseen ways to use these technologies for nefarious purposes. We will need humans to respond to situations that no one can anticipate until we gain more experience. Laws of general applicability work well when we can determine bright-line rules, or where we can leave decisions to generalist judges to develop law over time. They do not work nearly as well in situations that require nuanced decision-making and expertise.