Public Knowledge President and CEO Chris Lewis Testifies Before Senate AI Insight Forum on Privacy and Liability

His testimony urges Congress to establish a regulatory framework – including a comprehensive federal privacy law – to ensure that AI transforms society for the better.

Public Knowledge President and CEO Chris Lewis testified before the U.S. Senate’s bipartisan AI Insight Forum on privacy and liability on Wednesday, November 8, at 2:30 p.m. In his testimony, Lewis urged Congress to establish a regulatory framework – including a comprehensive federal privacy law – to ensure that AI transforms society for the better.

His testimony specifically argues for passing the “American Data Privacy and Protection Act,” a bipartisan bill establishing a national standard to protect consumer data privacy. Such a bill would minimize the amount of personal data collected, give people rights over their data, encourage competition, and integrate important civil rights protections. As the testimony explains, a digital regulator with expertise in and authority over algorithmic decision-making, AI systems, and the digital platforms that incorporate them would also help keep any regulation of AI effective over time.

The following is an excerpt from the testimony:

“We know that AI has the potential to transform society for the better; but that can only happen if an appropriate regulatory framework is in place. The first step of that framework should be to enact a comprehensive federal privacy law. 

“Luckily, Congress does not need to start from scratch here. The House of Representatives, under Reps. McMorris Rodgers and Pallone’s leadership, has created a bipartisan proposal [called ADPPA]… that would minimize the amount of personal data collected, give people rights over their data, encourage competition, and integrate important civil rights protections. A well-crafted comprehensive data privacy law, like ADPPA, would also encourage more competition in AI.

“[We] know that this particular Insight Forum is also interested in questions of who bears responsibility when these systems cause harm. First, we should look to the harm to see if there is an existing liability regime that applies. AI is found in so many systems and products that we should avoid creating ‘AI policy’ where existing policy is sufficient. Second, end users cannot and should not be responsible for structural or endemic harms caused by these systems. Given the opacity of AI, end users will only sometimes be in a position to mitigate harms; therefore, liability should rest with the developers of, and platforms deploying, these AI systems.

“Finally, any regulation of AI will only be as effective as the government’s sustained ability to understand and enforce it. An expert regulatory agency for digital platforms… can and should include expertise and authority over algorithmic decision-making, AI systems, and the digital platforms that incorporate them. A digital regulator could use its expertise to support cross-government understanding of the development of AI as other agencies apply their existing authority to the use of AI systems. A digital regulator could also develop liability safe harbors for AI systems, when appropriate.”

You may view the testimony.

Members of the media may contact Communications Director Shiva Stella with inquiries, interview requests, or to join the Public Knowledge press list at shiva@publicknowledge.org or 405-249-9435.