Public Knowledge Joins More than 100 Consumer Groups Against Algorithmic Risk Assessment Tools

    Today, Public Knowledge joins more than 100 civil rights, digital justice, consumer advocacy, and community-based organizations in a statement opposing the adoption of algorithmic risk assessment tools, which use artificial intelligence to inform bail decisions for accused individuals. Public Knowledge contends that these tools often rely on biased data to forecast an individual’s likelihood of appearing at trial or risk to public safety.

    The following can be attributed to Allie Bohm, Policy Counsel at Public Knowledge:

    “Artificial intelligence, big data, and algorithmic decision-making play increasingly large roles in our lives. Unfortunately, these systems give the appearance of neutral decision-making when in reality they perpetuate and, in some cases, magnify institutional racism, because they often incorporate biased data sets and practices from the start.

    “It is imperative that all algorithmic decision-making tools are transparent, independently validated, developed with community input and oversight, and designed to reduce racial disparities, but these requirements become even more vital when AI is used to determine whether an accused individual receives bail. AI probably should not be used to deprive an individual of his or her freedom at all, but if it is, these requirements — as well as other criminal justice system-specific safeguards — must be in place.”

    You may view the statement here.

    Members of the media may contact Communications Director Shiva Stella with inquiries, interview requests, or to join the Public Knowledge press list at shiva@publicknowledge.org or 405-249-9435.