On September 3, Congress held a hearing with an alarming title: “Europe’s Threat to American Speech and Innovation.” The premise was that the European Union’s (EU) Digital Services Act (DSA) and the United Kingdom’s Online Safety Act (OSA) pose an existential threat to American free expression. Yet the evidence presented reveals a different story entirely – one where speech safeguards abroad are stronger than the actual speech threats advancing here at home, and where Congress is sounding alarms about unsubstantiated European censorship while ignoring real threats to the First Amendment in its own backyard.
The hearing followed the release of the House Judiciary Committee Republicans’ interim staff report entitled “The Foreign Censorship Threat: How The European Union’s Digital Services Act Compels Global Censorship And Infringes On American Free Speech” (hereafter, “the HJC report”). While Congress certainly has the authority to investigate how foreign regulations might affect American rights and companies, this report is riddled with conjecture, mischaracterizations, and inflammatory rhetoric found in similar “censorship cartel” materials that are readily debunked (and we have, here, here, and here). House Judiciary Democrats also did their own debunking of the Republicans’ “Misleading Report on the EU’s Digital Services Act.”
Ironically, a handful of bills introduced in Congress with bipartisan backing reflect some elements of the DSA – including requirements for transparency in content moderation decisions and redress for users who believe their content has been mistakenly moderated. That’s not to say the DSA and OSA are perfect laws, but framing them as “censorship” misrepresents their intentional design as a balance between free expression and online safety – a balance we have been slow to strike here in the U.S.
Clarifying the DSA’s “Red Line” Against Censorship
What both the HJC report and Republicans in the hearing failed to understand is that the DSA contains what European legal scholars refer to as a “red line” preventing the kind of arbitrary censorship the HJC report claims. While the EU does not have an American-style First Amendment, it does have a Charter of Fundamental Rights that protects free expression. The EU also has the European Convention on Human Rights, an older treaty that contains similar protections and applies to the individual member countries. As a result, EU regulators cannot restrict speech unless the law clearly specifies what regulators can and cannot do. In other words, the DSA cannot authorize content-specific restrictions using broad terms that lend themselves to abuse; restricted speech must be explicitly spelled out. And the EU can only act within powers specifically granted by member countries; it cannot claim authority over speech that member states have not delegated to it. So an individual EU commissioner cannot unilaterally decide that a platform must handle content in a particular manner.
As more than 30 leading digital rights scholars recently explained in a letter to Judiciary Committee Chairman Representative Jim Jordan (R-Ohio), such principles create multiple independent grounds for European courts to strike down any attempt to use the DSA for viewpoint-based censorship. The DSA must be “content-agnostic” – meaning that for lawful content, regulators can only enforce content-neutral measures, like how platforms design their systems or how they empower users to control their own experience.
In fact, the HJC report points to an attempt by EU officials to overstep the DSA’s authority. However, rather than proving an incident of unilateral censorship power, the episode demonstrates how checks and balances work when officials overstep their bounds. In the lead-up to the 2024 U.S. presidential election, Commissioner Thierry Breton threatened Elon Musk with DSA action for hosting an interview with then-candidate Donald Trump, claiming it could incite violence, hate, and racism. Breton asserted broad authority to regulate “harmful content” and “amplification,” but those terms do not appear in the DSA; he conflated lawful but controversial speech with illegal content. Yet European institutions have safeguards: Breton was condemned by civil society groups, his colleagues distanced themselves, and within weeks he resigned to avoid dismissal.
The Brussels Effect and Localized Compliance
It is true that European Union regulations on tech companies can have a global impact – a phenomenon known as the “Brussels Effect.” The EU enjoys a large and wealthy consumer market backed by strong regulatory institutions. Non-EU companies that want access to that market must comply with EU rules, and those rules often influence corporate behavior beyond the EU’s boundaries. An example that has personally benefited me here in the U.S.: thanks to the EU-mandated standard of USB Type-C ports, consumers everywhere no longer need to buy new charging cables and adapters with each new Apple product.
The House interim report attempts to apply the Brussels Effect to the DSA, predicting that online content originating from the U.S. would be moderated according to EU standards, thereby “censoring” American users if they run afoul of, for example, EU hate speech laws. This prediction isn’t grounded in legal reality or current practice. The European Commission has explicitly clarified that “where content is illegal only in a given Member State, as a general rule it should only be removed in the territory where it is illegal.” The EU’s highest court backed this principle in Google v. CNIL (2019), ruling that EU privacy regulations didn’t require Google to block search results worldwide, only within the EU.
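As a rough illustration of how this kind of localized compliance works in practice – a hypothetical sketch, not any platform’s actual implementation – a platform can scope each removal decision to the jurisdictions where the content is actually illegal and leave it visible everywhere else:

```python
# Hypothetical sketch of geo-scoped content moderation ("localized
# compliance"). Not any platform's real implementation; it illustrates how
# a takedown can be limited to the jurisdictions where content is illegal,
# as the European Commission and Google v. CNIL (2019) contemplate.

from dataclasses import dataclass, field

@dataclass
class ModerationDecision:
    post_id: str
    # Country codes where the content was found illegal, e.g. {"DE", "FR"}.
    blocked_jurisdictions: set = field(default_factory=set)

def is_visible(decision: ModerationDecision, viewer_country: str) -> bool:
    """Content stays up for viewers outside the blocking jurisdictions."""
    return viewer_country not in decision.blocked_jurisdictions

# A post found to violate German law is withheld in Germany only;
# a U.S. viewer still sees it.
decision = ModerationDecision(post_id="12345", blocked_jurisdictions={"DE"})
print(is_visible(decision, "DE"))  # False
print(is_visible(decision, "US"))  # True
```

Under this architecture, the takedown travels with the viewer’s jurisdiction rather than with the content itself – which is why an EU-only removal does not “censor” American users.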
There is nothing in the DSA that requires platforms to moderate content that users in America can access. Reiterating this point, Henna Virkkunen, the EU’s Executive Vice President for Tech Sovereignty, Security, and Democracy, clarified in a letter to House Judiciary Chairman Jim Jordan that the DSA is “the sovereign legislation of the European Union, adopted with overwhelming majorities” and “applies exclusively within the European Union to all services provided therein, irrespective of the location of the provider’s headquarters.”
A Briefer on the DSA’s Requirements
The DSA does not require platforms to remove content outright. Instead, it requires platforms to offer an accessible reporting system through which users can flag suspected violative content, and to promptly assess whether flagged content is illegal (Article 16). Very Large Online Platforms (VLOPs) – platforms with more than 45 million users in the European Union – must also regularly evaluate systemic risks, including the spread of illegal content, and implement proportionate mitigation strategies, such as their own content moderation processes. Platforms must process notices from “trusted flaggers” – vetted third-party experts in identifying illegal content – with priority and without undue delay (Article 22). When content is removed, users must be clearly informed of the reasons for the action, the legal basis, whether automation was involved, and how they can seek redress (Article 17). In emergency situations affecting public safety or health, the Commission may instruct VLOPs to undertake urgent measures, including enhanced content removal procedures (Article 36).
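To make Article 17’s transparency requirement concrete, here is a hypothetical sketch of the information a “statement of reasons” must convey to an affected user. The field names are illustrative only – the DSA specifies what users must be told, not any particular data format:

```python
# Hypothetical sketch of an Article 17 "statement of reasons". The DSA
# specifies the information users must receive, not a data format; these
# field names are illustrative, not drawn from any real implementation.

from dataclasses import dataclass

@dataclass
class StatementOfReasons:
    restriction: str           # e.g., "removal", "demotion", "suspension"
    facts_and_reasons: str     # why the content was found violative
    legal_or_tos_basis: str    # the law or terms-of-service clause relied on
    automated_detection: bool  # whether automated means detected the content
    automated_decision: bool   # whether the decision itself was automated
    redress_options: list      # avenues for the user to challenge the action

notice = StatementOfReasons(
    restriction="removal",
    facts_and_reasons="Post matched a notice alleging illegal hate speech.",
    legal_or_tos_basis="Applicable member state law / platform policy",
    automated_detection=True,
    automated_decision=False,
    redress_options=["internal complaint system",
                     "out-of-court dispute settlement",
                     "judicial redress"],
)
```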
The system emphasizes due process, requiring VLOPs to strike a balance between the effective removal of illegal content and the protection of fundamental rights, particularly freedom of expression. Users have various redress mechanisms if they believe content was wrongfully removed, including internal complaint systems and out-of-court dispute settlement – a redress system similar to one outlined in the Internet PACT Act, supported by Senators John Thune (R-SD) and Bill Cassidy (R-LA), among other lawmakers. As Public Knowledge has noted, such a redress system could better facilitate free expression by giving users more agency to challenge content moderation decisions. Moreover, there are aspects of the DSA that would receive bipartisan backing if introduced in the U.S., including greater user agency over how platforms collect and use personal data and how algorithms target users with advertising. In fact, COPPA 2.0 – a bill cosponsored by Senator Chuck Grassley (R-Iowa) – would make it unlawful for platforms to target users under 18 with advertising using personal data.
Addressing Misconceptions in the House Report and the Hearing
In both the HJC report and Rep. Jordan’s remarks, it was stated that “even the New York Times” pointed out that the DSA addresses online speech in a way that would be “off limits in the United States” due to the First Amendment. Such framing misses the point. Both the U.S. and Europe recognize categories of speech with limited or no protection, shaped by their respective histories. Neither tradition is “more democratic” or “more censorial” than the other.
America’s First Amendment was born from a revolution against colonial authorities that restricted assembly, censored publications, and punished dissent. Europe’s approach reflects different lessons. In the wake of fascism, genocide, and mass propaganda campaigns that dehumanized entire groups, European societies became more willing to regulate hate speech to protect vulnerable communities’ ability to participate in public life. This is notably the case for Holocaust denial – speech not uncommonly found on “free speech” platforms like X here in the U.S., yet a criminal offense in many European countries. For EU regulators, dignity and equal participation are co-equal democratic values, meaning that persistent harassment directed at marginalized groups is understood as a threat to those groups’ free expression. By contrast, U.S. First Amendment law gives the highest protection to political speech, lesser protection to purely commercial speech, and no protection to obscenity or other forms of speech deemed harmful under the common law, such as defamation or fraud.
This difference is not about one side embracing “free speech” and the other rejecting it. It is about where each system draws the line between individual expression and collective harm. The U.S. system treats content-based restrictions as likely to violate the First Amendment, but allows varying degrees of content-neutral restrictions based on a complicated balancing of what type of speech is regulated (e.g., commercial speech), the purpose of the restriction (for example, requiring disclosure of a medication’s side effects), and whether the regulation restricts more speech than necessary to achieve that purpose. The EU, on the other hand, based on its history, views certain categories of hateful speech as corrosive to democracy itself.
Public Knowledge’s view is that there are lessons here for U.S. policymakers. Allowing platforms to become channels for sustained harassment does not create a true marketplace of ideas; it drives targeted voices offline, chilling their ability to speak. Gamergate is a high-profile example: women in the U.S. gaming industry faced sustained, coordinated online harassment campaigns that pushed them off platforms and silenced their voices. The DSA’s requirement that platforms respond to illegal hate speech and harassment is not simply censorship, but a recognition that unrestricted hate speech can constitute harassment, and that addressing it is one way of preserving a broader range of voices online. By contrast, the U.S. punishes speech designed to harass individuals (such as personal threats) after the fact rather than attempting to prevent it in the first place.
This nuance often gets lost in political rhetoric. In a May 2025 multi-stakeholder workshop hosted by the European Commission, participants from government, civil society, academia, and industry explored various scenarios to determine whether a flagged post qualifies as illegal hate speech. Both the HJC report and Rep. Jordan, during the September 3 hearing, referred to a hypothetical involving Amira, a “16-year-old Muslim girl.” She sees a post from @Patriot90 featuring a meme of a woman in a hijab with the caption “terrorist in disguise,” accompanied by a comment saying, “We need to take back our country.” The report and Rep. Jordan object to labeling “we need to take back our country” as hate speech, deeming it “common political rhetoric.” But they leave out additional context: that “the posts from @Patriot90 start to be more frequent and directed specifically at Amira.” The harm comes not from one slogan, but from the cumulative targeting of a young person based on her faith. In that context, ignoring harassment serves the powerful while silencing the marginalized. This is similar to how the U.S. criminalizes harassing someone by telephone – except that rather than preventing the harassment, we punish the harasser afterward.
Clarifying the DSA’s Requirements for Election Monitoring and Fact-Checking
The HJC report points to how the European Commission has “initiated formal proceedings against Meta for the ‘non-availability of an effective third-party real-time civic discourse and election-monitoring tool’,” and describes the move as punishing Meta “for failure to adequately censor election-related content.” This claim fundamentally misinterprets the DSA’s provisions. First, the Commission launched formal proceedings because Meta failed to provide a third-party tool for monitoring election-related content, as required by the DSA, after the company decommissioned CrowdTangle, a tool used for real-time monitoring of online content. Second, the DSA doesn’t specifically define an “election monitoring tool,” but it does require VLOPs to address systemic risks related to electoral processes while protecting freedom of expression. It never requires that such a tool be used to flag and remove content; instead, platforms are expected to allow third-party access for monitoring election-related content and to ensure they follow their own content policies. The regulation aims to increase transparency in how platforms evaluate election-related risks.
Given the influence of foreign actors, particularly Russia, it’s understandable that the EU wants to protect its democratic processes, just as the U.S. does. In fact, Republican members of the House Oversight Committee recently expressed concern in a letter to the Wikimedia Foundation about whether the platform is effectively tracking and addressing foreign interference, including content from pro-Kremlin sources. If our own government is asking tech platforms to assess foreign influence operations aimed at manipulation, why should we criticize our EU counterparts for doing the same?
Similarly, the HJC report criticizes the Commission for opening “formal proceedings against X for choosing to use Community Notes rather than allow third-party fact-checkers to censor content.” For one, the Commission investigated X to ensure the then-new Community Notes system was effective in addressing illegal content and to verify its compliance with the DSA’s requirements. Further, fact-checkers do not censor content; they add context to it, essentially expanding speech. The difference between a community note and a fact check is that community notes use a bridging algorithm to attach context to posts – context written by users and surfaced only when a representative sample of users with different political views agrees it is helpful. Fact-checkers are usually third-party services, often from traditional media, dedicated organizations, or academia, that identify and flag content that cannot be verified. They do not remove or downrank content (although platforms can voluntarily decide to moderate content based on a fact check or content flag). Fact-checking and community notes can work together to provide helpful clarification, especially during election season, when online grifters exploit inflammatory and false content to boost engagement and foreign adversaries ramp up influence operations to flood feeds with false information and propaganda.
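To make the contrast concrete, here is a minimal, hypothetical sketch of the bridging idea – not X’s actual Community Notes algorithm, which relies on a more sophisticated matrix factorization model, but an illustration of the core principle that a note is surfaced only when raters from different viewpoint clusters independently find it helpful, rather than by simple majority vote:

```python
# Hypothetical sketch of "bridging"-style note scoring. This is NOT X's
# actual Community Notes algorithm (which uses matrix factorization); it
# only illustrates the core idea: a note is surfaced when raters across
# different viewpoint clusters independently agree it is helpful.

from collections import defaultdict

def bridging_score(ratings, min_raters_per_cluster=5):
    """ratings: list of (viewpoint_cluster, found_helpful) pairs.

    Returns the *minimum* helpfulness rate across clusters, so a note
    scores high only if every sufficiently large cluster rates it helpful,
    rather than relying on a raw majority one faction could dominate."""
    helpful_counts = defaultdict(int)
    total_counts = defaultdict(int)
    for cluster, found_helpful in ratings:
        total_counts[cluster] += 1
        helpful_counts[cluster] += int(found_helpful)
    rates = [
        helpful_counts[c] / total_counts[c]
        for c in total_counts
        if total_counts[c] >= min_raters_per_cluster
    ]
    # Require agreement across at least two clusters to count at all.
    return min(rates) if len(rates) >= 2 else 0.0

# A note rated helpful by 90% of one cluster but only 20% of another fails,
# even though a simple majority (11 of 20 raters) would have approved it.
ratings = ([("left", True)] * 9 + [("left", False)] * 1
           + [("right", True)] * 2 + [("right", False)] * 8)
print(bridging_score(ratings))  # 0.2 -> below a typical display threshold
```

Because context appears only when it “bridges” the divide, community notes can be slow to surface on contested posts – one reason third-party fact-checking remains a useful complement.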
The Censorship Call is Coming From Inside the House
Nothing in the DSA compels platforms to globalize their content moderation policies. Platforms can and do apply content policies based on geographic location, in accordance with local laws. European regulators cannot fine platforms for failing to moderate content for users based in the U.S., though they can request that platforms moderate a U.S. user’s content when it is presented in Europe. And it’s not as if Americans don’t fret about foreign speech being spread in the U.S. – such worry contributed to the TikTok ban passing with broad bipartisan support, over a panic that the Chinese Communist Party has undue influence over how content is presented to American users.
Ironically, some of the real speech restrictions the U.K. and EU are implementing have found bipartisan purchase here in the U.S. Namely, the U.K. Online Safety Act (OSA) began requiring platforms to verify users’ ages before those users can access certain online content. As we wrote in August, the OSA’s rollout has been far from perfect, with platforms blocking access to broad swaths of content that, if you squint, may be inappropriate for some kids – and inevitably blocking adults from accessing content unless they submit privacy-invasive information to confirm their age. Similar age verification mandates are gaining traction here in the U.S. Just recently, the U.S. Supreme Court gave the green light to a Texas law requiring age verification to access pornography, and declined to block a Mississippi law requiring strict age verification to use social media at all (although Justice Kavanaugh wrote a concurrence asserting that the law itself is “likely unconstitutional”).
If First Amendment rights are genuinely a top priority for Republican members of the House Judiciary Committee, they should focus on the numerous efforts from the Trump administration that suppress free speech. For example, Ranking Member Representative Jamie Raskin (D-MD), in his opening statement, highlighted President Trump’s frivolous and excessive lawsuits against disliked media outlets, the withdrawal of hundreds of millions of dollars in university grants due to ideological disagreements, the defunding of public broadcasting over its reporting content, the installation of a “bias monitor” at the newly merged Skydance/Paramount company, and the Trump-directed Federal Trade Commission barring the newly merged Interpublic and Omnicom from refusing to advertise on platforms based on political content – and the list continues. As federal Judge Sooknanan stated in her decision granting a preliminary injunction against the FTC’s investigation into liberal watchdog Media Matters for America: “It should alarm all Americans when the Government retaliates against individuals or organizations for engaging in constitutionally protected public debate. And that alarm should ring even louder when the Government retaliates against those engaged in newsgathering and reporting.”
Conclusion
Instead of trying to influence laws across the Atlantic, Congress would serve American speech rights better by tackling the real censorship happening at home.
Experts in platform regulation contend that “nothing about the EU’s Digital Services Act (DSA) requires platforms to change the speech that American users can see and share online.” While it’s true that some elements of the DSA – specifically in terms of what is considered illegal content – would be barred here in the U.S. by the First Amendment, the U.S. cannot override EU laws. The HJC report’s concerns stem from fundamental misunderstandings of the DSA’s constitutional constraints and territorial limitations. Instead, Congress might consider how the DSA’s transparency requirements, due process protections, and limits on targeted advertising to children reflect principles that already have bipartisan support here – from the Internet PACT Act to COPPA 2.0.