Privacy-Protective Contact Tracing Depends on More Than an API

    This blog post is part of a series on communications policies Public Knowledge recommends in response to the pandemic. You can read more of our proposals here and view the full series here.

    Many observers are arguing that, in order for life to “go back to normal” in the era of coronavirus, contact tracing applications will have to become part of our daily lives. Even the Democratic presidential nominee has endorsed this line of thinking. But what exactly is contact tracing?

    How Contact Tracing Works:

    Contact tracing is the identification and follow-up of people who may have come into contact with a person infected with, in this case, the coronavirus that causes COVID-19. A contact tracing application is simply a smartphone application that records and logs those interactions digitally. One hurdle to developing such applications is that Apple’s and Google’s mobile operating systems do not interoperate well: an iPhone and an Android phone cannot reliably detect each other’s Bluetooth signals in the background, which makes it difficult to know when an Apple user has interacted with an Android user and vice versa. But with Apple and Google’s announcement that they will provide public health authorities with an API (application programming interface) that makes contact tracing possible across devices, this hurdle appears to have been overcome.

    This contact tracing API uses a decentralized system, similar to the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol initially developed within the Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project. A decentralized approach means that Google, Apple, and the applications using the API do not have access to all users’ Bluetooth IDs. Additionally, each device’s Bluetooth ID changes every 15-30 minutes, making it more difficult to associate a single ID with a specific person. These random Bluetooth IDs are never uploaded to the cloud unless a user voluntarily reports a positive COVID-19 test. As devices come into proximity with one another, they exchange Bluetooth IDs every five minutes and store the IDs they hear locally; this list of contacts is never uploaded to the cloud. If a user reports a positive test, the application uploads that user’s own Bluetooth IDs from the past 14 days. A list of Bluetooth IDs associated with positive tests is then distributed to every user’s device, and each application checks, on the device, whether it has come near any of those IDs.
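
    To make this flow concrete, here is a minimal Python sketch of the decentralized pattern described above. It is illustrative only: the class and method names are invented for this example, and the real Apple/Google API derives its rotating identifiers cryptographically from daily keys rather than generating them independently.

        import os
        import time

        RETENTION_SECONDS = 14 * 24 * 3600  # only the last 14 days of IDs are ever uploaded

        class Device:
            """Toy model of one phone running a decentralized exposure-notification app."""

            def __init__(self):
                self.own_ids = []       # (timestamp, id) pairs this device has broadcast
                self.heard_ids = set()  # IDs received from nearby devices; never uploaded
                self.rotate_id()

            def rotate_id(self):
                # A fresh random identifier every 15-30 minutes; outside observers
                # cannot link successive IDs to the same device.
                self.current_id = os.urandom(16)
                self.own_ids.append((time.time(), self.current_id))

            def exchange(self, other):
                # Devices in Bluetooth range record each other's current IDs locally.
                self.heard_ids.add(other.current_id)
                other.heard_ids.add(self.current_id)

            def report_positive(self):
                # Only the user's own recent IDs leave the device, and only voluntarily.
                cutoff = time.time() - RETENTION_SECONDS
                return [rid for ts, rid in self.own_ids if ts >= cutoff]

            def check_exposure(self, published_positive_ids):
                # Matching happens entirely on-device against the published list.
                return any(rid in self.heard_ids for rid in published_positive_ids)

        # alice and bob come into proximity; bob later tests positive.
        alice, bob = Device(), Device()
        alice.exchange(bob)
        positive_ids = bob.report_positive()        # sent to the health authority's server
        print(alice.check_exposure(positive_ids))   # True: alice is alerted locally

    Note that the server in this model only ever sees the IDs of users who voluntarily report a positive test, never anyone’s contact list.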

    No one — not Apple, not Google, not a wireless carrier, and not the application developer — has any way of knowing what contacts have occurred, since all the processing, checking, and data storage happens on each user’s individual device. The only information that leaves the device is the random IDs associated with positive tests. This level of decentralization makes it more difficult to re-identify people’s data because no single entity has all the information. Beyond these technical measures, Apple and Google have both committed to only sharing this API with public health authorities or applications that are operating on behalf of public health authorities. This makes it significantly less likely that the API will be used for commercial surveillance purposes. These promising administrative and technical steps demonstrate that Apple and Google have approached this project with a privacy-by-design mentality.

    How Contact Tracing Applications Might Risk Our Privacy:

    While much ink has been spilled over whether these steps are enough to be privacy protective, there has been far less scrutiny of the applications that would actually use this technology. One reason for that disparity is that no contact tracing applications have yet been released in the United States; it’s difficult to scrutinize what exists only in theory. But we should consider the ways this technology might be misused so we can prevent the harms before they happen.

    As of today, there is nothing in Apple or Google’s announcement that would prevent public health applications from collecting other pieces of information from users, which increases the privacy risk. This information falls into two buckets: information the user actively gives to the application, and information the application passively collects from the device. The main difference between the two is that users know what information is being collected when they are asked to provide it directly, which gives them an opportunity to judge whether they want to share it. Actively collected information ranges from name and address to medical files. The risks of collecting this information vary depending on the security protocols of the application, and some of it may be necessary to operate and manage a contact tracing application effectively.

    Passively collected information, on the other hand, presents additional concerns because the user doesn’t know exactly what is being collected. While the application may ask for permission to access certain types of information, like geolocation data, the user doesn’t know precisely what is then shared. Users can also be coerced into sharing geolocation data, whether through a pop-up notification warning that the application is less reliable without it or through the application refusing to install unless the permission is granted. The whole point of using Bluetooth data rather than geolocation data is that Bluetooth is both more accurate at determining who has been in proximity to whom and more privacy protective, because it does not rely on knowing a user’s precise location. Collecting location data makes it easier for public health authorities, or anyone else with access to the application’s data, to work out which Bluetooth identities belong to which users. That makes the application more likely to be used for generalized surveillance of the public rather than as a contact tracing tool.
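
    The accuracy point is worth unpacking: Bluetooth radios report the received signal strength (RSSI) of nearby broadcasts, and that reading can be turned into a rough distance estimate without any location fix at all. Below is a minimal sketch using the standard log-distance path-loss model; the constants are illustrative defaults, not values from any shipping app.

        def estimate_distance_m(rssi_dbm: float,
                                power_at_1m_dbm: float = -59.0,
                                path_loss_exponent: float = 2.0) -> float:
            # Log-distance path-loss model: distance grows as the signal weakens.
            # power_at_1m_dbm is the expected RSSI one meter away (device-specific);
            # the path-loss exponent is ~2 in free space and higher indoors.
            return 10 ** ((power_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

        # A reading of -70 dBm works out to roughly 3.5 meters under these assumptions,
        # telling the app "these phones were near each other" and nothing about where.
        print(round(estimate_distance_m(-70.0), 1))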

    Absence of Privacy Protections in Federal Law:

    Unfortunately, no privacy laws protect Americans’ rights with respect to contact tracing applications created by public health authorities in the United States. HIPAA, the national health privacy law, applies only to covered entities (doctors, hospitals, health insurers, and the like) — not to public health authorities. This means that the contact tracing apps Apple and Google envision public health authorities creating would not be bound by federal health privacy law, and their users would not be protected. Most state health privacy laws are modeled after HIPAA, so the same blind spot exists at the state level as well. Even if these applications have privacy policies and terms of service that could be used to hold them accountable under UDAP (unfair and deceptive acts and practices) statutes, the very limited way privacy is currently protected in industries without a specific privacy law, enforcement is unlikely: the Federal Trade Commission and many state attorneys general lack the authority to prosecute governmental or nonprofit actors. This means that, for now at least, Apple and Google both have a responsibility to do more than just determine whether an application is associated with a public health authority.

    Apple and Google should prohibit all applications using their Bluetooth API from collecting geolocation data. Even if applications ask for a user’s permission and remain functional when the user declines, the risk of re-identification is too great, and geolocation data is not needed for the core function of identifying contacts. There should also be strict prohibitions on sharing any data collected through these applications, including a blanket ban on any sale, licensing, or rental of application data and a ban on sharing with governmental or nonprofit entities, such as law enforcement, that lack an explicit public health mission. Additionally, Apple and Google should require applications to regularly delete the Bluetooth IDs of those who have tested positive for COVID-19. The application only needs those IDs so they can be pushed out to alert others; once the alerts have gone out, the IDs should be deleted.

    Apps should also be required to make their source code available and to undergo regular security and privacy audits. Finally, Apple and Google should commit to turning off the API and removing these applications once the public health crisis has passed, and they could publicly commit to offering public health authorities technical assistance that makes privacy and security the basis of application design. Apple and Google cannot simply rely on their normal review process for listing applications in their app stores; the privacy harms that can arise from a poorly designed public health app warrant more than a cursory review of its terms of service.
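
    For the deletion requirement, one hypothetical shape such a rule could take in code is a periodic purge that drops any reported ID once its alert has been delivered or its 14-day relevance window has passed. The data structure and field names below are invented for illustration.

        import time

        RETENTION_SECONDS = 14 * 24 * 3600  # a positive ID is only relevant for 14 days

        def purge_positive_ids(positive_ids: dict) -> dict:
            # positive_ids maps each reported Bluetooth ID to metadata:
            # {"received_at": unix timestamp, "alert_sent": bool}.
            # Keep an ID only while it is still needed to alert other users.
            now = time.time()
            return {
                rid: meta
                for rid, meta in positive_ids.items()
                if not meta["alert_sent"] and now - meta["received_at"] < RETENTION_SECONDS
            }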

    Congress Still Has a Role to Play:

    Apple and Google can act far more quickly than our elected officials, but industry action does not nullify policymakers’ responsibility to create rules and systems that protect privacy. As a first step, the U.S. Department of Health and Human Services should publish rules for public health applications. This would not only provide an accountability framework for the states and the federal government, but also offer public health authorities clarity about what is and is not permissible. Congress should also take this opportunity to extend HIPAA’s privacy provisions to public health authorities. Congress should conduct oversight to monitor the development of these applications and either set privacy rules for them or empower an agency (the FTC or HHS) to do so. Typically, this would fall to the House Energy and Commerce Committee, the Senate Commerce Committee, and the Senate Committee on Health, Education, Labor, and Pensions. Speaker Pelosi has already created an oversight committee to monitor spending under the latest coronavirus relief package, so adding these responsibilities to the existing committee, or creating a new one, would also be reasonable. Congress should not only monitor applications to see whether they secure and protect users’ privacy, but also serve as the forum for larger policy discussions around the use of these applications. There is no reason the urgency around developing a tool to reopen society should not be matched by our elected officials’ responsibility to preserve our right to privacy while doing so. Such action would also demonstrate that Congress can address the public’s long-standing fears about privacy and data collection, at least in a narrow context, while it continues to consider a long-awaited comprehensive privacy law.

    Right now, all proposed applications of this type are presumed to be voluntary: users decide whether or not they want to participate. But it isn’t hard to imagine a world in which using these applications becomes mandatory or functionally mandatory. A person may be required to show proof that they have downloaded a contact tracing app before going to work, or to show a grocery store that they haven’t been in contact with an infected person for 14 days. There is also the risk that these applications are hijacked or co-opted by law enforcement for broader surveillance purposes as “mission creep” sets in. To prevent these outcomes, Congress should provide oversight, draft laws that prohibit such practices, and empower enforcers to go after bad behavior. Congressional action should include a real update to the Electronic Communications Privacy Act, which governs when, where, and how the government can access information about your communications. It is now undeniable that data collection like this is a critical part of our lives.

    We Need Policymakers and Big Tech to Do the Right Thing:

    During this pandemic, protecting users’ privacy requires a two-part solution. First, Apple and Google must take immediate and decisive action with respect to any applications that plan to use their API. Second, policymakers should fill the gaps in existing law and create an environment of transparency and accountability. If neither happens, it won’t matter what privacy protections Apple and Google built into their API. Even if these applications don’t turn into dystopian surveillance devices, the threat that they could means fewer people will use them, limiting the API’s effectiveness as a tool in the fight against the coronavirus. Apple and Google should ensure that applications built on their API contain strong guardrails and accountability measures — for all our sakes.