
Tracking Covid-19 effectively rests on transparency

Wed 15 Apr 2020 | Dr (Berndt) Bertie Müller

With a pandemic like Covid-19 currently affecting people worldwide, the question arises of how technology might be able to help contain the virus, help people recover, and help the economy rebound after lockdown. Finding the technology is easy. How we use the tools at our disposal responsibly and ethically is thorny and complex. 

Contact tracing technology is widely believed to be central to containing the spread of Covid-19. In recent weeks, technology has been developed (or in some cases re-purposed) to be used alongside traditional contact tracing to track the encounters of people who have tested positive.

These ‘contact cases’ may, unbeknownst to them, be carrying the virus. Locating them is key so further testing can be carried out that limits the spread of the virus. To this end, contact cases need to be made aware of their previous exposure to the virus. If they subsequently test positive, they become index cases themselves, triggering another level of contacts who need to be notified of their exposure to the virus and the risk that they themselves have been infected.

This presents a need for a data-driven tool that performs effective contact tracing. But such a tool faces problems that can only be tackled with diverse domain knowledge: When does a person become a contact case? Is casually passing someone sufficient to make them a contact case or should we impose a time and proximity threshold for contacts? How can proximity be measured reliably? Who needs to be notified and how? Does the tool facilitate data according to the law and widely accepted ethical standards? Is this kind of surveillance ethically acceptable? Can privacy be preserved to an extent that individuals put sufficient trust in the solution to achieve an uptake that makes it reliable? What identifiers are used, are they unique and permanent? Who has access to the data?
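The threshold questions above can be made concrete with a small sketch. The values and names here are purely illustrative assumptions, not taken from any real tracing app: a real deployment would have to choose and epidemiologically justify its own proximity and duration cut-offs.

```python
from dataclasses import dataclass

# Illustrative thresholds only -- a real app must choose and justify these.
PROXIMITY_THRESHOLD_M = 2.0     # metres
DURATION_THRESHOLD_S = 15 * 60  # 15 minutes

@dataclass
class Encounter:
    distance_m: float  # estimated distance between the two devices
    duration_s: float  # how long the devices remained within range

def is_contact_case(encounter: Encounter) -> bool:
    """Casually passing someone does not count; sustained close proximity does."""
    return (encounter.distance_m <= PROXIMITY_THRESHOLD_M
            and encounter.duration_s >= DURATION_THRESHOLD_S)

# A brief pass in a corridor is not a contact; a long nearby conversation is.
print(is_contact_case(Encounter(distance_m=1.5, duration_s=30)))    # False
print(is_contact_case(Encounter(distance_m=1.5, duration_s=1200)))  # True
```

Even this toy version shows why the questions matter: shifting either threshold by a small amount changes who gets notified, and therefore who gets tested.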

Identifying contact cases

The identifier in most cases will be a mobile phone number unique to an individual. However, there is no guarantee that the phone will be carried at all times. In particular, if there is a level of distrust in the surveilling technology or the authorities/organisations carrying out the surveillance, individuals may opt to disassociate themselves from the identifier, that is, the mobile phone SIM card, e.g., by switching off the phone or leaving it at home.

This leads to the question of resilience: what could interfere with the intended operation, and how that interference can be mitigated to achieve the required level of reliability. In the case of Covid-19 contact tracing, one could argue that it is in the public and individual interest to support attempts to track down infectious individuals, making deliberate interference unlikely. However, this relies heavily on trust, which can be undermined by a lack of transparency about the technology and the processes used for the solution.

Trust

If ethics and trust are so important for acceptance and therefore reliability, what needs to be done to make systems trustworthy? The system provider needs to have answers to at least the following questions:

  • What is the provenance of the data and algorithms? Some of the systems considered for Covid-19 tracking have military or counter-terrorist origins. Governments might not be willing to disclose this and may even fear that the widespread use of these tools could lead to increased distrust in the surveillance state.
  • Are the systems compliant with legal requirements, such as the DPA 2018 and GDPR? These are legally binding in the UK and the EU, but some individual rights can be overridden in the national interest. The government needs to work towards a relationship of trust with the people by proactively being transparent in all cases of comprehensive surveillance. “Openness is fundamental to the political health of a modern state”, a motto for the introduction of the Freedom of Information Act 2000, needs to be revived for modern surveillance.
  • What are thresholds of circumstances under which privacy may be overridden in the name of the greater good or individual wellbeing and in what context is this acceptable?
  • Is the solution based on full intelligence or is an anonymised contact-tracing approach used? Is it imposed on us or are we asked to voluntarily cooperate?
  • How reliable is the data upon which decisions are made? Telecom data is known to be flaky and to a certain extent inaccurate. What is done to mitigate these issues?
  • How will the solution help prioritise in the presence of multiple parties competing for resources?

The above questions will have a profound effect on the uptake of any proposed solution for smart social distancing in the absence of mandatory use, as is the case in China.

Even the transparent approach taken by Singapore with the TraceTogether app has not led to sufficient uptake to make the solution work reliably. The app exchanges proximity information whenever the Bluetooth Received Signal Strength Indicator (RSSI) detects another device with the TraceTogether app installed and records – for up to 3 weeks – the proximity and duration of an encounter without collecting location data.
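The kind of on-device log described above – proximity and duration but no location, with records pruned after three weeks – might look roughly like the following sketch. All names here are assumptions for illustration; this is not the actual TraceTogether protocol or data format.

```python
import time
from collections import deque

RETENTION_S = 21 * 24 * 3600  # records kept for up to 3 weeks, then discarded

class EncounterLog:
    """On-device log of Bluetooth encounters: signal strength and duration only,
    no location data. A hypothetical sketch, not the real app's implementation."""

    def __init__(self):
        # Each record: (timestamp, peer_id, rssi_dbm, duration_s)
        self._records = deque()

    def record(self, peer_id: str, rssi_dbm: int, duration_s: float, now=None):
        now = time.time() if now is None else now
        self._prune(now)
        self._records.append((now, peer_id, rssi_dbm, duration_s))

    def _prune(self, now):
        # Drop anything older than the retention window.
        while self._records and now - self._records[0][0] > RETENTION_S:
            self._records.popleft()

    def export(self, now=None):
        """The data a user could voluntarily share with health authorities."""
        self._prune(time.time() if now is None else now)
        return list(self._records)
```

The design choice worth noting is that retention is enforced on the device itself: even the voluntary export can never contain more than the last three weeks of encounters.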

A positively tested user is then asked to assist the Ministry of Health (MOH) in mapping out their recent activity, with the option of granting the MOH access to their TraceTogether Bluetooth proximity data to contact people who had close contact with the infected individual in the 14 days prior to testing positive. This allows the MOH to provide timely guidance and care.

A similar approach driven by Europe is PEPP-PT (Pan-European Privacy-Preserving Proximity Tracing). Any participating app has to meet a number of requirements:

  • Reliable procedures for proximity measurement across popular mobile operating systems and devices
  • Enforcement of data protection, anonymisation, GDPR compliance, and security
  • International interoperability
  • Feasible technology that does not require new specialised IT infrastructure
  • Security and interoperability certification of local implementations using the PEPP-PT mechanisms.

It remains to be seen whether this initiative will have a greater success rate than TraceTogether in Singapore and how a similar initiative in the UK will work alongside it. Regardless of these technology solutions, authorities will be faced with the additional question of how we legally treat cases of proof of infection versus mere evidence of infection.

Shining a light

All of the above questions about reliability, responsibility, and resilience need to be answered for all systems employing any kind of automated or data-driven/AI-based decision making, whether they are used for a pandemic or not.

Business processes ought to be led by these questions, and transparency should play a key role in re-defining these processes as well as all computationally-assisted tools.

The same holds, of course, for political and societal processes. We need to achieve human-centred design, development, and operation of the data-driven tools used every day to ease our lives (e.g., search engines, recommender systems, electronic maps, personal communication, personal banking), to drive business (e.g., targeted advertising, B2B, electronic contracts, blockchain, business communication and banking, stock market solutions), and to work together towards a common good (e.g., healthcare, law enforcement, community support, leisure and culture).

But what does transparency mean to different stakeholders and how can it be conveyed in a meaningful way? For end users, the goal should be to develop a food-labelling-type system to convey the ‘ingredients’ and processes of data-driven solutions. Data and AI can and should be used for the good of individuals, organisations, and society, but the success will ultimately be reliant on the approach to transparency in the design and deployment of such systems.

This includes any approaches to tackling the Covid-19 pandemic nationally or internationally. Smart social distancing supported by contact tracing applications can flag people at risk and create priority lists based on a history of meetings with other people. Even so, this approach still depends on human contact tracers to be effective in the fight against the coronavirus. The human resources per capita that China was able to swiftly transfer to the most affected regions are beyond what the UK would have at its disposal.

Ultimately, the success of contact tracing in the UK will depend on the trust instilled by transparency, the proportionality of the technology used, as well as legality, ethics, and interoperability with other European and global solutions. Governments and global players like Google and Apple have started working on joint solutions. We all have a social responsibility to contribute to these efforts to contain the spread of the virus that – just like data – does not obey borders.

Experts featured:

Dr (Berndt) Bertie Müller

Senior Lecturer in Computer Science, Swansea University
Chair, Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB)
