The Covid-19 pandemic set off a global quest to harness technology to keep us connected and curb the spread of the virus. Schools, offices and vital public services adapted to lockdowns by going online. Governments worldwide tried to chart the virus’ trajectory from broad swaths of personal data. In some respects, the pandemic affirmed technology’s indispensable role in our daily lives. But it also exposed and widened longstanding gaps in protecting human rights, both online and offline.

When the pandemic hit, a number of governments rolled out or extended surveillance programs of unprecedented scale and intrusiveness, in the belief, however misguided, that perpetual monitoring would help restrict people’s movements and, in turn, the spread of the virus. In Moscow, the government tapped the city’s vast network of facial recognition cameras and a “social monitoring” tracking app to enforce quarantine orders. In China, an app uses opaque algorithms to assess a user’s risk of contagion and restrict their movements. Bahrain’s “BeAware” app, mandatory for people in self-isolation or quarantine, sends location information to a central government server and alerts authorities if a person strays from their phone.
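These quarantine-enforcement tools share a common, centralized pattern: the device continuously reports the user’s whereabouts to a government server, which flags any deviation to the authorities. The sketch below illustrates that pattern; it is a simplified assumption of how such a system might operate, not the actual implementation of any of these apps, and the server class, radius threshold, and distance math are all hypothetical.

```python
# Simplified sketch of the centralized pattern shared by quarantine apps.
# All names, thresholds, and the distance math are illustrative assumptions,
# not the implementation of any real app.
import math

QUARANTINE_RADIUS_M = 50  # hypothetical allowed distance from home

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance using an equirectangular projection."""
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6_371_000
    dy = math.radians(lat2 - lat1) * 6_371_000
    return math.hypot(dx, dy)

class CentralServer:
    """The server sees every location fix for every enrolled person."""
    def __init__(self):
        self.homes = {}   # user_id -> (lat, lon) registered quarantine address
        self.alerts = []  # violations forwarded to authorities

    def register(self, user_id, lat, lon):
        self.homes[user_id] = (lat, lon)

    def report_location(self, user_id, lat, lon):
        home_lat, home_lon = self.homes[user_id]
        if distance_m(home_lat, home_lon, lat, lon) > QUARANTINE_RADIUS_M:
            self.alerts.append((user_id, lat, lon))  # authorities notified

server = CentralServer()
server.register("user-42", 26.2285, 50.5860)
server.report_location("user-42", 26.2285, 50.5860)  # at home: no alert
server.report_location("user-42", 26.2400, 50.5860)  # ~1.3 km away: alert
print(server.alerts)
```

The rights problem is visible in the structure itself: the server accumulates a complete location history for every enrolled person, whether or not they ever violate quarantine.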

Elsewhere, the specter of mass surveillance sparked robust discussions about how to mitigate the most serious human rights risks of using digital surveillance to slow the spread of Covid-19. Governments, companies, and some privacy advocates sought to develop “privacy preserving” approaches to digital contact tracing.

In April 2020, Google and Apple launched software to support mobile apps that would attempt to track exposure to Covid-19-positive users through Bluetooth signals, drawing inspiration from an open-source protocol designed by a Europe-based collective of academics and technologists. Both companies pledged to provide robust protection for user privacy and security. Under their rules, contact tracing apps would only be approved if they are developed by a government entity or its agents, are voluntary, don’t collect location data, and keep most of the minimal data they do collect away from centralised databases. The fact that private companies were setting these terms underscored tech companies’ immense power.
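At a high level, the decentralized design works by having each phone broadcast short-lived random identifiers over Bluetooth and record the identifiers it hears nearby; when users test positive, only their own broadcast keys are published, and every other phone checks for matches locally. The sketch below illustrates this matching logic under simplified assumptions: the key schedule, identifier derivation, and class names are illustrative, not the real Exposure Notification API.

```python
# Simplified sketch of decentralized Bluetooth exposure notification.
# NOT the actual Google/Apple Exposure Notification API: the key schedule,
# intervals, and derivation here are illustrative assumptions only.
import hashlib
import secrets

INTERVALS_PER_DAY = 144  # e.g., a new rolling ID roughly every 10 minutes

def derive_rolling_ids(daily_key: bytes) -> list[bytes]:
    """Derive the short-lived random IDs a phone broadcasts in one day."""
    return [
        hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
        for i in range(INTERVALS_PER_DAY)
    ]

class Phone:
    def __init__(self):
        self.daily_key = secrets.token_bytes(16)  # never leaves the device
        self.heard_ids: set[bytes] = set()        # observed IDs, stored locally

    def broadcast_ids(self) -> list[bytes]:
        return derive_rolling_ids(self.daily_key)

    def hear(self, rolling_id: bytes) -> None:
        self.heard_ids.add(rolling_id)

    def check_exposure(self, published_keys: list[bytes]) -> bool:
        """Match locally against daily keys published by positive users."""
        for key in published_keys:
            if any(rid in self.heard_ids for rid in derive_rolling_ids(key)):
                return True
        return False

# Two phones cross paths: Bob's device records one of Alice's rolling IDs.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_ids()[0])

# Alice tests positive and consents to publishing her daily key.
# Bob's device detects the match locally, without a central contact database.
print(bob.check_exposure([alice.daily_key]))  # True
```

The design choice that matters here is that the contact graph never leaves individual devices; the server only ever sees the keys of users who test positive and consent to publishing them.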

But these design choices demonstrate a limited understanding of how marginalized groups use technology, and the threats they face online. The assumed logic of mobile tracking – that users are uniquely linked to their phones – is out of step with the experience of people who share devices, cannot afford regular connections, or suffer frequent internet outages, such as migrant workers or refugees. Activists and technologists in Bahrain and the Philippines also told Human Rights Watch that the promised privacy safeguards may not fully mitigate the security vulnerabilities on cheap mobile devices, particularly in countries with pervasive digital surveillance. 

Today, the evidence is mixed on how much contact tracing apps have contributed to preventing the spread of Covid-19. In Australia, the government rolled out a Covid-19 tracking app that also relies on Bluetooth scanning, but with fewer privacy safeguards than Apple and Google require. The government configured the app to provide some user data to public health agencies, reasoning that this was critical for contact tracing. Health officials have since admitted that the app, which cost US$12.4 million to develop, is not as effective as they hoped in uncovering close contacts and stemming outbreaks. Lower-tech solutions, like handwritten records, have proven more useful.

In its first nine months, the app identified 17 close contacts in New South Wales that manual contact tracing had missed, and a further 544 contacts in one outbreak; of these 544, two tested positive. The government has not published usage data and denied a newspaper’s freedom of information request on the grounds that it would compromise public safety.

The rush to embrace digital contact tracing has also opened a Pandora’s box of privacy and security woes.

Nearly a third of the over seven million Australians who downloaded the country’s app are using older versions with bugs and security flaws. In Singapore, the government admitted that data collected through its “TraceTogether” contact tracing app and Bluetooth token is also available to police for some criminal investigations, despite earlier promises to the contrary. The government later passed a law formalising police access to TraceTogether data for certain criminal offenses. Other contact tracing technologies have left the sensitive personal data of people in Indonesia and the Philippines, including their location, exposed to the government and unauthorised users.

Government reliance on risky, intrusive, and unproven technologies has also threatened access to social protection measures. In a bid to prevent benefits fraud, 21 unemployment agencies in the United States are reportedly using facial recognition software to screen applicants for unemployment insurance and other benefits meant to support people during the pandemic. This forces applicants to give up their right to privacy in exchange for lifesaving support. People have also complained on social media that difficulties in verifying their identity using the software are delaying their claims.

These setbacks are the latest in a string of technology-related problems marring access to benefits in the US. Since the beginning of the pandemic, many people have reported frequent website glitches and shutdowns, excessive wait times on state unemployment agency helplines, and password reset protocols that force them to wait for their login credentials to arrive by mail. Delays in obtaining benefits have forced families to cut back on food, fall behind on their bills, and skip essential medical care.

Migrating social protection systems online has also exacerbated the effects of the digital divide. The United Kingdom has 5.3 million adults whom the government considers “internet non-users.” People on low incomes or living with disabilities, older people, women, and some ethnic minorities are less likely to have internet or smartphone access. Despite these stark inequalities, the government requires most people to apply online for Universal Credit, the country’s social security payment system, and offline alternatives are extremely limited. As a result, people without internet access or digital literacy skills are struggling to prove their eligibility online.

The global effort to “build back better” includes calls to embrace technology and digitization that are likely to grow louder. But without real reflection on the rights implications, there is a real risk of deepening inequality and of vesting governments and the private sector with considerable power to coerce and control people.

It’s important to ask when technology adds value, and for whom. If technology can indeed aid pandemic response and recovery, it is essential to have open, inclusive, transparent, and honest public discussions about the kind of public digital infrastructure people need to thrive. This will require a reset: away from viewing technology as a tool of coercion and control, and away from treating the pandemic as an opportunity to harvest copious amounts of people’s data. Any deployment of technology should be rooted in human rights standards and centered on enabling people to live a dignified life.