Protecting digital human rights in the post-COVID-19 era

By Masja Zandbergen

Digital data has expanded beyond imagination, and can create some serious issues for privacy and human rights, especially when it comes to the sensitive health data now being used in the battle against Covid-19. The amount of data in the world will reach about 40 trillion gigabytes in 2020, or more than 5,000 gigabytes for every person on the planet. That’s five times the capacity of the average personal computer… and much of it is about YOU.

As with all rapidly developing trends, the lack of a decent regulatory framework creates new risks and new opportunities for all the stakeholders involved. At Robeco, we recognized many years ago that data privacy posed a material business risk for internet and telecom companies. Some business models are built entirely around gathering, using or selling consumer data. Cybersecurity and the social risks of artificial intelligence (AI) are therefore further issues we focus on when it comes to digital human rights.

Covid-19 response tests digital human rights

In the Covid-19 crisis – as with many other material ESG topics, by the way – digital human rights are being put to the test. On the one hand, apps to track down infected people can save lives and help unlock societies, in turn helping the economy. On the other hand, if such tracking is not done carefully, people’s privacy is put at risk.

Some people might argue that privacy should be sacrificed for this purpose. However, AccessNow, a leading not-for-profit organization in this space, claims that strong digital rights can actually help improve public health.

We fully support that notion. If digital rights are not properly protected, people will refuse to voluntarily adopt apps that track down infected people, and such apps will then only succeed if they are made mandatory. In many countries this will not be accepted, leaving governments unable to successfully implement digital health tools.

Issues around digital rights

But there are also other, less obvious issues. Health data is one of the most sensitive types of data that exists, so data protection is paramount. Tracking health data can be necessary for authorities to respond to a fast-moving outbreak. However, mismanagement of this data can lead to mistrust and lower uptake of digital health tools.

Another relevant topic is surveillance. Governments may use this crisis to implement surveillance tools on a wide scale that could be considered controversial. Facial recognition, for example, is already playing a role in the surveillance, monitoring and control of people’s movements in the coronavirus outbreak. China is using it to track infected individuals and identify those not wearing masks.

In Moscow, Russian authorities are reportedly using surveillance cameras, facial recognition systems and geolocation to enforce the city’s quarantine regime and track infected individuals and their family members. Even though we understand that individualism is valued more in Western countries than elsewhere, we are of the opinion that these practices run the risk of severely violating the basic human right to privacy without a clear benefit to the people.

Censorship and disinformation

ICT companies face an increasing number of orders from governments around the world that seek to restrict access to services and to disrupt networks. The consequences of disruptions include restricting internationally recognized rights to free expression, preventing access to vital emergency, payment and health services, and disrupting contact with family members and friends.

In some cases, these mandates pose an additional risk of human rights breaches when they restrict the free flow of information in the run-up to elections, or are used to target particular regions, districts or ethnic groups. In this crisis we have seen authorities in China, Iran and even the US trying to control the information that was shared on social media by journalists and health officials.

On the other hand, we have seen a lot of misinformation being published, ranging from breath-holding tests to all kinds of medicines being hailed as the solution to this pandemic. In response, large platforms such as Facebook, Google and Twitter are proactively pointing consumers to reliable sources such as health authorities.

What can companies do?

In the absence of a good regulatory framework, we see digital human rights as presenting risks to the companies we invest in. Exposure to issues such as data privacy, cybersecurity or the social impacts of AI can significantly harm their business, so good management of these issues can set companies apart.

Therefore, in our fundamental investment processes, we systematically analyze how companies are managing these issues. To assess these risks, we examine not only the strength of their editorial and information security policies, but also their processes and outcomes when it comes to breaches and fines. Some companies are more transparent about these outcomes than others.

Combining this analysis with other material issues such as corporate governance and human capital management, we assess the impact on the value drivers of the companies in these sectors. Data privacy, cybersecurity and the social risks of artificial intelligence are often difficult to quantify in terms of revenue or cost drivers. So, based on our analysis, we mostly adjust the company’s cost of capital as an estimate for reputational, legal and business risks.

The tip of the iceberg

In our investment analysis and engagement efforts, we focus on imminent risks and challenges, such as algorithmic bias leading to discrimination, or breaches of privacy. We also look at the effectiveness of corporate governance around digital human rights, the implications for human capital, and the opportunities that AI offers. All these issues are financially material to a company’s business.

We ask companies to build knowledge at board level, and to implement robust policies and processes that respect human rights in this increasingly digitized world. We also ask them to be transparent when they encounter issues or breaches. And let’s be clear: an app that could help track down and contain Covid-19 is of course a good idea, but its success will depend on how well human rights are protected.

———-

Protecting digital human rights in the post-COVID-19 era was authored by Robeco’s head of sustainability integration, Masja Zandbergen.
