Wednesday, May 13, 2020

It Is Becoming Much Harder to Access Mental Health Support Anonymously


Getty Images Plus

This article is part of Privacy in the Pandemic, a Future Tense series.

The COVID-19 pandemic isn’t just a physical health crisis—it’s also a mental one. Millions of people face the prospect of infection and death, as well as job losses, social isolation, and a fracturing of community life on a massive scale.

Spikes in suicide-related calls in the U.S. reflect a growing sense of anxiety and dread as the virus spreads. And some health workers are warning of large-scale psychological trauma among both health care providers and patients as hospitals become overwhelmed. While it would be simplistic to suggest that social distancing itself causes increasing rates of suicide or drug use, online support for people in distress is about to become more important than ever. This urgency is reflected in the unprecedented waiving of rules and regulations for telehealth by the U.S. government on March 27. It’s likely that infrastructure for digital mental health care will accelerate rapidly in the coming weeks and months—as well it should. But this acceleration also comes with steep risks.

Mental health data is incredibly personal, and much of the current internet ecosystem is designed to vacuum it up and monetize it. If you browse an addiction support website, could your browser history be used against you? And if a doctor prescribes you a mental health app, who could access the data it generates? These are valid questions that have been percolating among the experts who design and research these systems, as well as mental health activists and service user advocates. Now, as the technology suddenly scales up, is the ideal time to ensure that apps and platforms are designed to the highest ethical standards and in ways that respect users’ digital rights.

Mental health data and the internet already have a troubled history. Unprecedented volumes of sensitive personal information exist today, flowing through a digital ecosystem with little oversight and few restrictions. This dilution of individual privacy did not emerge through broad social consensus. Instead, it arrived suddenly with the expansion of a vast and lucrative trade in human tracking. For mental health and addiction services, these developments have serious—and rarely discussed—side effects.

In 2019, Privacy International analyzed more than 136 popular webpages related to depression in the European Union. The websites were chosen to reflect those that people would realistically find when searching for help online. The authors found that about 98 percent of the pages contained a third-party element, which enabled targeted advertising from large companies like Google, Amazon, and Facebook. While there was no evidence of specific abuse, knowledge of a user’s distress could allow companies to advertise specific treatments, services, or financial products. Most websites failed to comply with the EU General Data Protection Regulation, which is often hailed as a gold standard for data protection law. A follow-up study this year found that although some website operators reconsidered their practices and now limited the data they shared, overall, “very little ha[d] changed.” Most websites still appear to share information about users’ visits with hundreds of third parties with no clear indication of the potential consequences.

Privacy International’s investigation highlights a striking fact: It is becoming harder to access mental health support anonymously. Once a core tenet of mental health care in general—and addiction support in particular—the ability to access services discreetly may well become the exception.

Apps have also raised concerns about data misuse. In 2015, the National Health Service of England closed its App Library after a study found that 28 percent of the apps lacked a privacy policy and one even transmitted personally identifiable data that its policy claimed would be anonymous. Today, there are more than 10,000 mental health apps available worldwide. They constitute the largest group of “condition-specific” apps in the overall health app market. Some mental health apps, commendably, use transparent platforms and open source code. But two recent studies found that just under half of the popular mental health apps surveyed had a privacy policy that informed users about how and when personal information would be collected or shared with third parties. There are other considerations here as well. For instance, some researchers have criticized mental health apps for unduly medicalizing what are normal responses to stressful situations. The tendency to individualize problems that are social in nature is particularly important to consider in relation to COVID-19.

Online mental health resources can still support anonymous help-seeking. Confidential access to trustworthy online resources may be particularly useful for people whose biggest privacy concern lies within their own home or community. Examples could include victims of family violence, LGBTQ young people, or individuals in tightknit, rural, or religious communities. But despite these opportunities, for most of us, our lives—both online and off—are being drawn into an opaque set of algorithmically determined and often profit-driven data flows.

For example: In 2017, Australian media reported that Facebook’s algorithmic systems could target Australians and New Zealanders as young as 14 and help advertisers exploit them when they are most vulnerable. This included identifying when users felt “worthless” and “insecure,” and “moments when young people need a confidence boost.” Facebook denied that it lets advertisers target children and young people based on their emotional state, and it has maintained a policy against advertising to vulnerable users, but it clearly has the capacity to do so. Others will be less scrupulous. Predictive models built by data-targeting companies may well be using distress and ill health to profile and prey on users.

Another example? In the U.K., insurers have reportedly denied coverage to people with depression and anxiety. According to advocates like the Consumer Policy Research Centre, citizens “may well start to avoid accessing important healthcare services and support if they feel that companies or governments cannot be trusted with that information, or that they may be disadvantaged by that information in future.” Location- and web-tracking, they warn, “may provide insights into the frequency and types of healthcare services that an individual might be accessing, regardless of whether formal medical records are being accessed.”

These developments spotlight the need for proponents and users of digital mental health care to remain vigilant amid the growing surveillance economy. From “digital pills” to “machine counsellors,” there is an expanding array of digital efforts to address mental distress. While there are enormous opportunities, there are also serious risks—and these risks need to be openly addressed.

First, mental health websites and apps must stop treating the personal data of their users as a commodity. Websites dealing with such sensitive topics should not be tracking their users for marketing purposes. Where needed, data protection and privacy laws should prohibit the commodification of sensitive user data concerning mental health and improve data governance standards. Strengthening nondiscrimination rules, in areas like insurance and migration, can also prevent harms caused by leaked, stolen, or traded mental health data. Efforts must also be made to actively involve those most affected, such as mental health service users and their representative organizations, in the development of online tools to alleviate distress.

Although the burden should fall on websites and app developers to manage data responsibly, individuals can take several precautions when seeking support. These include blocking third-party cookies in your browser, using ad blockers and anti-tracking add-ons, and checking whether a particular mental health app or website is trustworthy.
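For readers who want a rough sense of what a page loads behind the scenes, here is a minimal sketch in Python, using only the standard library, that fetches a page’s HTML and lists the third-party hosts referenced by its script, iframe, image, and link tags. The example.org address is a placeholder rather than a real support site, and this static scan will miss trackers that are loaded dynamically, so it is no substitute for a browser’s developer tools or a dedicated anti-tracking extension.

# Minimal sketch (not a vetted tool): list the third-party hosts that a
# webpage asks your browser to contact, by scanning its HTML for external
# script, iframe, image, and link sources. The URL below is a placeholder.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class ThirdPartyFinder(HTMLParser):
    """Collect hosts referenced by src/href attributes of embedded elements."""

    TRACKED_TAGS = {"script", "iframe", "img", "link"}

    def __init__(self, first_party_host):
        super().__init__()
        self.first_party_host = first_party_host
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag not in self.TRACKED_TAGS:
            return
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                # Relative URLs have no host and stay first-party, so skip them.
                if host and host != self.first_party_host:
                    self.third_party_hosts.add(host)


def list_third_parties(url):
    # Some sites refuse requests without a browser User-Agent; this sketch
    # does not try to work around that.
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    finder = ThirdPartyFinder(urlparse(url).netloc)
    finder.feed(html)
    return sorted(finder.third_party_hosts)


if __name__ == "__main__":
    # Placeholder address; substitute the support site you want to inspect.
    for host in list_third_parties("https://example.org/"):
        print(host)

Running it against a typical commercial health site will usually print a list of advertising and analytics domains; a page that prints nothing at all is at least not embedding third-party resources in its initial HTML.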

When help-seeking is leveraged against a person’s interest, people will be less likely to seek help. For better or worse, the COVID-19 crisis is pushing more of our lives online. Now is the time to ensure people can access online support safely and discreetly. Efforts to promote innovation in digital mental health must feed into broader efforts to create an open and transparent internet. This is in the interest of people experiencing distress, the professionals who assist them, and the general public, who benefit from a basic guarantee of social security.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.



from Slate Magazine https://ift.tt/3btvGGX
