Project Description

Digital practices in mental health and crisis support are expanding rapidly, and urgent public attention is needed to make sense of these developments. New practices include:

  1. ‘Digital phenotyping’, in which machine learning is used to analyse physiological and biometric data gathered by smartphone;
  2. AI-based suicide alerts, which involve analysing social media posts to deploy first responders during apparent crises;
  3. The 10,000+ ‘mental health apps’ currently available, the most popular of which ‘fail spectacularly at privacy and security’ according to a Mozilla Foundation report;
  4. ‘Digital pills’, which combine pharmaceutical pills with sensor and tracking technology;
  5. Monitoring and surveillance of students in schools and universities to detect distress, mental health crises and disability;
  6. Online self-assessment and self-diagnostic tools, often with hidden data-selling practices.

As with other areas of technological change, these developments need to be governed in ways for which there may be no precedents. Even where precedents exist, it may not be immediately clear how accountability can be enforced, or whether existing or proposed tools are appropriate.


This report emerged from a two-year exploration conducted throughout 2020 and 2021. The work was undertaken by the authors, who hold various forms of expertise in data ethics, media studies, programming, policymaking, law, engineering and so on, and most of whom have had firsthand encounters with mental health services, distress or disability. Piers Gooding, who co-ordinated author input, received funding as a Mozilla Fellow in 2020. Piers and Simon Katterl led the drafting of Parts 1 and 2 of the report, with advisory input from their co-authors. The report recommendations were jointly and equally authored.

Our reflections on technological experiments in mental health and crisis support aim to promote responsible public governance of algorithmic and data-driven systems in these contexts. The report charts the expansion of these technologies and asks how they can be used responsibly, when they should be permitted, and when they should be discouraged or even forbidden. Emphasis was placed on the knowledge of the groups most affected, particularly people who have experienced profound distress, mental health crises or psychosocial disability, a group that has often been excluded from the development of digital, data-driven technologies in mental health and crisis support settings.

Importantly, this work began as an exploration of legal and regulatory issues in the EU, US and Australia, so our focus is confined largely to high-income countries. We give some attention to activities in middle- and low-resource settings, where some similar issues are emerging, but we could not, and would not presume to, discuss this topic substantially; it requires serious and sustained attention led by people in those countries (see e.g. Sachin Pendse et al. 2022).

This resource is meant for diverse audiences, including advocates and activists concerned with mental health and disability; service users, those who have experienced mental health interventions, and their representative organisations; clinical researchers; technologists; service providers; policymakers; regulators; private sector actors; academics; and journalists. Algorithmic and data-driven technologies will continue expanding in mental health and crisis support work. We hope our efforts here help ensure that data-driven technologies are used selectively to augment, rather than threaten, co-operative relationships of support and care.