Description

These broad recommendations are based on the discussion in our report.*

ONE

The well-documented negative impacts of algorithmic and data-driven technology on people in extreme distress, persons with psychosocial disability, people with lived experience of mental health issues and so on, need to be openly acknowledged and rectified by governments, business, national human rights institutions, civil society and people with lived experience of distress and disability working together.1

TWO

Authentic, active and ongoing engagement with persons with lived experience of distress and disability and their representative organisations is required at the earliest exploratory stages in the development, procurement and deployment of algorithmic and data-driven technology that directly impacts them. This engagement is required under the Convention on the Rights of Persons with Disabilities, and is key to technology being a force for good in the mental health context. Instead of more technology ‘for’ or ‘about’ distressed and disabled people and the collection of vast amounts of data to be fed into opaque processes, these groups themselves should be steering discussions on when and how emerging technologies should be integrated into mental health and crisis responses – if at all. True partnership and engagement with people with lived experience should include compensation for their time and genuine decision-making power that counteracts tokenism and minimal involvement.

THREE

‘Techno-solutionism’,2 in which digital initiatives in the mental health context are presented as self-evidently virtuous and effective and as a simple fix to the complex issues of human distress, anguish, illness and existential pain, must be resisted. Proven and potential harms must be squarely acknowledged, and so must the fact that many claimed benefits remain unproven. Technology is not neutral. When new technologies are presented as technocratic and apolitical, the significant role of human decision-making, power, finance and social trust is overlooked, when it should be part of public discussion. Fundamental questions must be asked as to whether certain systems should be built at all, whether proposals are technically feasible (or merely unrealistic and over-hyped) and, if they are to be pursued, who should govern them.

FOUR

Given the limited (and sometimes highly limited) evidence base for many algorithmic and data-driven technologies in the mental health context, standards of scientific integrity are required, developed with the active involvement of people with lived experience and disability, to serve as a mechanism for consensus. This involvement can help limit the many sensational and misleading claims about what AI and other algorithmic technology can achieve, and curb their use as cheap alternatives to well-resourced face-to-face support. Government funding for digital initiatives in the mental health and disability context should be conditional on stringent evidence of safety and efficacy, and should accord with disability-inclusive public-procurement standards.

FIVE

There should be an immediate cessation of all algorithmic and data-driven technological interventions in the mental health context that have a significant impact on individuals’ lives and are imposed without the free and informed consent of the person concerned. Regarding algorithmic forms of diagnosis or proxy-diagnosis, the consequences of being diagnosed and pathologised in the mental health context, whether accurately or not, are often profound. Such measures should never be undertaken without the person’s free and informed consent. Among other things, informed consent processes should provide explicit details of data safety and security measures, and clarify who will monitor compliance.

SIX

Governments, private companies, not-for-profits and so on must, at a minimum, eliminate forms of mental health- and disability-based bias and discrimination from algorithmic and data-driven systems, particularly in areas such as employment, education and insurance. Such steps should extend to preventing discrimination against people who are marginalised across intersecting lines of race, gender, sexual orientation, class and so on. Those facing discrimination must have access to an effective and accessible remedy, such as a clear avenue for complaints and legal review.

SEVEN

Ethical standards will never be enough. Robust legal and regulatory frameworks are required that acknowledge the risks of employing algorithmic and data-driven technologies in response to distress, mental health crises, disability support needs and so on. As part of this, a legal and regulatory framework is required that effectively prohibits systems that by their very nature will be used to cause unacceptable individual and social harms. This could include:

A. mandatory, publicly accessible and contestable impact assessments for forms of automation and digitisation to determine the appropriate safeguards, including the potential for prohibiting uses that infringe on fundamental rights;

B. proportionality testing of risks against any potential benefits, ensuring opportunities to interrogate the objectives, outcomes and inherent trade-offs involved in using algorithmic systems, in a way that centres the interests of the affected groups, not just those of the entity using the system, such as a healthcare service or technology company;3

C. strengthening non-discrimination rules concerning mental health and psychosocial disability, to prevent harms caused by leaked, stolen or traded data on mental health and disability.4

EIGHT

Public sector accountability needs to be strengthened, including by adequately resourcing relevant institutions, which will be vital to addressing the dangers of private sector actors, not-for-profits and government agencies that (mis)use people’s data concerning mental health. This includes developing a state-sponsored regulatory framework that regulators are willing and empowered to enforce, as well as resources for affected people and civil society organisations to contribute proactively to enforcement. It also includes supporting the capacity-building of representative organisations of service users and persons with disabilities to effectively monitor the impact of data-driven technology on persons with lived experience of mental health crises or disability. Monitoring could include advocating for responsible and inclusive data-driven technology, interacting effectively with all key actors, including the private sector, and highlighting harmful or discriminatory uses of the technology.5

NINE

Robust civil society responses are more likely where lived experience groups, disabled people and their representative organisations connect with other activists at the intersections of race, gender, class and other axes of oppression, rather than viewing algorithmic and data-driven injustices purely through a mental health or disability lens. This could include collectives, nonprofit technology organisations, free and open source projects, philanthropic funders and activists with data practices and skills that help them more fully realise their missions. Those working for economic, social, racial and climate justice can share digital tools, resources and practices to help maximise their effectiveness and impact and, in turn, change the world.6

TEN

Interdisciplinary academic input is needed beyond disciplines such as medicine, psychology, computer science and engineering, to include researchers from the humanities and social sciences. This will help address the common presentation of algorithmic and data-driven technologies as neutral, as facilitating factual, unmediated digital processing. This technocratic framing neglects matters including the significant role of power, the social and economic underpinnings of distress, unjust macroeconomic structures and Big Tech hegemony.

ELEVEN

Steps must be taken to prevent the undercutting of face-to-face encounters of care and support, particularly where private sector interests are expanding into digitised responses to distress or care, and where governments may be pursuing digital options as cheap alternatives to well-resourced forms of support. Relations of care and support must be adequately recognised and protected. The over-emphasis on metrics and computational approaches should be resisted, in appreciation of the virtues that make for a truly human life.

*An important caveat to these recommendations is that we are not a representative body. The authors are based in high-income countries, as the initial scope of this project looked to regulatory arrangements in the EU and US. The recommendations are not meant to be exhaustive, nor should they foreclose other strategies and recommendations, particularly those of persons with lived experience of mental health crises, disabled people and their representative organisations.


  1. This recommendation draws from the 2021 thematic report on artificial intelligence and disability by the UN Special Rapporteur on the Rights of Persons with Disabilities, Gerard Quinn. Human Rights Council, Report of the Special Rapporteur on the Rights of Persons with Disabilities (UN Doc A/HRC/49/52, 28 December 2021) para 73, https://undocs.org/pdf?symbol=en/A/HRC/49/52

  2. Evgeny Morozov coined this term to describe a pervasive ideology that recasts complex social phenomena like politics, public health, education and law enforcement as “neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!” Evgeny Morozov, To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems That Don’t Exist (Penguin UK, 2013) 5. 

  3. Alexandra Givens, ‘Algorithmic Fairness for People With Disabilities: The Legal Framework’ (Georgetown Institute for Tech Law & Policy, 27 October 2019) https://docs.google.com/presentation/d/1EeaaH2RWxmzZUBSxKGQOGrHWom0z7UdQ/present?ueb=true&slide=id.p17&usp=embed_facebook

  4. Mason Marks, ‘Algorithmic Disability Discrimination’ in Anita Silvers et al (eds), Disability, Health, Law, and Bioethics (Cambridge University Press, 2020) 242 https://www.cambridge.org/core/books/disability-health-law-and-bioethics/algorithmic-disability-discrimination/AE6E6348E5513D8E8122765F5BA5D517

  5. Human Rights Council, ‘Report of the Special Rapporteur on the Rights of Persons with Disabilities’ (n 1) para 76(g).

  6. Some of this language is borrowed from the ‘Aspiration Manifesto’, https://aspirationtech.org/publications/manifesto. We call for caution about the potential misalignment of non-profit organisations with desired aims. Some have called for a break from the non-profit industrial complex, including turning toward potential alternatives such as grassroots movements and worker self-directed non-profits that aim to improve the accountability and participatory nature of social movements. Jake Goldenfein and Monique Mann, ‘Tech Money in Civil Society: Whose Interests Do Digital Rights Organisations Represent?’ (2022) Cultural Studies (online first).