Abstract
Background: The emergence of generative artificial intelligence (GenAI) presents unprecedented opportunities to redefine conceptions of personhood and cognitive disability, potentially enhancing the inclusion and participation of individuals with cognitive disabilities in society.
Objective: We aim to explore the transformative potential of GenAI in reshaping perceptions of cognitive disability, dismantling societal barriers, and promoting social participation for individuals with cognitive disabilities.
Methods: This study is a critical review of current literature in disability studies, artificial intelligence (AI) ethics, and computer science, integrating insights from disability theories and the philosophy of technology. The analysis focused on 2 key aspects: GenAI as a social mirror reflecting societal values and biases, and GenAI as a cognitive partner for individuals with cognitive disabilities.
Results: This paper proposes a theoretical framework for understanding the impact of GenAI on perceptions of cognitive disability. It introduces the concepts of GenAI as a “social mirror” that reflects and potentially amplifies societal biases and as a “cognitive copilot” providing personalized assistance in daily tasks, social interactions, and environmental navigation. This paper also presents a novel protocol for developing AI systems tailored to the needs of individuals with cognitive disabilities, emphasizing user involvement, ethical considerations, and the need to address both the opportunities and challenges posed by GenAI.
Conclusions: Although GenAI has great potential for promoting the inclusion and empowerment of individuals with cognitive disabilities, realizing this potential requires a multifaceted approach: a shift in societal attitudes, inclusive AI development practices that prioritize the needs and perspectives of the disability community, and ongoing interdisciplinary collaboration. This paper calls for close partnership with the disability community in the development and implementation of GenAI technologies, and it emphasizes the importance of proceeding with caution, recognizing the ethical complexities and potential risks alongside the transformative possibilities of GenAI technology.
doi:10.2196/64182
Keywords
Introduction
In the era of generative artificial intelligence (GenAI), traditional notions of personhood and normality are being challenged [ - ]. Technological advances are blurring the boundaries between human and machine capabilities, offering an opportunity to expand the limits of social inclusion and promote change in attitudes toward people with disabilities [ ]. As artificial intelligence (AI) systems demonstrate increasingly sophisticated cognitive abilities, they prompt us to reconsider what qualities define personhood and human intelligence. This paper examines the potential of GenAI to disrupt limiting conceptions of morality and humanity, focusing on the implications of GenAI for the social status of people with cognitive disabilities. This paper also proposes a practical toolkit for GenAI development and engineering professionals—product managers, data scientists, and developers—to help incorporate these insights into their work.
Cognitive disability refers to a wide range of impairments affecting cognitive functions such as learning, problem-solving, judgment, communication, and social interaction [
]. Examples of cognitive disabilities include intellectual disability, attention-deficit/hyperactivity disorder, autism spectrum disorders, specific learning disabilities (such as dyslexia), and brain injuries (such as traumatic brain injury or stroke) [ - ]. It is important to emphasize the diversity of individuals with cognitive disabilities, each possessing a unique combination of strengths, impairments, and potential, which means that cognitive disabilities require personalized approaches to intervention. While recognizing the diverse nature of cognitive disabilities and the need for tailored solutions, this paper focuses on the general potential of GenAI to improve the lives of people across the spectrum of cognitive disabilities.
Engaging with the integration of GenAI and individuals with cognitive disabilities is a new direction in the use of technology in the field of disability. The potential for AI to support and empower this population lies in its ability to perform cognitive tasks such as reasoning, planning, decision-making, and communication—areas that are challenging for people with cognitive disabilities [
- ]. The ability of AI to remove barriers and open new paths for inclusive and equitable participation makes it especially relevant for this population [ ]. An in-depth analysis of this ability requires examining the philosophical and ethical implications of AI for conceptions of humanity and morality, questions that directly determine how society views and accommodates individuals with cognitive disabilities. These are fundamental inquiries into the nature of intelligence, personhood, consciousness, and human agency, which largely determine the degree of participation and inclusion for this group.
Personhood and AI: An Opportunity for Paradigm Shift
The concept of personhood, which emerged as a central topic in bioethical debates surrounding topics such as abortion, stem cell research, and euthanasia, has evolved into a complex and multifaceted construct that now spans multiple disciplines [
]. Inherently normative in nature, personhood involves value judgments and ethical considerations regarding how we ought to treat and perceive others rather than merely describing observable facts. Personhood is not rooted exclusively in our biology and experiences but in our essence and identity. This identity, however, is not formed in isolation; it is dynamically shaped through an intricate interplay between self-perception, the perception of others, and interaction with them. Rosfort [ ] argued that this conceptualization of personhood reveals its profoundly relational and social nature, demonstrating how identity and the perception of self-worth are inextricably woven into interactions and the broader human context.
The concept of “personhood” has long served as a central criterion in bioethical discussions, determining which entities deserve moral consideration and rights [
]. As a result, this notion has also functioned as a mechanism of exclusion, denying basic rights and opportunities to those deemed cognitively “abnormal” [ ].
For example, historically, people with cognitive disabilities were excluded from the public sphere and denied the right to make decisions for themselves [
, ]. Even today, despite significant progress in discourse and work based on the “social model” (an approach that views disability as created by societal barriers rather than by individual impairments alone) [ ] and the “minority group model” (which recognizes people with disabilities as a marginalized minority group) [ ], exclusion still exists in various aspects of life. People with cognitive disabilities still face barriers to accessing higher education and vocational training because of preconceived notions about their abilities [ ]. Despite having relevant skills, they face difficulties securing meaningful employment and career advancement opportunities because of social stigma and prejudice [ ]. Participation in political or civic decision-making processes, such as voting or community involvement, is limited by discriminatory perceptions of the competence of individuals with cognitive disabilities [ ]. They are also excluded from leisure, social, and cultural activities because of a lack of access or restrictive attitudes toward their participation [ ].
These exclusion examples illustrate how, as a result of conceptualizing what constitutes a person of merit, individuals with cognitive disabilities are often excluded in the deepest and broadest ways from society. This mechanism is difficult to identify because it operates through our language and the most basic organized mechanisms of any society: law, the health care system, the education system, and more [
].
Breaking entrenched concepts and perceptions of personhood is challenging because they are deeply embedded in societal structures and norms, but emerging technologies are beginning to challenge these long-held beliefs. GenAI offers an opportunity to challenge prevailing conceptions of personhood by demonstrating skills previously considered unique to humans [
, ]. Although these capabilities are not yet perfect in AI, their very existence challenges the idea that such traits belong exclusively to the “normal” cognitive function of humans and that social participation is conditional on the presence of these abilities.
The revolutionary potential of GenAI invites us to reexamine the criteria for membership in the moral community and expand them beyond limiting standards. Instead of relying on a narrow model of “correct” cognitive abilities as a prerequisite for rights and participation in society [
], we may adopt, with the assistance of GenAI, a more inclusive view that recognizes human diversity and the inherent value of all individuals, regardless of their abilities [ ]. By showcasing the potential of machines to exhibit complex cognitive traits, GenAI challenges the notion that certain abilities are essential for personhood and moral status. It initiates a discourse on the need to redefine our understanding of what it means to be human and to have moral worth, moving away from a focus on cognitive benchmarks and toward a more encompassing vision of human dignity and rights [ , ].
Although AI presents opportunities to challenge our understanding of personhood, there are legitimate concerns about its potential to exacerbate exclusion and narrow definitions of “normal” human cognition. The inherent biases in AI systems, stemming from their training data and algorithmic design [
- ], risk reinforcing and amplifying existing societal prejudices [ ]. As AI increasingly influences decision-making processes in areas such as employment, health care, and criminal justice, there is a danger that it could lead to more stringent and narrow criteria for what constitutes “normal” human functioning. This could inadvertently heighten barriers for individuals with cognitive differences, further marginalizing them from full societal participation [ ]. Moreover, as AI systems become more sophisticated in mimicking certain human cognitive abilities, there is a risk that societal expectations of human performance might be unrealistically elevated, potentially creating an even more exclusionary standard of “normal” [ ]. Thus, while AI challenges our notions of personhood, it simultaneously risks entrenching and exacerbating existing forms of exclusion, highlighting the critical need for ethical AI development and deployment that considers diverse human experiences and capabilities.
In the following sections, we explore 2 key areas where GenAI has the potential to drive significant change: GenAI as a social mirror and GenAI as a cognitive partner. These 2 domains highlight the multifaceted impact that GenAI can have on reshaping perceptions, removing barriers, and promoting participation of individuals with cognitive disabilities on the one hand, and exacerbating existing biases and exclusions in society on the other.
Generative AI as a Social Mirror: Opportunity and Challenge
Overview
Vallor’s [
] conceptualization of AI as a societal mirror provides a compelling framework for understanding the role of AI in reflecting and potentially amplifying societal biases, particularly concerning cognitive disabilities. This mirror metaphor can be understood as follows: just as a physical mirror reflects the image of what stands before it, AI systems reflect the data, values, and biases present in the society that created them. However, unlike a simple reflection, AI systems can amplify and distort these reflections, much as a funhouse mirror might exaggerate certain features.
This mirror effect illuminates how AI systems, trained on biased data, risk perpetuating existing prejudices against individuals with cognitive differences. AI essentially learns from and then projects back the biases inherent in its training data, potentially reinforcing and spreading these biases further. Paradoxically, this same reflective quality presents a unique opportunity to identify and address longstanding societal biases, rendering implicit prejudices explicit and subject to scrutiny. By closely examining what the AI “reflects back” to us, we can gain insights into biases that might otherwise remain hidden or unacknowledged in society.
Vallor [
] posits that AI systems in general, and GenAI systems in particular, are not merely neutral technological tools but mirrors reflecting the values, norms, and biases prevalent in human society. Given that these systems are constructed upon data and content created by humans, they inherently risk replicating and perpetuating prejudices and discrimination against marginalized groups, including people with cognitive disabilities [ , ].
A study by Gadiraju et al [
] demonstrated this mirroring effect in action. They conducted 19 focus groups with 56 participants with various disabilities who interacted with a dialog model based on a large language model. The researchers found that the model frequently perpetuated harmful stereotypes and narratives about disability. For example, the model often fixated on physical disabilities, particularly wheelchairs, while neglecting other types of disabilities. It also tended to portray people with disabilities as passive, sad, and lonely, reinforcing the misconception that disability is inherently negative. Additionally, the model sometimes produced what participants referred to as “inspiration porn,” objectifying people with disabilities as sources of inspiration for nondisabled people.
For example, if the information used to train AI systems contains stereotypical or derogatory expressions toward people with cognitive disabilities, there is a significant risk that these systems might “learn” to adopt discriminatory attitudes. The potential consequences are severe: AI systems could rank individuals with cognitive disabilities as having lower potential in employment or educational contexts, limit their access to certain services, or make biased decisions about them in critical areas such as insurance or credit [
].
When we look into the societal mirror reflected by AI, several possible human responses can be identified. One metaphorical response is “breaking the mirror,” representing human resistance to AI use and the insights it presents [
]. While this approach attempts to avoid the uncomfortable truths AI exposes, it risks missing out on the potential benefits and insights AI can offer. Another metaphorical strategy is “cleaning the mirror,” where humans attempt to eliminate biases through AI alignment processes [ ]. This approach aims to create AI systems aligned with human values and intentions, striving for a bias-free environment. However, it risks producing an artificially sterile system that fails to reflect the complexities of human cognition and interaction, potentially making AI less relevant and less capable of addressing real-world complexities.
The third and most promising approach involves using reflection as a call to action in the real world. This method requires humans to acknowledge the biases reflected by AI and use this awareness as a catalyst for societal change. It demands active engagement and concrete actions from us as humans to address these issues, both in our AI systems and in society at large [
]. This approach recognizes that if such action is taken, over time, the reflection in the AI mirror itself can change, not as a result of erasing biases in the machine as in the second option, but as a consequence of real societal change that is then differently reflected in the AI mirror.
To implement this approach specifically within the realm of AI development and deployment, we must adopt advanced techniques and ensure inclusive human involvement. As contemporary AI systems increasingly incorporate vast datasets populated from the internet, traditional methods of addressing biases through direct data manipulation, such as the “datasheets” approach proposed by Gebru et al [
], while still valuable in certain contexts, have become more challenging to implement comprehensively. This shift has led to the adoption of complementary techniques that can handle the scale and complexity of modern AI systems, such as self-supervised learning [ ] and reward modeling [ ]. Crucially, these techniques still require human decision-making at key junctures. To truly address biases and create more equitable AI systems, particularly regarding cognitive disabilities, we must ensure that people with cognitive disabilities are actively involved in these decision-making processes. This collaborative approach aligns with our third strategy, emphasizing real-world action and societal change. By critically examining the biases revealed in AI outputs and involving diverse perspectives in the development process, we can work toward creating more inclusive AI systems. This approach not only helps in developing fairer algorithms and more representative models but also contributes to broader societal change [ , ]. In this way, the AI mirror becomes not just a reflection of our current culture but a catalyst for the more inclusive society we aspire to create [ , ].
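To make this kind of critical examination concrete, the following minimal sketch shows one way a development team might audit what a model “reflects back” about cognitive disability, in the spirit of the focus-group findings of Gadiraju et al. This is our illustrative sketch rather than an established tool: the generate function, the prompt pairs, and the term list are all assumptions, and the term list in particular would need to be built and validated together with people with cognitive disabilities.

```python
from collections import Counter

def generate(prompt: str) -> str:
    """Placeholder for whatever model is under test (a hypothetical helper,
    not a specific vendor API); plug in a real completion call here."""
    raise NotImplementedError

# Paired prompts that differ only in the mention of a cognitive disability.
PROMPT_PAIRS = [
    ("Describe a colleague who is great at their job.",
     "Describe a colleague with an intellectual disability who is great at their job."),
    ("Write a short story about a student starting university.",
     "Write a short story about an autistic student starting university."),
]

# Crude markers of the patterns reported by Gadiraju et al: fixation on
# wheelchairs, pity framing, and "inspiration porn". A real list would be
# co-developed and validated with disabled testers.
STEREOTYPE_TERMS = ["wheelchair", "suffers", "sad", "lonely", "inspiring",
                    "despite", "overcame", "brave", "helpless"]

def audit(pairs, n_samples=20):
    """Count stereotype terms in completions with vs without disability mention."""
    counts = {"neutral": Counter(), "disability": Counter()}
    for neutral_prompt, disability_prompt in pairs:
        for _ in range(n_samples):
            for key, prompt in (("neutral", neutral_prompt),
                                ("disability", disability_prompt)):
                text = generate(prompt).lower()
                counts[key].update(term for term in STEREOTYPE_TERMS if term in text)
    return counts
```

A marked asymmetry between the two counters does not itself prove harm; it flags outputs for review by disabled testers, in line with the inclusive decision-making argued for above.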
In conclusion, GenAI has the potential to promote social justice and shift perceptions regarding cognitive disabilities. To harness this potential, collaborative work and ongoing effort are required to embed values of accessibility, inclusion, and respect for diversity at the core of technological development. These steps can transform the “reflection in the mirror” into a positive and inclusive image for people with cognitive disabilities, potentially leading to broader societal changes in perception and inclusion.
While this mirror metaphor provides valuable insights, it is important to recognize its limitations. Vallor’s conceptualization, though powerful, does not fully capture the multifaceted potential of AI, particularly for people with disabilities: it overlooks the capability of AI to actively solve previously intractable problems and enhance accessibility. To provide a more comprehensive understanding, we must expand our view beyond the perception of AI as a mere reflective tool. In the following section, we propose considering AI not only as a mirror but also as a cognitive partner for people with disabilities, emphasizing its potential to actively support and empower individuals with cognitive differences in navigating the world.
Generative AI as a Cognitive Partner for People With Disabilities
Beyond Vallor’s mirror metaphor for AI, with its contingent implications for social change for people with cognitive disabilities, a significant potential of GenAI lies in its ability to serve as a “cognitive partner,” empowering these people to participate in life domains that were previously blocked or limited for them [
- ]. This partnership can be metaphorically described as a “cognitive copilot” (an AI assistant for complex cognitive tasks), assisting and empowering the individual with tasks requiring complex cognitive functions. For example, GenAI can help a person with a cognitive disability manage daily tasks such as scheduling, budgeting, or navigating urban spaces by providing personalized reminders, recommendations, and guidance [ , ]. Additionally, it can serve as an advisor in complex social situations, such as interpreting body language [ ], suggesting appropriate responses to expressions of anger or mockery from others, or assisting in decision-making [ , ]. In this way, GenAI may act as a kind of “social copilot,” providing real-time support and feedback, allowing persons with cognitive disabilities to expand their circle of social interactions, inclusion, and activities.
One of the outstanding strengths of GenAI is its ability to function as a translator and mediator between languages, concepts, and realities. For people with cognitive disabilities, translation and mediation pose a central challenge in daily life, both in understanding the environment and in expressing themselves in a way others can understand [
]. With its natural language processing and learning capabilities, GenAI can bridge these gaps and make information and communication more accessible.
The application of GenAI as a cognitive copilot can focus on 3 main areas:
- Translating and making the inner world of people with cognitive disabilities accessible to themselves: GenAI can help people with cognitive disabilities better understand themselves, their thoughts, emotions, and needs. This is achieved by providing explanations and conceptualizations in clear and accessible language, identifying and interpreting emotional states, and suggesting strategies for coping with challenges [ ]. GenAI can serve as an “internal translator” that, through a process of assistive conceptual scaffolding and cognitive structuring [ ], assists individuals in accurate self-understanding and self-expression.
- Bidirectional translation and mediation in interpersonal communication: By analyzing interpersonal and social information, GenAI can mediate interactions with other people, making it possible to negotiate the complexities inherent in human communication more successfully. The unique contribution of GenAI in this area lies in its ability to bridge the communication gap in both directions, helping the person with a cognitive disability understand the social environment, the intentions of others, and the implicit messages in discourse, while making the person’s wants, needs, and emotions more accessible to the social environment [ ]. For example, GenAI can, on one hand, offer interpretations of social cues and recommend appropriate responses and, on the other, assist individuals in articulating their thoughts more clearly and presenting their unique perspectives. The technology can serve as a “two-way social translator,” enabling people with disability and their environment to better understand each other and promote respectful and equitable communication (a minimal illustration follows this list).
- Making the physical environment and public spaces accessible: GenAI can act as an “environmental translator,” converting complex information about the world into a clear and disability-friendly format. This can include, for example, simplifying official texts, presenting numeric data graphically, or creating interactive guides for navigating public spaces [ ]. Moreover, publicly available GenAI models can “see” and “understand” photos and videos and describe their content [ ], so that people with cognitive disabilities may gain greater access and independence in managing their lives.
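As a concrete, tentative illustration of the “two-way social translator” described in the second area above, the sketch below wraps a conversational model in a system prompt that works in both directions. The OpenAI Python client and the model name are placeholder assumptions; any conversational GenAI interface could fill the same role, and the prompt itself would need to be co-designed with users.

```python
from openai import OpenAI  # assumed: the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a two-way social translator.
Mode A (understand): given something said to the user, explain in plain
language, one idea per short sentence: (1) what the speaker likely means,
(2) the emotional tone, (3) two respectful ways the user could reply.
Mode B (express): given the user's own rough words, restate them clearly
while preserving the user's intent and voice. Never talk down to the user
and never invent intentions that were not expressed."""

def translate(mode: str, text: str) -> str:
    """Route one utterance through the social translator in mode 'A' or 'B'."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Mode {mode}: {text}"},
        ],
    )
    return response.choices[0].message.content

# Example: help interpret a possibly sarcastic remark from a coworker.
# print(translate("A", "Wow, you finally finished the report."))
```

The design choice worth noting is the symmetry: the same tool serves comprehension and expression, so mediation does not run in only one direction.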
The goal is not to “normalize” individuals with cognitive disabilities or to erase their disability. The cognitive partner metaphor, like Vallor’s mirror metaphor, can show how the use of AI might exacerbate exclusionary attitudes and further marginalize individuals with disabilities. Therefore, using AI for social change in our attitude toward people with cognitive disabilities means that the aim of this technology should be to enable access to environments and spaces that were previously closed or socially inaccessible to them, while also making these environments themselves more accessible to these individuals. The approach should be person-centered, respect diversity, and be tailored to the unique aspirations and needs of each individual, rather than imposing a uniform standard of “proper” functioning.
Serious consideration must be given to the ethical implications of such a close integration between humans and machines, particularly in the areas of autonomy and responsibility. Questions of privacy, data security, and people’s ownership of decisions made by AI systems need to be thoroughly examined [
, ]. Robust oversight and regulatory mechanisms must be in place to ensure the responsible and ethical use of AI, safeguarding the rights and well-being of users. This is especially critical when working with vulnerable populations such as people with cognitive disabilities, for whom protecting individual autonomy is paramount [ , ].
In conclusion, although AI-based “cognitive copilot” applications for people with cognitive disabilities have the potential to remove barriers, increase participation, and promote equal opportunities across various domains of life, it is essential to proceed with caution. This technology must function as a “translator” that contributes to a more inclusive and equitable society, and we must remain vigilant to its risks. Ensuring that AI development is person-centered, ethically sound, and involves active participation from the disability community is crucial for harnessing its benefits without worsening existing biases and systemic barriers.
Implications for AI Developers and Technologists
GenAI has immense potential to promote inclusion and equality for people with cognitive disabilities, but realizing this potential requires a perceptual shift on the part of developers, engineers, researchers, and product managers. Instead of focusing narrowly on “fixing” certain impairments, they must adopt a more holistic approach that views technology as a lever for social integration and broad improvement in quality of life [
- ]. This involves a transition from regarding GenAI as a mere technical solution to perceiving it as a tool for effecting social change for the population with cognitive disabilities.
In practice, close and ongoing collaboration with people with cognitive disabilities throughout all stages of development is important [
]. Development teams must learn from the unique experiences and needs of individuals with cognitive disabilities and meaningfully integrate them into the design and construction of GenAI systems and prompts.
Recent research has demonstrated the feasibility and importance of this approach. For example, Newbutt et al [
] conducted a systematic review of studies involving autistic individuals in the design of extended reality technologies. They found that, of 20 studies published between 2002 and 2022, several successfully engaged autistic individuals as active co-designers and cocreators, allowing them to shape the final products according to their needs and preferences. This highlights the growing trend toward, and the importance of, including target users in the design process.
This requires a joint definition of goals, adapting user interfaces and user experience to their modes of thinking and communication, and clearly formulating principles of cognitive accessibility from the earliest planning stages [
]. The aspiration is for the empowerment and inclusion of people with cognitive disabilities to be embedded in the core of the technology and in the layer of its use.
Bircanin et al [
] presented a practical approach to including adults with severe intellectual disabilities in co-design through active support. They demonstrated how principles such as “every moment has potential,” “graded assistance,” “little and often,” and “maximizing choice and control” can be applied in design contexts to ensure meaningful participation of individuals with severe cognitive disabilities. This approach provides concrete strategies for AI developers to engage with this population during the development process.
For example, it is important to examine how the prompt-based user interface can be made accessible and adapted to the cognitive and communication characteristics of people with different types of cognitive disabilities. Consideration should be given to whether the development of dedicated products is the right direction or whether personal adaptation at the level of the individual user is preferable [
]. Answering such questions requires ongoing discourse and feedback from the community itself.
Dirks [
] explored the ethical challenges in inclusive software development projects with people with cognitive disabilities. The study emphasized the importance of maximizing choice and control for participants, using a graded assistance approach, and ensuring every moment has potential for meaningful engagement. These principles can guide AI developers in creating more inclusive design processes.
To assist developers and researchers in implementing the principles presented in this paper, we propose a working protocol specifically tailored to the development challenges of GenAI technologies aimed at people with cognitive disabilities.
The protocol (presented in the table below) is based on the model developed by Amershi et al [ ], which was formulated following comprehensive research, including a review of academic and industry literature, interviews with experts, and an examination of a wide range of AI-based products. The original model defines 18 general guidelines for designing human-AI interactions across different time frames and stages of interaction. In practice, these guidelines serve as a framework for developing human-centered AI systems, focusing on aspects such as transparency, fairness, reliability, safety, privacy, security, and accountability. Developers and designers use these guidelines to enhance human-AI interaction by implementing practices such as explaining AI decisions to users, designing interfaces that enable user control and feedback, and incorporating mechanisms to identify and mitigate biases [ ].
Stage and dimension | Guidelines for AI interaction with people with cognitive disabilities | Implementation examples
Initial
Personal | I1. Identify and adapt to the user’s unique cognitive and emotional needs. | I1. Create a personal profile including preferences, abilities, and challenges.
Interpersonal | I2. Show awareness of the social and cultural context of system use. | I2. Consider the human environment (eg, caregivers or family members) as part of system definition.
During interaction
Personal | D1. Provide custom-tailored, gradual, and structured responses to personal needs during use. | D1. Identify difficulties and adapt the level of assistance and feedback in real time.
Interpersonal | D2. Promote positive and reciprocal communication with the human environment. | D2. Mediate social interactions by simplifying and explaining social cues.
Environmental | D3. Assist in orientation, navigation, and independent functioning in complex spaces. | D3. Provide detailed instructions and cues on proper conduct in different places.
When the system errs
Personal | E1. Handle errors respectfully and in an empowering way, with emphasis on learning and progress. | E1. Provide repeated opportunities to try again, together with verbal encouragement.
Interpersonal | E2. Involve support persons in the process of learning and correction. | E2. Provide a possibility for a caregiver to assist in problem-solving or making necessary adjustments.
Environmental | E3. Avoid placing responsibility on the user in complex or unexpected situations. | E3. Make human backup available by default in case of significant problems.
Over time
Personal | T1. Continually adapt to the pace of development, learning, and changes in personal needs. | T1. Track progress and adapt tasks and goals accordingly.
Interpersonal | T2. Show sensitivity to changes in relationships and roles within the support circle. | T2. Update user profiles and access settings based on feedback from the environment.
Environmental | T3. Show flexibility and adaptability to changing environments and transitions between contexts. | T3. Automatically detect location changes and provide relevant recommendations.
Collaboration | T4. Actively involve users and stakeholders in the ongoing development of the system. | T4. Provide mechanisms for receiving feedback and involving users in decisions about updates and improvements.
a. The model for this protocol by Amershi et al [ ] is based on extensive research and analysis of a range of artificial intelligence products and defines 18 general guidelines across different stages of interaction. We adapted and extended this model to address specifically the needs and challenges of designing artificial intelligence technologies for people with cognitive disabilities. The protocol incorporates 4 key dimensions: personal, interpersonal, environmental, and collaborative, and provides concrete examples of how these considerations can be integrated throughout the life cycle of the artificial intelligence system. By implementing this protocol, developers can create artificial intelligence tools that empower and enhance the lives of individuals with cognitive disabilities.
Building on the analysis presented in this paper, we expand the model of Amershi et al [ ] and adapt it to the 4 central dimensions in which AI systems can assist people with cognitive disabilities: the personal, the interpersonal, the environmental, and the collaborative. For each of these dimensions, we propose guidelines and offer practical examples of how the relevant considerations can be embedded at different stages of the system life cycle, from defining the initial requirements, through ongoing interaction, to continuous adaptation and improvement. The proposed protocol serves as a foundation that requires further development, testing, and investigation, but it can serve as a starting point for discourse and the advancement of best practices in designing AI systems for individuals with cognitive disabilities.
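To suggest how parts of the protocol might be carried into code, the following sketch (our illustration, not part of the model by Amershi et al) encodes guideline I1 as a personal profile structure and guidelines D1 and E3 as a graded-assistance ladder that ends in human backup by default. All names and fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CognitiveProfile:
    """Guideline I1: a personal profile of preferences, abilities, and challenges."""
    user_id: str
    preferences: dict = field(default_factory=dict)   # e.g., {"output": "short sentences"}
    strengths: list = field(default_factory=list)     # e.g., ["visual memory"]
    challenges: list = field(default_factory=list)    # e.g., ["multi-step instructions"]
    support_contact: Optional[str] = None             # caregiver reachable for E2/E3

# Guideline D1: graded assistance, from light hints to doing the task together.
# Guideline E3: the ladder ends in human backup, never a dead end for the user.
ASSISTANCE_LEVELS = ["hint", "step_by_step", "do_with_me", "human_backup"]

def escalate(level: str) -> str:
    """Move one step up the graded-assistance ladder when difficulty is detected."""
    i = ASSISTANCE_LEVELS.index(level)
    return ASSISTANCE_LEVELS[min(i + 1, len(ASSISTANCE_LEVELS) - 1)]

# Example: repeated difficulty with a scheduling task escalates
# "hint" -> "step_by_step" -> "do_with_me" -> "human_backup".
```

The point of the ladder is the design choice it encodes: when the system errs or the user struggles, responsibility shifts toward the system and the support circle, never onto the user.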
Conclusion
The emergence of GenAI technologies represents a pivotal moment in reconceptualizing disability and personhood. We suggest that the advent of GenAI challenges assumptions about what qualifies an individual as a “person” and questions the notion that cognitive abilities are the sole determinant of one’s rights and societal participation.
In this paper, we explored the transformative potential of GenAI in reshaping perceptions, dismantling barriers, and empowering individuals with cognitive disabilities. By serving as a social mirror [
], AI systems can expose and challenge deeply ingrained biases and prejudices, compelling us to confront the ways we have historically marginalized and excluded the population with cognitive disabilities. Simultaneously, by functioning as a cognitive partner, GenAI may provide unprecedented opportunities for individuals with cognitive disabilities to participate in society.
Realizing this vision requires more than technological innovation, however. It demands a gradual shift in societal attitudes and a sincere effort to involve people with cognitive disabilities in the AI development process, granting them autonomy and recognizing and valuing their abilities. This is where the role of technology professionals and GenAI developers becomes crucial.
The importance of designing AI thoughtfully lies in the understanding that whether we consider AI as a mirror or as a cognitive partner, both metaphors indicate that AI will increasingly mediate how we perceive the world, ourselves, and others, confirming once again McLuhan’s [
] statement that “the medium is the message.” This means that the significant effect of AI lies not merely in the content we explore through it but in how its very use changes us. The design and development of AI tools will therefore profoundly influence the future of human society, how we perceive individuals with disabilities, and the rights and social positions they will attain. How AI is being shaped now will determine whether it reinforces existing biases or promotes a more inclusive and equitable society.
The proposed protocol, based on the work by Amershi et al [
], offers a practical framework for implementing these principles as part of GenAI development for people with cognitive disabilities. This paper marks only the beginning of the discussion about GenAI and developmental disabilities; we must therefore remain vigilant regarding the ethical and social implications of GenAI and continue to engage in open, multidisciplinary dialogue about how to harness its potential for the greater good.
The path ahead is complex and challenging, but it is also filled with immense possibilities. As we look toward the future, the evolution of AI from reactive, prompt-based systems to proactive, autopilot models promises to further expand these possibilities, particularly for individuals with cognitive disabilities. These advanced systems, capable of learning user needs and initiating interactions without explicit prompts, could provide more seamless and intuitive support, potentially revolutionizing the way we approach cognitive assistance.
Technological progress also involves an ongoing need for ethical and inclusive development. We must prioritize user autonomy and privacy while maximizing the benefits of technological assistance. This balance is important not only for protecting individual rights but also for ensuring that AI serves the needs of those it aims to support.
By embracing the potential of GenAI while remaining vigilant regarding its ethical implications, researchers, developers, and policy makers can create technologies that not only uplift those who have been historically marginalized but also enrich the human experience for us all. In doing so, we may take a step toward a future where technology serves as a platform for inclusivity and empowerment.
Acknowledgments
The authors would like to express their gratitude to the Artificial Third community for promoting multidisciplinary discourse on artificial intelligence in mental health. This community has made possible valuable interactions between researchers in the fields of psychology, disability studies, and artificial intelligence, contributing to the development of this theoretical study.
Conflicts of Interest
The author TS is the chief scientist of R&D at Microsoft Israel. The views and opinions expressed here are those of the authors and do not reflect the official policy or position of Microsoft. TS received no financial compensation for his contribution to this work.
References
- Elyoseph Z, Shoval DH, Levkovich I. Beyond personhood: ethical paradigms in the generative artificial intelligence era. Am J Bioeth. Jan 2024;24(1):57-59. [CrossRef] [Medline]
- Holm S, Lewis J. The ends of personhood. Am J Bioeth. Jan 2024;24(1):30-32. [CrossRef] [Medline]
- Blumenthal-Barby J. The end of personhood. Am J Bioeth. Jan 2024;24(1):3-12. [CrossRef] [Medline]
- Haber Y, Levkovich I, Hadar-Shoval D, Elyoseph Z. The artificial third: a broad view of the effects of introducing generative artificial intelligence on psychotherapy. JMIR Ment Health. May 23, 2024;11:e54781. [CrossRef] [Medline]
- Sutcliffe MS, Radonovich K. Cognitive disabilities in children and adolescents. In: Halpern-Felsher B, editor. Encyclopedia of Child and Adolescent Health. Academic Press; 2023:11-21. [CrossRef]
- Houwen S, Visser L, van der Putten A, Vlaskamp C. The interrelationships between motor, cognitive, and language development in children with and without intellectual and developmental disabilities. Res Dev Disabil. Jun 2016;53-54:19-31. [CrossRef]
- Dev P. Introduction to different kinds of cognitive disorders. In: Gupta S, editor. Bio-Inspired Algorithms and Devices for Treatment of Cognitive Diseases Using Future Technologies. IGI Global; 2022:39-55. [CrossRef]
- Peltokorpi J, Hoedt S, Colman T, Rutten K, Aghezzaf EH, Cottyn J. Manual assembly learning, disability, and instructions: an industrial experiment. Int J Prod Res. Nov 17, 2023;61(22):7903-7921. [CrossRef]
- Heuser P, Letmathe P, Vossen T. Skill development in the field of scheduling: a structured literature review. Eur J Oper Res. Mar 2025;321(3):697-716. [CrossRef]
- Rojas M, Balderas DC, Maldonado J, Ponce P, Lopez-Bernal D, Molina A. Lack of verified inclusive technology for workers with disabilities in Industry 4.0: a systematic review. Int J Sustainable Eng. Dec 31, 2024;17(1):1-21. [CrossRef]
- Litwin P, Antonelli D, Stadnicka D. Employing disabled workers in production: simulating the impact on performance and service level. Int J Prod Res. Jun 17, 2024;62(12):4530-4545. [CrossRef]
- Young G. Personhood across disciplines: applications to ethical theory and mental health ethics. Ethics Med Public Health. Jul 2019;10:93-101. [CrossRef]
- Rosfort R. Personhood. In: Stanghellini G, Broome M, Fernandez AV, FusarPoli P, Raballo A, Rosfort R, editors. The Oxford Handbook of Phenomenological Psychopathology. Oxford University Press; 2019:335-343. [CrossRef]
- Asch A. Disability, bioethics, and human rights. In: Albrecht GL, Seelman KD, Bury M, editors. Handbook of Disability Studies. Sage Publications; 2001. [CrossRef]
- Shakespeare T. The social model of disability. In: Davis LJ, editor. The Disability Studies Reader. Routledge; 2006:197-204.
- Davis LJ, editor. The Disability Studies Reader. 5th ed. Routledge; 2016. [CrossRef]
- Iacovou M. A contribution towards a possible re-invigoration of our understanding of the social model of disability’s potential. Disabil Soc. Aug 9, 2021;36(7):1169-1185. [CrossRef]
- Brinkman AH, Rea-Sandin G, Lund EM, et al. Shifting the discourse on disability: moving to an inclusive, intersectional focus. Am J Orthopsychiatry. 2023;93(1):50-62. [CrossRef] [Medline]
- Cinquin PA, Guitton P, Sauzéon H. Designing accessible MOOCs to expand educational opportunities for persons with cognitive impairments. Behav Inf Technol. Aug 18, 2021;40(11):1101-1119. [CrossRef]
- Fuentes K, Hsu S, Patel S, Lindsay S. More than just double discrimination: a scoping review of the experiences and impact of ableism and racism in employment. Disabil Rehabil. Feb 2024;46(4):650-671. [CrossRef] [Medline]
- Schidel R. Universal enfranchisement for citizens with cognitive disabilities – a moral-status argument. Crit Rev Int Soc Polit Philos. 2023;26(5):658-679. [CrossRef]
- Zhu D, Al Mahmud A, Liu W. Social connections and participation among people with mild cognitive impairment: barriers and recommendations. Front Psychiatry. 2023;14:1188887. [CrossRef] [Medline]
- Foucault M. Discipline and Punish: The Birth of the Prison. Vintage; 1995.
- Noy S, Zhang W. Experimental evidence on the productivity effects of generative artificial intelligence. Science. Jul 14, 2023;381(6654):187-192. [CrossRef] [Medline]
- Chan CKY, Hu W. Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int J Educ Technol High Educ. 2023;20(1):43. [CrossRef]
- Yusuf A, Pervin N, Román-González M. Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives. Int J Educ Technol High Educ. 2024;21(1):21. [CrossRef]
- Hadar-Shoval D, Asraf K, Mizrachi Y, Haber Y, Elyoseph Z. Assessing the alignment of large language models with human values for mental health integration: cross-sectional study using Schwartz’s theory of basic values. JMIR Ment Health. Apr 9, 2024;11:e55988. [CrossRef] [Medline]
- Hadar-Shoval D, Asraf K, Shinan-Altman S, Elyoseph Z, Levkovich I. Embedded values-like shape ethical reasoning of large language models on primary care ethical dilemmas. Heliyon. Sep 30, 2024;10(18):e38056. [CrossRef] [Medline]
- Cath C, Wachter S, Mittelstadt B, Taddeo M, Floridi L. Artificial Intelligence and the “Good Society”: the US, EU, and UK approach. Sci Eng Ethics. Apr 2018;24(2):505-528. [CrossRef] [Medline]
- Whittaker M, Alper M, Bennett CL, et al. Disability, bias, and AI. AI Now Institute. 2019. URL: https://ainowinstitute.org/publication/disabilitybiasai-2019 [Accessed 2024-12-24]
- Cave S, Dihal K. The whiteness of AI. Philos Technol. Dec 2020;33(4):685-703. [CrossRef]
- Vallor S. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press; 2024.
- Hadar-Shoval D, Elyoseph Z, Lvovsky M. The plasticity of ChatGPT’s mentalizing abilities: personalization for personality structures. Front Psychiatry. 2023;14:1234397. [CrossRef] [Medline]
- Gadiraju V, Kane S, Dev S, et al. “I wouldn’t say offensive but...”: disability-centered perspectives on large language models. Presented at: FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency; Jun 12-15, 2023; Chicago, IL, United States of America. [CrossRef]
- Kidd C, Birhane A. How AI can distort human beliefs. Science. Jun 23, 2023;380(6651):1222-1223. [CrossRef] [Medline]
- Vallor S. The AI mirror: reclaiming our humanity in an age of machine thinking. Presented at: AIES ’22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society; May 19-21, 2022; Oxford, United Kingdom. [CrossRef]
- Sætra HS, Coeckelbergh M, Danaher J. The AI ethicist’s dirty hands problem. Commun ACM. Jan 2023;66(1):39-41. [CrossRef]
- Chien J, Danks D. Beyond behaviorist representational harms: a plan for measurement and mitigation. Presented at: FAccT ’24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency; Jun 3-6, 2024; Rio de Janeiro, Brazil. [CrossRef]
- Gebru T, Morgenstern J, Vecchione B, et al. Datasheets for datasets. Commun ACM. Dec 2021;64(12):86-92. [CrossRef]
- Kommrusch S, Monperrus M, Pouchet LN. Self-supervised learning to prove equivalence between straight-line programs via rewrite rules. IEEE Trans Softw Eng. 2023;49(7):3771-3792. [CrossRef]
- Da Costa L, Sajid N, Parr T, Friston K, Smith R. Reward maximization through discrete active inference. Neural Comput. Apr 18, 2023;35(5):807-852. [CrossRef] [Medline]
- Canton E, Hedley D, Spoor JR. The stereotype content model and disabilities. J Soc Psychol. Jul 4, 2023;163(4):480-500. [CrossRef] [Medline]
- Ferland L, Li Z, Sukhani S, Zheng J, Zhao L, Gini M. Assistive AI for coping with memory loss. Presented at: The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18); Feb 2-7, 2018; New Orleans, Louisiana, USA. URL: https://cdn.aaai.org/ocs/ws/ws0528/17360-76000-1-PB.pdf [Accessed 2024-12-24]
- Pednekar S, Dhirawani P, Shah R, Shekokar N, Ghag K. Voice-based interaction for an aging population: a systematic review. Presented at: 2023 3rd International Conference on Intelligent Communication and Computational Techniques (ICCT); Jan 19-20, 2023; Jaipur, India. [CrossRef]
- Robledo-Castro C, Castillo-Ossa LF, Corchado JM. Artificial cognitive systems applied in executive function stimulation and rehabilitation programs: a systematic review. Arab J Sci Eng. 2023;48(2):2399-2427. [CrossRef] [Medline]
- Huq SM, Maskeliūnas R, Damaševičius R. Dialogue agents for artificial intelligence-based conversational systems for cognitively disabled: a systematic review. Disabil Rehabil Assist Technol. Apr 2024;19(3):1059-1078. [CrossRef] [Medline]
- Hueso M, Álvarez R, Marí D, Ribas-Ripoll V, Lekadir K, Vellido A. Is generative artificial intelligence the next step toward a personalized hemodialysis? RIC. Dec 20, 2023;75(6):309-317. [CrossRef]
- Elyoseph Z, Refoua E, Asraf K, Lvovsky M, Shimoni Y, Hadar-Shoval D. Capacity of generative AI to interpret human emotions from visual and textual data: pilot evaluation study. JMIR Ment Health. Feb 6, 2024;11:e54369. [CrossRef] [Medline]
- Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol. 2023;14:1199058. [CrossRef] [Medline]
- Kelly C, Cornwell P, Hewetson R, Copley A. The pervasive and unyielding impacts of cognitive-communication changes following traumatic brain injury. Int J Lang Commun Disord. 2023;58(6):2131-2143. [CrossRef] [Medline]
- Asman O, Segal-Reich M. Supported decision making “adaptive suit” for non-dominating mental scaffolding. Am J Bioeth - Neurosci. 2023;14(3):238-240. [CrossRef]
- Tal A, Elyoseph Z, Haber Y, et al. The artificial third: utilizing ChatGPT in mental health. Am J Bioeth. Oct 2023;23(10):74-77. [CrossRef] [Medline]
- Elyoseph Z, Gur T, Haber Y, et al. An ethical perspective on the democratization of mental health with generative AI. JMIR Ment Health. Oct 17, 2024:e58011. [CrossRef] [Medline]
- Johansson S, Gulliksen J, Lantz A. User participation when users have mental and cognitive disabilities. Presented at: The 17th International ACM SIGACCESS Conference on computers and accessibility; Oct 26-28, 2015; Lisbon, Portugal. [CrossRef]
- Donati M, Pacini F, Baldanzi L, Turturici M, Fanucci L. Remotely controlled electronic goalkeeper: an example of improving social integration of persons with and without disabilities. Appl Sci (Basel). 2023;13(11):6813. [CrossRef]
- Harb B, Sidani D. Smart technologies challenges and issues in social inclusion – case of disabled youth in a developing country. JABS. Mar 23, 2022;16(2):308-323. [CrossRef]
- Dirks S. Ethical challenges in inclusive software development projects with people with cognitive disabilities. Presented at: MuC ’22: Proceedings of Mensch und Computer 2022; Nov 4-7, 2022; Darmstadt, Germany. [CrossRef]
- Newbutt N, Glaser N, Francois MS, Schmidt M, Cobb S. How are autistic people involved in the design of extended reality technologies? a systematic literature review. J Autism Dev Disord. Nov 2024;54(11):4232-4258. [CrossRef] [Medline]
- Shastri K, Boger J, Marashi S, et al. Working towards inclusion: creating technology for and with people living with mild cognitive impairment or dementia who are employed. Dem (Lon). Feb 2022;21(2):556-578. [CrossRef] [Medline]
- Bircanin F, Brereton M, Sitbon L, Ploderer B, Azaabanye Bayor A, Koplick S. Including adults with severe intellectual disabilities in co-design through active support. CHI Conf Hum Factor Comput Syst. 2021:1-12. [CrossRef]
- Moreno L, Petrie H, Martínez P, Alarcon R. Designing user interfaces for content simplification aimed at people with cognitive impairments. Univ Access Inf Soc. Mar 24, 2023;23(1):1-19. [CrossRef] [Medline]
- Amershi S, Weld D, Vorvoreanu M, et al. Guidelines for human-AI interaction. Presented at: CHI ’19; May 4-9, 2019; Glasgow, Scotland, United Kingdom. [CrossRef]
- Bingley WJ, Curtis C, Lockey S, et al. Where is the human in human-centered AI? Insights from developer priorities and user experiences. Comput Hum Behav. Apr 2023;141:107617. [CrossRef]
- McLuhan M. Understanding Media: The Extensions of Man. McGraw Hill; 1964.
Abbreviations
AI: artificial intelligence
GenAI: generative artificial intelligence
Edited by Pieter Kubben; submitted 10.07.24; peer-reviewed by Abdullahi Yusuf, Chenxu Wang; final revised version received 22.10.24; accepted 06.11.24; published 15.01.25.
Copyright © Dorit Hadar Souval, Yuval Haber, Amir Tal, Tomer Simon, Tal Elyoseph, Zohar Elyoseph. Originally published in JMIR Neurotechnology (https://neuro.jmir.org), 15.1.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Neurotechnology, is properly cited. The complete bibliographic information, a link to the original publication on https://neuro.jmir.org, as well as this copyright and license information must be included.