Recent advances in AI image generation have made it possible to represent people with disabilities more accurately, as demonstrated by a former Paralympic swimmer who finally created a faithful image of herself after earlier attempts failed because of algorithmic bias.
Jess Smith, a former Australian Paralympic swimmer, tried this summer to use ChatGPT to generate an image of herself. She uploaded a photo and specified that she was missing her left arm below the elbow, but the AI repeatedly produced images of women with two arms, or with prosthetic devices, rather than reflecting her actual appearance. The failure exposed a significant gap in how AI systems perceive and represent human diversity.
When Smith asked why, the chatbot replied that it lacked sufficient data to create such an image. She realized that this limitation mirrored broader societal inequalities, in which people with disabilities are often overlooked during technological development. AI, she noted, reflects the data it is trained on and perpetuates existing biases unless they are intentionally addressed, underscoring the need for inclusive design from the outset.
On a recent attempt, Smith was surprised to find that ChatGPT could at last generate an accurate image of a woman with one arm, like her. She described the result as an exciting and significant step forward for inclusion in technology: for the millions of people with disabilities, such improvements mean being seen and considered in the digital world rather than treated as an afterthought.
A spokesperson for OpenAI, the company behind ChatGPT, said it had recently made "meaningful improvements" to its image generation model. The company acknowledged that challenges remain in ensuring fair representation and said it is actively refining its methods and adding more diverse examples to reduce bias over time, a commitment that reflects growing awareness within the tech industry of the importance of ethical AI development.
Not every problem has been resolved, however. Naomi Bowman, who has sight in only one eye, described how ChatGPT altered her face to "even out" her eyes when she asked it merely to blur the background of a photo. Despite her explicit instructions, the AI could not accommodate her condition, revealing persistent biases that still exclude people with certain disabilities.
Experts such as Abran Maldonado of Create Labs emphasize that diversity in AI begins with inclusive data labeling and training. He stresses the need for cultural representation at the creation stage so that underrepresented groups are not missed, pointing to earlier studies that found facial recognition algorithms were markedly less accurate for non-Caucasian faces. Without such representation, AI risks amplifying societal inequalities.
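To make that concrete, audits like the ones Maldonado alludes to typically measure a model's accuracy separately for each demographic group and compare the gaps. The sketch below is a hypothetical illustration of that kind of disaggregated audit, not code from any actual study; the group names, records, and labels are invented for the example.

```python
# Minimal sketch of a disaggregated accuracy audit: compute a model's
# accuracy per demographic group and compare. All data here is invented
# purely to illustrate the idea.

from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, predicted, actual) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records: (demographic group, model output, ground truth)
records = [
    ("group_a", "match", "match"),
    ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"),
    ("group_b", "no_match", "match"),  # errors concentrated in one group
    ("group_b", "match", "match"),
    ("group_b", "no_match", "match"),
]

for group, acc in sorted(accuracy_by_group(records).items()):
    print(f"{group}: {acc:.0%} accurate")
```

Run on this toy data, the audit reports 100% accuracy for one group and about 33% for the other; a gap that large is the signal, in real studies as in this sketch, that the training data or labeling process under-represents someone.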
Smith, who does not identify as disabled but acknowledges the barriers society creates, warns that AI systems risk repeating the design failures of the physical world if they are not built with everyone in mind. As the technology evolves, ongoing collaboration with diverse communities and rigorous testing will be essential to ensure that AI benefits all of humanity equitably.
