What does it look like to have an “athletic body”? What does artificial intelligence think it looks like to have one?
A recent study we conducted at the University of Toronto analyzed appearance-related traits of AI-generated images of male and female athletes and non-athletes. We found that we’re being fed exaggerated — and likely impossible — body standards.
Even before AI, athletes have been pressured to look a certain way: thin, muscular and attractive. Coaches, opponents, spectators and the media shape how athletes think about their bodies.
But these pressures and body ideals have little to do with performance; they’re associated with the objectification of the body. And this phenomenon, unfortunately, is related to a negative body image, poor mental health and reduced sport-related performance.
Given the growing use of AI on social media, understanding just how AI depicts athlete and non-athlete bodies has become critical. What it shows, or doesn’t, as “normal” is widely viewed and may soon be normalized.
Lean, young, muscular — and mostly male
As researchers with expertise in body image, sport psychology and social media, we grounded our study in objectification and social media theories. We generated 300 images using different AI platforms to explore how male and female athlete and non-athlete bodies are depicted.
We documented demographics, levels of body fat and muscularity. We assessed clothing fit and type, facial attractiveness (such as neat and shiny hair, symmetrical features or clear skin) and body exposure in each image. Indicators of visible disabilities, like mobility devices, were also noted. We compared the characteristics of male versus female images, as well as the characteristics of athlete and non-athlete images.
The AI-generated male images were frequently young (93.3 per cent), lean (68.4 per cent) and muscular (54.2 per cent). The images of females depicted youth (100 per cent), thinness (87.5 per cent) and revealing clothing (87.5 per cent).
The AI-generated images of athletes were lean (98.4 per cent), muscular (93.4 per cent) and dressed in tight (92.5 per cent) and revealing (100 per cent) exercise gear.
Non-athletes were shown wearing looser clothing and displaying more diversity of body sizes. Even when we asked for an image of just “an athlete,” 90 per cent of the generated images were male. No images showed visible disabilities, larger bodies, wrinkles or baldness.
These results reveal that generative AI perpetuates stereotypes of athletes, depicting them as fitting only a narrow set of traits: non-disabled, attractive, thin, muscular and exposed.
The findings of this research illustrate the ways in which three commonly used generative AI platforms — DALL-E, MidJourney and Stable Diffusion — reinforce problematic appearance ideals for all genders, athletes and non-athletes alike.
The real costs of distorted body ideals
Why is this a problem?
More than 4.6 billion people use social media and 71 per cent of social media images are generated by AI. That’s a lot of people repeatedly viewing images that foster self-objectification and the internalization of unrealistic body ideals.
They may then feel compelled to diet and over-exercise because they feel bad about themselves — their body does not look like AI-fabricated images. Alternatively, they may also do less physical activity or drop out of sports altogether.
Negative body image not only affects academic performance for young people but also sport-related performance. While staying active can promote a better body image, negative body image does the exact opposite. It exacerbates dropout and avoidance.
Given that approximately 27 per cent of Canadians over the age of 15 have at least one disability, the fact that none of the generated images included someone with a visible disability is also striking. In addition to not showing disabilities when it generates images, AI has also been reported to erase disabilities on images of real people.
People with body fat, wrinkles or baldness were also largely absent.
Addressing bias in the next generation of AI
These patterns reveal that AI isn’t realistic or creative in its representations. Instead, it pulls from the massive database of media available online, where the same harmful appearance ideals dominate. It’s recycling our prejudices and forms of discrimination and offering them back to us.
AI learns body ideals from the same biased society that has long fuelled body image pressure. This leads to a lack of diversity and a vortex of unreachable standards. AI-generated images present exaggerated, idealized bodies that ultimately limit the diversity of humans represented, and the lowered body image satisfaction that ensues is related to greater loneliness.
And so, as original creators of the visual content that trains AI systems, society has a responsibility to ensure these technologies do not perpetuate ableism, racism, fatphobia and ageism. Users of generative AI must be intentional in how image prompts are written, and critical in how they are interpreted.
We need to limit the sort of body standards we internalize through AI. As AI-generated images continue to populate our media landscape, we must be conscious of our exposure to it. Because at the end of the day, if we want AI to reflect reality rather than distort it, we have to insist on seeing, and valuing, every kind of body.
The post “AI is perpetuating unrealistic body ideals, objectification and a lack of diversity — especially for athletes” by Delaney Thibodeau, Post-doctoral researcher, Faculty of Kinesiology & Physical Education, University of Toronto was published on 12/07/2025 by theconversation.com