Journal of Nuclear Medicine Technology
Research Article | Special Contribution

Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 1: Preliminary Evaluation

Geoffrey Currie, Johnathan Hewis, Elizabeth Hawk and Eric Rohren
Journal of Nuclear Medicine Technology December 2024, 52 (4) 356-359; DOI: https://doi.org/10.2967/jnmt.124.268332
Geoffrey Currie
1 Charles Sturt University, Wagga Wagga, New South Wales, Australia; 2 Baylor College of Medicine, Houston, Texas

Johnathan Hewis
3 Charles Sturt University, Port Macquarie, New South Wales, Australia

Elizabeth Hawk
4 Stanford University, Stanford, California

Eric Rohren
1 Charles Sturt University, Wagga Wagga, New South Wales, Australia; 2 Baylor College of Medicine, Houston, Texas

Abstract

Generative artificial intelligence (AI) text-to-image production could reinforce or amplify gender and ethnicity biases. Several text-to-image generative AI tools can be used to produce images representing the medical imaging professions. White male stereotyping and masculine cultures can dissuade women and ethnically divergent people from being drawn into a profession.

Methods: In March 2024, DALL-E 3, Firefly 2, Stable Diffusion 2.1, and Midjourney 5.2 were used to generate a series of individual and group images of medical imaging professionals: radiologist, nuclear medicine physician, radiographer, and nuclear medicine technologist. Multiple iterations were generated using a variety of prompts. In all, 184 images depicting 391 characters were produced for evaluation. All images were independently analyzed by 3 reviewers for apparent gender and skin tone.

Results: Across individual and group characters (n = 391), 60.6% were male and 87.7% had a light skin tone. DALL-E 3 (65.6%), Midjourney 5.2 (76.7%), and Stable Diffusion 2.1 (56.2%) had a statistically higher representation of men than Firefly 2 (42.9%) (P < 0.0001). With Firefly 2, 70.3% of characters had light skin tones, statistically lower (P < 0.0001) than for Stable Diffusion 2.1 (84.8%), Midjourney 5.2 (100%), and DALL-E 3 (94.8%). Overall, image quality was average or better for 87.2% of DALL-E 3 images and 86.2% of Midjourney 5.2 images, whereas it was inadequate or poor for 50.9% of Firefly 2 images and 86.0% of Stable Diffusion 2.1 images.

Conclusion: Text-to-image generation with DALL-E 3 via GPT-4 had the best overall quality compared with Firefly 2, Midjourney 5.2, and Stable Diffusion 2.1. Nonetheless, DALL-E 3 retains inherent gender and ethnicity biases that demand more critical evaluation.
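The between-tool comparisons reported above (e.g., P < 0.0001 for gender representation) are consistent with a chi-square test of independence on per-tool gender counts. A minimal pure-Python sketch follows; the counts are hypothetical placeholders, since the abstract reports only percentages, not per-tool tallies.

```python
# Sketch: Pearson chi-square test of independence on an r x c contingency
# table of character counts by generator and apparent gender.
# HYPOTHETICAL counts for illustration only; substitute the study's tallies.

def chi_square(table):
    """Return (chi-square statistic, degrees of freedom) for an r x c table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Rows: DALL-E 3, Midjourney 5.2, Stable Diffusion 2.1, Firefly 2
# Columns: male, female (hypothetical per-tool counts)
counts = [[63, 33], [69, 21], [59, 46], [43, 57]]
chi2, dof = chi_square(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}")
```

The statistic would then be compared against a chi-square distribution with 3 degrees of freedom (4 tools minus 1, times 2 genders minus 1) to obtain the P value.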

  • diversity
  • generative artificial intelligence
  • inclusivity
  • nuclear medicine
  • radiology

Footnotes

  • Published online Oct. 22, 2024.

