Is Gender Health Bias Masking the Brilliance of AI?
In an increasingly digital world, we're becoming more and more familiar with the power of artificial intelligence (AI), as our daily platforms, apps and devices race to provide us with the latest AI bot. AI models analyse large amounts of data, learn from this information, and then share informed points of view1. Whilst the world revels in the image-generating and recommendation capabilities of AI, people are rightly questioning the extent to which human biases have made their way into AI systems, especially given the glaring gap in gender and women's health data2.
AI has numerous potential benefits in healthcare and could significantly change the way people access medical information and care2,3. Analysing large amounts of data to detect patterns and predict outcomes could lead to quicker and more accurate diagnoses, improving patient outcomes and reducing healthcare costs3. For the general public, it also offers 24/7 support available at the touch of a button3. Despite this huge potential, AI is confronted with the bias that exists in women's health data2,3. The historical gender gap in healthcare stems mainly from the underrepresentation of women in research studies: only 54% of research articles published between 1980 and 2016 reported results for both male and female populations4. AI tools built on these biased data will, in turn, produce biased outcomes, and may even amplify the phenomenon3.
For example, cardiovascular disease has long been considered predominantly a man's illness, and women have been, and still are, underrepresented in studies5. Yet women's hearts are not just smaller versions of men's6. In fact, cardiovascular disease is the leading cause of death in women globally, and, contrary to common belief, young women are more likely than young men to die following a heart attack7. So does this historical bias affect today's tools? Absolutely: one study demonstrated that an AI algorithm performed poorly when used for computer-aided diagnosis in women, because its training data sets included only a low proportion of women8.
AI-generated imagery perpetuates the idea that cardiac events are more likely to happen to men than to women
These discrepancies extend well beyond cardiovascular disease and pose a danger to women, now and in the future, who may rely on AI for information related to their health2,3.
How can we address the bias perpetuated by AI?
The Weber Shandwick Collective aims to tackle the disparities that exist in women’s healthcare.
There is no single, one-size-fits-all solution that will eliminate all gender-based bias in AI tools. One option is to improve the data set the tool is built on, for example through new clinical studies that better represent women and report outcomes by gender2. Another is to enhance the way the AI tool is trained, by actively looking for gender bias and involving women in the programming (according to a 2022 survey, more than 90% of professional software developers are men)9.
However, a complementary approach, and one that is quicker to implement, is to change the way the AI tool is used: interrogating it more thoughtfully. Prompts are within the user's control and can help limit the inherent gender bias of AI tools.
The following pointers are meant as a quick guide for communications professionals working within the healthcare industry.
1. Paint a picture (and mind the details!)
Be clear, be specific, and provide context when interacting with AI tools. For text prompts, key information should include a detailed persona to help the platform understand the speaker's voice and intended audience, specifics around tone, and a clear instruction for the ask. For example, if the desired output is an article from a healthcare professional meant for patients, specify the doctor's specialism, the tone of voice, and the main message. You should also include information on the type of patient for whom you want to generate health information, such as their gender, disease state, level of knowledge, age, race and any other factors that may influence their understanding of the communications.
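For teams that script their AI workflows, here is a minimal sketch of what such a structured prompt can look like, using the OpenAI Python SDK as one illustrative option. The model name, persona, tone and cardiology brief below are hypothetical placeholders, not a prescribed template.

```python
# A minimal sketch of a structured text prompt. Every detail of the
# persona, tone, audience and task is an illustrative assumption:
# adapt them to your own brief and platform.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "You are a consultant cardiologist writing for patients. "          # persona
    "Tone: warm, plain-language, reassuring but factual. "              # tone
    "Audience: women aged 45-70, newly diagnosed with heart disease, "  # audience
    "no medical background. "
    "Task: write a 300-word article explaining why heart-attack "       # the ask
    "symptoms in women can differ from the 'classic' chest-pain picture."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```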
For image prompts, describe how the scene feels, what is in it, and how it is composed. It helps to begin with an idea of what you want the image to look like and to tailor prompts to that vision. This is particularly important for patient images, where we want to be medically accurate about the most common type of patient, the ages affected, the physical ways the disease may manifest, and so on. Image outputs will reflect the most bias; make sure you specify characteristics like gender, age and ethnicity to ensure diversity or to reflect a certain patient population.
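As a companion sketch, an image request that spells these characteristics out explicitly might look like the following; again, the SDK call, model name and scene details are illustrative assumptions rather than a recommended setup.

```python
# A minimal sketch of an image prompt that specifies patient characteristics
# explicitly instead of leaving them to the model's defaults. Model name and
# scene details are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

image_prompt = (
    "Photorealistic photo of a woman in her late 50s, South Asian, in a "
    "bright cardiology clinic, discussing an ECG printout with her doctor. "
    "Calm, respectful mood; natural lighting; no clutching-the-chest cliches."
)

result = client.images.generate(
    model="dall-e-3",   # illustrative model name
    prompt=image_prompt,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```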
2. Go step by step
Remember that not all necessary information needs to be provided in the first prompt. Start with an initial query and add further details as the conversation with the AI tool unfolds. Provide feedback on the answers received, pointing out the useful parts and the areas that could be improved. Change the wording or tone, or add more context and specificity, to guide the AI toward the output you're looking for. You can chat with the AI as if it were a colleague you're working on a project with, iterating toward the desired outcome. If you notice that some of the information was wrong, tell the AI tool so that it can correct its mistake. This can include correcting errors or biases in the response and providing more information to help the AI tool better understand the question.
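In scripted form, this step-by-step refinement simply means keeping the whole conversation in one message history and appending your feedback to it. A minimal sketch, under the same illustrative SDK and model assumptions as above; the feedback text is only an example.

```python
# A minimal sketch of iterative refinement: keep the conversation in a
# message list, append the model's answer, then follow up with feedback.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content":
             "Summarise how heart-attack symptoms can present in women."}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# Point out what was useful, correct any errors, and add context.
messages.append({"role": "user", "content": (
    "The symptom list was useful, but the tone is too clinical for patients. "
    "Rewrite it in plainer language, and note that young women are more "
    "likely than young men to die after a heart attack."
)})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```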
3. Fight the bias
According to the World Health Organisation (WHO), the data used to train AI models can be biased, excluding certain groups10. This is especially true in the medical setting, where certain communities have been intentionally excluded from research.
Instruct the AI tool to avoid stereotypes and biased assumptions. Researchers have found that prompting a model not to rely on stereotyping had a dramatic positive effect on the algorithm’s response. By consciously engaging in ethical conversations, you can identify and challenge biases, actively avoiding the propagation of discriminatory or unfair content in the generated response.
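One lightweight way to make this instruction systematic is to bake it into the system message of every request. A minimal sketch, under the same illustrative assumptions as the earlier examples; the wording is a starting point, not a vetted debiasing protocol.

```python
# A minimal sketch of an explicit anti-stereotyping instruction applied
# via the system message. The wording is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

system_instruction = (
    "Do not rely on gender, age or ethnic stereotypes. When evidence differs "
    "by sex or gender, say so explicitly; when data are lacking for a group, "
    "state that limitation rather than generalising from male-only studies."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_instruction},
        {"role": "user",
         "content": "Describe the typical presentation of a heart attack."},
    ],
)
print(response.choices[0].message.content)
```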
4. Get the facts
Unlike a Google search, which provides source information, AI tools may not always disclose their sources. When it comes to healthcare-related questions, vetting the source is essential. You can choose AI tools that show sources automatically, or ask for this information. For example, while ChatGPT is a popular AI tool, other platforms, like Perplexity.AI, are transparent about their sources and about where their language models learn from. You can also specify desired sources in your prompt, for example asking the AI tool to focus only on recent peer-reviewed data when providing an answer. In all cases, a human review remains crucial.
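Requesting sources can likewise be written straight into a scripted prompt, as in the sketch below; note that models can still paraphrase or invent citations, so every reference returned needs human verification before use.

```python
# A minimal sketch of asking for verifiable sources in the prompt itself.
# Returned citations may be inaccurate and must be checked by a human.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": (
        "What does recent peer-reviewed research say about sex differences "
        "in heart-attack outcomes? Base your answer on peer-reviewed "
        "literature only, and list each source (authors, journal, year) "
        "so it can be verified."
    )}],
)
print(response.choices[0].message.content)
```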
5. What not to do (especially when looking for health information!)
- AI tools are great at providing information and a perspective. They should, however, never replace a conversation with a healthcare professional.
- AI tools should not be relied upon for individual health advice (e.g. "Should I go to an emergency room?"). Consult healthcare professionals for such matters.
- Maintain a healthy level of scepticism. Just because something sounds correct doesn’t guarantee accuracy.
- Some AI platforms are not updated in real time. Because medical knowledge is constantly evolving, this lag in data may mean that some tools' responses do not capture the latest medical knowledge on conditions or treatments.
We need to challenge, adapt and strive to deliver the benefits of AI without falling prey to the limitations set by the gender health data gap. At the Weber Shandwick Collective, we are committed to improving women's health. We are engaging with AI tools to assess data limitations, refining prompts to deliver balanced answers, addressing algorithmic bias and creating safe spaces where women see their data and their health concerns addressed equally. Given AI's potential to shape the future, we need to protect women from an AI echo chamber of past disparities. We have the power to influence what this future will be, and it's time to move women to the heart of health.
To learn more and get involved, contact us at: enquiryapac@webershandwick.com
1. Forbes. What Is Artificial Intelligence (AI)? https://www.forbes.com/advisor/in/business/software/what-is-ai/
2. Hastings J. Lancet Digit Health. 2024;6(1):e2-e3.
3. Alowais SA et al. BMC Med Educ. 2023;23(1):689.
4. Marsh McLennan. How Will AI Affect Gender Gaps in Health Care? https://www.marshmclennan.com/insights/publications/2020/apr/how-will-ai-affect-gender-gaps-in-health-care-.html
5. Feldman S et al. JAMA Network Open. 2019;2(7):e196700.
6. St Pierre SR et al. Front Physiol. 2022;13:831179.
7. World Heart Federation. Women and CVD. https://world-heart-federation.org/what-we-do/women-cvd/
8. Adedinsewo DA et al. Circ Res. 2022;130(4):673-690.
9. Stack Overflow Developer Survey 2022. https://survey.stackoverflow.co/2022#demographics-gender-prof
10. World Health Organisation. https://iris.who.int/bitstream/handle/10665/375579/9789240084759-eng.pdf