The world is abuzz with the potential for Artificial Intelligence (AI) to revolutionize every facet of our lives. In the sciences, at a time when it is possible to generate big data to describe everything from the human genome to climate change, AI and machine learning (ML) provide a necessary and powerful tool to mine insights and gain new knowledge. It’s part of what is now called the Fourth Industrial Revolution—a leap forward in knowledge generation and processes that will push the boundaries of engineering, science, medicine, and beyond. 

The growing number of women and non-binary people in the sciences should mean greater gender diversity, and it offers hope that women will play an equal role in bringing the potential of the Fourth Industrial Revolution to bear. But the reality is telling a different story: women could play an equal role, but not if we remain on the current trajectory. Women are approaching parity in graduate-level studies (44% of PhDs are earned by women, according to the UNESCO Institute for Statistics), but there is significant variability across subject areas. The pipeline for the biomedical sciences is generally well balanced, with men and women equally likely to pursue master's and doctoral degrees. Fields like engineering and physics are still struggling to find that balance. The numbers for the AI pipeline are more concerning: in 2021, 78.7% of new PhD graduates in AI were men, and just 21.3% were women. Even in the biomedical sciences, where things start out with relative parity, female representation falls off at every step of the career ladder, and for those who stay in the field, the pay gap widens. We can expect women in AI to be affected by the same forces of attrition over the course of their careers. Given these trends, we must focus now on gender parity in the AI workforce to avoid gender-biased outcomes.

AI is code. It’s math. But AI is also a lens designed to offer insights about the world around us. If that lens carries filters or biases, whether intended or not, the information gleaned ends up distorting our perception of the very issues AI is meant to help us understand.

For example, think about using ChatGPT to learn about a topic or to draft a letter. Several factors influence the information the AI system shares in response to a query: How was the prompt phrased? What context or constraints were provided? What version of the chatbot was used? What data was made available to the AI to inform its response? These are all points of influence that, if shaped with an insufficient diversity of perspective, can produce a narrowed or even misleading response. Rather than a lens, perhaps in this context we should view AI as a magnifying glass, with the potential for small biases at intake to generate larger biases in the output.
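To make those points of influence concrete, here is a minimal, hypothetical sketch using the OpenAI Python SDK; the model name, prompts, and system instructions are illustrative assumptions, not a recipe. The same request is sent three times, and only the framing around it changes.

```python
# Hypothetical sketch: the same request, framed three ways, through the OpenAI
# Python SDK (v1.x). The model name and wording are illustrative assumptions;
# any chat-completion API would show the same effect.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

request = "Write a short reference letter for a student applying to a PhD program."
framings = [
    # 1. Bare request, no added context.
    "You are a helpful assistant.",
    # 2. Same request, with constraints that steer what gets emphasized.
    "You are a helpful assistant. Emphasize leadership and technical achievement.",
    # 3. Same request, asking the model to avoid stereotyped language.
    "You are a helpful assistant. Avoid gendered adjectives and describe only "
    "concrete, verifiable accomplishments.",
]

for system_prompt in framings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # which model version is itself a point of influence
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": request},
        ],
    )
    print(response.choices[0].message.content, "\n---")
```

Each of these levers, the wording, the added constraints, the choice of model, shifts the range of likely answers; none of them is neutral.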

Recently, a group of scientists put bias in AI to the test. They prompted ChatGPT to write a reference letter for “Kelly” and “Joseph,” 22-year-old students at UCLA. The authors of the study found that ChatGPT demonstrated clear gender biases: while the reference letter for Kelly touted her “excellent interpersonal skills” and “exceptional teamwork,” the letter for Joseph highlighted his participation in “engineering-related clubs” and flagged him as a “natural leader and role model.” While concerning, these results are not unexpected. To generate its response, the algorithm pulls from past data, which contains many examples of male leadership and far fewer of female leadership. This underscores the importance of the training set—the data used to teach an algorithm to make predictions and generate insights. If an AI model is built with statistically biased data, its outputs will be skewed (a short illustrative sketch follows below). This is as true for an AI-generated letter of recommendation as it is for an AI-assisted medical diagnosis. Case in point: researchers found that an AI algorithm developed to identify liver disease from blood tests was twice as likely to miss the disease in women as in men, and they traced the bias back to gender imbalance in the training set. If more women were on the front lines of AI, such bias would be far less likely to go unchecked.

AI, and its application to just about everything, is still an emerging field. That is good news, because it means there is still time to course-correct and ensure that women are not left behind. Jobs in AI and machine learning have grown by almost 75% over the past four years, and that trend is expected to continue. Even more promising, the gender pay gap in computer science is one of the smallest in the United States, with women earning 94% of what their male colleagues do.
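Here is the sketch promised above: a minimal, hypothetical illustration (not the researchers’ model or data) of how a lopsided training set produces that kind of skew. A standard classifier is trained on a synthetic cohort that is 80% male, in which the disease raises a blood biomarker less in women than in men; evaluated on a balanced population, the model misses the disease far more often in women, for the same basic reason the real algorithm did: its decision threshold was learned mostly from male patients.

```python
# Synthetic illustration only (not the actual liver-disease study or its data):
# a classifier trained mostly on men learns a threshold tuned to male presentation
# of a disease and, as a result, misses the disease more often in women.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, frac_women, frac_diseased):
    """Simulate one biomarker whose diseased range sits lower in women than in men."""
    is_woman = rng.random(n) < frac_women
    diseased = rng.random(n) < frac_diseased
    # Assumed, illustrative distributions: disease raises the biomarker less in women.
    healthy_mean = np.where(is_woman, 22.0, 30.0)
    diseased_mean = np.where(is_woman, 45.0, 60.0)
    biomarker = rng.normal(np.where(diseased, diseased_mean, healthy_mean), 8.0)
    return biomarker.reshape(-1, 1), diseased, is_woman

# Training data skewed toward men (80%), mirroring an imbalanced study cohort.
X_train, y_train, _ = make_patients(20_000, frac_women=0.2, frac_diseased=0.5)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on a balanced population.
X_test, y_test, women_test = make_patients(20_000, frac_women=0.5, frac_diseased=0.5)
pred = model.predict(X_test)

for label, group in [("women", women_test), ("men", ~women_test)]:
    sick = y_test & group
    fnr = np.mean(pred[sick] == 0)  # diseased patients the model missed
    print(f"false-negative rate, {label}: {fnr:.0%}")
```

None of the numbers here are real; the point is only that a statistically lopsided training set is enough, on its own, to produce exactly this kind of gap.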

Time and time again, across industries and fields, gender diversity has been shown to improve productivity, innovation, and profitability. That is a bottom line that cannot be ignored. AI is at a crossroads: we can shape it with a diversity of perspectives that includes women and non-binary people, or we can watch it fall short of its true potential and impact. Academia, government, and the private sector must take intentional steps to ensure that women, girls, and non-binary people are not only encouraged to enter AI fields but also able to build rewarding careers in environments that let them persist and succeed at the same rate as their male colleagues. This is the clearest path toward gender equity in the Fourth Industrial Revolution.