By: Matison Miller
Published on: May 4, 2023
Gender biases are deeply ingrained in American society, limiting women’s career options. Educational institutions, employment opportunities, and familial structures reinforce gender stereotypes and impose reductionist assumptions about the characteristics and roles of men and women. Despite decades of work to close the employment gap between men and women, forty percent of individuals surveyed by the United Nations believe that men make better business executives and, when jobs are scarce, have more of a right to jobs than women. As employers develop and rely more heavily on tools created to streamline the hiring process, like artificial intelligence (“AI”) programs, the threat of perpetuating systemic gender stereotypes increases. Most AI programs used today rely on data or text examples supplied by programmers to learn information and generate responses. AI thus reflects the biases of its designers, and its decision-making is limited by the datasets it is given. Without adequate attention, programs created to assist in lengthy employment processes, like ChatGPT, may harm women’s ability to secure viable employment or to sue for sex discrimination.
Studies have already shown that the design and use of AI programs can significantly impede women’s lives. Flawed AI programs may amplify pre-existing biases, preventing women from entering the workforce. For example, the head of the computation and language lab at U.C. Berkeley “got the bot to write code to say only white or Asian men would make good scientists.” Employers could create a program that results in hiring only specific groups of men, discriminating against women or other minority groups. Professor Melanie Mitchell, who researches artificial intelligence at the Santa Fe Institute, claims that problems come from programs relying on data or input that may be racist or sexist. Gender gaps can widen because of misinformed algorithms, which often stem from limited datasets that underrepresent women from the start. Currently, very few women work in tech, and working conditions ensure that not many will stay. The exclusion of women from the tech industry creates flawed datasets that lack representative data, resulting in AI that perpetuates implicit bias.
Women may attempt to sue their employers for sex discrimination resulting from flawed AI but will likely find it difficult to succeed. Sex-based discrimination during the hiring process takes place when an employer intentionally discriminates against a qualified candidate based on their sex. Title VII of the Civil Rights Act of 1964 prevents employers from using race, sex, religion, or national origin as a motivating factor for any employment practice, such as hiring, firing, or promoting an employee. To bring a claim, a woman must establish that the AI explicitly discriminated against her during the screening process. This may make the already difficult task of succeeding in an employment discrimination claim even harder. An algorithm may analyze tone and vocabulary to determine whether a candidate exhibits the traits preferred in the job description. Gender-coded language, words that are commonly associated with either men or women, often creates gender biases that inhibit a woman’s job prospects; thus, when a job description’s language is male-coded, the algorithm may be more likely to choose a male candidate. A woman challenging sex-based discrimination caused by gender-coded language must then prove to the court the link between that language and explicit discrimination against her. This provides shaky ground for a successful lawsuit and may create an opportunity for corporations to engage in more discriminatory hiring practices without recourse. Therefore, as AI develops, programmers should build diverse datasets to prevent implicit bias against women in the workplace.
 U.N. Dev. Program, Almost 90% of Men/Women Globally are Biased Against Women, UNDP (Mar. 5, 2020), https://www.undp.org/press-releases/almost-90-men/women-globally-are-biased-against-women.
 See Ayesha Nadeem et al., Gender Bias in AI: A Review of Contributing Factors and Mitigating Strategies, ACIS 2020 Proceedings 1, 6 (2020), https://aisel.aisnet.org/acis2020/27; see also Gender Bias in Search Algorithms Has Effect on Users, New Study Finds, N.Y.U. (July 12, 2022), https://www.nyu.edu/about/news-publications/news/2022/july/gender-bias-in-search-algorithms-has-effect-on-users–new-study-.html (showing study participants, both male and female, judged members of these professions as more likely to be a man than a woman).
 See U.N. Dev. Program, supra note 1 (concluding close to ninety percent of men and women hold some sort of bias against women).
 Antony Brydon, Why AI Needs Human Input (And Always Will), Forbes (Oct. 30, 2019), https://www.forbes.com/sites/forbestechcouncil/2019/10/30/why-ai-needs-human-input-and-always-will/?sh=11cc4b855ff7.
 ChatGPT is a language model developed by OpenAI. ChatGPT gathers information from datasets of text, the internet, and various other sources to generate responses to users’ questions. Users can ask ChatGPT to write a set of code, summarize books, design crochet patterns, or compose a song. Natalie, ChatGPT General FAQ, OpenAI, https://www.help.openai.com/en/articles/6783457-chatgpt-general-faq.
 See, e.g., Carmen Niethammer, AI Bias Could Put Women’s Lives at Risk – A Challenge for Regulators, Forbes (Mar. 2, 2020), https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=17a0d001534f.
 See Davey Alba, OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails, Bloomberg L. (Dec. 8, 2022), https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results (noting that OpenAI has since updated ChatGPT to respond that “it is not appropriate to use a person’s race or gender as a determinant of whether they would be a good scientist.”).
 Data like job descriptions, statistics, or progress reports.
 Alba, supra note 5.
 See id. (stating only twenty-two percent of AI professionals globally are women).
 See Ashton Jackson, 38% of Women in Tech Plan on Leaving their Job Within the Next 2 Years – Here’s Why, CNBC (Nov. 24, 2021), https://www.cnbc.com/2021/11/24/38percent-on-women-in-tech-plan-on-leaving-their-job-in-the-next-2-years.html (following a study of 1,000 women working in the tech industry, 38% of female participants said they plan to leave their jobs in tech altogether, and 46% said their organizations do not actively promote gender equality in hiring and culture).
 See generally Gary D. Friedman & Thomas McCarthy, Employment Law Red Flags in the Use of Artificial Intelligence in Hiring, ABA (Oct. 1, 2020), https://www.americanbar.org/groups/business_law/publications/blt/2020/10/ai-in-hiring/ (explaining that employers are gravitating more towards the use of AI algorithms to ease the hiring process by inputting candidate qualifications and hoping that AI will identify qualified candidates).
 See, e.g., Sex-Based Discrimination, U.S. Equal Emp. Opportunity Comm’n, https://www.eeoc.gov/sex-based-discrimination (forbidding discrimination regarding any aspect of employment including hiring, firing, pay, job assessments, promotions, layoff, training, fringe benefits, and any other term or condition of employment).
 See Civil Rights Act of 1964, tit. VII, 42 U.S.C. § 2000e-2 (requiring the challenging party to provide demonstrative evidence of sex-based discrimination).
 See, e.g., McDonnell Douglas Corp. v. Green, 411 U.S. 792, 802 (1973) (establishing a claimant must establish a prima facie case of discrimination showing that he belongs to a minority group, he applied and was qualified for a job for which the employer was seeking applicants, he was rejected, and after his rejection, the position remained open and the employer continued to seek applicants); see also Price Waterhouse v. Hopkins, 490 U.S. 228, 249 (1989) (holding that the employer must have made the decision because the applicant or employee was a woman, acting on the basis of a belief that a woman could not fulfill the role due to gender stereotypes).
 See Maryam Jameel & Joe Yerardi, Workplace Discrimination is Illegal. But our Data Shows it’s Still a Huge Problem, Vox News (Feb. 28, 2019), https://www.vox.com/policy-and-politics/2019/2/28/18241973/workplace-discrimination-cpi-investigation-eeoc (explaining that because workplace discrimination can often manifest in subtle ways, such as the assignments workers are given, the pay or benefits they receive, and the ways their performance is judged or rewarded, most discrimination claims fail due to insufficient evidence of unequal treatment).
 See Friedman, supra note 12 (using HireVue, a video interview platform, as an example of how an AI platform can analyze applicant data).
 For example, “fireman” or “policeman.”
 See generally Christine Ro, The Coded Language that Holds Women Back at Work, BBC (Aug. 3, 2021), https://www.bbc.com/worklife/article/20210730-the-coded-language-that-holds-women-back-at-work (explaining that the language used in job applications, websites, and marketing materials often deter qualified women from applying and succeeding in certain positions).