Algorithmic Bias in Hiring

The introduction of a virtual interview with predetermined questions as a preliminary part of the hiring process has yielded a variety of benefits: candidates can rerecord answers to calm their nerves, recruiters can review responses more easily, and the time and cost of traveling to in-person first-round interviews no longer pose a barrier.

However, when technology is used not just to facilitate this process, but also to score candidates based on their facial expressions, speech patterns, and other mannerisms, it becomes dangerous. While incorporating algorithms into hiring decisions can result in significant savings for a firm, doing so also risks exacerbating existing inequities.

Biases are a part of human nature, which is precisely why we must be vigilant in rooting them out of the technology we rely on to help us make decisions, before they seep in undetected.

An optimistic, heedless embrace

Algorithms are now being used to make decisions with long-term impacts on human lives, such as hiring, welfare, and healthcare. Although they offer a consistent, systematic approach to decision-making, they can also “replicate institutional and historical biases, amplifying disadvantages lurking in data points.” The underlying risks, and the steps needed to mitigate them, are often overlooked in favour of the rapid processing speed and cutting-edge appearance of hiring algorithms. One way a disadvantage can be amplified is when algorithms are trained on datasets in which applicants from certain identity groups are underrepresented. MIT Associate Professor Danielle Li remarks, “[s]tatic supervised learning approaches may push firms to replicate what has been successful in the past, and that may reduce opportunities for people with non-traditional backgrounds.” The premature embrace of these algorithms has real consequences, particularly for marginalized communities. Stanford’s Dr. Danton Char says, “I think society has become very breathless in looking for quick answers … we need to be more thoughtful in implementing machine learning.”
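
To make Li’s point concrete, here is a minimal, hypothetical sketch of that failure mode. The data is entirely synthetic and models no real system: a classifier trained on historical hiring outcomes scores two equally skilled candidates differently, purely because one belongs to a group that was underrepresented and under-hired in the past.

```python
# Hypothetical sketch: static supervised learning replicating past bias.
# All data is synthetic; no real hiring system is modelled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic history: group B is 20% of applicants and, for identical
# skill scores, was historically hired less often.
group = rng.choice([0, 1], size=n, p=[0.8, 0.2])      # 0 = A, 1 = B
skill = rng.normal(0, 1, size=n)
hire_prob = 1 / (1 + np.exp(-(skill - 1.0 * group)))  # bias baked into labels
hired = rng.random(n) < hire_prob

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates, differing only in group membership:
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-B candidate is scored lower despite identical skill, because
# the model faithfully reproduces what "was successful in the past".
```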

The black box of algorithmic hiring

The increasing complexity of algorithms can obscure the operations at their core and the resulting impacts. In fact, scholars including Microsoft Research’s Solon Barocas have found that “companies tend to favour obscurity over transparency in this emerging field, where lack of consensus on fundamental points – formal definitions of ‘bias’ and ‘fairness,’ for starters – have enabled tech companies to define and address algorithmic bias on their own terms.” The idea that these companies can be trusted to define and address bias reliably on their own is far-fetched given their track record. For outsiders especially, “little is known about the construction, validation, and use of these novel algorithmic screening tools, in part because these algorithms (and the datasets used to build them) are typically proprietary and contain private, sensitive employee data.” This opacity gives companies cover to generate alternative explanations for evidence of bias in their hiring practices.

False sense of objectivity

Algorithms are “overhyped” and often seen as inherently superior to human decision-making, a dangerous assumption that can cause people to overlook their shortcomings. Emotion and AI researcher Luke Stark notes that “[s]ystems like HireVue … have become quite skilled at spitting out data points that seem convincing, even when they’re not backed by science … this ‘charisma of numbers’ [is] really troubling because of the overconfidence employers might lend them while seeking to decide the path of applicants’ careers.” This brings us back to the issues associated with hasty adoption, as journalist Matt O’Brien points out: “Algorithms tasked to learn who’s the best fit for a job can entrench bias if they’re taking cues from industries where racial and gender disparities are already prevalent.”

This is further echoed by a study from UC Berkeley and Chicago Booth, which indicates that “[f]rom predicting who will be a repeat offender to who’s the best candidate for a job, computer algorithms are now making complex decisions in lieu of humans. But increasingly, many of these algorithms are being found to replicate the same racial, socioeconomic, or gender-based biases they were built to overcome.” We need not look far to find a disturbing example: Amazon’s recruiting tool was biased against women. Because it was trained on resumes submitted to Amazon over a ten-year period, most of which came from men, the system “taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s’ as in ‘women’s chess club captain’. And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.”
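
Amazon’s actual code and data are not public, but the reported mechanism is easy to reproduce in miniature. The toy sketch below, with invented resumes and labels, shows how a naive text classifier trained on historically skewed decisions assigns a negative weight to the token “women” and thereby downgrades any resume containing it.

```python
# Hypothetical toy reconstruction of the reported mechanism: if past
# "advanced" resumes rarely contain a token, a naive text model learns
# to penalize that token. All resumes and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain software engineer",
    "software engineer hackathon winner",
    "women's chess club captain software engineer",
    "women's coding society lead software engineer",
]
# Historical labels skewed by past human decisions, not by merit:
advanced = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, advanced)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
# The token "women" receives a negative weight, so any resume that
# contains it is scored down, mirroring the reported behaviour.
```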

Moving forward

How can these issues be addressed? Twitter recently introduced the first algorithmic bias bounty competition, which emulated bug bounty programmes by drawing on outsiders to report flaws in exchange for monetary rewards. While the competition was limited in scope to Twitter’s image-cropping algorithm, it revealed a rather pertinent concern in relation to bias in hiring: the algorithm favoured “people that appear slim, young, of light or warm skin colour and smooth skin texture, and with stereotypically feminine facial traits … this bias could result in exclusion of minoritized populations and perpetuation of stereotypical beauty standards.”

There is a growing need for algorithmic auditing, and for these automated decision-making processes to be held to the same, or higher, standards of accountability that we have in place for ourselves. As two-time Nobel Laureate Linus Carl Pauling stated, “it is sometimes said that science has nothing to do with morality. This is wrong. Science is the search for truth, the effort to understand the world; it involves the rejection of bias, of dogma, of revelation, but not the rejection of morality.” In that spirit, we must leverage technology to reject bias, but not morality, from decisions with such an outsized impact on people’s livelihoods.
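
Auditing can start simply. One widely used check is the “four-fifths rule” from US EEOC guidance, which flags any group whose selection rate falls below 80% of the highest group’s. The sketch below implements that check over hypothetical screening outcomes; the data and group labels are assumptions, not drawn from any real system.

```python
# Minimal sketch of a four-fifths-rule audit over screening decisions.
# Outcomes and group labels are hypothetical.
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group, selected: bool) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes for two groups:
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 30 + [("B", False)] * 70)
ratios = adverse_impact_ratios(outcomes)
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)   # {'A': 1.0, 'B': 0.5}
print(flagged)  # group B falls below the four-fifths threshold
```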

Above all, however, we would do well to remember technology’s limitations. No amount of bounty hunting or auditing can change the fact that attempting to use predictive systems to neatly categorize an inherently messy world and its occupants is inaccurate and reductive. While these systems may not be irrational in the same way a human recruiter can be, they are flawed in myriad other ways.
