Exploring the Intersection of AI and Civil Rights
Increasing numbers of employers are using artificial intelligence (AI) and other forms of automation in hiring and recruitment, making it critical to determine whether that use is leading to any form of discrimination, said Charlotte A. Burrows, chair of the U.S. Equal Employment Opportunity Commission (EEOC).
“This is really a new civil rights frontier,” she said last week during a Brookings webinar, “AI in employment and hiring.”
At issue, she said, is that there is a disconnect between employers and technology developers.
Learning the Signs
Understanding how AI works requires a certain level of knowledge, something most employers don’t have. “We have to make sure those things that as a democratic society we think of as important are not undermined inadvertently or changed in ways we don’t even know,” she said, referring to civil rights, among other issues. “The good news is we have pretty good laws to deal with a lot of the issues we’re talking about.”
The challenge, she said, is getting the two sides — employment/civil rights experts and technology experts — on the same plane of understanding. Employers tend to have a lot of expertise regarding discrimination laws, while those developing AI and automation are versed in technology. “They are two different worlds … It’s almost like you need a translator,” Burrows said.
Additionally, she said, “the world of those who develop these new technologies is not very diverse” when compared to the population in general. It’s important to make sure those who are developing such technologies, “who, in good faith, are trying to make the world a better place, [understand] what their civil rights obligations are,” she said.
It’s equally important that civil rights experts know about the technology, she said: “A lot is going on in both spaces, and they have to meet up in order to protect and best serve people. … There’s not a natural bridge.”
To facilitate communication and connection, the EEOC has been working to train its employees in what to look for, Burrows said.
“For instance,” she said, “some companies decide to send an ad proactively to individuals based on an algorithm about what they believe that individual’s characteristics are.” They can’t do so, she said, in a way that has a disparate impact based on race, gender, LGBTQ+ status or other protected characteristics. “Finding that is trickier,” she said. “So, one of the things we’re thinking in training our investigators around is understanding what kinds of questions to ask people.”
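One common first screen for the kind of disparate impact Burrows describes is the “four-fifths rule” long used in employment-selection analysis: compare each group’s selection rate to the highest group’s rate, and treat a ratio below 0.8 as a red flag warranting closer review. A minimal sketch, with entirely hypothetical group names and counts (this is an illustration, not EEOC guidance or any investigator’s actual tooling):

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, applied)} -> {group: selection rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def flag_disparate_impact(outcomes, threshold=0.8):
    """Flag groups whose impact ratio falls below the four-fifths threshold."""
    return {g: ratio < threshold for g, ratio in impact_ratios(outcomes).items()}

# Hypothetical applicant pool screened by an automated tool:
outcomes = {"group_a": (60, 100), "group_b": (30, 100)}
print(flag_disparate_impact(outcomes))
# group_b's rate (0.30) is half of group_a's (0.60), so it is flagged
```

A flag here is not proof of discrimination, only a statistical signal that the screening step deserves the kind of follow-up questions Burrows says investigators are being trained to ask.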
For example, AI and automation can be used for pre-screening, and applicants who were not selected to move to the next round of the interview process have no idea technology played a part in the decision. But there are tell-tale signs: “If you got a rejection at 2 a.m. or five minutes after you applied, probably that was not a human being doing this,” Burrows said.
Rethinking Algorithms
The EEOC has identified two emerging types of issues, she said. One is that the algorithm being used in screening and hiring could be based on training data that is insufficiently diverse, thereby amplifying the lack of diversity. The other is that a company’s employee monitoring or surveillance tools may not account for workers’ disabilities or special needs.
The latter set of issues is much easier to find, Burrows said. For example, the algorithm for a surveillance tool may dictate the acceptable length of time to deliver a package and return, or to pack a box. It may not allow an employee with a disability to do the task differently, or allow an extra bathroom break for women who are pregnant.
“Maybe you can make it up in other ways, but if that bot is too rigid, you have a problem,” she said. Employers using AI and automation must reach out to the provider to resolve this issue, she said.
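The rigidity problem can be sketched in a few lines. Assume a hypothetical monitoring rule with a hard-coded time limit (the function names and numbers here are illustrative, not drawn from any real product): the rigid version flags every slower worker, while a version that accepts a per-worker adjustment — e.g., an approved ADA accommodation — does not.

```python
def rigid_check(task_minutes, limit=10):
    """A one-size-fits-all rule: any task over the limit is flagged."""
    return task_minutes <= limit

def accommodating_check(task_minutes, limit=10, extra_minutes=0):
    """extra_minutes would come from an approved accommodation (e.g., ADA)."""
    return task_minutes <= limit + extra_minutes

print(rigid_check(12))                           # False: worker is flagged
print(accommodating_check(12, extra_minutes=5))  # True: accommodation applied
```

The fix is not in the employer’s hands alone: as Burrows notes, the threshold lives in the provider’s tool, which is why employers must go back to the vendor to build in that flexibility.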
Employers whose screening and hiring practices, including those with AI and automation components, discriminate or don’t have accommodations for applicants who need them could be violating laws like the Americans with Disabilities Act — and might not be getting the best applicants for the job.
Not only is the use of AI and automation in hiring a new civil rights frontier, Burrows said; the technologies themselves are also a new frontier that employers and employees must explore, adjust to and react to, while determining their impacts and implications for civil rights and other areas.
“We have to figure out how to make this work for us,” she said. “If people are not comfortable raising their hands in their organizations to say, ‘I see a problem with this [being discriminatory],’ we’re going to end up in a place we don’t want to be.”