
Ifeoma Ajunwa on the limitless boundaries of employee surveillance


As automated and surveillance systems become increasingly dominant in our professional and personal lives, and as those lives become increasingly blurred, hiring and management practices have encountered new challenges in understanding and addressing systems of inequality in organizations. With often unclear ways to protect worker privacy, these new tools are both a resource for and a threat to anti-discrimination efforts in the workplace.

In this episode, our hosts Colleen Ammerman and David Homa talk with Dr. Ifeoma Ajunwa about the legal and ethical implications of workplace surveillance in the age of remote work, wearable tech, and DNA testing. She offers legal and scholarly frameworks for delineating and navigating the ever-changing boundaries of worker surveillance. Ifeoma is a tenured associate law professor at the University of North Carolina School of Law. She is also the founding director of the Artificial Intelligence Decision-Making Research (AI-DR) Program at UNC Law. At the time of this recording, she was an associate professor in the labor relations, law, and history department of Cornell University’s Industrial and Labor Relations School.

Watch the episode with Ifeoma Ajunwa

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): So today we’re speaking with Ifeoma Ajunwa. Dr. Ajunwa is an associate professor at the Industrial and Labor Relations School at Cornell University and also an associate faculty member at Cornell Law School. In addition, she’s a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University and also a faculty affiliate at the Center for Inequality at Cornell University. So welcome, Dr. Ajunwa. We’re very excited to talk to you today.

Ifeoma Ajunwa (tenured associate law professor, UNC Law): Thank you so much. I’m excited to have this conversation.

David Homa (Digital Initiative director): Thanks for joining us. We really appreciate you spending the time. So, we’re going to jump right into an interesting and deep question, looking at employee surveillance. I know this is a topic that you’ve spent quite a bit of time thinking about and researching and is on a lot of people’s minds with so much remote work. One of the things we’re interested in is how surveillance tools affect workers. What are the ways in which gender, race, and other axes of inequality shape those effects?

IA: Well, thanks for that question, David. Surveillance is something that has been around for as long as we’ve had the workplace. Employers do have a vested interest in surveilling workers, particularly in ensuring productivity and also in deterring misconduct. However, the issue arises when you have a workplace where the surveillance becomes intrusive or pervasive. Surveillance also operates along several axes in ways that can be discriminatory or that can be used to single out certain employees for harassment. So we do need to be aware of that. Currently, under American law, there are really no limits on what can be surveilled in the workplace. So, for example, in my law review article, “Limitless Worker Surveillance,” I look at the various types of surveillance currently employed in the workplace, whether it’s surveillance of productivity or surveillance of healthy behaviors through workplace wellness programs. And I find that the law essentially gives the employer carte blanche in terms of how far they can go in surveilling their employees. And while employers might think this is a boon or a benefit, they really do have to be careful in weighing the surveillance choices that they make, to ensure that those choices do not become actionable against them and are not seen as discriminatory or as harassment.

And to that effect, I wanted to bring to mind a case that recently happened in the state of Georgia. This case came to be called the ‘devious defecator’ case, for reasons that will soon become clear. In that case, certain individuals were leaving feces around the workplace, which was a warehouse. And the employer, to determine who was doing it, decided to surveil the workers through DNA testing. Unfortunately, they singled out two employees for this DNA testing, and those two employees happened to be African American. The DNA testing revealed that these employees were not the ones responsible for the acts of vandalism. Those employees subsequently sued their employer, and in a verdict returned against the employer, the judge noted that this could be seen as harassment or discrimination because of the singling out of those two individuals, and that it was also a violation of the Genetic Information Nondiscrimination Act.

This is an interesting case for various reasons. First, you have to ask yourself, why use DNA testing to accomplish this surveillance? Could the employer not have used, perhaps, video cameras, which are actually still perfectly legal? And the second interesting point here is that the Genetic Information Nondiscrimination Act was not really created for the purpose it was put to in this case. It was really created to prevent employers from discriminating against employees in hiring or retention decisions because of their genetic profile. However, with this case, the court has now stretched GINA to be, in some ways, an anti-surveillance law when it comes to scrutiny of an employee’s DNA profile.

DH: Wow, that’s a fascinating intersection of law, technology, and employees’ relationships with their employers. That’s a very unusual situation. It’s a little unusual since companies, I think to some degree, are aware [that] targeting and singling out people is dangerous from a legal perspective. On the flip side, a lot of technology is sort of blanket and you’re casting a wide net that picks up all people and their activities in a broad sense — I wonder, at the other end of the spectrum, what’s happening in that space? You may have [tracking] software on your laptop [while] you sit at home and watch movies, et cetera. Your employer is capturing everything about you and we’re sort of blanket-capturing everything. There are certainly dangers there, right?

IA: Yeah, that’s a great question, because nowadays surveillance is prevalent in the workplace. It is pervasive. It is widespread. It’s not really just a trend. It’s really the standard, right? Any American working today can really expect that they will be surveilled in the workplace. And you might think, well, if everybody is being surveilled, then it’s going to affect everyone equally. But that’s not really the case. Let’s take the employer perspective for a moment. An employer might think, “Well, I’m surveilling all my employees equally. I’m not singling out anyone. Perhaps I’m taking screenshots of their computers and what they’re doing. Perhaps I’m keeping copies of their emails. It’s equal for everyone.” However, this can actually still be a situation of ‘more data, more problems’ for the employer. Because the more data you collect, the more you put yourself at risk of collecting data that is sensitive, or data that is forbidden from being used in employment decisions. This can then open up the employer to suits by an employee who comes from a protected category, right?

“You might think, well, if everybody is being surveilled, then it’s going to affect everyone equally. But that’s not really the case.”

So, for example, perhaps you have an employee who is not out in the workplace, in terms of their sexual orientation. But the information from surveillance actually captures this or shows this. If the employer then subsequently takes employment action against that employee — let’s say they are fired, or let’s say they’re demoted or not promoted. Well, in such a situation, the employee could have reason to say, “I suspect that it was because of my sexual orientation.” And this claim would be bolstered by the fact that the employer does actually have that information. So employers do really have to be cognizant of the issues that come with more data.

CA: Kind of following up on that, it sounds like part of what you’re saying is that some of the threat or risk around these surveillance tools and regimes is not just to the individual employee, in terms of their privacy or their rights being violated. There’s an exposure of the employer to certain kinds of legal risks, right? There are some threats there. And that’s sort of why it’s important for employers and organizations to be thoughtful about it, not just in the service of “doing the right thing” by their employees, but also just being cognizant of exposing themselves to risk.

IA: I really see the risks of surveillance as twofold. There’s certainly the risk to the individual employee: invasions of their privacy; information about them being revealed without their consent; and perhaps that information then being used to treat them differently. You can think of, for example, women with children, who perhaps prefer not to make that known in the workplace. But through surveillance of emails, or even through screenshots of the computer, that becomes known. This could in turn severely impact their promotion chances, or their ascension to leadership positions, correct? But, on the flip side, there is also a risk to the employer from pervasive surveillance, because they now have within their knowledge or within their possession information that points to protected categories. And the mere fact of having that information puts them at greater risk for lawsuits alleging discrimination.

CA: It just makes me wonder about what we’re dealing with right now with remote work during the COVID pandemic. I feel as though I’m reading articles all the time about an increase in these surveillance tools and employers tracking employees. And it’s not quite clear to me what the prevalence of that is and how much that has increased. But I would be really curious to hear your thoughts. Is your sense that there are more intrusive or pervasive tools that are being used? And also, to this point about the risks to employers, what would you advise organizations that are thinking, “Oh, we need to more proactively monitor our employees if they’re working from home”?

IA: Yeah, that’s a great question. I would say that with the COVID-19 crisis, there certainly is an instinct to surveil workers who are working from home. Employers might have some anxiety [about remote work] in terms of maintaining productivity or even just deterring misconduct. We have seen some high-profile cases of misconduct happening with employees working from home. That being said, I think employers really need to be very deliberate and really need to be very conscientious in the surveillance tools that they are choosing and think about whether these are serving the purpose that they want them to serve, or whether those surveillance tools are too invasive and too infringing upon the dignity and privacy of workers. Because there’s another legal [angle] that has been brought on by this COVID-19 crisis. Most employees are now working from home, right? There is a difference between surveilling a worker in the workplace versus surveilling a worker in their actual home. And employers really have to give some thought to that.

DH: I wonder, for the people watching this [interview], how employees should be thinking about this [issue]. Most of what [workers] are doing is over video. I know a lot of cats show up, and also children. Do you have any advice for employees? Or do you have a sense of whether employees realize [they are being tracked], or should [workers] be more aware?

IA: I think it really behooves employees to be very careful when conducting work at home. And I would really urge any employee to treat their work hours as work hours and to be conscientious about the activities they’re doing during those hours. I would say really try to have a dedicated laptop for your work, and obviously don’t do personal activities on that laptop. I would say try, if you can afford it, to have a dedicated space where you work. That is hopefully a place that can be secluded, where you can close the door and it can be quiet and you can sort of shut out distractions. I would just really urge employees to understand that, with the advent of these technologies, anything you do on an employer-provided laptop — if an employer gives you a laptop or any kind of electronic device — the law is that the device still belongs to the employer, so they can surveil anything on it. It is important for employees to realize that when they are using those devices. And it is important for employees to be professional during their work hours and try very hard to keep their personal life separate from their work life.

CA: Shifting gears to thinking in another way about how employers use technology, you’ve studied the use of artificial intelligence hiring tools — screening tools which often are created or implemented with the purpose of reducing bias, right? They’re approached as an intervention to either eliminate or mitigate human bias. I think most people who even casually keep up with news about technology and business know that that’s very much not the case all the time, and often [the tools] just perpetuate bias. I know this is a long-winded sort of introduction, but we’d love to hear you talk a little bit about your work on how these algorithmic hiring tools can perpetuate inequality. Maybe some examples of what you see in that space?

IA: Yeah, that’s a great question. When it comes to automated hiring, I would say that the public impression, and also the ethos behind why employers adopt these tools, is that they’re seen as impartial. They are seen as neutral. They are seen as having less bias than human decision-making. In my paper, “The Paradox of Automation as Anti-Bias Intervention,” I really examine this idea that automated hiring platforms are neutral, or without bias, and can be sort of an intervention to prevent bias from coming into the hiring process. What I find is that this is not actually the reality. And don’t get me wrong — I think automated hiring as a technological tool can be quite useful. But just like any technological tool, automated hiring will perform the way that the people who use it make it perform. The people who use automated hiring are ultimately the people who will dictate the results. And what I mean by that is that there is a false binary between automated decision-making and human decision-making. And that’s because we don’t have the singularity, right? [laughter] We don’t really have machines that are completely thinking on their own. All the algorithms we have right now are created by humans. Yes, we have machine learning algorithms that learn from the initial program and then create new programs. But you still have to understand that there is an initial program there, and the resulting algorithm is then trained on data that a human decision-maker decides should be the training data. And this training data can come with its own historically embedded bias.

And just to give you a real-life example of this, there was a news article about a whistleblower revealing that Amazon had created an automated hiring program, really for the purpose of finding more women for its computer science and engineering positions. And it turned out that that automated hiring program, or platform, was actually biased against women. Amazon subsequently had to scrap that program. And, of course, you know, [they] didn’t really reveal that to the public [before it was reported in the media]. The question then became, how could this be? How could a program that was actually created to help women — that was actually created to ameliorate this bias against women — how could that program then actually go ahead and replicate that bias? That is an important point that I make in my article, “The Paradox of Automation as Anti-Bias Intervention,” which is that automated hiring platforms, if not programmed correctly, if care is not taken, can actually serve to replicate bias. At large scale [they] can also serve to obfuscate — actually serve to hide that this bias is happening.
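[Editor’s note: To make that mechanism concrete, here is a toy sketch in Python (not Amazon’s actual system, just a minimal illustration using invented data) of how a scoring model trained on historically biased hiring decisions can end up penalizing words associated with women, such as “women’s,” regardless of a candidate’s qualifications.]

```python
# Toy illustration (hypothetical data, not any real vendor's model):
# a model "learns" word weights from past hiring decisions. Because
# resumes containing "women's" were mostly rejected in the history,
# that word acquires a negative weight, and new resumes mentioning it
# are penalized regardless of merit.

from collections import defaultdict

# Hypothetical historical records: (resume text, was the candidate hired?)
history = [
    ("software engineer java leadership", True),
    ("software engineer python robotics", True),
    ("software engineer java women's coding club", False),
    ("software engineer python women's chess team", False),
]

def learn_word_weights(history):
    """Weight each word by how much its presence shifts the historical hire rate."""
    overall = sum(hired for _, hired in history) / len(history)
    counts, hires = defaultdict(int), defaultdict(int)
    for text, hired in history:
        for word in set(text.split()):
            counts[word] += 1
            hires[word] += hired
    return {w: hires[w] / counts[w] - overall for w in counts}

def score(resume, weights):
    """Score a new resume by summing the learned weights of its words."""
    return sum(weights.get(word, 0.0) for word in resume.split())

weights = learn_word_weights(history)
print(score("software engineer java leadership", weights))           # higher score
print(score("software engineer java women's coding club", weights))  # penalized
```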

So it’s not enough for an employer to say, “I want a more diverse workplace” or “I am going to use automated hiring and therefore eliminate human bias.” The employer actually should do audits of the results coming out of this automated hiring, because those audits are what will tell [you] if it has an issue. I advocate in my forthcoming paper, “Automated Employment Decision, Automated Employment Discrimination,” that there should be an auditing imperative for automated hiring systems. Because why should we have automated hiring systems, some of which use machine learning, and just [expect] to get a good result without actually checking for it? So I argue that the federal government should actually mandate that automated hiring platforms be designed in such a way as to allow easy audits. The design features can incorporate elements that would allow audits to be run in, like, one hour or less, because these are computerized systems. It wouldn’t really be a big burden on the employer, then.
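[Editor’s note: As an illustration of how lightweight such an audit could be, here is a minimal sketch, not drawn from the paper and using hypothetical records and group labels, that compares selection rates across applicant groups and flags disparities using the EEOC’s conventional “four-fifths” rule of thumb.]

```python
# Minimal audit sketch (hypothetical data): compare hire rates by group
# and flag any group whose selection rate falls below 80% of the
# highest group's rate. A flag is a signal to investigate, not a
# legal conclusion.

from collections import defaultdict

def selection_rates(records):
    """Compute the hire rate for each group from (group, hired) records."""
    applied, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        applied[group] += 1
        if was_hired:
            hired[group] += 1
    return {g: hired[g] / applied[g] for g in applied}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical retained records from an automated hiring platform.
records = [("A", True), ("A", False), ("A", True), ("A", True),
           ("B", False), ("B", False), ("B", True), ("B", False)]

rates = selection_rates(records)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "below the 0.8 threshold, investigate" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```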

And I want to add one other thing to that end. Some employers take this tack of “look for no evil, see no evil, hear no evil.” They don’t want to do the audits because they’re afraid of finding discrimination and then actually hav[ing] to do something about it. That’s not actually a good tack to take in this day and age. Why? Because a recent [court] decision has now allowed academic researchers to audit these systems. So whether the employer wants it or not, an academic researcher could come along and audit the system. And guess what? Now they’re caught unawares. So it is actually better for the employer to take on the responsibility of auditing the system regularly, checking for bias, and then also correcting for that bias.

CA: I found what you said about how we set up this false binary between human and machine decision making really useful, because in [the] general diversity, equity, inclusion field, there’s a lot of discussion about how it’s very hard for people to unlearn bias. So we need to focus on processes and systems. I think there’s a lot of merit to that. But I get uncomfortable sometimes with this pivot to [the idea that] if we just have the right technologies and tools, then that’s the solution. I think what you said is a very helpful way for me to think [about it]. I’ll be relying on [it] to articulate that concern. Human and machine decision making are not these two independent things. So I just want to thank you for that.

IA: It’s still quite shocking to me. Even other scholars have that idea [of], “Oh, we should just give it all to the machines. You know, humans are just so full of unconscious bias that we can’t debug them. We can only debug the machines.” But I’m like, “Well, who’s creating the machine?”

CA: Exactly. But I think there’s a strong trend for that. Especially in kind of behavioral-science-driven approaches to discrimination in the workplace.

DH: Those are really good examples. You’re starting to share examples of how technology can be harnessed to actually reduce bias. Are there other ways — or have [you] come across [areas] where we can actually leverage technology to fight bias?

IA: You know, I think a lot of times the perception is that people like me are Cassandras, because we are always predicting doom and gloom when you use technology. Many people see technology as a panacea: there’s this brand-new shiny tool, and they want to just be able to use it and not really have to worry about consequences. I don’t think I’m a technology pessimist, but I am also not necessarily a wide-eyed technology optimist. I think I’m somewhere in between. Which is [to say], technological tools are just that, tools. The results from them will depend on how you use them.

I think technology can be a boon for employers who are trying to do the right thing and diversify their workplaces. I think technology could also be a boon for employees who are trying to get a foothold in the workplace, trying to find employment. But I do think for that to happen, we need regulation of technologies. Technology makers can’t really just be allowed to take a laissez-faire approach to the development of automated decision-making technologies. We need strong regulation to make sure that they are serving the good of society.

“Technology makers can’t really just be allowed to take a laissez-faire approach to the development of automated decision-making technologies. We need strong regulation to make sure that they are serving the good of society.”

In automated hiring, specifically, I think the proper regulations could actually be a boon to anti-discrimination efforts. Because, for one, if you have a data retention mandate and a record-keeping design, then through automated hiring you could actually see exactly who is applying and exactly who is getting the job. There could then be very accurate records of the picture of [your] employment decision-making, such that you could see if there is bias. You could then see if there is employment discrimination. And I think, frankly, the first step to fixing the problem is seeing the problem. I think with traditional hiring, a lot of times the problem is quite hidden. It’s not as easy to see the bias. It’s not as easy to see the discrimination. Whereas with automated hiring, it could actually become easier to see all of that.

DH: You know, it’s a good point. With automated hiring systems and the appropriate audit tools, you could actually see the scoring of factors like you mentioned [with Amazon], where maybe [there’s bias] against women’s universities or certain parts of higher ed. Whereas with hiring managers, that’s hidden away in someone’s head, and they may not even know why they’re making that decision. That’s a great point.

IA: Exactly. As we say in the field, the worst black box is the human mind. That’s uncrackable to some extent.

DH: So maybe we could talk a little bit about wearable tech and the implications for employees and employers. I know in some of your writings, some of your research, you’ve [discussed] examples that affect people of different genders differently. Some of this technology is getting quite invasive. What can you share about this topic?

IA: Yeah, that’s a great question. I think we’ve had so many technological advances in the past, I would say, few decades. And one of the biggest ones is really this rise of wearable tech, because as computer systems become smaller and smaller, we’re more able to embed tech in so many different things. And wearable tech is definitely becoming even more than a trend now. It’s become really a fixture of the workplace. And when I speak about wearable tech, probably the first one that comes to mind for most people is the Fitbit that you’re wearing on your wrist. There are also rings that do similar things to the Fitbit, like track your heart rate, pace, et cetera. But there’s actually a plethora of types of wearable tech. What I am seeing, though, is that these wearable technologies are also raising several legal questions. The first one is really related to data ownership and control. These wearable technologies are collecting so much data from employees, and there’s a question of, well, who owns the data? The device belongs to the employer, but the data is being generated by the employee. So should the employee own the data? Even if the employer owns the data, who has access to it? Should the employee have access to the data to actually review it and make sure it’s accurate? And have some say over how that data is used?

I wrote an article for Harvard Business Review where I noted that currently all the data that’s being collected as part of workplace wellness programs through wearable tech can actually be sold without the knowledge or consent of the worker — and has been, both now and in the past. So, should that be legal? Should employees have a say in how their data is exported and exploited? When it comes to a workplace wellness program, you have wearable tech like the Fitbit, but you also have other apps that workers are being asked to download on their phones to track their health habits. And unfortunately, some of those apps have actually been found to be doing things that could be used for discriminatory purposes.

“[A]ll the data that’s being collected as part of workplace wellness programs through wearable tech can actually be sold without the knowledge or consent of the worker.”

So there was an article in The New York Times where Castlight, a workplace wellness vendor, had requested that employees download an app to track their prescription medicine usage. And they were using this information essentially to figure out when a woman was about to get pregnant. Certain prescriptions are contraindicated when somebody is either pregnant or about to get pregnant. So women [employees] would stop taking those prescriptions, and Castlight was using that to predict when a woman was about to get pregnant. This was especially concerning because, although we have the Pregnancy Discrimination Act, which forbids employers from discriminating against women who are pregnant — notice the act does not forbid employers from discriminating against women who are about to get pregnant. So essentially, this was a tool that could allow employers to discriminate against women who were about to get pregnant without legal recourse. It is concerning when wearable tech is used for those purposes.

DH: Thank you for that. I can see from Colleen’s face she has given up on all of humanity, especially technology. I know some of your work has certainly looked at surveillance. And I know you have other scholars you either collaborate with or respect in the field. Tell us about some of that.

IA: Right. So I definitely want to mention the work of Ethan Bernstein here. He is a Harvard Business School professor who has done empirical work looking at surveillance in the workplace. He’s looked at surveillance in factories in China and other places. And I want to highlight one important finding of his, which I think is something that employers need to keep in mind. In one of his papers, he noted that when workers were overly surveilled, it actually backfired. It actually had the opposite effect from what employers wanted. He found that in one specific factory, when workers felt that they were being overly surveilled, they did work exactly how they were expected to, but they didn’t actually take initiative. They didn’t actually get creative in terms of getting things done in ways that were faster and more productive. I think employers really need to think about the fact that organizational theory has recognized something called practical drift, which is that in any given job, there’s sort of a standard way of getting it done, right? And the standard way has been thought of by management, right? But the people on the ground, the people who are doing the actual work, sometimes quickly figure out that, “Yes, the standard way is okay, but there are actually better or quicker or more efficient ways to get the stuff done.” And so they drift away from the standard way of doing things. This is called practical drift. But when you have over-surveillance, you’re not allowing for this practical drift from workers. You’re basically cutting off your nose to spite your face, as they say, right? You’re actually hamstringing your employees from being able to be as efficient as possible.

CA: We often end these conversations by asking the person to recommend a resource or a takeaway for people who care about these topics. I want to do that slightly differently, since you have this forthcoming book, which certainly is going to be a resource for people who care about these issues. It’s coming out, I believe, in 2021 — I’m sure you’re in the homestretch with writing and editing and all of that! I would just love for you to talk a bit about the focus of your book, whom you hope will read it, and what impact you’re hoping to have with the book.

IA: My book, The Quantified Worker, is really a historical and legal review of all the technology that is now remaking the workplace. The focus is on technologies, really examining how those technologies are changing the employer-employee relationship and whether we can ensure, through new legal regimes [and] new regulations, that those technologies don’t change the workplace for the worse but can actually change it for the better.

My hope is that my book will be read not just by business leaders or HR people, but also by employees [and] definitely by lawmakers, to get an in-depth look at what these technologies are doing in the workplace. Because I think a lot of times we hear about these technologies, but without having experienced them firsthand, we’re not really aware of the impact that they’re having on the individual worker. We’re not aware of the impact that they are having on society. So my book will include historical accounts of the evolution of these technologies, [so that we can] understand where they came from and therefore the sort of ethos behind them. [I] also include some interviews of people who have encountered these technologies, about their experience with them. And then finally, I have proposals for legal changes, new laws for how to better incorporate these tools in the workplace. I’m not a Luddite. I think these technologies are definitely here to stay. But it is about making sure that they are operating in a way that respects human autonomy, and in a way that respects our societal ideals of equal opportunity for everyone and inclusion of everyone, regardless of disability, race, gender, or sexual orientation. So that’s really what I hope to do with the book.

DH: That’s a wrap on the interview, but the conversation continues.

CA: Yes. Thank you so much, Dr. Ajunwa. This has been a really fascinating conversation.

And we want to hear from all of you watching. So please send your comments, suggestions, ideas, and questions to justdigital@hbs.edu.
