AI and Its Impact on Civil Liberties
Presented by: Amada Echeverría, Esha Bhandari
Originally aired on April 12, 2021 @ 4:00 PM - 4:30 PM EDT
Join Cloudflare’s Amada Echeverría in conversation with Esha Bhandari, Deputy Director of the ACLU Speech, Privacy, and Technology Project as they discuss artificial intelligence and its impact on civil liberties.
Guest: Esha Bhandari, Deputy Director, ACLU Speech, Privacy, and Technology Project
English
Interviews
AI
Transcript (Beta)
Hi and thanks for joining us on Cloudflare TV. My name is Amada Echeverría and I'm on the field marketing and events team here at Cloudflare.
I'm here with Esha Bhandari, Deputy Director of the ACLU Speech, Privacy, and Technology Project, and we'll be discussing artificial intelligence and its impact on civil liberties today.
Esha, thank you so much for joining us on Cloudflare TV today. Thank you for having me.
So before we dive in further, a note for our viewers, we hope you'll join in by sending us your comments and questions or emailing us at livestudio at Cloudflare.tv.
You can find the banner right below this video. So Esha, where are you dialing in from?
I'm calling in from Brooklyn, New York. Great, great.
New York as well. So I wanted to dig into this topic because it's a very, very important one, obviously, and it's something that I think we all need to understand better.
I think most people don't understand it that well. And obviously there are governments and certain institutions essentially monitoring us at all times with automated tools.
So it's important to get educated and make sure AI isn't influencing our lives in ways that we don't even understand.
So let me just introduce you. You're the Deputy Director of the ACLU Speech, Privacy, and Technology Project, where you work on litigation and advocacy to protect freedom of expression and privacy rights in the digital age.
You also focus on the impact of big data and artificial intelligence on civil liberties, and you've litigated cases including Sandvig v. Barr, a First Amendment challenge to the Computer Fraud and Abuse Act on behalf of researchers who test for housing and employment discrimination online, and Alasaad v. Wolf, a challenge to suspicionless electronic device searches at the U.S. border.
So amazing. So let's dive in. First of all, what problem is the ACLU Speech, Privacy, and Technology Project looking to solve?
Well, I think what we're trying to do with respect to technology, just at the most big picture level, is make sure that technology and developments in technology don't take away our liberty and our rights.
That to the extent we have these innovations, they enhance our liberty.
They improve our ability to live out our values: equality, freedom, privacy.
But of course, one of the big concerns is that, as we've seen so far, oftentimes technology is a force for diminishing those rights.
So we want to really make sure that we hold the line against these increasing encroachments and generally preserve liberty, freedom, and equality.
Great. And you also work on litigation and advocacy to protect privacy rights in the digital age.
So can you speak on your work to protect privacy rights? So what does that look like?
So when we talk about preserving privacy rights in the digital age, that includes a host of things.
One of the big ones, of course, is surveillance.
Technology has enabled far greater surveillance techniques than we ever knew before.
So whether that's Internet surveillance, whether that's facial recognition technology, whether that's automated license plate readers that can track us as we drive, we want to make sure that the degree of privacy that we'd expect in a free society is maintained.
So we use a mix of tools, public education, advocacy with lawmakers and the public, and also litigation to make sure that these surveillance tools don't end up encroaching on those privacy rights that we expect that allow us to still live in a free society and not be under constant surveillance.
Great. And when we talk about AI and its impact on civil liberties, what are the primary issues that you tackle within this space?
AI is interesting because it doesn't only implicate privacy; there are all kinds of aspects of AI that involve data collection, including intimate private data that is collected from people, often without their consent.
But there are other aspects to the transformative power of AI that implicate civil rights and civil liberties.
Two of them are equality, or what I'd call anti-discrimination principles, and then the due process that we expect.
So just to dive into that a little bit more, we know that AI can have very useful applications.
It can make certain processes in society more efficient.
It can allow us to do more things.
But depending on the context in which AI is used, and of course, depending on the AI itself, it can have really negative consequences for equality.
It can exacerbate existing discrimination in society, or it can actually come up with new forms of discrimination that we've just never had before because we didn't have these AI systems.
So the equality principle or the anti-discrimination principle is a big focus of our work when it comes to AI.
And then the second, when I talk about due process, what I mean with that is, we have certain understandings about consequential decisions that are made, whether someone is going to remain locked up, whether someone is going to be deported, whether someone is going to get a government benefit or have their child taken away from them.
And we have certain systems in place, and there are certain processes that exist.
And of course, in an ideal world, we have what we call due process, which is people have a right to make their argument and their case for these consequential decisions to go their way.
When you have automated systems taking over those decisions, what does that mean for us as a society if you no longer have a human being to make those arguments to, or if a human being is just ratifying the decisions of a computer?
And how does that affect us, how we feel?
Because I think there's a lot of open questions.
Some people might say, well, actually, the robots can make less biased decisions, right?
The AI will take away some of the bias in these human decisions that we've all been concerned about.
But I think that there's also a fundamental underlying question when there's no longer a human being to appeal to, when it's not your peers or your society writ large making these judgments, but it's an unaccountable computer.
So those are questions we also grapple with. And depending on the context in which AI is used, they can definitely be really important ones.
Great. Yeah, people always point to historical data and the problems with that.
And they feel like maybe if we can fix the data training these models, for example, then that will solve the problem.
But like you said, I think taking out the human element can be really harmful.
And it's not just a matter of data; it goes so far beyond that.
And here's an interesting audience question that I'll just bump up because I really liked it.
What is the ability, as you see it, of our current legal system to keep up with the speed of technology?
Can judges make laws quickly enough to protect our rights as technology develops?
I think it's a real challenge for the judiciary to keep up with the speed of technology.
And part of the problem there is that technology develops very rapidly as we know, and it feels like things are always changing, especially now we're used to new innovations coming at lightning speed.
And litigation is a slow process.
It's not only slow in terms of the nuts and bolts of cases, where going from filing to getting a decision can often take years.
It's also slow in that case law often is built incrementally and it takes time.
It takes time for judges to be educated on new technologies.
It takes time for that incremental process to build up when sometimes what you really need is a court sort of being the first to decide a question because it's the first time anyone's ever seen a technology or an application.
So I think the courts play a really important role.
I don't want to minimize that at all because in fact, we have so much litigation that really does depend on the courts holding the line on certain constitutional rights and other legal rights that people have, but it can't be the only tool because it's just too slow a process to keep up with the pace of technology.
And that's why really we need a robust legal framework and regulation.
I think one of the problems we've seen is the way it's worked so far with a lot of new technologies, particularly technology used by government or law enforcement: the approach is to adopt the technology first and deal with the consequences or the legality of it later. That's instead of a framework that might say, actually, we presume that a new technology or application that affects people's rights can't be used until you've gotten the go-ahead from a court or from lawmakers passing a law.
But in fact, the default often is that you use it first, and then we figure out the legality later through potentially months- or years-long litigation.
And that I think has had a harmful effect because in the meantime, people's rights are implicated and you tend to entrench technologies before courts or Congress or lawmakers have even had a chance to consider them.
Hmm. Yeah, that definitely sounds dangerous and sounds like we've got it backwards.
Yeah.
So I guess what concerns you most with respect to this topic? And conversely, what are you most hopeful for?
Well, I think the last question gets at part of what concerns me the most, which is just the pace of development.
And we in civil society and policymakers and government actors are just catching up in some ways to the development of technology.
You have the private sector moving so quickly, there's new tools being developed and sold every day.
And so sometimes it can feel like we're playing catch up, right, where we're trying to claw back things that have been developed and built.
And then there tends to be resistance among the people who've already invested time and money in building something, or among agencies that are using a tool. If we could live in a world in which you didn't adopt, develop, or sell these things without front-end consideration of community involvement, community desire for these tools, and the legal, ethical, and moral implications, then we wouldn't always have to be working after the fact, after people have started using something and are more invested in it and maybe more resistant to stopping its use.
So I think that's my biggest concern is just making sure civil society is empowered enough that we have the expertise on our side to keep up with what's going on, that we aren't taken by surprise by new technologies that have been in development for years and appear out of nowhere to us.
But of course, the people developing them have all the knowledge about what's going on.
So the speed issue and the information asymmetry issue, I think, is something we need to really work on.
And I think we have technologists at the ACLU who come from technical and scientific backgrounds to support the work of the lawyers.
And I know that partner organizations do the same.
And I think that's a nice and promising development. And hopefully more people in technical fields will see the human rights community, or civil society, as a place they can take their skills and talent, to help correct that information asymmetry and make sure that we have the expertise to do this work properly.
So what are you most hopeful for?
Is it that we'll be able to resolve this as you've mentioned?
I think that there's been a pretty big change in the last decade with respect to people's attitudes towards privacy.
And particularly, I would say, people of our generation, I think, are maybe more aware or conscious of the issues than we might have expected.
There's certainly a lot of tools and apps and technology that we all use because they're very convenient.
But at the same time, I think the host of revelations that have come out in the media in recent years just about tracking, about the amount of data that we give up unwillingly, I think have led to a pushback.
So that gives me hope that there would be widespread public support for limiting the use of a lot of the most invasive technologies out there, that public education has come a long way, and that people do care about this.
We know now it's not fair to say, as I think people said years ago, that young people don't care about privacy.
I think that's not true. So that gives me a lot of hope. Right.
Yeah, I think we do care a lot about privacy, but there's still a disconnect between that stance and our actions.
And I don't even think I'm fully aware of all the ways in which I reveal information about myself every moment.
And I make an effort to limit the number of devices or what I call commercial wiretaps, those little IoT devices in my house.
But obviously, I don't think I'm doing enough to protect my privacy.
So I guess, what call to action do you have for our generation or for anyone to do more?
What can we do to preserve our privacy more, in your opinion?
And should we care more about our privacy? And what can we do to protect it?
I think that there are two approaches to take on this. One is sort of the individual level: what practices or habits do we adopt for ourselves?
And I think that's a really important thing for everyone to do, right?
Be aware of what tools you use, be aware of what the implications are, opt out where you can or where it's important to you.
At the same time, I think that it can be overwhelming if we think of this as an individual level problem.
Because we do need to use certain tools to survive, right?
It's not realistic for a lot of people to live without a smartphone given the realities of their job, for example.
A lot of people need to use various tools.
And some of them we just have no control over, right? Like automated license plate readers.
At this point, if you go through the Northeast on a lot of highways, you can't even pay tolls by cash if you wanted to.
It's just going to take a photo of your car or read your E-ZPass or something, and it's going to be entered in the database.
So individual choice doesn't enter that realm unless you want to say, I'm never going to drive or I'm never going to travel.
So I think that sometimes it's almost too easy to say, this is about individual level choices when in fact, it's about society-wide choices.
And so what my advice would be is educate yourself, be aware of what practices you find most harmful, whether they're government or industry practices, and then advocate for policies, policy changes, or laws that would address that on a systemic level.
Because it is hard and in some cases impossible for us individually to opt out of sharing our data, if we're going to live in modern society and participate in the economy, and we shouldn't have to.
So what really is needed is sort of support for broader changes.
And that's something I think that people can put their time and energy into just as lots of people have particular causes close to their heart.
This is one you can advocate with elected officials on; you can vote on the basis of policymakers' stances on privacy issues that matter to you.
And I think that can be an empowering way to feel like you're making a difference.
That's great. And yeah, absolutely agree. And that's something that I've been thinking about.
We definitely can't put the burden on the individual.
That's too easy and definitely doesn't look at the whole picture. But I guess, how do we determine when it is appropriate to rely on artificial intelligence to make a decision?
I would say that there's a few factors. And as we think about this problem at the ACLU and our partner organizations, one of the big things that always comes into play is, is this a decision that involves someone's individual rights?
So again, are we talking about a benefit for someone? Or are we talking about a punishment for someone?
Because that's where we need to give the highest level of scrutiny to the tool.
And that's different, I think, than if you're talking about artificial intelligence that may not implicate rights so directly.
So for example, a tool that serves as an aid to humans in particular contexts, whether it's certain physical labor, whether it's certain medical applications where the AI tool isn't directly making decisions about people's care, but is assisting the surgeon or is assisting the medical practitioners in certain things, making the work more efficient or more accurate.
Those are the kinds of areas where I think you can see the potential for AI to do really good things.
There are certain applications where AI can really help make processes more accessible.
For example, as live captioning gets better, that can have really wonderful applications to help people with disabilities participate in more events.
We can see these applications that could be good, but we really need to be very cautious when we're talking about decisions that affect people's rights, whether it's positive because of benefits or punishment.
And we need to give scrutiny there.
Yeah, absolutely. Yeah. I've read of instances of people being arrested because they're misidentified with facial recognition technologies and things like that.
And that's pretty outrageous. So great. Another example I'll give you is hiring tools.
So hiring is such a consequential decision in people's lives.
And if you have artificial intelligence, screening resumes, or even interviewing people and making decisions about who gets put on a shortlist, that's a big consequential decision to have an AI system make.
So that's another one, in addition to, as you said, the decision about who to arrest.
Yeah. I definitely see the potential for, I guess, leaving out groups of people because they're women, or based on their race or age.
So definitely a big problem and something we can all relate to.
So all the more reason to push for change in certain policies. So AI can be developed in a biased way, but how do you think AI has been deployed in a biased way?
Because those are two different things. Great question. On the latter, which I take to mean what applications or uses we've made of AI:
I think in the criminal legal system, that's a big one where it's not just the problems with the AI systems and the outputs, it's what we're asking them to do, which I think fundamentally are what I would call discriminatory or biased outcomes.
So for example, the decision to use facial recognition technology: given what we know about policing in America, that racial minorities are disproportionately subject to policing and surveillance and disproportionately subject to these kinds of tools, and particularly given that we know the inaccuracies of the existing available facial recognition tools are higher for non-white people, using facial recognition on a community is a choice.
And that's a policy choice of how to deploy AI. I would say similarly, we see AI being used in the family regulation system, where decisions are being made about families and whether children can remain with their parents.
And we have very serious concerns about the gender and racial bias inherent in that whole system.
And again, the choice to use these automated tools, if done without regard to that larger context, can have really devastating consequences for families.
And no matter how well you design your AI or what you've done with the data, it's still a choice to use these tools in a context of a system that isn't working for a lot of families.
Rather than re-evaluating the system, or thinking through how we can best support families, protect children, and help families that are struggling, we end up reifying existing problems.
So I think what we've seen is that sometimes the use of AI actually masks policy choices, where there isn't actually a scientifically right answer to the question.
What we're asking the AI to do is something that we need to interrogate of ourselves.
If we're asking an AI how many people should remain locked up because they can't afford bail, that's not really a question with an objective answer.
What we need to do is ask the question of why are we asking an AI to decide that people should remain locked up just because they cannot afford bail.
So that's another example where it's not even about the black box per se, it's about what we're using an AI for.
Do you think in some cases that's by design maybe to delegate certain decisions to an AI to kind of escape accountability?
I think that that does happen. People call it tech washing, that sometimes you can just give something the veneer of objective neutrality.
You can point to a system and say, well, the AI system says this person should remain locked up, or the AI system says this person shouldn't get benefits because they're not entitled to them, or what have you, and it's actually masking a policy choice about why this person should remain locked up or why this person shouldn't get benefits.
So I think that's another thing we have to be wary of as a society.
We have to really interrogate what we're designing these systems to do, what questions we're asking them to answer, and whether those questions fundamentally aren't scientific or neutral or objective questions but are policy and societal choices.
That's fascinating. So, I'm an immigrant from Mexico myself, and I've been here a very long time, but I'm still concerned with the civil liberties of immigrants. And I read that before this you were an Equal Justice Works Fellow with the ACLU's Immigrants' Rights Project.
Do you see the work that you're doing now as separate, as in you're working on different topics now, or is there some overlap, with the work that you currently do being relevant to immigration and the civil liberties of immigrants?
There's a large overlap. There's more of an overlap than I even expected, and I remain passionate about immigrants' rights; it's something I care about deeply.
I try to make sure that the work that we do takes into account the disproportionate impact it so often has on immigrant communities.
There's a few ways that the work intersects. First, I think a lot of new systems and technologies are deployed on immigrants first.
That's just a reality.
It's sad to say, but it's easy for the government to claim that it's doing something in the name of national security or border security, that we're just trying out this new DNA collection pilot or biometrics, while falsely claiming that immigrants don't have any rights against them. And that's just not true.
Everyone has the same human rights, and we think that a lot of the technology and the tools being deployed on immigrant communities in the United States just flat out violate their rights. So we work really hard, when we see new tools being deployed in those contexts, to stop them there if we can.
We also do see there's a pattern in the United States of trying things out on immigrant communities and then expanding them to wider uses.
For people who may be inclined to think well this doesn't affect me, we've seen it over and over again.
Technologies that are used in a border context or against immigrant communities eventually get widespread deployment and affect everybody so it really is a problem that we all have to address.
As I mentioned, the issues that intersect are the different technologies that are being used, and what the implications are of building databases of immigrants' private information that can then later be used in a whole host of contexts.
We work on a lot of issues that intersect with our Immigrants' Rights Project, and a lot of our issues have a disproportionate effect on immigrant communities.
Very important. And I think it's a recurring theme, maybe, of folks not fighting for certain things because they think it's not relevant to them, but in the end, as you're saying, everything kind of connects.
Even algorithmic bias that could affect hiring and things like that: even if I think I don't belong to a specific category, I will eventually, in terms of age, for example. So we're all interconnected, and we have to care about the civil liberties of people at every level of society, definitely.
The great work that you do is so admirable.
It's great to hear about it. I'd like to move on to our rapid fire questions.
What's your kryptonite?
Probably ice cream right now. Pandemic ice cream. Can't say no.
I second that. Guilty pleasure? Which might be the same answer. Well, I don't know if it's guilty.
I love tennis, and I'm watching as much tennis as I can. Sports stopped for a while in the pandemic, but now the tournaments are back. So I would say that, but I'm not sure if it's guilty.
What are you reading right now? I'm actually catching up on the Anne of Green Gables series, which I first read as a teenager, and I thought this was a good time to revisit them.
Great. Last show you binged?
Never Have I Ever and I loved it. I haven't seen that. Yeah it's a teen comedy by Mindy Kaling about a young Indian American girl in LA in Southern California.
Go-to karaoke song? Maybe Shake It Off by Taylor Swift. Great, good one.
Favorite snack?
Chocolate.
Tell us about a concert you'll never forget. Manu Chao on a Canada Day weekend in Montreal when it was raining outside.
It was pouring and we had to wait hours for Manu Chao to show up.
I love Manu Chao. I love the answer. And what's one thing you're deeply, deeply grateful for right now?
I'm really grateful that I have a warm home and a roof over my head, and that my family has been safe during the pandemic.
I don't take that for granted. Great. So on that note, I want to bring this conversation about AI and its impact on civil liberties to a close.
Thank you for sharing your thoughts and insights with us today.
I think this has been a really valuable conversation and a big thanks to everyone tuning into Cloudflare TV.
Thank you so much, Esha. And until next time. Thank you for having me, Amada.