Interview with author of Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System
Presented by: Patrick Lin, Amada Echeverria
Originally aired on January 30 @ 3:30 AM - 4:00 AM EST
Join Amada Echeverría as she interviews Patrick K. Lin, author of Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System.
Patrick is an author whose research centers on artificial intelligence, technology policy, and algorithmic bias. He has worked for the ACLU’s Speech, Privacy & Technology Project, ForHumanity, the Electronic Frontier Foundation, and the Federal Trade Commission. While completing his law degree at Brooklyn Law School, Patrick wrote Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System. His book has received praise from Ben Wizner (director of the ACLU’s Speech, Privacy & Technology Project and principal legal advisor to Edward Snowden), David Ryan Polgar (founder and director of All Tech Is Human), and Sudha Jamthe (instructor at Stanford University’s Continuing Studies and Author of AIX: Designing Artificial Intelligence).
Hello, Cloudflare TV. My name is Amada Echeverria, and I'm excited to chat with Patrick K. Lin today about his new book, Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System.
Patrick, thank you so much for joining us.
Thank you so much for having me, Amada. I'm really excited to be here. Great.
So are we. So I'd just like to take a moment to briefly introduce you to our viewers and set some context.
So you're an author focused on researching artificial intelligence, technology policy, and algorithmic bias.
And you've worked for the ACLU's Speech Privacy and Technology Project, the Electronic Frontier Foundation, which, by the way, is a partner of Cloudflare's Project Galileo, and the Federal Trade Commission.
And while completing your law degree at Brooklyn Law School, you wrote Machine See, Machine Do.
Your book has received praise from Ben Wizner, the director of the ACLU's Speech, Privacy & Technology Project and principal legal advisor to Edward Snowden.
And by the way, Ben was also on Cloudflare TV a year ago, and he did a fireside chat with Alyssa Starzak, Cloudflare's global head of public policy.
And also, your book has received praise from David Ryan Polgar, founder and director of All Tech Is Human, and Sudha Jamthe (which I am probably not pronouncing correctly), instructor at Stanford University's Continuing Studies and author of AIX: Designing Artificial Intelligence.
So, folks, you can find the link to Patrick's book in the description of this segment.
And before we dive in further, just a quick note for our viewers: if you have any questions, feel free to submit them by emailing us at livestudio@cloudflare.tv.
And you can find the banner right below this video.
And if you have any questions, feel free to dial in using our call-in number, 138033-FLARE.
You can find the banner right below this video as well. So, all right, let's get right into it.
So, Patrick, in one or two sentences, what is this book about?
Yeah. So, Machine See, Machine Do is, I think, really about how we need to recognize the consequences of treating AI and other technologies as inherently fair or objective.
My book uses history and policy to really contextualize and even scrutinize, I think, the adoption of high-tech tools in our public institutions.
And I specifically focus on criminal justice and how these technologies are being used in unexpected and, I think, really scary ways too.
Amada, I think you might be on mute.
Thank you. It's great to mess up in the first three minutes.
So, in the intro, you talk about Robert Moses. And I thought it was just a very compelling intro that really grounded the rest of the book for me.
And so, I thought it'd be useful for our listeners as well.
And so, he was a very influential figure in the urban planning of New York, among other things.
And as you explained, the impact of his work is kind of analogous to what we're seeing today, even though it's about a century later.
So, it helped ground the book for me as well.
So, I'd love to hear about it from you and see how that can help us understand how important the decisions that we're making today really are and the impact that they'll have on our society if we don't work to get all of this right.
So, can you tell us a little bit more about his work and how you see it run parallel to the impact of biases and AI on our world today?
For sure. You know, I think Robert Moses is such a controversial and, I think, undoubtedly important person, right?
I think, you know, he was responsible for a lot of the urban planning in New York, as you said.
And he was also responsible for building things like Lincoln Center, the UN building, and so many of the parks and beaches on Long Island, as well as so many of the other suburbs around New York.
And, you know, buildings and structures like that, right?
Like, they seem innocuous, right?
Buildings, bridges, parks. But I think all of these things actually carry a lot of baggage.
You know, institutions and technologies, they don't exist in a vacuum.
And so I want to draw this analogy: these things that we think are totally neutral and apolitical, they are actually a part of history, and that history includes events, policies, and decisions that affect different groups of people in different ways.
And I want to acknowledge that, while certain neighborhoods and certain groups of people got to really enjoy some of these things Robert Moses was creating, things that to this day add to the identity of New York City, a lot of people were ultimately evicted, and a lot of people were excluded from these neighborhoods during these huge projects too. And so it had a lot of really negative impacts.
And so I want to convey to people that we operate within a system. Even the Federal Housing Administration at the time had an official stance of keeping neighborhoods segregated. And Robert Moses built accordingly, quite controversially keeping the bridges along Long Island's highways really low so that buses could not pass underneath. That kept a lot of people who rely on public transit away from the amenities he was essentially building for very white and very wealthy communities. And secondly, I want to convey that technology isn't so neutral, just like a lot of the things Robert Moses was building.
Great. And well, not great, but thank you for that.
And it's amazing to see how decisions made what seems like a century ago are impacting the day-to-day of the city today.
So this is really important stuff. So we have more context now and I want to dig into why you think this topic is so important.
So on a more personal note, what motivated you to write this book?
Yeah, I mean, yeah, I think this topic is insanely important.
I think until recently it's really flown under the radar too.
I think we don't fully grasp the extent to which the government is using things like AI, algorithms, and other data-driven technologies to surveil us and make these really high-stakes decisions about our civil liberties.
But I think what really motivated me is that so much of the most important and interesting work in this space is hyper technical and really academic.
And I think just not that approachable.
You know, the average person I don't think is going to willingly pick up, you know, a really dense, you know, hundred page academic piece and read through it to understand these issues.
And I totally get why they wouldn't want to do that.
And so I wanted to create a point of entry through Machine See, Machine Do: a more accessible, easy-to-understand guide or introduction for people who care about criminal justice reform.
And, you know, I think this book is really for people who think technology should be safer and more ethical for everyone.
Absolutely. And the book is very accessible. I just want to echo that the topic is so complex, but it's written in a way that I think a lot of people can understand and come to care about.
So more specifically, what technology do you talk about in Machine See, Machine Do?
So in the book, I kind of broadly divide it into three categories.
The very first I talk a lot about policing and surveillance.
And so there I talk about sort of the widespread use of surveillance cameras throughout the country.
The use of even predictive policing algorithms, which are doing things like telling police departments which neighborhoods to patrol at different times of the day.
And then in the second part, I talk about evidence and how technology is being used to analyze evidence, specifically forensic and DNA evidence.
And how, through shows like CSI and pop culture, I think we're sort of taught that DNA and forensic analysis is totally infallible.
And I want to sort of turn that on its head a little bit and say that while it's certainly done a lot to exonerate people who are wrongly convicted, DNA and forensic evidence has also put people away, you know, incorrectly.
And DNA has been used, and I think manipulated in ways that are really nefarious.
And then also, you know, in the last category, the third category, I talk about things like sentencing and parole.
And how we're using algorithms to even decide, you know, whether someone should, you know, go to jail before their trial starts.
How long the prison sentence should be, how likely they are to commit a crime in the future, things like that.
And we're even using technology, I think especially during the pandemic, to do things like, almost like surveil and observe people who are on parole.
And I think that's really terrifying, but those are just, you know, at high levels of the technologies I talk about.
Got it. It seems all these technologies are, maybe to some, more efficient, but of course they completely ignore the human component to all this.
So that's just terrible. And so are there any laws or regulations addressing these problems?
Can't help but ask that. Is there anything being done? Yeah, so that's the thing, right?
There really isn't any regulation in this space. Certainly not at the federal level.
There's no federal law or rule about how AI should be used.
There isn't even like a standard necessarily that, you know, companies follow to say, you know, this is when AI should be used, this is how it should be used, or even how accurate it should be.
You know, facial recognition is used so widely by different government agencies.
And there's no regulation or standard for how accurate it should be.
I think some of the most interesting work in terms of regulation that's been done has really been at the local and sort of, you know, city level.
You know, Illinois and California, for example, are some of the states that come to mind.
They've created some really incredible legislation, I think, to address and even limit the use of some of these technologies, especially by the government.
And a lot of city councils, oftentimes by unanimous votes, have said, we don't want police to use facial recognition technology.
We don't want them to be able to use facial recognition with surveillance cameras and things like that.
And so I think across the board, we're seeing a lot of these sort of grassroots, you know, movements to address these problems.
But I think it'd be really great at the federal level, maybe a bit optimistic to sort of see, you know, like a uniform rule to address it.
Right. So, of course, the law seems unable to keep pace with technology, but it's very hopeful to see that, at least on a local level, things are being done. That's inspiring. And so, this book and all these topics sound very technical and extensive.
So I'd love to know how much research did you need to do for your book?
Yeah, a lot of research, for sure.
I mean, I'm one of the very few people who really enjoys reading a lot of these really technical academic pieces as they're released, to really understand what different people, especially in academia, are thinking about when writing and evaluating these technologies.
So, just reading a lot of academic pieces.
Well, I think we're in a time right now where a lot of journalism is covering these issues too.
Sometimes in ways that are oversimplifying, but there's also a lot of really responsible journalism that's discussing these topics in important ways.
But I think the most interesting part of the research has actually been getting the chance to connect with lawyers, advocates, activists, and people all across the industry and academia, to hear their opinions and stances.
I think that's one of the greatest things that this book has given me is the chance to meet all these different people and hear their opinions.
And so much of the book also includes those conversations: the questions I'm asking people who work in this space and have dedicated their whole careers to it, the information and insights they offer, and being able to bring that to a more general audience.
And so that's been a really incredible part of the research too.
That's amazing. And I liked seeing how, while the book is accessible, it doesn't oversimplify.
So, um, there's a lot of nuance to it.
So, speaking of research and all the work that went into this project, I can't help but think, and have very top of mind, that you wrote this book during a global pandemic that's been so difficult.
And you were able, you're still able, to put out this work.
So I can't help but think this context must have informed your thinking, your research, and your writing about AI.
And so I want to know, how do you think COVID-19 has affected the development of AI?
Yeah, you know, I would say the pandemic has, at a high level, really accelerated the adoption of AI, especially by government agencies.
I think with everything being remote, and the need to social distance, a lot of things have been moved online.
Right. And so I think we're seeing the same with a lot of government functions too.
And so for a lot of administrative purposes, it's difficult to move those things online, but we're seeing it even with things like unemployment benefits.
There's this big story now about ID.me, which is this identity verification company.
They've been offering these facial recognition services to the government.
And it's been adopted by more than half the country at this point to evaluate whether the person applying for unemployment benefits is really the person applying for unemployment benefits.
And I think there are legitimate concerns about fraud in this space, but a lot of it, I think, is born out of paranoia, and this sort of baseless fear that people who are out of work at this point, and who during a pandemic need a little bit of help from the government, don't actually need that help or don't deserve it.
And so there's this sort of manufactured demand, I think, for these, you know, types of technologies to basically act as like a gatekeeper, right?
And so we're seeing a lot of this. Even the IRS just last week announced that they would actually be requiring people to take a selfie, or a video selfie, to access their taxes and tax information online, rather than just logging in with a username and password.
And BuzzFeed just last week published a piece about how, because of the pandemic, people have turned to religion apps to connect with other people in their community, to share things like prayers, again turning to a religious community where they can't meet or congregate in person. And a lot of these apps are actually selling that data to Facebook and to other data brokers.
Um, and people are sharing very intimate and personal details in these kinds of apps, right?
You're, you're assuming that you're safe and, you know, in a way it's sacred, right?
And so, um, I think we're seeing this move to, um, relying on data and technology more and more.
Um, and I think there are a lot of these opportunities, unfortunately, for people to take advantage of that.
Okay. And I was aware of the IRS using that, since I interacted with it a few weeks ago.
I was surprised. It did feel a little bit strange, and I kind of asked myself if it was manufactured necessity as well.
Is this really necessary? It certainly felt like an invasion of privacy.
Yeah, I'll say. So, I'd love to hear what was a challenge you faced when writing your book, how it manifested, and how you solved it, which it seems you would have, since it's a great piece.
Thank you. Yeah, I would say there were definitely different challenges when writing the book.
I think the one that immediately comes to mind is, again, that these can be very technical topics, both in the legal space and in the technology sense too.
And anytime you try to simplify something that is very complicated, there is this concern that you're losing some of the nuance and intricacy of that topic.
And I wanted to be really mindful of that, because if you oversimplify something, it becomes kind of like pop science; it doesn't fully capture how interesting or how complicated a topic is. So making sure that I'm still conveying the complexity of an issue or a topic, while still making it accessible, that's a really difficult line to walk.
I wrote this book for the general public. I want people who have zero knowledge about this topic to be able to engage with it.
And I recognize that there are times where I've had to simplify things, but making sure that I'm striking that balance, it's difficult.
Another challenge, and this is something that I struggled with throughout the whole writing process, is that I certainly have very specific views about technology, especially in government use.
I generally don't think law enforcement should be able to use facial recognition. I think a lot of the uses of predictive policing, for example, are just no good. I think parole applications get a lot of things wrong.
And I'm a little spooked out by courts using algorithms to make decisions about how long someone's sentence should be.
But I also wanted to recognize that those are views I hold, and that I know a lot of people share, while still welcoming people who might have opposing views, who maybe feel safer knowing that a lot of this information is captured and that police are there to protect them, whatever it might be.
I want to make sure I'm presenting arguments in a way that allows people with opposing views to understand and appreciate them, and hopefully even get someone to see that this isn't a matter of left or right; that privacy, and wanting to have control over your own identity and your own data, should be a very personal thing.
That's something that I think is very human.
And it doesn't have to be politicized in that way.
And so I tried to write in a way that appeals to people who agree with me, but that will hopefully also bring in people who might have different views.
Very interesting. And I do love the argument that even though some of this is politicized, or associated with certain parts of the political spectrum, it really shouldn't be. And I feel like in some other countries, maybe it's not as politicized as it might be here, so it's interesting.
And so, I want the last few questions to be grounded in what our audience can do moving forward, with some takeaways and potentially calls to action. So first, in terms of surveillance and privacy:
What are some trends that we should be paying attention to in 2022?
Thank you. Yeah, again, I think this move to remote everything.
And I think we need to be really skeptical of, and even scrutinize, the government's adoption of technologies.
I think, um, when we gain certain conveniences, there are certain things that we have to give up as well.
Right. Something that comes to mind is at the airport now: rather than having to scan a physical boarding pass, a lot of airlines will just take a picture of you and compare it to whatever government identification you uploaded on their website.
That's sort of the same thing that the IRS is doing now with ID me.
And it's super convenient, right? You don't have to worry about fumbling for your boarding pass, or losing it and having to present it.
You don't even have to take out your phone now; all you do is stand in front of a camera, someone snaps a picture of you, and then you're on your way.
It's super convenient. But I also find myself wondering what's done with that photo after I step away from the kiosk, right?
We know, and it's well documented, that the FBI and other law enforcement agencies are building these massive databases gathering images of faces.
A few years ago, we learned that more than half of all American adults have their faces stored in the FBI's facial recognition database, for example.
And I think we need to be really skeptical of that. Anytime we are gaining these conveniences, we have to wonder: what am I giving up in exchange? In the same way that with social media companies, right, we can use Twitter, Facebook, and Instagram really freely.
We don't have to pay anything to use it.
But in exchange, we are helping to build out their algorithms, and they're testing out different features on us.
And I think a lot of times our data is not just our identity but also our labor, right? The experiences that we're having, the work that we're sharing, the art that we're producing.
Those are all things that are now being showcased and quantified in some way by these companies. And I think we're going to see more and more of that this year and beyond, so we have to really keep an eye on it.
Yes, definitely. I would echo that and urge folks to think about the trade-offs of convenience.
Great. And so, of course you've touched on it throughout this call, but what do you want readers to take away from reading Machine See, Machine Do? Any calls to action?
Yeah, you know, when I was writing this book, and even just doing research in the space, it can feel really daunting.
I think it can be really overwhelming to realize: what do you mean police are using algorithms to decide which neighborhoods to patrol? What do you mean algorithms are deciding someone's sentence in court? Things like that.
I think when we learn about these things, it can be easy to get discouraged by just how awful some of these uses are, and how disproportionately they affect certain groups of people more than others.
But I think at the same time, as we're identifying and recognizing how widespread a lot of these biases and issues are, it also presents us with a really unique opportunity to address bias, and systemic biases, in a way that we've never really been able to before.
And I think there is a real urgency too, because as we move closer and closer to technology being more deeply embedded in government use and in our day-to-day routines, it becomes really difficult to remove it, right? Because now we've created a day-to-day life that depends on it.
And to bring this back to Robert Moses, something he said was, you know, laws will change one day, but it's really difficult to tear a bridge down once it's up.
When he wanted to keep neighborhoods segregated, he was able to use his connections with politicians and lawmakers to make sure that discriminatory laws that furthered segregation were passed.
Now, eventually those were overturned, but the bridges that he built, and a lot of the institutions and physical structures that he built, those are still in place today.
And I think Long Island remains a very homogenous community in a lot of ways.
And so I think as technology becomes more ingrained, we're going to see the same issues. And I think we need to address them sooner rather than later, because all of a sudden they're going to get concretized in a lot of our routines.
Yes, I thought it was very clever how you said his biases were made sort of concrete, literally, and so we definitely don't want to see that happen today.
Love how you brought that full circle. And this has been very interesting.
And so, Patrick, we're nearing the end of our time. I want to make sure I have the time to thank you for encouraging us to think twice about our current system of justice, and to question the technology that is sold to us as making systems more objective, fair, or efficient, when it seems that this is clearly not the case.
And there's a lot more to do going forward to make criminal justice reform and technology more ethical and safe, and as you've explained it's quite urgent that we act today.
So folks, I want to thank you as well for tuning in today to this interview with Patrick K. Lin, author of Machine See, Machine Do.
It's been very enlightening discussing how technology mirrors bias in our criminal justice system with you today, Patrick. The interview portion of the conversation is over, but I want to know: what's next for you?
Yeah, well, first of all, Amada, thank you so much for having me today.
I always love talking about this stuff, and I really enjoyed the questions you asked.
But as for me, what's next is, I want to keep writing.
I love doing research in this space. It can be, again, really overwhelming to grapple with all these things, but I really enjoy it.
I think it's really important work, and so I hope to continue researching and writing about the space.
And I think longer term, I want to be able to work to change some of these things.
I think there's a lot of room for technologists and lawyers alike to do things that further the public interest and actually improve society in a meaningful way, and there's room and space for technology to do that.
I just don't think we're there yet.
And I think that creates some really incredible opportunities.
But yeah, that's sort of where I'm headed. Fantastic. Can't wait to read more and learn more.
And so, thank you, Patrick. Congratulations on your book.
Thank you all for joining and folks stay tuned into Cloudflare TV for more insightful content.
Thank you.
Bye.