Web Summit 2022: Alan Rusbridger (Facebook's Oversight Board)
Presented by: João Tomé, Alan Rusbridger
Originally aired on January 1, 2023 @ 3:00 PM - 3:30 PM EST
Join João Tomé for Conversations at Web Summit 2022, one of the biggest tech conferences in the world — held November 2022, in Lisbon, Portugal.
In this interview, Alan Rusbridger, member of the Oversight Board (Facebook) and former editor-in-chief of The Guardian, goes over: the main Internet trends of the past years; working on the Oversight Board (Facebook) and what is done there; Elon Musk's Twitter and the role of an oversight board there; the future of media (and social media) online; journalism and trust in the news; and where the Internet can go (a wishlist).
English
Interviews
Web Summit
Transcript (Beta)
I'm Alan Rusbridger. I used to edit The Guardian. I now edit a magazine called Prospect, and I'm here as a representative of the oversight board of Facebook or Meta, and I'm based in London.
Enormous.
It is mainstream. I'm old enough to remember it when it was, well, actually before it began.
And it's been fascinating to see how both the wonderful and the terrible parts of it are competing.
I've always been slightly on the utopian side. I believe overwhelmingly it can be a force for good.
And I'm really interested to see how the best players, I mean, I think we could spend all evening talking about the bad players if you like, but I prefer to think about the best players and the good uses and interesting uses and vital uses (we talked about the pandemic) that people are putting it to.
For those who don't know, what does the oversight board at Meta do? So I think one day Mark Zuckerberg woke up and thought, I don't need these headaches.
I'm an engineer. I'm not a moral philosopher. I'm not an editor. And I shouldn't be regulating the free speech of 3 billion people, and nor should governments.
So his idea was to set up an oversight board that would try and think through some of the more difficult problems that Facebook was facing and come up with some systemic guidance about the obvious areas like hate speech, incitement of violence, free speech, protection of voice, security, crisis areas, nudity, obscenity.
It's a sort of moral philosopher's honeypot. So that's what we do.
We do individual cases and we also do bigger pieces of policy advice. I think there are three things that we do.
One is there are individual cases that get referred to us.
We try to pick the most interesting ones, a bit like a Supreme Court.
I mean, we're not a Supreme Court, but like one we will say, well, this issue that's been raised by this case has wider ramifications.
So we've done about 30 cases so far.
They're very interesting and rigorous to read. We try and apply a human rights lens to how you balance security and right to life and right to privacy, et cetera, versus right to freedom of expression.
And by the way, Meta is obliged to do whatever we tell them to do in relation to those cases.
Then we can recommend things, because in the course of that work we find things that we think Meta is doing wrong or could do better.
We've done about 130 recommendations. And finally, we've done three really big, meaty pieces of work on more substantive things.
So the most recent one has been about whether Meta whitelists people.
There are six million people on a list where they seem to get preferential treatment.
And we've gone behind the curtains to see what's going on there.
During the pandemic, there was a lot of talk about authoritative news organizations being put in front by the algorithm, so that the more newsworthy stories would be at the top.
What do you think, even personally, about creating a source of truth that way?
Or is that also a problem?
It's slightly outside our range within the oversight board, so this is my personal view.
I think this is a hugely important and complex case.
The same is true of Twitter, and the same is true of Google: who they find, who they don't find, who they rank, who they don't rank, who they prioritize.
These are really important questions in how we vote, how we think.
And I think the problem generally, certainly in the case of Meta, is that it's very opaque.
We don't understand it.
And Meta is very protective of its algorithm and won't share it, for instance, with academics so that people can understand.
And I think that's problematic for societies.
And I hope very much that they open up.
Well, I think within a week he's discovered it's a little bit more complicated than some of his early pronouncements made out.
For a start, there are laws.
He's got to comply with the laws of the land. There are things called advertisers, and advertisers don't want to be in a Wild West of revolting and violent and angry content.
So it's not in his interest to fall out with advertisers. So he is talking about some kind of oversight.
He said this week that he wants a board to think about content issues.
And I hope we can talk to him because we've been doing it for over a year now.
And I think we could certainly have a useful conversation about what we've learned, what he might be able to learn from us.
In a broader sense, regarding the Oversight Board: let's say Elon Musk asked for Twitter to join the Oversight Board, so it would also cover Twitter. Where do you see the area going, in terms of a more wholesome Internet, but also a free Internet in a sense?
Well, I guess that's why we're all doing it. I'm trying to think of my fellow board members.
I mean, some of them actually aren't on social media.
So you've got the whole sort of gamut from people who are a bit like me, still utopians, and people who actually think it's all rather bad, which is a healthy thing in an oversight board, I think, to have all that represented.
But basically, who would not want this digital space in which we all live, and which controls so much of our lives, to be better, and to be better organized, and to be more civil, and to be less harmful.
So the oversight board is not the only model. Maybe Elon Musk is a clever guy.
Maybe he'll come up with a better model. But I certainly think it's worth having a conversation.
Maybe we could learn from him.
But I suspect he's going to have to create something a bit like what we're doing.
And maybe we can team up. I don't know. He said so many contradictory things in the first week.
I think he's like a sort of new kid who's just arrived at school and is learning the rules.
I definitely think that every child from about the age of five should be learning about this virtual space, and to work out for themselves who the good players are, and how the bad players behave, and what's true and what's not true.
I'm surprised that governments have been a bit slow on that.
More generally, because I think of the good players, I think it's more interesting to ask what we can learn from the last 20 years.
And I'm really interested about the sort of techniques of trust that people are building up.
It's all very well if you go and say, I work for the BBC, I work for The Guardian, you should trust me.
But if you're just somebody trying to get an audience on social media, how do you build that trust?
Well, there are certain things that we can see people doing.
So they say: don't take it for granted that you're going to trust me; I'm going to share my evidence.
Here's my proposition. Here's my screenshot. Here's my link.
This is how I know what I'm telling you. And then they encourage response.
So if I've got this wrong, please tell me. And then they engage. And then if they get anything wrong, you don't last long on Twitter if you don't correct it.
Now, you look at the behavior of a lot of journalists. That's the opposite of how they behave.
They tend not to show their evidence. They don't want to encourage response because they find that tiresome.
They certainly don't want to engage. And if they get something wrong, they make it as difficult as possible to correct it.
And you think, going forward, which attitude and way of behaving is likely to win trust?
And of course, trust is the big crisis for journalism. So I think, you know, the journalists who are entirely dismissive of social media and say it's all a cesspit are missing out, as the rest of us try to work out how this space works and how you can earn trust in this space.
I think there's a lot we could learn.
I mean, because I was always being sued, I'm quite interested in lawyers. And there are some fantastic lawyers on Twitter.
And they do, you know, they've learned to write very concise threads.
And they behave in the way that I've been talking about.
And I think most journalists could look at some of these lawyers and think, ah, that's, that's interesting.
That is somebody who's got a huge following from nowhere.
And they're probably quite trusted. Why am I not trusted?
What could I learn from them? And, yeah, so I just think we're at an interesting stage of development.
And, you know, who knows where it could all go, but I remain optimistic.
Well, I think a lot of people are troubled by the thought of billionaires, as it were, owning the spaces that we want to occupy.
A lot of people are troubled by all the privacy aspects of that and who owns our data and so forth.
So I think there are a lot of clever people trying to think, well, how can we build a better Internet?
You know, the first one or the second one, depending on how you're counting, was good.
There are certain aspects of it that aren't great. So if we're going to build a third one, what would that look like?
And I would imagine that something will emerge.
And, you know, it's up to Elon Musk: if he behaves appallingly over the next month or so, I would imagine there's going to be a flight from Twitter.
Maybe that thought has occurred to him too, because he's slightly rolling back on the rhetoric.
Or he could, you know, build a better space.
But I think the Internet is never not exciting. And, you know, I think we could be back here in five years' time and something will have happened that neither of us foresaw.
For sure. Just to wrap things up, do you think the tech founders in this area are the new philosophers of this time?
You talked a little bit about philosophy.
Is this an area where philosophy could be in play to think about the future?
I think of the big, famous tech names, the sort of top three to five owners, who began life as entrepreneurs and engineers.
I wouldn't use the word philosophers about them.
You know, I think they're obviously absolutely brilliant men (they are all men) in their own right. But I think a bit of humility now, saying, well, actually, some of these questions are profoundly about morals and ethics and the way we live, about how democracies work, about free speech.
These are huge questions that people have been wrangling with for three, four hundred years.
So I think it should be applauded when someone like Mark Zuckerberg says, hey, I need help.
Yeah. Wanting help helps you to be better, in a sense, right?
Yeah, I think so. Yeah. I mean, there are 24 of us on the Facebook oversight board and we're incredibly diverse.
The most diverse body I've ever been part of.
Incredibly smart. There are a lot of people who have been thinking about these kind of issues in a different context and essentially they're the same issues.
Going back to my life at The Guardian, we did the Edward Snowden documents.
Well, that issue of what right the state has to come into your house and seize your documents was litigated in England in the 18th century.
It's the same issue. And in the 18th century, they were saying, no, the state can't come into your house and seize documents.
And I think that's why the governments were on the back foot there because they thought, well, actually, the people's sympathy is actually on the Snowden side.
We don't like the thought of the state just helping themselves to anything.
So these are not new issues.
And I think it's a good thing that a tech company is now willing to reach out to people who have thought about the history of these issues and say, look, help us here.