Cloudflare TV

Social media threats and regulation

Presented by Jenny Reich, João Tomé
Originally aired on 

From our San Francisco headquarters, we sit down with Jenny Reich, a Fellow and Adjunct Professor at the Georgetown Law Center on National Security. As an expert in social media law, Jenny examines social media's impact on misinformation, cybersecurity and journalism. In this episode taped during the 2024 RSA Conference, Jenny offers her expert perspective on growing congressional pressure for social media regulation, the origins of data breaches from chat rooms, Section 230 and TikTok as the new Gen Z search engine.



Transcript (Beta)

Hello, everyone, and welcome to our show. Jenny Reich is a fellow and adjunct professor at Georgetown Law Center on National Security, where she teaches social media law.

So she has a bunch of students she hears from constantly, right? Hello, Jenny, and welcome.

Hi, thanks so much for having me. I'm excited to be here and have this discussion.

Before we jump into the security trends and some of your views there, can you give us a quick rundown of your background?

Sure. So I have a law degree and an MBA from the Wharton School and the University of Pennsylvania.

That's where I really started to develop my interest in national security and emerging technology and kind of existing at this nexus.

So I've gone through a number of different places, law firms, and that's how I landed at the Center on National Security, trying to really make an impact on these really important issues.

Do you remember when you started to become more interested in this area: the Internet, social media, security?

Sure. So the Center on National Security had just gotten this major grant on social media, actually.

And I came from much more of an emerging tech background.

So looking at more hard tech things like semiconductors, infrastructure, those types of questions.

And as social media came up, they reached out to me.

And it seemed like this really interesting opportunity to transition into what I see as really the frontier of where technology is going, right, into the digital world, as opposed to just thinking about things as they exist here in our physical world.

And so seeing that transition and seeing that Georgetown had kind of invited me to be a part of that was really exciting.

And you've worked on different projects there, not only teaching students, but also creating research projects.

Is there any research project that you're particularly proud of, one that you think really put things into perspective and got important ideas out there?

Absolutely. So we actually put out a report on the future of social media and emerging national security trends, kind of looking towards what the future is going to look like.

So not the 2016 election, but really thinking about 2028, you know, what's going to happen.

And so that report's available online. We identified a number of different ways that social media in particular can undermine, for example, democratic principles and democracy.

So I definitely invite you to check it out and read our report.

We've got some really great material on where these things are going and really the fact that there are national security implications in the consumer space, which I think isn't always an arena that's appreciated in the national security community or in the consumer community.

So that's really where we're hoping to go.

And I invite you to check out our report. Absolutely. And I'm going to ask you a few questions about that in specific.

But before that, you're at RSA, and I think this is your first time here, this year.

How has it been so far being at the conference, participating, but also hearing different folks?

How has it been? You know, I've really enjoyed RSA in particular because you have such a diversity of different perspectives being presented here, right?

You have super technical people looking at very particular issues, whether it's encryption or even within cryptocurrency communities and things.

And then you've also got, you know, us lawyers, right, who can come into the room and have a conversation.

So there's that ability. I led a conversation on Monday, actually, and it was just so great to have tech executives and IT executives and also lawyers and policymakers and people from all walks of life and students in the room, able to brainstorm and think about where these threats and concerns are coming around the bend.

And I think that's really one of the value propositions of RSA is this ability to bring together such a diverse group of people who are all invested in and knowledgeable about different aspects of cyber.

More specifically, what are those trends in terms of security that you saw there?

Some you brought to the table, others you probably hear.

What were those trends? Those who work in security, what are the main concerns, really?

You know, it's really interesting. I've heard a lot, just I think as a lawyer, people come to me with lawyer questions.

So I've heard a lot about liability questions, especially around artificial intelligence and where we're going to situate liability, especially as AI and algorithmic models become more and more divorced from the people who originally trained them or originally developed the data sets, things like that.

How do you start to establish liability in all these different contexts and all these new AI applications that people are seeing?

So I think that's been a really interesting trend that I wasn't fully attuned to, that this is something not just lawyers are worried about, but actually people on the ground, technologists and business folks.

And then from my own perspective, I'm particularly interested in looking at these trends from a national security perspective.

And so seeing, you know, the ways that moving our interactions into the digital world, for example, can actually undermine our assumptions about trust.

So when we engage in the physical world, there are certain in-person trust-based expectations around the community, like the fact that we're sitting here in this room together tells me something about you.

All of that is gone from a digital space. And from our research and from what I've seen, it doesn't seem like our individual ability to recognize that kind of subversion of in-person expectations is really necessarily playing out.

So that's one trend that I see that I think has major implications for how we interact in the digital world, as well as something I've seen at RSA.

It's quite interesting, because most folks don't realize that cybersecurity is a broad term, but in those elements you mentioned, there's security at play there, right?

The trust in people, in institutions, things like that. In what way have you seen a change in the past years, and in what way are you worried about that change, but also hopeful, in a sense?

Absolutely. So I'm both worried and hopeful, actually, right?

I think that's one of the benefits of being in a time where there's just such rapid technological transformation, right?

There's incredible innovation, incredible opportunity.

And so from my perspective, I'm not worried about where we are today.

I'm worried about taking this incredible energy that we have and harnessing it in a positive direction, right?

And making sure that it continues to grow and develop in a way that's going to be positive for our communities and positive for our broader digital society, right?

And for a free democracy.

So that, that's my perspective on like being hopeful about where things are going.

I'm trying to understand specific examples, if you have them, even if they're hypothetical, of how this happens and how it relates to people, to relations between people and institutions, journalism, all of that, really.

Sure. So I think one of the really more fun examples, I guess we could say, that just pretty starkly highlights the communal aspect that I mentioned earlier is when you look at the recent leaks by the US military service member of classified documents regarding Ukraine, right?

That were all being leaked on Internet chat rooms, right?

Within communities that felt like they were communities of trust. And, you know, you want to build your clout and influence in the same way you might in an in-person community.

But when you're doing that online, you know, obviously you don't actually know everyone who's behind the keyboard outside of this very flat pancake context, you might say.

And obviously that information can also be accessed by a number of other sources.

And so that drive to build that kind of trust and that kind of social expectation, to want to present yourself a certain way in an online and digital space, can actually lead people to making poor decisions. Hopefully, as we develop a better sense as a society of what it means to exist in a digital space, as opposed to a physical space, some of those expectations will start to be tempered and people will be less socially pressured in those spaces, because they won't be carrying over their prior notions from physical spaces into digital ones.

So on specific examples, you mentioned that case where social media feeds this drive that young folks, but also not-so-young folks, have to participate, to be popular, to show they know something.

And sometimes it's a group that seems internal, but it's social media.

It can go everywhere, right?

And there's this difference between conspiracy theories that spread and, in this case, important government information that spread when it wasn't supposed to.

In what way do you think there are ways of avoiding that, for national security purposes?

That's social media related to national security, a perfect example, right?

So in what ways can we, as a society, avoid that, in a sense?

You know, I think that as a society, some of this will naturally take care of itself.

I think we've already seen that.

For example, I don't know if you remember the Ashley Madison hack back in 2015 or 2016, there was an uncomfortable number of dot mil email addresses associated with that.

And I think people learn over time, sometimes through trial and error, but just as we exist more online, people are starting to understand a little more about the do's and don'ts of digital hygiene.

I think some of that is just building of habits.

Now, of course, the hope is that we can build those habits before there's some more catastrophic changes and catastrophic national security threats that come through.

So some of that is just education, building good hygiene habits, digital hygiene habits from a young age.

I think some of that is also, of course, as you mentioned earlier, just the importance of journalism and the media, and really developing, or redeveloping, I would say, given the conspiracy theories you mentioned, a sense of trust in a media that is providing, you know, not the unbiased perspective, but an honest and trustworthy perspective that they're authentically engaging with.

And I think so much of that is just the fact that you've seen a subversion of business models in the media space in terms of where they can get their funding and also just this 24-7 news cycle that has created just this need for content.

And also now they're competing with everyone with a smartphone. And so figuring out ways to prop up our media industry as that independent pillar of truth that we can all go back to and trust and, you know, analyze critically, but also recognize is trying to present a really honest and balanced recounting of our current state of affairs, I think, is important both to democracy and also to giving people a way to engage much more critically with the information they see online, right?

It's much more helpful to have an answer key, right, that you can go off of, which in some ways the media can kind of provide to some extent to give you a little bit of a pushback on whatever you might be reading online to just triangulate your own views, right?

So it requires a little bit more effort on the part of individuals who are engaging online, but to have those additional resources and to be able to engage with additional resources, I think, is one way that we could be doing this.

Tech companies do a lot of this.

So there's security training. You do security training because you work at a tech company.

There's a lot of talk, not only in the US, I come from Portugal, even in Portugal, in Europe, of media literacy, social media literacy, online literacy.

So what to trust, how to, what to post. Is it public?

Is it not? Things like that. I think most people, or many people, learn by mistake or by seeing the evolution of things.

But do you think there's a path of governments making a little bit more effort in terms of that media literacy, social media literacy of sorts?

So I think in the United States, we run into some really interesting questions around the First Amendment that make it much harder for government to engage in these ways than, for example, in Europe.

And I think that's actually a big part of why you see regulation kind of full speed ahead in other countries.

And in the United States, we are just generally and naturally, and perhaps for good reason, a little more distrustful of government.

So I think that that's a really important dynamic here to keep in mind is just that in the United States, government intervention is not always seen as a good or even a constitutional thing that can go forward.

That being said, I think that one of the most important trends that we've seen across the board and actually security and privacy folks have been pretty good about this, right?

Security by design, privacy by design.

This idea that because we're asking individuals to take on such a new role in terms of critically analyzing the information they're getting, because our traditional gatekeepers in the media are now also being flooded by, you know, every Joe Blow with a cell phone.

What I think we really need to do is take some of these other burdens off of individuals, right?

So if you can build privacy settings in by design, right, if you can build secure password protection, two-factor authentication, all of these things into technologies in ways that are much more automatic and understandable.

Even, for example, I actually love the community notes on X, right?

This idea that like you can kind of build in critical thinking a little bit and build in diverse perspectives, even on an individual post or even, you know, one of the solutions that is mentioned in the report that I helped work on is this idea that you might actually be able to say how certain you are of the post, right, as like a user interface decision, right?

So it's like, I'm 80% sure this is true, or I'm 10% sure, you know, how much do I trust this source that I'm sharing?

Even just forcing people to kind of have that additional moment of reflection and doing it through design, right, doing it in ways that make it really easy for people to understand might not be great from a business perspective.

And so that's an incentives issue and maybe a government regulatory issue we need to talk about.

But just really thinking about ways that we can make this easier for people because we are putting an additional burden on them as they enter these digital spaces.

I remember on X, formerly Twitter, there was a big preoccupation, also because of Europe, for example, with when you repost or retweet an article, in this case: have you read it?

Like the suggestion to read it, it's still there, it's still happening.

Community notes was something created earlier but only implemented more recently, and it's quite useful.

Where do you see the path going for the future? Are social media networks open to continuing down this path of exploring ways of bringing trust?

Because I know Facebook, Instagram, they're not interested, for example, in having a lot of news in recent years, for obvious reasons.

And Threads too, in a sense, although it's a bit like Twitter. But where is this social media network landscape leading us at this moment?

So on the social media networks front, I think you'd actually have to ask them.

For better and for worse, there seems to be a lot of growing competition in the space, I would say, in some ways, right?

You've got TikTok coming up, you've got lots of new apps that especially the Gen Zers are getting into.

And so I think that there's a lot of also just, it's very hard to keep up with the development of these new apps and these new platforms.

And I think really, even just as early as five or 10 years ago, you had a situation where platform effects and network effects meant that all my friends are on Facebook, I guess Facebook is going to be here forever.

And it seems almost as though those switching costs and the network effects that were associated with that have just kind of almost evaporated because you've got this new young generation who's so digitally native that it's so easy to switch between sites that some of those initial basically protective casings that I think kind of guided our current social media networks are kind of starting to fall by the wayside a little bit.

I'm curious to see if that trend continues.

But of course, if it does, all bets are off, because there's a huge difference from the United States' perspective, and I'm not speaking on behalf of the United States, between the ability to regulate companies that are based in the United States, that have the majority of their workforce and their decision makers in the United States, and the ability to regulate companies that are based elsewhere, right?

Like TikTok for example. Yeah, you saw where I was going with that.

Of course. But that's a very unique use case because, for example, we know, because important people at the company say so, that Instagram and Facebook aren't really interested in news these days, but TikTok is.

TikTok is a very well-known source of news for young people.

So that brings some preoccupation, maybe, really.

Yeah, well, and what's personally for me just a little more scary, right, is that you see all these new reports about young people using TikTok almost as Google, right?

So instead of using Google to run their searches, they're using TikTok.

And that can be great in some ways, right? Again, this whole concept of building a really easy and flawless user experience that's really keyed towards your interest or addictiveness, depending on your perspective, but is something that obviously TikTok has done a great job with.

But the problem with that, and the flip side of that, is that the results you're getting are keyed towards your priors, in a way that they also will be on Google, but on TikTok there's the potential for external government manipulation, and a number of other concerns, because it's a social media network really aimed at keeping your attention as opposed to directing you towards information one way or another, which I think could be potentially concerning.

And it takes out journalism, the typical journalism; journalists can be on TikTok, but it takes typical journalism, newspapers, out of the equation for young people, right?

And journalism over the years has been central to that; I was a journalist for a number of years, so it's close to my heart.

Well-informed citizens make a good democracy, and that is because of journalism.

So journalism is a little bit at play there with TikTok being high in young people's minds.

I actually think it's great to talk about TikTok and journalism in the same sentence, because I think, and I'm not a journalism expert, but it seems like one of the major issues that journalism is facing right now is that there has been this shift in the business model to really focusing on the attention economy, right?

So the definition of good journalism is starting to shift, not necessarily in newsrooms where editors are doing their best and on the front lines trying to report news, but in terms of where the money comes from, right?

Like if the money comes from the number of clicks you get, it's not about how accurate or how unbiased or how straight-faced the report is.

It's about how emotionally salient it is, how attention-grabbing, potentially how fear-inducing.

There are a number of different vectors there that some social science research has actually shown.

The research is a little muddy, as all social science research is.

So there have been some trends that just show that the things that you're being rewarded for, the things that are being seen as good journalism from just purely a monetary and an economic perspective are very different than those traditional journalistic values that we see that are so critical to democracy.

And that shift, I think, is just perfectly encapsulated in TikTok, right?

Because TikTok doesn't reward journalism for being excellent, Pulitzer Prize winning, whatever, right?

It rewards journalism for being funny and cute. And in my case, like dog videos, right?

And so it's just a completely different idea of what makes good journalism.

And I think that shift in the incentive structure just isn't something that we've fully grappled with as a society, because you lose something.

Makes sense. At RSA, you spoke about AI and extended reality, so virtual reality, augmented reality.

In what way are those risks real and already happening?

And did you also see trends coming up in your panel there? Yeah. So RSA left me a little terrified, which was great.

We even got into something that I wasn't fully expecting, because I consider myself to be a bit of an AI skeptic in some ways: the talk about artificial general intelligence and this idea that maybe we're not actually so far off.

And so, you know, again, back to this idea where from my own perspective of there's so much potential here and so much good that can be done in the world, how do we establish guardrails now so that if we ever do reach AGI, for example, right, this idea of like a fully autonomous, artificially intelligent, you know, being or other creation that's kind of interacting in the world in a much more autonomous way than just, you know, draw me a picture of my dog in a Lakers jersey, you know, like, what is that going to look like?

And how are we going to protect against that for the good of all of society?

And so from an AI perspective, I think, again, there's so much hope and then there's also a lot of cause for concern.

I think on the extended reality front, there's, I mean, Apple just came out with a new Vision Pro headset, which I think if we think about it and conceptualize it, it's kind of like the iPhone 1.

Like, I'm not buying one. They're very expensive. They're very expensive.

But I could totally see myself buying a Vision Pro 5, right, or a Vision Pro 6.

And so I think the question is, are they going to be able to, A, get the app environment to encourage enough developers to create real use cases for it?

And, you know, are they going to be able to convince enough of a market? And so I'm not worried about national security concerns necessarily until we see enough engagement and uptake on that.

The one area where I would draw a bit of a distinction actually on both fronts and really on all technology fronts is just the digital divide.

And I don't think this gets talked about enough, especially in tech conversations, where, you know, it's great when you implement 6G networks in a country, but a lot of times, a lot of that ends up in urban areas or areas with higher socioeconomic development.

Sometimes even urban areas have dead zones, actually.

But, you know, to the extent that we introduce these technologies and they're able to make some people's lives a lot better and a lot more productive, the other people who aren't necessarily getting access to this, right, who can't pay $3,500 for a headset, for example, are going to be left behind.

And what does that mean when you see this really stark divide as these technologies, in addition to their harms, just produce incredible benefits, but only for a particular segment of society?

What does that mean for us, A, from a security perspective, because highly unequal societies can also be more dangerous and can undermine democracy.

And what does that mean for us also just from the perspective of we want a society where all ships are rising, right?

And how do we accomplish that when some people are essentially being locked out of, like, one of the most productive revolutions of our time?

Makes sense. It's a good point. In terms of AI, potentially more than VR at this point or augmented reality, there's a lot of talk on regulation.

Tech companies, OpenAI and others, have been asking for some type of regulation.

A lot of talk in Europe about that too, in the US. Do you as a researcher see a path in terms of regulation?

It's not easy because it's not fully formed, if you will.

AI is constantly evolving. We don't know what will be in two years.

But is there a path that you see in terms of looking at security of regulation that could be important to have in mind?

So I'll plug it one more time. The report that we worked on about social media has a lot of really interesting new proposals from a task force on where they see potential for interesting regulatory ideas to come from.

For me personally, I think that all regulation needs to start from good data, and we just don't have that right now.

And I think that one of the most dangerous trends, particularly in social media, but I think we're seeing this across the board, is, for example, companies hiking their prices of API access for researchers.

And unfortunately, what that means is that the people who we trust as a society to get to the bottom of some of these thorny questions are not able to do that.

So I think establishing better access for researchers, like researcher transparency, is important. I think there have been some really interesting efforts in Congress to look at ways you could create a regulatory agency associated with social media.

I think there's potentially some workarounds where you could do that with AI, but I just think that the most important thing right now is getting a real understanding of the landscape.

And I don't think that we have that because of trade secret concerns, because of competitive advantage concerns, and also because, frankly, regulation is expensive for companies, right?

And so to the extent that companies try to do things the right way or try to push for regulation, that regulation is going to mean they're slower to market.

It's going to mean they're not necessarily going to be reaching profits at the levels and at the speed that they would otherwise want to.

And so how do you build a society and how do you restructure incentives so that the incentive is not to move fast and break things, especially in a space where we know the thing that they could break is our society, right?

And we're already starting to see it. So how do we convince tech companies that they need to be doing these things?

Because clearly, they're not going to do it out of the goodness of their hearts, right?

And there's a number of different reasons for that, good and bad.

So how do we create economic and other real incentives for them to do that?

And I think the first step is we need an understanding of the playing field.

So we need them to support real and solid transparency with government regulators and also with academic researchers.

Makes sense.

Social media, in a sense, was the start of that specifically. AI is still growing in terms of impact, at least from most people's perspectives.

There's a lot of talk about ChatGPT and others, but there's this sense that we're still trying to figure out what we'll do with those tools.

Companies too. It's powerful. Everyone knows it's powerful.

But where will it go? We're still figuring it out.

Companies are figuring it out. With social media, mistakes were made and problems arose; in what way can we make it a little bit different this time?

You gave a few examples, but maybe like more specific ones regarding the next few years, for example.

Sure. So I think in the social media space, I would actually argue that things with social media went pretty well for a while.

For a while, yeah. Yeah. I mean, there were a lot of unintended consequences, which is just a natural result of change, right?

Like when you have change, like things are going to go wrong.

And that's just a necessary evil that accompanies progress.

And I think that one of the areas where that was great, but that has kind of worn out its welcome, is Section 230, for example.

So that's this regulation in the United States that has essentially created a situation where good-faith content moderation by companies is acceptable.

And other than that, like platforms on the Internet don't necessarily face liability for bad things that might occur on their platforms.

And I think that was great for developing the Internet and for growing this incredible online ecosystem that we see today.

But I think that is a big part of why there just hasn't been as much incentive for companies to try to take steps against these things because they have Section 230 immunity.

And so I think from an AI perspective, there have actually been questions about whether, and how, Section 230 can apply, for example, to generative AI and some of these new technologies.

Some of that is being questioned by plaintiffs' lawyers, for example.

And so I think that what's really important is creating an environment where AI, and again, back to this liability issue that actually got talked about a lot at RSA, creating an environment where we kind of have parameters around what responsible AI development looks like, but also there will be consequences for irresponsible AI development.

And when you have a law like Section 230, there's really not a ton of consequences for irresponsible development.

And so figuring out a better way to balance that as we move into this new AI-based future, and so maybe that is just leaving the law as it is and allowing AI to kind of continue to simmer up a little bit, which I think would be the company's preference.

But I think that if you're going to do that, you really need to do it with a lot more transparency and a lot more understanding. Maybe, for example, take tobacco companies: one of the things that US regulators did was basically create this settlement fund that set aside money to try to curb smoking.

Maybe you need something like some sort of AI harm settlement fund that gets created up front.

That didn't happen up front with tobacco, but maybe we could do it here, right?

That would essentially be able to compensate people who we know are going to be harmed by this technology.

So let's at least create some sort of backstop and some sort of safety net for the people who are harmed, and just get that process started now and get those parameters set now.

You spoke about a few things regarding, for example, preoccupations in terms of democracy, elections, trust in institutions.

There's a problem there right now, we would say; we were even talking about TikTok, for example.

But when you think about what a better Internet for the future would be, what would that be, really?

Definitely. So I think there's a couple of different ways to think about this question.

And so I'm going to take it from just first principles in terms of like, what are we looking for in a healthy and functioning democracy?

Because I think that those principles can be applied on the Internet, right?

So again, just drawing from that report, I think that we talk about democratic principles being things like accountability, right?

And that can be an issue when you're talking about online worlds, because from a lawyer's perspective, jurisdictionally, it's really hard to prosecute somebody in, you know, Bangladesh if I'm here in, you know, the Southern District of New York or something. Sometimes getting access to evidence, evidence gathering, those things can just be harder than if everyone was physically in the same place.

So, you know, figuring out mechanisms for much more accountability, when there is harm that occurs online, figuring out new ways to conceptualize harm.

We talk a lot about harm in the physical world, right? Like, if I punch you right now, like, it's pretty cut and dry, that's assault.

Well, if I, you know, punch you in, you know, extended reality, for example, or if I'm able to, you know, touch you in ways that make you uncomfortable, I'm not necessarily touching you, right?

And so how do we reconfigure laws to kind of understand that these potentially emotional or psychological harms are very real, right?

And can be potentially tremendously damaging, especially to our children.

So figuring out ways to reconceptualize harms for this new Internet that we know is coming, thinking about, you know, open debate, and again, trusted gatekeepers and giving people access.

So that's those tools that I mentioned. That's also, you know, building a robust media, right?

Giving people the backstops that they need to kind of counter misinformation.

And also, again, just that education and the ability to build strong in-person communities.

Because the way I think, and from what I've seen in research, that you counter radicalization and that you counter the isolation that can accompany being driven into some of these darker corners of the Internet is you build real in-person connections and you build bonds.

And that starts at our schools, that starts in our local communities, that starts with investing in our libraries.

There's so many different ways in the real world that we need to improve things that I think will help create this platform for a healthier digital world.

In addition to, again, the researcher transparency, access initiatives, the idea that maybe you need to start running pilot programs, right?

Maybe the government needs to start funding pilots on what regulation could look like, or, you know, seed-funding new and alternative models that don't necessarily rely on our same market-based incentives that, as we talked about, don't really work for things like journalism.

You just about wrapped things up. I was curious, since you mentioned your students: what do they think about these topics, and do they surprise you in terms of their opinions?

Oh, my students are the best, and I'm not biased, I swear.

So what I really enjoy about teaching at Georgetown is that we get such a diverse array of students, everyone from 20-year-olds from the UK who came over to get an LL.M. to a military prosecutor who's, you know, been in the business for 20 years and decided to come back and just get a degree, and everybody in between.

So I think that one of the things that has really been interesting to hear my students talk about is just the way that their relationship with online changes, and even sometimes their kids' relationship with the Internet changes, which is really rough when, you know, my students haven't heard of, like, AOL screen names, right, or I'll make a reference to Lord of the Rings, and they're like, what was that from?

So, you know, so besides the fact that, like, the references have changed a lot, I think it's really interesting to see them and their hopefulness and their optimism about the Internet, but also their very real recognition of the dangers of the loss of journalism, right?

I think that's something that really came up a lot in our class, especially with the tumultuous world events we've seen, you know, my students were really concerned about where do we go for true information?

And that is just such a hunger that, A, is great to see in the next generation, but is also a very real concern.

That's interesting. A good way to end. Thank you, Jenny. It was interesting.

And that's a wrap.
