Podcast

Smoking, cigarette, social media, company logos
Photo credit: WeDevlops_com / Pixabay, Editantpv / Wikimedia (CC0 1.0), Instagram / Wikimedia, YouTube / Wikimedia, X Corp. / Wikimedia, Facebook / Wikimedia, and atelier Moss / Peels.

Social Media Had Gun-Level Immunity. That Just Ended

04/03/26

A jury just shattered Big Tech’s legal shield. Meta and YouTube: guilty of engineering addiction. This is social media’s tobacco moment — and AI is next.

Two industries in America have enjoyed near-total legal immunity for their products: gun manufacturers and social media platforms. Last week, the legal shield for social media companies appeared to shatter.

Juries found Meta and YouTube liable for harming users by deliberately engineering addiction, knowing the harm, and doing it anyway. Legal experts are calling it social media’s tobacco moment.

Our guest on this week’s WhoWhatWhy podcast is Fordham University law professor Olivier Sylvain, a former senior adviser at the Federal Trade Commission and author of Reclaiming the Internet: How Big Tech Took Control — and How We Can Take It Back.

He explains why these suits, in New Mexico and California, succeeded where previous attempts had failed to hold social media platforms accountable. At issue this time was not the content of online speech but the intrinsic machinery of social media — infinite scroll, autoplay, recommendation algorithms designed to hold attention at any cost.  

Although federal law has long held that social media providers are not responsible for what users post, Sylvain says that evidence of the harms to young people has become impossible to ignore. Courts are finally asking the question Sylvain has been pressing for years: Are these commercial services engineered to monetize vulnerability — and thereby putting millions of users at mortal risk?

Sylvain’s real concern, however, is what comes next: artificial intelligence is being built on this same, flawed foundation. Same immunity claims, same reckless deployment. Until we settle whether these new products are subject to safety standards, he argues, we can’t address what new harms may be brewing.



Full Text Transcript:

Jeff: [00:00:10] Welcome to the WhoWhatWhy podcast. I’m your host, Jeff Schechtman. There’s a certain comfort in blaming the machine. It absolves us, doesn’t it? The algorithms made me click, the feed kept me scrolling, the platform radicalized my uncle. We’ve become a civilization of Flip Wilson’s “the devil made me do it,” except now the devil wears a hoodie and codes in Python. 

My guest, Olivier Sylvain, isn’t buying it, or rather he’s buying it completely, which is precisely what makes his argument in his new book, Reclaiming the Internet, so unsettling. A law professor at Fordham and a former senior advisor at the Federal Trade Commission, Sylvain has written a book that reads like an indictment handed down from a prosecutor who’s seen the evidence and knows exactly where the bodies are buried. Big Tech, he argues, has cloaked ruthless commercial exploitation in the language of free speech, turning our vulnerabilities into profit margins and our attention into the most valuable commodity on Earth. 

But here’s where it gets interesting. Sylvain doesn’t just blame Mark Zuckerberg or Elon Musk or the usual parade of Silicon Valley villains. He blames Congress. Specifically, he blames Section 230 of the Communications Decency Act, that 1996 legislative gift that gives platforms legal immunity rivaled only by that of gun manufacturers. That immunity, he contends, enabled everything that followed: the infinite scroll, the algorithmic feed, the autoplay video, the recommendation engine. All of it engineered not to facilitate free expression, but to maximize engagement — and with it advertising revenue. It’s a compelling case. Maybe too compelling, because if we accept this premise, that humans are fundamentally fragile, that our psychological vulnerabilities are no match for weaponized design, then we’re left with a question that predates the internet by several millennia: Why do we forgive the weakness and prosecute the mirror? We’ve always been susceptible to demagogues, to carnival barkers, to the hypnotic flicker of the television screen. Madison Avenue spent half a century turning our insecurities into soap sales, and we called it capitalism.

Now we do the same thing with data and we call it a crisis. What’s changed? The scale, the speed, or just our willingness to admit that maybe, just maybe, the fault lies not in our apps, but in ourselves? Sylvain wants to regulate platforms the way we regulate automobiles: product safety standards for the digital age, data minimization, purpose limitations, accountability. It’s rational, it’s measured, and it’s the kind of framework that makes perfect sense, until you remember that the internet’s greatest innovation came precisely because no one was asking permission. Would we have any of this — the good, the bad, the transformative, the toxic — if the rules had governed it from the start? The internet, Sylvain insists, was never really ours to begin with. That story about liberation and connection and democratized knowledge is nostalgia masquerading as history. We’re living, he says, in the wreckage of a promise that was broken before it was made. He may be right, but if he is, then the real question isn’t how we reclaim the internet, it’s whether there was ever anything there to reclaim at all.

It is my pleasure to welcome Olivier Sylvain here to the WhoWhatWhy podcast to talk about his book Reclaiming the Internet. Olivier, thanks so much for joining us. 

Olivier: Thanks very much for having me. I don’t doubt that we can dive into the things that you’ve really surfaced with your introduction.

Jeff: Well, talk a little bit about that in a broad, general sense first: the idea that we have always been susceptible to persuasion, that Madison Avenue has a long history of that, and that some of it was just advertising. Some of it was its own kind of psychological warfare of the day. 

Olivier: Yeah. The mid-century model for broadcast advertising is completely different from what we have today. Mid-century, you have a handful of dedicated media, through which a handful of major companies can deliver their messages to a wide swath of the public. You know, we associate this with mass communication, itself causing all kinds of interesting changes, right? I mean, transformative in so many ways. I think of Walter Cronkite as a kind of cultural anchor, you know, in the mid-century. And advertisers are happy to support that model because there’s a singular voice that kind of articulates a consensus view. We have something altogether different in the modern era. I don’t need to remind your listeners about how different it is from broadcast television. And the motivation here, to the extent there is any clarity, is to personalize the information that so-called “users” — I put that in quotes when I generally talk about it — receive, and that’s a remarkable change, right, from the mid-century. And advertisers love this, right? At the margins, they’re spending less and getting more, because the content gets delivered to the people who are most likely to be interested or, in their view, most likely to be interested. This is what the companies have been cultivating since the early 2000s, upon the discovery of this business model, and they’ve been refining it over time.

Let me add one more thing to that: the scale, and you mentioned this in your introduction, the scale of this far exceeds anything in the mid-century, right? Not only are they able to target with more precision, although there are limits to how good this mechanism really is, but they’re doing it around the world. And it’s a handful of companies that are really positioned to play this mediator role between advertisers and consumers. 

Jeff: Weren’t newspapers doing the same thing? Certainly not at the scale, and you’re certainly correct about that, but the model of that kind of advertising that is directed at a particular consumer is something that has been part of broadcasting, part of newspapers, part of media for a long time.

Olivier: Well, newspapers are part of the old media. This is not the same story. When my father and mother received The New York Times at the front step, the advertisements in The New York Times were the same as the neighbors’ were. When my parents now open their Facebook account, they see ads that are altogether different from their neighbors’ ads. That’s not the same. 

Jeff: But the New York Times ads were very different than the ads in the Daily News or the ads that might have been in The Herald Tribune or whatever. 

Olivier: Sure. 

Jeff: They were directed to a specific demographic. It wasn’t as targeted, it wasn’t as precise. It wasn’t at the same scale. But the principle seems the same. 

Olivier: Oh, advertising. It’s all advertising, if we’re not going to distinguish between different kinds of advertising models, and I agree, it’s all advertising. But it’s a completely different animal, Jeff. I mean, the Daily News certainly has a different demographic audience than The New York Times, and the New York Post for that matter.

But we’re not talking about differences of orientation with regards to the entities that are delivering ads. We’re talking about the differences between the consumers. That’s particularized in ways that are out of proportion to what was going on before. And, what’s more, I appreciate the distinction you made between The New York Times and the Daily News and the New York Post for that matter. But, you know, even these companies have different orientations. So Facebook is different from TikTok, which is different from YouTube. So that’s the closest parallel we have, right? They’re making choices about the kinds of ad partnerships they have. What’s different is the variety, the vast variety on the consumer end. It’s absolutely different from what was going on with regards to newspapers or broadcast television. 

Jeff: And to the extent that it’s different, where, in your view, is the problem with that in terms of the advertising side? We’ll get to other aspects of it in a moment, but just stay with it for a moment and finish up on the advertising side.

Where is the problem? I mean, the consumer is getting information arguably about something they might be interested in. In some cases they’re getting something free as a result. Where is the harm, I guess, is the question? 

Olivier: So I hope we talk about the ways in which business models are never subject to public scrutiny. That’s what I’m most interested in. But even here there are dangers — there are a couple of them. 

First, regulatory agencies across the world, including the Federal Trade Commission, have been looking at the ways in which companies target the delivery of content: in ways that consumers ostensibly know about, but also using tricks and design features that consumers are just not hip to, right, and using personalized information, the collection of information about consumers, in ways that consumers don’t know. One of the more notorious cases, for me, involves Vizio, a manufacturer of smart televisions that didn’t convey to consumers the ways in which their viewing habits were getting collected for purposes of monetization, ways consumers didn’t know about when they were watching programs. That is a whole other beast. And apart from that is the fraud in the ad market. Your listeners might want to look up ad fraud online. Many of these companies, and I’m talking about the big ones, Meta in particular, but it’s not just Meta, Google as well, have been the subject of inquiries by advertisers about the claims these companies make about click rates or the extent to which people are watching their videos. 

Let me give you an example. You know about autoplay, right? When you scroll onto a video, sometimes it just starts automatically. And many people don’t stop to watch. That’s a design feature. What the companies have been claiming to advertisers is that that counts as viewership, which has inflated the ways in which they monetize consumer attention. So we haven’t even yet talked about the deliberate design features that hold consumers’ engagement, just with regards to advertising.

So I think I’ve described two ways in which it’s completely different. 

Jeff: In terms of the relationship between the platforms and the advertisers with respect to their business, isn’t that their business problem, and not something that should be regulated, essentially? 

Olivier: No, no. I believe it’s something that should be regulated. But advertisers have themselves filed lawsuits against Google and against Meta. And there have been settlements, you know, very large settlements, on the basis of this ad fraud. 

So what role do regulators have? Regulators aren’t necessarily there to make people’s lives difficult, although, you know, some people might have doubts about that given the different leaderships we have in government. If regulators or agencies are supposed to stand in the position of market actors that cannot fend for themselves because of market asymmetries, then yes, regulation has a role to play, right? In the market for consumer attention, there are dominant players that advertisers often feel they have no choice but to engage.

That’s why we might just expect that the platforms be forthcoming, not deceptive, about the information they collect about consumers when they report out to advertisers. That’s regulation, and it’s fully consistent with the proper functioning of a market. 

Jeff: Talk a little bit about the consumers themselves and what they should be concerned about versus what they are concerned about, and why perhaps they’re not as concerned as they should be about this data being collected and the information that’s taken, because they’re getting something for it in many cases, whether it’s YouTube, whether it’s Google Maps, or what have you.

Olivier: Yeah, no, consumers are getting something out of this. I hope you don’t hear me saying that they’re not. They absolutely are and in droves, right? I mean, these are profitable companies because they’re very good at identifying the kinds of things that consumers want, without question.

But, you know, I push back on the idea that people are completely comfortable. Last year, the Pew Research Center published a study about how parents and children feel about social media, and the vast majority believe that social media are extremely, or somewhat, dangerous for teens and potentially adults.

So I don’t necessarily buy the view that everybody’s happy. I think there’s a mood. And you don’t need to go very far, in this country or around the world, to sense that something is awry. And what’s more, the focus — and this is the focus of the book — isn’t that these are companies that are, you know, pursuing these business strategies out of nowhere. Congress has put in place a regulatory regime that makes this possible. So they’ve been able to exploit the protections under Section 230 and the First Amendment to insulate the ways in which they design their services, in ways that we could never really scrutinize, until now.

Jeff: Why do you think 230 has never been addressed? Clearly we can agree on lots of things that are wrong with Section 230. Lots of results that have come about that are uncomfortable to consumers and uncomfortable to the public, particularly with respect to teens. Why has it not been addressed in your view?

Olivier: You know, there’s been a lot of litigation involving Section 230 over the past three decades. And so there are people who have been addressing it. And there are a couple of big cases — one in 2008, involving Roommates.com, and then a handful that really started popping up seven or eight years ago — where it has been addressed in the courts, where people are becoming more assertive, and courts are pushing back against the big companies’ claims to protection.

But, you know, if you’re asking why Congress hasn’t changed the law: Congress was on the cusp of passing bipartisan reforms in 2020 to 2021, because everybody has some concern about the power that these companies wield, on the right and the left. Now, they’re coming to this from different places, but there has been this effort to attend to 230. So I actually don’t say that it hasn’t been addressed. It has been; we just haven’t seen a change in the law. And one of the big reasons is the substantial effort these companies put into making sure there is no change. But I’m hopeful, because this concern has been manifesting itself in the courts. There have been a handful of cases involving very compelling plaintiffs, mainly children, though for me it’s not just children, whose parents are bringing cases on behalf of their kids, with claims about harm and the ways in which these companies design their services. This is changing.

Jeff: Should there be a distinction when we talk about this, between the impact of all of this on children, on teens, on young people versus adults? 

Olivier: My argument is there should be no difference, if only because the market asymmetry, between what the companies do and what consumers know, is dramatic, whether you’re a child or an adult. Let me give you an example. There are a couple in the book, but one that’s harrowing involves a young man who goes to a site called The Experience Project to find drugs, and he eventually does overdose on a black-market drug whose sale was facilitated by the website.

There were so many websites citing Section 230, like The Experience Project. And we could never find out the extent to which the company should be held accountable. Now, I’m not here arguing that they should have been. What I’m arguing is that there is a legal shield that never lets us determine the extent to which they are responsible.

Another example is a case the Supreme Court heard, Gonzalez v. Google, on the question of whether the companies are facilitating, through recommender systems, connections between potential terrorists. There have been a handful of these cases over the past couple of years. This one arises out of an ISIS bombing in Paris. And anyway, you can understand the fact pattern. And by the way, these are grownups. These are people bringing cases on behalf of their family members. And the companies say you can’t find out whether or not our recommender systems are causing a violation of the Anti-Terrorism Act because “we are just platforms. We’re just connecting users.” And the Supreme Court essentially says, “you know what? We don’t understand this 230 protection; we don’t get it.” The oral argument is really interesting because nobody seems to understand what to do, and the court just says, you know what, let’s just go to the merits: we do not think that YouTube or Facebook are responsible for the murders, the deaths, the terrorist attacks. But at least the court went into the question.

We have a regulatory regime where that is rare, because the companies have invoked Section 230 even when they are facilitating connections, not simply being platforms that randomly connect people as a matter of course.

Jeff: Talk about how this relates, really, to the context of something like automobiles. I mean, cars kill what, 30,000, 40,000 people a year, and that’s a trade-off we make: mobility versus the dangers inherent in it. There are benefits that come from that. Talk about how that’s analogous — or not — to what you’re putting forth. 

Olivier: I appreciate the comparison, and you set it up in the introduction. So, you know, when Ralph Nader and Public Citizen, what was it, 50-some years ago, were advocating for seat belts, they were talking about the failure of companies to build products that are safe. And the arguments that you’re seeing around the country now are precisely the same.

These are companies that are reckless to the extent they’re not attending to predictable, foreseeable downstream harms. When you design a randomized chat room that allows sexual predators to meet children, and you have no function to catch that in the first instance, never mind after, there’s something really worrisome about that. When you design services that rely on consumer information, volunteered by consumers, to enable advertisers to target ads based on protected categories like race or gender or age, to what extent should you be responsible for having done that? These are design questions. These are not questions about whether users are speaking freely or empowered. These are questions about whether the companies are engineering experiences that are dangerous in the first instance; Section 230 keeps us from ever uncovering that. You know, I actually believe it’s a tough question, whether or not they are responsible, but at least we should be inquiring into it, not simply allowing these companies to walk away thinking they are simple platforms.

Jeff: Have we gone so far into freedom, with what exists now with respect to Section 230, the claims of free speech, and the libertarian view of all this, that it’s not just difficult but almost impossible to begin to turn the clock back, to begin to address these things in a contemporary context? Particularly when many of the platforms that are the problem, many of the things we’re talking about, are almost, if not yet, on a downward trend, and there are new things, AI and other platforms, coming along. It’s a world that is moving so quickly. 

Olivier: I love this question, Jeff, because it’ll give me the chance to talk about how the new big giants in AI are also invoking the free speech mantra. But you’re asking why has it been so hard? How did this happen? It’s because there’s something really alluring and seductive — and historical — about the free speech argument. You know, when John Milton wrote Areopagitica, you know, nearly 400 years ago, he was talking about a sentiment [that] resists government control of the way in which we speak with each other. When the internet is deployed commercially, its strongest advocates are saying the same. You know, this is going to transform the way we do our politics, our electoral politics. This will disintermediate the most powerful. Democracy will spread. You remember this, many people were fascinated and drawn by this. Even maybe 15, 20 years ago, this was the story about transforming democracy. There’s something really palpable and salient and attractive about it, and in many ways it has proven to be true with regards to electoral politics. But, let’s make no mistake, this is an argument that has also been a way for the companies to consolidate power and control and insulate their business models from inquiry — because they are, to be sure, designing experiences, but presenting themselves as simple platforms for free speech.

Now, with regards to the most recent batch of companies: I don’t have social media on the downslope, but I think people are imagining that the next tech companies to really dominate the space are AI companies. And I want to use chatbots, or conversational AI, as an example. Character Technologies is a company that built a service called Character AI, and Character AI allows its consumers — or its users, if you will — to design personages that they can interact with. And Character AI is not the only one; Gemini enables this too. Frankly, any large language model operates as a kind of conversational AI. Very recently, in a case involving the suicide of a teenage boy who got into a romantic relationship with a persona he himself created, Character AI has argued that these companies are operating as free speech platforms — that the ways in which this young man interacted with the character he created, Daenerys, based on the Game of Thrones character, were an exercise of the young man’s free speech. Do you understand this? They’re arguing that they stand in the shoes of their consumers, even when the consumers use their services to commit suicide. This to me is remarkable. Perverse, actually. And I think we’re going to have to confront the plain fact that the kinds of things these companies do are not expressive activity in the ways we think First Amendment doctrine, free speech doctrine, generally should be used to protect.

Jeff: To the extent that free speech is put aside as an argument, what is the further defense of these companies? Beyond free speech, in terms of business models, the marketplace, the economic freedom to conduct their business? And given that, what is the best approach to regulating these companies?

Olivier: I mean, in so many of the cases, the plaintiffs are alleging product liability and negligent design in their theory. And the companies are arguing that these are not products; that they’re information services, right? That they’re expressive services. So, for legal purposes, there is a way to push back that’s unrelated to free speech, right? They’re not products in the same way. As you know, I feel differently about that, and many people are beginning to see that perhaps they are products after all. So, take a step back for a second to understand the gist of your very good question: Does an overbearing regulatory regime make it difficult for innovation? Make it difficult for companies to come out with products without fear that their newest thing is going to intrude, or slow down the development of products that really might run down to the benefit of consumers? And my answer to that is that I don’t for a second think that these companies couldn’t build products that make consumers better off. I believe we’re seeing a lot of that all the time. But the ground rules, the basics about playing by the rules of the game, just don’t apply to them the way they apply to other companies. They have a leg up. They have a leg up. They don’t have to worry about downstream harms the way car manufacturers do, the way even the old media do; they are completely insulated to the extent they invoke these free speech protections.

Jeff: With respect to regulation, is there, in your view, a way to regulate these companies that really does not impede innovation? That does not impede their ability to do the things they do, and even has some impact, as far as consumers are concerned, on things like the advertising we talked about earlier, without overregulating?

I mean, one of the concerns is always overregulation, number one. And number two, as we have seen time and time again when some of these CEOs have gone before Congress, there is a complete lack of knowledge, in many cases, among so many members of Congress, about what the realities of these platforms really are.

Olivier: Yeah, I would love to be in a world where we talked about what is too much regulation, Jeff. We’re not even there yet. We’re talking about any oversight. Any oversight. That’s what we’re talking about. So I’d rather be in a world where, sitting across the table, we can talk about different measures that don’t go too far. For example, Australia’s recent ban on teens’ access to social media, for my taste, is probably overdoing it. There are advantages and opportunities that our kids should be able to learn about when they’re on the phone. You might be surprised to hear that from me. But the instinct in Australia was healthy, right? There’s something going on with kids’ social media, particularly on their phones, that is worrisome. I’m just arguing for some moderation. I’m not arguing for overregulation. We can have disagreements about what the right regulations are. I talked about data protection and the ways in which companies collect and monetize consumer data, and narrowing Section 230. Those are my interventions, which I tend to think are pretty modest, actually. I can imagine more aggressive things, like bans on, you know, kids’ access to social media, which is really hard to implement in any event. But I’d rather have the conversation about whether regulation is possible, not whether we are overregulating. We’re nowhere near there.

Jeff: But part of the fear of regulation is overregulation. I mean, if you talk to people in the tech industry, that’s their first knee-jerk reaction — I agree with you — that they’re going to be overregulated. So isn’t it important to talk about the specifics? I’m not saying that we should get into the weeds here, but to better define what that regulation would look like, as a way to blunt the argument of overregulation?

Olivier: Sure. I mean, maybe you are asking for the details. So the last chapter of my book does talk about specific interventions, interventions that are specific to the kinds of harms that worry me. And as I said, data protection would go to the heart of the business model in ways that are consistent with other areas of the law, right? Where companies are held accountable for the ways in which they use consumers’ information, not just for targeting ads. So I think there are definitely interventions that are important. Antitrust is another possibility, right? Making sure that companies don’t occupy or exploit dominant positions in the market. Or structural limitations, which say that if you do, say, search, you can’t also do certain kinds of advertising on search — which is the nature of the lawsuit against Google, right? So I think there are ways for us to be careful. But it’s no surprise that companies don’t want to be regulated, and it’s no surprise that they say they’re worried about overregulation, and that they say we need specificity before we decide to regulate. You know, there’s no disagreement there. If that means that everybody’s prepared to regulate, then let’s lift the protections that they currently enjoy, and think about what those are. So, Jeff, you understand, they don’t want to go there. The argument that they make, or that others make, that they fear overregulation is not an argument against careful regulation. It is an argument that just doesn’t want any regulation. 

Jeff: But part of the problem is that, unless you blunt that argument from them, it’s hard to make the case for the specific regulation. 

Olivier: What are we blunting? What is it that we want to blunt, and whom are we trying to persuade? There is extant evidence that there is something that resembles addiction. I don’t really use that term, but there’s something that resembles dependency in the ways children use social media. We have evidence that the companies exploit their dominant position to deceive advertisers about the ways in which their ads are getting delivered to consumers. We have evidence that there are people, adults, who are subject to harm because of the ways in which the companies target ads and enable, for example, discrimination through their platforms. We have plenty of evidence that there’s harm, and so the best we can do is blunt those harms. And I think there’s widespread agreement on that.

We haven’t even yet talked about misinformation and disinformation. Right? The engagement model is driven by an ambition to hold attention, almost irrespective of what the content is, as long as it holds that attention. I’m sure I’m not the first person to say this on your show.

There are externalities, what economists call externalities, that the companies never internalize. So I’m not worried about blunting, ahead of time, the concern about overregulation. I am arguing that it is time we be far more serious about regulation than we have been, and I think the evidence is plain that something has to happen.

Jeff: I agree with that, and I guess my broader point was in terms of the politics of it, in terms of the public sentiment. To come back to a point that you made at the beginning: there is something amiss out there. People are annoyed, people are fed up with parts of this, there’s no question about it, including data protection and various other aspects of social media that you’ve talked about.

But part of the problem is that those very same people are concerned about going too far in the other direction, and that in order to create the public sentiment necessary for the kind of regulation that’s sensible, you first have to do away with this argument, in the public consciousness, that the alternative is overregulation.

Olivier: The burden… I don’t think the burden is on people who’ve demonstrably expressed concern about the ways in which these companies operate. I think the burden is on these companies that say “we should not be regulated.” Frankly, Jeff, I think that the concern about overregulation that you are expressing is not unanimous. It’s not the kind of thing people are talking about. We have enough evidence, enough survey research, that shows that people are worried about it on the left and the right. I mean, part of the concern that Texas and Florida had when they passed the social media laws that the Supreme Court reviewed a couple years ago — a concern about censorship that I don’t share — is that the companies occupy a position that dictates and controls the information that people receive. That view comes from a place that is not far from where I am, where most people are: that these are companies that occupy a very special place in our political economy, a dominant position from which they basically design, or engineer, the human experience. And so I actually think the vast consensus in this country is for some form of regulation. I do not see what I hear you describing, which is a concern about overregulation. It’s almost surprising to hear it, just because we have none. We had zero, right? There is no regulatory oversight of the ways in which these companies operate, but for these cases lawyers are now bringing, with success.

Jeff: And finally, talk a little bit about how you see the situation playing out as AI becomes more dominant, as this new set of companies becomes more powerful and in some ways replaces some of these older companies. How do you see this debate evolving?

Olivier: You know, it’s interesting. I think people are pushing back. And I see a couple signs of this. So I mentioned the case of Character Technologies, right? They deploy Character AI as a chatbot service, and in a case where the chatbot allegedly encouraged a young man to commit suicide, they invoked the First Amendment argument that the chatbot’s output is protected speech. The trial court that heard this case rejected that argument and said, “You know what, no, let’s actually just dig into the ways in which you may have built a defective product.” And this is not the only case, right? These are things that are popping up more and more.

There’s a horrible case involving TikTok with a similar sort of result, where courts are pushing back. With regards to AI companies, moreover, I don’t think it’s an accident that communities all over the country are pushing back against data centers. And the data centers, right, these are huge complexes on which AI companies rely to process the information that powers AI services. These companies are exploiting communities all over — I say exploiting — trying to place data centers all over the country, and communities all over the country are pushing back because of the costs this imposes on energy and water supplies, right? The reason I take solace in this is that we are no longer talking about these companies operating somehow in the cloud or, you know, divorced from the world in which we live. They are part of our communities, and they should be accountable for the ways in which they are part of our communities.

So, whether it is chatbots or data centers, I think there is an emergent concern that is, by the way, careful, right? In many ways, we want to see AI evolve and become better, because it likely will improve many people’s lives. But there are so many risks and dangers in it. It’s about time we think a little more carefully about this, and I think we’re seeing signs of that.

Jeff: Olivier Sylvain, thank you so much. Your book is Reclaiming the Internet. I thank you so much for spending time with us.

Olivier: Jeff, it’s been a pleasure. You’ve asked really great questions. I feel lucky to have had a chance to talk with you and your listeners.

Jeff: Thank you so much, and thank you for listening and joining us here on The WhoWhatWhy podcast. I hope you join us next week for another WhoWhatWhy podcast. I’m Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.


  • Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and, more recently, the world of podcasts. To date, he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March 2015, he has produced almost 500 podcasts for WhoWhatWhy.