Get ready for an adrenaline-fueled exploration into the dark underbelly of the digital world.
As software takes over our lives — from hospitals to schools and even our national infrastructure — we’re facing an explosive wave of cyberattacks that could threaten our very existence.
The US — a hacker’s favorite playground — is a ticking time bomb, with 80 percent of its critical infrastructure wired to the internet, in private hands, and subject to virtually no government oversight.
Join us in this week’s WhoWhatWhy podcast for a frightening conversation with New York Times cybersecurity reporter Nicole Perlroth. We’re diving deep into her book This Is How They Tell Me the World Ends, freshly updated with new revelations.
Perlroth lays bare the dire need for a rock-solid national cybersecurity strategy and exposes the terrifying world of “zero days,” the secret security flaws that hackers crave. With a no-holds-barred talent war between the government and private companies, the US government’s cybersecurity defenses are left gasping for air.
Get ready to be appalled as Perlroth unveils the chilling tale of Russian hackers meddling in US elections and knocking out the Ukrainian power grid, and of Chinese operatives probing America’s every digital nook and cranny.
This high-level conversation explores the critical need for ironclad cybersecurity policies and the power of imagination in defending against nightmare cyber threats.
Perlroth shares spine-tingling accounts of how reliance on a single piece of software can leave our infrastructure wide open: the SolarWinds attack, near-disasters at dams, and hospitals under siege by cyber bandits.
Perlroth raises the possibility that the only thing standing between us and total cyber annihilation is the threat of a digital doomsday — a cyber version of global mutually assured destruction.
Listen to this conversation. It will make you question everything you thought you knew about the digital world.
Full Text Transcript:
(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to a constraint of resources, we are not always able to proofread them as closely as we would like and hope that you will excuse any errors that slipped through.)
Jeff Schechtman: Welcome to the WhoWhatWhy Podcast, I’m your host, Jeff Schechtman. The world’s critical infrastructure, including our banking systems, hospitals, and air traffic control, could be taken down by a single flaw in billions of lines of code. If that happens, it’s not a black swan event, it’s simply what might be expected. This is what former Defense Secretary Donald Rumsfeld would have called an unknown-unknown.
The Internet, to which virtually everything in our lives is connected, was never built with security in mind. And yet, 80 percent of the critical infrastructure of the country is controlled by private companies who, even with the best of intentions, have no idea how secure they really are. Every day on the Dark Web and in places we don’t usually go, a shadow group of players is engaged in buying and selling security flaws known as “zero days.” It’s the new international arms trade.
The world where the next wars and the next battles between nations will be fought is cyberspace. Our adversaries may not have our fighter planes and aircraft carriers and submarines, but they can recruit the best talent around the globe to take us down. This is the world that my guest Nicole Perlroth lives in and writes about in covering cybersecurity for The New York Times and as the author of This Is How They Tell Me the World Ends.
Prior to joining the Times, Nicole Perlroth covered venture capital and startups for Forbes. She’s a guest lecturer at the Stanford Graduate School of Business and a graduate of Princeton and Stanford. And it is my pleasure to welcome Nicole Perlroth back to this program to talk about This Is How They Tell Me the World Ends, just out in paperback. Nicole, thanks so much for joining us once again here on the WhoWhatWhy Podcast.
Nicole: Thanks so much for having me, Jeff, it’s great to be with you.
Jeff: It’s good to have you here. In all of the years that you’ve been working on this, the years you spent writing the book, and the year or so since the original book came out, what’s gotten better and what’s gotten worse in the world of cybersecurity?
Nicole: Well, for one, we hear this phrase in Silicon Valley all the time that software is going to eat the world. And that has happened. We’ve been baking so much software into every nook and cranny of our digital infrastructure, our pipelines, our power grid, even our water treatment facilities, oil and gas facilities. And then during COVID, everything just supercharged. Now we’re conducting most of our business virtually. Education really started depending on software. Healthcare, hospitals, et cetera.
And so that’s all just to say that the attack surface has rapidly expanded over the past couple of years. And then the other thing is that inside these companies, security teams have really lost the perimeter. Employees have become the new perimeter. Employees act as their own chief information officer. They decide what productivity apps they want to use, what marketing survey applications they’d like to use, and they start putting data into those applications, which just expands the attack surface further.
And then the big thing, too, is we’ve seen adversaries like China, who were really breaking into American companies with rudimentary phishing emails 10 years ago when I started covering this for The New York Times, move on to pretty sophisticated supply-chain attacks, where they lace applications with malware, often applications that organizations don’t even realize they’re using. And that’s how they get in. And it’s not just China; there was a case a couple of months ago where North Korea was doing the same thing.
So, really, we’ve just been blindly automating everything without any pause or reflection on how this could be abused. Meanwhile, our adversaries understand that these are all new potential inroads for attack, and they’re seizing on them. And then the last thing has been Ukraine. We’ve seen a dramatic 40 percent increase in attacks on critical infrastructure since Russia invaded Ukraine last year. Now, most of that is aimed at Ukraine.
And that actually is one slice of hope, because we’ve seen Ukraine, together with private companies and federal agencies in the West, really partner to root out these attacks and to limit their blast radius as quickly as possible. So those are the things.
Now, the attack surface here continues to get worse. We’re still blind to the potential vulnerabilities of what we’ve introduced. Meanwhile, Ukraine is fending off, or at least mitigating, a lot of these attacks and I hope will become a lesson for cyber defense and what we can do here.
Jeff: I guess part of it also is whether software has gotten any better. When Marc Andreessen first said that software is going to eat the world, in many ways he was talking about how dependent on it we were going to become, and that everything was going to be related to software. One would assume — incorrectly, I think — that in all this time, the software has gotten so much better. In fact, it seems that the proliferation of it has made it even more vulnerable.
Nicole: Yes, that’s right. Yes. I always say it’s the two Marks. The one Marc who said software will eat the world. And then the other Mark, Mark Zuckerberg, who said “Move fast and break things.” Now he’s walked that back. As I say in my book, the last time I was at Facebook, someone had put up a sign that said, “Move slowly and fix your-— ”
Jeff: Fix things, right?
Nicole: A whole other word. Yes. So there is this recognition that we have to do both but, unfortunately, speed is still the name of the game. It’s still get your product to market as quickly as possible, particularly if it’s an app or digital service, and then get as many customers to use it as possible so that they don’t switch to the competition. It’s not, “Hey, let’s move slowly, make sure we’re vetting our software,” et cetera, et cetera.
And so, really, I think the only way to change that, unfortunately, is for the government to step in. And just last week, we actually saw the White House announce a new national cybersecurity strategy. It doesn’t have regulations baked in yet, but I think we can think of it as a guide for what might come. And a big part of that strategy was to say, “We want to see software makers take security seriously, and we want to see them be liable for putting out buggy code.”
It still remains to be seen how that would get implemented. But that’s a big announcement that we would shift liability for some of these attacks to software makers, essentially forcing them to take security seriously so they’re not just introducing this buggy code every time they have a new update to their software or product.
Jeff: Has the EU had any more success doing this? They’re more ahead of the curve, certainly, in terms of regulation.
Nicole: Yes. I mean, definitely on privacy, they’ve been the leader there, and we could debate whether what they have done has actually added to security. Every time we go to a website and it says, “Do you allow all cookies?” How many times do you just click yes and move on? But what’s interesting is, when I wrote my book, it was important to me to not just say, “Hey, we have a huge problem here.” It was important to me to end with some solutions.
And so I went and looked around and said, “Hey, are there any nations out there that have figured this out from a policy perspective? And has it had any kind of beneficial effect?” What was interesting was a study from a couple of years ago — it’s already out of date, so I’d love to see someone pick this up and run with it — that used semantic data and looked at the total number of security incidents in a country, at least those that were reported, and at the ratio of successful attacks to total incidents. How many times were hackers able to get in?
And the United States fared pretty poorly on this list. We are now, I should remind everyone, the most frequently targeted nation on Earth. That may have shifted with Ukraine. We may be number two now, but we are where most of the value is for hackers, and so we are the most targeted. Anyway, the countries that did really well were in Scandinavia, which I always think is interesting, and sometimes Americans will shudder when you mention Scandinavia as an end-all-be-all solution to any kind of regulation.
But in Scandinavia, these are countries — Finland, Sweden, Norway — that are just, in many ways, as digitally dependent and virtualized as us. They have very healthy tech industries, and they bake software into much of their critical infrastructure, as we do. But what they have that we don’t have are very comprehensive national cybersecurity strategies that have real carrots and real sticks for companies when it comes to cybersecurity. So they mandate that these companies use things like multifactor authentication, the added security measure that texts you a code, or otherwise requires a second code, when you’re trying to log in to a service.
They make sure that companies are patching more regularly and as close to real time as possible, et cetera, et cetera, et cetera. And there are fines for companies that don’t meet that bar. I thought that was interesting. And then the other instructive example was Japan, where they didn’t have a great ratio. They were on par with the United States, but over the course of two years they significantly improved their ratio.
And when you looked at, okay, what happened during those two years, they had put in place a national cybersecurity strategy that is probably the most specific strategy out there today. It goes as far as to say, hey, if you are operating, let’s say, a pipeline, the software that touches your pipeline must be air-gapped, or completely separate, from your IT network, where you do your email and that kind of thing. So that had a huge impact on limiting their attack surface.
So, in other words, I’m not one to say regulation is the end-all answer or solution to everything, but when it comes to cybersecurity, it has had a hugely beneficial effect. And I would argue that it is high time for us to adopt something like it in the United States today.
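(Editor’s illustration: Perlroth mentions multifactor authentication codes above. The sketch below is a minimal, hypothetical Python implementation of an RFC 6238-style time-based one-time password, the kind of rotating code an authenticator app produces. It is a toy for explanation only, not code from the book or from any of the national strategies she describes.)

```python
# Illustrative toy: how a time-based one-time password (TOTP) is generated
# and checked. Real services use vetted libraries; this is a sketch only.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, at_time=None):
    """Generate an RFC 6238-style code from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // interval)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 over the counter
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def verify(secret_b32, submitted):
    """Accept only the code for the current window (real systems usually
    tolerate a little clock drift on either side)."""
    return hmac.compare_digest(totp(secret_b32), submitted)


if __name__ == "__main__":
    shared_secret = "JBSWY3DPEHPK3PXP"  # example secret; normally provisioned via QR code
    code = totp(shared_secret)
    print("Current code:", code)
    print("Accepted:", verify(shared_secret, code))
```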
Jeff: Is one of the problems, in terms of looking at the US versus Japan or some of the Scandinavian countries, that there is this obsession with frictionlessness that really overrides everything else?
Nicole: Yes. I think that’s a huge part of it. We don’t want to be bothered with those popups that ask us which cookies we want to allow, in the case of privacy and some of the regulations that the EU has adopted. And it’s worth noting, to be fair, that the US has a horrible track record when it comes to cybersecurity regulation. There’s still a law on the books called the Computer Fraud and Abuse Act that has been misused and abused, invoked in ridiculous cases by companies against people who are just pointing out flaws in their systems rather than in really egregious cases.
And so, we need to make sure that any regulation that we do implement isn’t going to be somehow abused or used for coercion [?] and that kind of thing. So it’s tricky business, but we’re long overdue. And I would just say, the Biden administration has gone further than any previous administration when it comes to cybersecurity, but they’re still very limited because there is no regulation.
So, what’s interesting is that one of the things they did, I think it was early last year, was the president put out an executive order. And in it, it just said, “Okay, we don’t have regulation. What do we have?” Well, we have power over federal agencies. Okay, so from now on, federal agencies, you must meet this high bar for cybersecurity standards, and you must patch bugs that we know about within two weeks, et cetera.
And then the other entity that they have any kind of control over is federal contractors. So what was interesting is they really used the power of the purse to say, “Okay, federal contractors, we’re going to rip out the red tape, and you can self-certify that you meet these same cybersecurity standards. We’ll let you self-certify, we won’t make you go to some third-party auditor or go through this lengthy paperwork process. You can self-certify, but if we catch you lying to us—” which they probably will, because we’re seeing so many ransomware attacks that expose weak defenses at a lot of organizations. “If we catch you lying to us, you’re banned from ever doing business with the federal government again.”
I thought that was a very clever workaround, but it really only applies to federal contractors. Still, I think that is what is going to come, especially for these companies that run 80 percent of our critical infrastructure. And that’s the magic statistic you cited at the top: 80 percent of the United States’ critical infrastructure. So our pipelines, our power, our water, our nuclear plants, healthcare, et cetera, are run by the private sector. And until now, for the most part, none of these companies have had any kind of cybersecurity standard that they have to meet. There are exceptions with nuclear and some utilities, but for the most part, they’re nowhere near where they need to be.
So I think that is what’s going to come first: a real bar for those companies to meet when it comes to cybersecurity. Because here’s the thing: When Russia attacks another country digitally, with cyberattacks, they don’t go directly for the government in a lot of cases. They wouldn’t come for the NSA or the FBI or name your government agency; they come for private companies: Colonial Pipeline (that attack was a group of cybercriminals), or they come for SolarWinds, a Texas company that supplies software to government agencies and most of the Fortune 500.
So, in other words, it’s just to say that businesses are on the front line. And businesses that operate critical infrastructure are really on the front line. And so, we need to figure out how to either adjust market incentives or introduce nuanced cybersecurity regulation that can get them to meet this bar, so we can really limit the opportunities that our adversaries have to hold us hostage.
Jeff: Are we still fighting the battle over the fact that there were incentives at some point, certainly for government, to keep software more vulnerable in terms of access?
Nicole: Yes. I think that really just comes back to the “move fast and break things” bit when it comes to just business incentives. I think I spent a lot of time in my book research talking about something called the zero-day market. So a zero day, for those who’ve never heard of it, is a vulnerability in code that the code manufacturer doesn’t know about.
Just to keep it simple, imagine I’m a hacker and I want to get into your iPhone. I could potentially get in by finding a vulnerability in your iPhone’s iOS software that Apple doesn’t know about; that vulnerability is what’s called a zero day. And if I could develop a program to exploit it, I could potentially use it to read your text messages or track your location or record your phone calls, et cetera. So it’s really like attaching an invisible ankle bracelet to you in some ways. That zero-day “exploit” — that’s what that program is called — has been fetching millions of dollars on a gray market of zero-day brokers and governments who want those tools to be able to spy on their enemies or spy on potential terrorists, et cetera.
Some of you will remember, I think it was back in 2015, when the FBI tried to get Apple to build a back door into the iPhone for the FBI, and Apple resisted, and finally the FBI dropped its case and said, “Never mind, a hacker came to us with a way to get in and we paid him over a million dollars.” Those are the kinds of programs I’m talking about here.
Now, for a long time I had so many questions about this market, because, wait a minute, isn’t the government supposed to be keeping us more secure? But in this case they’re exploiting vulnerabilities and leaving them open, instead of closing them, so they can preserve their surveillance, counterintelligence, and espionage advantage.
And so that’s really— I started calling out those programs in the book. I think since the book came out, we’ve had these really significant attacks. I think most people by now are familiar with the Colonial Pipeline ransomware attack; SolarWinds would be another. I think there is starting to be this broader recognition, particularly in this administration, that it is high time that we pull away from the advantages of exploiting vulnerabilities and see to it that they get fixed. Because, again, we’re one of the most digitally dependent nations on earth, and now we’re the most targeted nation on earth, and it’s really high time that we focus on cyber defense.
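(Editor’s illustration: to make the zero-day terminology above concrete, here is a purely fictional, self-contained Python toy in which a “vendor” ships code with a flaw it does not know about, and an “exploit” is simply a program that triggers that flaw. Nothing here is drawn from the book or from any real product.)

```python
# Purely fictional illustration of the zero-day idea: the "vendor" ships an
# access check with a flaw it does not know exists, and the "exploit" is just
# code that triggers the flaw to do something the vendor never intended.
import hmac

VALID_TOKEN = "correct-horse-battery-staple"   # what a legitimate user presents


def check_access(token, debug_flag=""):
    """The vendor's intended rule: grant access only on an exact token match.

    The unknown flaw (the fictional zero day): a leftover maintenance flag
    bypasses the check entirely, and it shipped without anyone noticing.
    """
    if debug_flag == "maintenance-override":   # unpatched, because nobody knows it exists
        return True
    return hmac.compare_digest(token, VALID_TOKEN)


def exploit():
    """A zero-day 'exploit' in miniature: knowledge of the flaw, weaponized."""
    return check_access(token="guess", debug_flag="maintenance-override")


if __name__ == "__main__":
    print("Wrong token, no flaw triggered:", check_access("guess"))  # False
    print("Wrong token, flaw triggered:   ", exploit())              # True
```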
Jeff: Is part of the problem the globalization of hacking talent? When people talk about remote work, you often hear people talking about the fact that if you can do your job remotely, it means that anybody anywhere in the world can do your job. Do we have a situation where hackers and those that engage in this are able to recruit the best talent all over the world?
Nicole: Well, it’s a huge problem. The cybersecurity workforce crisis is something that I think should be among the top five national security threats to the United States. We have hundreds of thousands of unfilled cybersecurity positions just in government. And a lot of those government agencies are finding it almost impossible to compete with the private sector for what, I guess, we would call hacking talent: people with the skills to find these vulnerabilities and then fix them.
And this is a really big problem. So one of the things I’m trying to focus on right now is how we expand the pipeline for hacking talent in the United States. Now, unfortunately, if you have these skills, you can make a lot of money finding zero days, developing programs to exploit them, and selling them to governments for millions of dollars. And you will be infinitely employable by offensive cyber agencies like the NSA. In Russia, this is who they look for to work at the GRU; in China, they’ve figured out how to have some of these people who work in the private sector moonlight on sensitive operations at night.
So, some of the really significant attacks we see coming from China don’t come from the PLA, the People’s Liberation Army, anymore. We see them come from this loose satellite network of private contractors that get called in by the Ministry of State Security to do their dirty work. It gives China a degree of plausible deniability if those hackers are caught. It works in different ways in different countries.
Some have found that they can use their hacking talent to create what I call click-and-shoot spyware tools — companies like NSO Group in Israel that sell governments these tools to surveil, very easily, dissidents in some cases and terrorists in others. You can make a lot of money working for those companies. There are a lot of opportunities on the dark side of this business, and we’re finding it very hard to hire people into cyber defense who have these same skill sets that would be useful.
This is a huge problem, and I think that the answers are complicated. But I think one of them is: how do we tap people with these skills? They might not consider themselves cybersecurity experts, but maybe they’re programmers who find these bugs in the course of their day-to-day work. How do we make sure that they have an easy way to report that bug to the manufacturer so it gets fixed?
That’s where you’re starting to see, over the last decade, companies like Apple and Google — and others that might not be immediately obvious, like United Airlines — start creating something called bug bounty programs where they will actually reward hackers who come to them with bugs so that they can fix them before they can be exploited. I think that’s one creative way out, but we still have a long way to go.
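(Editor’s illustration: in practice, the bug bounty programs Perlroth describes come down to a structured vulnerability report submitted through a vendor’s intake channel. The Python sketch below shows what such a submission might look like; the endpoint, token, and field names are invented for illustration, since every real program defines its own API and rules of engagement.)

```python
# Hypothetical sketch of a bug bounty submission: a structured vulnerability
# report POSTed to a vendor's intake endpoint. The URL, token, and field
# names are invented for illustration only.
import json
import urllib.request

REPORT_ENDPOINT = "https://bugbounty.example.com/api/reports"  # invented endpoint
API_TOKEN = "REPLACE_WITH_YOUR_PROGRAM_TOKEN"                  # placeholder


def submit_report(report):
    """POST a JSON vulnerability report to the (hypothetical) program endpoint."""
    request = urllib.request.Request(
        REPORT_ENDPOINT,
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    report = {
        "title": "Stored XSS in profile page",
        "severity": "high",  # e.g. low / medium / high / critical
        "steps_to_reproduce": "1. Log in. 2. Save a bio containing a script tag. 3. View the profile.",
        "impact": "Arbitrary JavaScript runs in other users' sessions.",
    }
    print(json.dumps(report, indent=2))   # inspect the report locally
    # submit_report(report)  # uncomment once the endpoint and token are real
```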
Jeff: Where does AI fit in all of this? We’re now looking at software created by AI and arguably, as AI gets more sophisticated, it, too, can be effective in ferreting out these bugs.
Nicole: Yes. At the beginning of my book, I have this image of the author racing to finish the book before AI makes it almost irrelevant. My paperback just came out. In November, we learned about ChatGPT. And what’s terrifying is, here is this book where I write about all these hackers who are finding these zero days and selling them to these shady zero-day brokers, who then sell them to governments who use them in ways good and bad, mostly bad in a lot of cases. I’m worried about where this market is headed. Well, now you can use ChatGPT to find zero days and write exploits.
Essentially, this market is already automated, and these skills are already automated with ChatGPT. So we are really entering a terrifying new era here. I don’t think people realize how these tools can be abused. Another thing is, we’ve had this big push to slow it down and make sure that you’re vetting your software before you’re rolling it out to millions of customers. We’re nowhere near where we need to be, but there has been somewhat of a push. And when I talk to companies whose job it is to test software for vulnerabilities and vet it, what they said to me is, “I don’t know how to vet AI for security risks.”
That is the basic question I have. And I’ve asked others in our market, in our industry, how do you test AI for security? Very basic question, and nobody has been able to answer it. And I think what that means is we’re about to introduce a whole new level of obscurity and complexity, which lends itself perfectly to vulnerability and hacking. And I think that China and other countries are not going to just sit there. They’re going to take advantage of this new complexity and these vulnerabilities. And I think we’re entering a pretty scary place.
Jeff: It’s a scary place. It’s also scary to think about AI both offensively and defensively because it’s the same use, essentially, just for more nefarious purposes on one side.
Nicole: Yes. I think it will be a while before we can figure out how to use it defensively. I think, in the meantime— and we see this with any new technology, right? A new technology gets introduced, bad guys figure out how to abuse it, good guys figure out how they’re abusing it. They develop the protection. Bad guys figure out how to evade the protection. It’s always been a cat and mouse game.
And, fortunately, I think this next generation of cybersecurity entrepreneurs is starting to take radically different approaches to cybersecurity, which is good. But AI adds so much more complexity, in many ways exponentially more, that this cat-and-mouse game is going to play out in a way that we have not seen yet. So, I am worried about it. Now, that said, there are a lot of companies out there that claim to be able to use AI to better identify vulnerabilities, to automate the patching of vulnerabilities, and to identify insider threats at organizations based on employees’ email behavior or their web behavior.
They can out insider threats. They can use it to really map out who in an organization has access to what data. So there are beneficial use cases when it comes to cyber defense, but we don’t even really know yet how this is being exploited. And it’ll be a while before we figure out how to create defenses for those techniques.
Jeff: As this evolves, particularly in the cyber warfare realm, it seems that mutually assured destruction in the cyber realm may be the only thing that keeps it in check for a while.
Nicole: Yes, it’s interesting. I definitely saw mutually assured digital destruction start coming into play. I wrote a story with my colleague David Sanger at The New York Times about how, under the Trump administration, there had been some loosening of the rules when it came to what Cyber Command at the Pentagon could do. Under Obama, every computer attack conducted by Cyber Command required presidential approval. Under Trump, they did away with that rule.
And David and I learned that one of the first things Cyber Command was doing with its newfound leeway was hacking the Russian power grid and making a pretty loud show of it. And, interestingly, we did our research and our reporting for several months, and I thought, okay, now we have to talk to the government, tell them what’s coming, and offer them the opportunity to comment, which is just standard journalistic practice.
And I assumed we’d be in for one of those painful conversations where the government said, “No, you can’t possibly print this. You’ll have blood on your hands. You’re a terrible person.” But instead, we got very much an “Oh, we don’t care. You can print this.” In other words, what they weren’t saying out loud was, “We want Russia to know we are in their power grid so that they better think twice before they ever hack the United States power grid the way they’ve now done to Ukraine not once but twice.” And that is really what mutually assured digital destruction looks like.
There was also a case over a year ago where the US declassified findings that, over the past decade, China had hacked our pipelines, and not for intellectual property theft or theft of trade secrets like we see with so many other Chinese cyberattacks. They were doing it to gain a foothold in our pipeline networks so they could potentially sabotage those networks in the event of some larger geopolitical conflict. Really, instead of just one Colonial Pipeline incident that sent gas prices up, they could trigger an attack that sent gas prices skyrocketing and left people unable to move, to function, et cetera.
And so this is happening. We’re all in each other’s systems with guns to each other’s heads — just in case someone decides they actually want to pull the trigger, so we can say, “Hey, you could pull the trigger, but we’ll just turn around and pull the trigger right back at you.”
And I wonder how much— and, of course, you’d have to be a fly on the wall of Vladimir Putin’s office at the Kremlin. But I wonder how much of that has factored into Russia’s restraint when it comes to directly attacking the United States and some of our Western allies in Europe, for example, because of our support for Ukraine with military tools and funding.
I wonder if he’s thinking maybe not just yet, and then I really wonder at what point he will decide they have nothing left to lose and start retaliating against the West for its support. And despite all the bluster around nuclear threats, I still think their much more likely immediate response or retaliation would come in the form of cyberattacks. So that’s what I’m really watching right now.
Jeff: Nicole Perlroth’s book is, This Is How They Tell Me the World Ends: The Cyberweapons Arms Race. It’s just out in paperback. Nicole, I thank you so much for spending time with us here on the WhoWhatWhy Podcast today.
Nicole: Thanks for the great discussion. This is pretty high level, so I really appreciate it.
Jeff: Thank you so much.
Nicole: All right, take care.
Jeff Schechtman: Thank you for listening and joining us here on the WhoWhatWhy Podcast. I hope you join us next week for another radio WhoWhatWhy Podcast. I’m Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to WhoWhatWhy.org/donate.