
Photo credit: Illustration by WhoWhatWhy from OpenClipart-Vectors / Pixabay and Pete Linforth / Pixabay.


The fundamental nature of the internet has transformed… again.

This new elemental change is significant for several reasons. First, as of 2024 the majority of the population in the US (86 percent) and the UK (71 percent) now gets its news from the internet.

Second, bad bots, also known as invalid traffic (IVT), currently make up almost half of all internet traffic and in 2024 are projected to cost industry $50 billion in fake, bot-driven digital ad spending.

Third is the rise of AI-generated content, which has fueled the “Dead Internet Theory”: the idea that most online content is created by sophisticated AI tools rather than drawn from actual human experience. Under this theory, the public is getting its worldview from a digital sphere derived entirely from an AI agenda, not from real human beings.

This situation calls into question the authenticity of internet content and traffic (e.g., X/Twitter trending topics; comments, likes, and shares on news and social media), and even the nature of reality (something traditional social media has been doing for years), by “impeding information retrieval and distorting the collective understanding of socio-political reality or scientific consensus.”

The creators of this tech are so alarmed that they published this 29-page report in June about its dangers. At its most benign, this mass of synthetic content and traffic will lead to fact-checking fatigue (there are already high-profile individuals explaining away true negative stories as AI-generated); at its worst, it will create total distrust of all digital information.

The high degree of threat to industry profits and national security is actually a positive for the public, because it amplifies the incentive to address this problem: The US Justice Department is already coordinating security operations with industry and government, and the Defense Advanced Research Projects Agency recently awarded $14 million through its AI Cyber Challenge.

This is a list of state legislation, along with proposed federal legislation, that can help. In the meantime, you can find a helpful guide for navigating this problem here.


Manipulating Reality: The Intersection of Deepfakes and the Law

The author writes, “The use of artificial intelligence (“AI”) continues to expand, meaning the use of deepfake technology to create digital fabrications will necessarily follow. Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness using AI techniques. Deepfakes are incredibly realistic, making it difficult to distinguish between real versus manipulated media. This can and has led to impersonation, fraud, blackmail and the spread of misinformation and propaganda.”

If Truth be Told: AI and Its Distortion of Reality

From CPI OpenFox: “We have already seen instances this year where people are dismissing media accusations by suggesting it’s fabricated. This response may not have been possible a few years ago, but now, with the power of deepfake and generative AI technology, it provides a person with plausible deniability to any allegation. This has placed us in a grey area, as claims can now be refuted with no definitive truth. This article discusses how doctored media can distort reality, AI, and plausible deniability, the impact on investigations and chain of custody, and the methods to combat misinformation.”

Fake or Fact? The Disturbing Future of AI-Generated Realities

From Forbes: “The world is already affected by misinformation and fake news, and use of technology to spread slander and malicious falsehoods is increasingly commonplace. With everyone able to use this [generative AI] technology — which will no doubt become even more sophisticated as time goes on — how will we ever know if anything we see or hear is real again?”

The Dead Internet Theory, Explained

The author writes, “The Dead Internet Theory is the belief that the vast majority of internet traffic, posts and users have been replaced by bots and AI-generated content, and that people no longer shape the direction of the internet.”

US and Allies Take Aim at Covert Russian Information Campaign

The author writes, “Intelligence officials from three countries flagged a Russian influence campaign that used artificial intelligence to create nearly 1,000 fake accounts on the social media platform X.”

How AI Might Affect Decisionmaking in a National Security Crisis

From the Carnegie Endowment for International Peace: “Imagine a meeting of the U.S. president’s National Security Council where a new military adviser sits in one of the chairs — virtually, at least, because this adviser is an advanced AI system. This may seem like the stuff of fantasy, but the United States could at some point in the not-too-distant future have the capability to generate and deploy this type of technology.” 

AI & Data Exchange 2024: DARPA’s Kathleen Fisher on Prepping For AI’s Future Through High-Risk, High-Reward Research

The author writes, “Agencies across the federal government are looking at artificial intelligence tools to transform the way they meet their mission. But the Defense Advanced Research Projects Agency is laying the groundwork for the future of AI, by tackling high-risk, high-reward research into emerging technologies — often in cases with national security implications. Kathleen Fisher, director of DARPA’s Information Innovation Office, said about 70% of the agency’s portfolio of work involves some form of AI.”
