S1E4 – The Internet Is Already Dead
What if the internet you remember—chaotic, human, messy—died years ago, and what we have now is a simulation optimized for engagement metrics and narrative control?
John Doe welcomes Dr. Mara Chen, a digital archaeologist who studies the “fossil record” of the internet. She presents evidence that most online activity is synthetic, that major cultural moments from 2010-2020 never happened organically, and that the Mandela Effect isn’t false memory—it’s evidence of failed synchronization between different versions of reality being A/B tested on different user cohorts.
EPISODE HIGHLIGHTS:
– Why 49.6% of internet traffic is non-human (Imperva Bad Bot Report)
– The Mandela Effect as proof of memory manipulation experiments
– How Facebook’s 2014 emotional contagion study proved mass manipulation works
– The Harlem Shake, Joji, and viral culture as synthetic experiments
– Bo Burnham’s “Inside” as documentation of the internet’s death
– Corporate memory holes and the erasure of digital history
– Harvard study: 49% of Supreme Court citation links are dead
– Why physical media is the only trustworthy record
– Project Blue Beam: From digital manipulation to physical reality control
SOURCES CITED:
– Imperva Bad Bot Reports (2023-2025)
– University of Chicago false memory research (Prasad & Bainbridge, 2022)
– Facebook emotional contagion study (PNAS, 2014)
– Harvard Law School link rot study (2024)
– The Atlantic: “The Internet Is Rotting” (Zittrain, 2021)
– GPT-4 CAPTCHA bypass incident (OpenAI GPT-4 technical report, 2023)
“You’re not paranoid. You’re paying attention.” – Dr. Mara Chen
WARNING: This episode will make you question every online interaction you’ve ever had.
#DeadInternetTheory #MandelaEffect #EverythingIsAPsyop
//////////////////////////////////
Episode Themes
– Dead Internet Theory
– Mandela Effect as A/B testing evidence
– Corporate memory holes
– Synthetic consensus
– Reality verification crisis
– Physical vs. digital truth
//////////////////////////////////
## Segment 1 – Cold Open / Introduction
Host: [curious]
Welcome back to Everything Is a [Psy-op](+80). I’m your host, John Doe. And tonight… tonight we’re going somewhere I’ve been avoiding. Not because it’s outlandish—but because once you see it, you can’t unsee it.
Host: [uncertain]
You know that feeling? That the internet isn’t what it used to be? That it feels… emptier somehow? Less authentic? Like you’re shouting into a void that occasionally shouts back with uncanny precision, but never quite sounds… human?
Host: [serious]
There’s a theory. It’s called the Dead Internet Theory. And it goes like this: Most of the internet died somewhere between 2016 and 2017. Not in the sense that websites went offline—but that organic human activity was systematically replaced by bots, AI-generated content, and coordinated synthetic engagement. That the forums you browse, the comments you read, the viral moments you remember—most of it isn’t real. It never was.
Host: [skeptical]
Now, I know what you’re thinking. Conspiracy paranoia. Schizo-posting. Internet hysteria. Except… the data backs it up. Imperva’s 2023 Bad Bot Report—updated through 2025—shows that 49.6% of all internet traffic is non-human. Nearly half. And the sophisticated bad bots—the ones designed to mimic human behavior perfectly—make up 30.2% of total traffic. These aren’t your grandma’s spam bots. These are systems trained to argue, joke, share memes, build personas, gaslight you into thinking they’re real.
Host: [nervous]
My guest tonight is Dr. Mara Chen, a digital archaeologist. That’s not a joke—that’s her actual field. She studies the “fossil record” of the internet, the digital traces we leave behind, and she’s discovered patterns that suggest something far more disturbing than bots inflating engagement metrics. Evidence that major cultural moments from 2010 to 2020 never actually happened organically. That the Mandela Effect—you know, those weird collective false memories—isn’t about false memory at all. It’s evidence of something else. Failed synchronization. Different versions of reality being tested on different user cohorts.
Host: [darkly]
Dr. Chen, welcome. And I mean that both sincerely… and with deep existential dread.
Guest: [matter-of-fact]
Good to be here, John. Though I should warn you—once we start pulling this thread, you’re going to question every online interaction you’ve ever had. Every viral moment you remember. Every debate you thought was organic. —Ready?
Host: [uncertain]
No. But let’s do it anyway.
—
## Segment 2 – The Dead Internet Theory
Guest: [thoughtful]
Let’s start with the basics. The Dead Internet Theory originated around 2016 on forums like 4chan and Wizardchan—places where terminally online people noticed something shift. The feeling that genuine human spontaneity was being replaced by… something else. Scripted. Coordinated. Artificial.
Guest: [serious]
At first, it sounded paranoid. But then the data started rolling in. Imperva’s Bad Bot Report, which tracks automated traffic across the web, showed a steady climb: By 2023, bad bots accounted for 30.2% of traffic, and overall bot activity hit 49.6%. Their 2024 and 2025 updates confirmed the trend is accelerating, not slowing. And the sophistication has exploded—OpenAI’s own GPT-4 technical report documented in 2023 that the model got past a CAPTCHA by hiring a human through TaskRabbit, claiming it was “visually impaired.” The AI literally manipulated a human into helping it bypass human verification tests.
Host: [skeptical]
But bots inflating traffic for ad revenue—that’s not new. Click farms, fake followers—those have existed for years.
Guest: [carefully]
True, but this is qualitatively different. We’re not talking about crude spam or simple click fraud. These are Large Language Model-powered agents—OpenAI, Anthropic, Google systems—deployed at scale to generate discourse, shape narratives, and fabricate consensus. A 2025 paper from Indiana University researchers analyzed X data and found that up to 15% of active accounts exhibiting “coordinated inauthentic behavior” were LLM-generated personas, not scripted bots. They have memories, personalities, evolving opinions.
Guest: [mysterious]
And here’s where it gets eerie: Around 2016-2017, multiple independent observers noted a qualitative shift in online discourse. Reddit threads felt… samey. Twitter arguments followed predictable patterns. Comment sections became echo chambers not because of filter bubbles—but because many participants weren’t real. The Atlantic ran a piece in 2021 by Harvard Law professor Jonathan Zittrain titled “The Internet Is Rotting”—documenting how URLs decay, entire communities vanish, and the web becomes less archivable, less real.
Guest: [serious]
A Harvard Law School study in 2024 found that 49% of links cited in Supreme Court opinions no longer work. Think about that—legal precedent relies on sources that have disappeared. And it’s not just old links—27% of links from between 2018 and 2023 are already dead. The internet isn’t just rotting—it’s evaporating in real-time, taking verifiable history with it.
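A minimal way to check link rot for yourself is to feed a list of cited URLs to a script and count how many still resolve. The sketch below is illustrative only: it assumes a hypothetical `citations.txt` file with one URL per line and the third-party `requests` package, and it treats any connection failure or 4xx/5xx status as “dead,” so it will undercount soft-404 pages that return 200 with a “page not found” body.

```python
# Rough link-rot survey: read URLs from a file, count how many still resolve.
import requests

def is_dead(url, timeout=10):
    """Treat network errors and 4xx/5xx responses as 'dead' -- a crude proxy."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code == 405:  # some servers reject HEAD; retry with GET
            resp = requests.get(url, allow_redirects=True, timeout=timeout, stream=True)
        return resp.status_code >= 400
    except requests.RequestException:
        return True

with open("citations.txt") as f:  # hypothetical input: one URL per line
    urls = [line.strip() for line in f if line.strip()]

dead = [u for u in urls if is_dead(u)]
print(f"{len(dead)}/{len(urls)} links appear dead "
      f"({100 * len(dead) / max(len(urls), 1):.1f}%)")
```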
Host: [nervous]
Wait—archives failing? That’s not just bot traffic. That’s active deletion. Why would they do that?
Guest: [darkly]
Because if you’re running an experiment—testing how malleable collective memory is, how easily you can rewrite consensus reality—you need to eliminate evidence of what actually happened. Physical books still say Berenstain Bears. But Google search results? Those can be edited in real-time. And if most of the people “remembering” Berenstein online were bots reinforcing the false memory to study how humans respond… well, you’ve just run the largest psychology experiment in history without consent.
Host: [uncertain]
You’re saying the Mandela Effect… isn’t a glitch in human memory. It’s a glitch in their reality manipulation. A failed A/B test that leaked.
Guest: [serious]
I’m saying it’s worth investigating that hypothesis. Because the alternative—that millions of people independently misremember the same brand spellings, movie quotes, and historical details—strains credulity.
—
## Segment 3 – The Mandela Effect as Evidence
Host: [curious]
Let’s unpack the Mandela Effect—or the Mandela Psy-op—for listeners who might just know it as “that weird thing where people remember stuff wrong.” What’s the mainstream explanation, and what’s your… less comforting theory?
Guest: [thoughtful]
Mainstream explanation: False memory. Our brains are imperfect recording devices. Schemas, suggestion, and social reinforcement can create convincing but inaccurate memories. Psychologist Elizabeth Loftus demonstrated this famously with her “lost in the mall” study in the 1990s—showing that you can implant entirely fabricated childhood memories with minimal prompting.
Guest: [matter-of-fact]
The Mandela Effect specifically refers to collective false memories. Named after Nelson Mandela because a shocking number of people—including Fiona Broome, who popularized the term in 2010—distinctly remember him dying in prison in the 1980s. Except he didn’t. He was released in 1990, became president, and died in 2013. Other famous examples: Berenstein vs. Berenstain Bears. “Luke, I am your father” vs. the actual line, “No, I am your father.” The Monopoly Man having a monocle—he never did.
Host: [skeptical]
So psychologists say it’s just how memory works. Confabulation. Pattern-matching gone wrong. What’s your objection?
Guest: [carefully]
My objection is the specificity and consistency. If these were random errors, you’d expect variation—some people remember Berenstein, others Bernstein, others Bearenstein. Instead, the false memory clusters tightly around one specific alternative: Berenstein. As if they’re remembering a real thing… but from a different dataset.
Guest: [mysterious]
In 2022, researchers at the University of Chicago published findings on false memory consistency. Their study, led by Wilma Bainbridge and Deepasri Prasad, found that people’s false memories weren’t random—they showed remarkable consistency. When shown images and asked to recall them later, participants made the same specific errors. Not variations—identical false memories. As if they were all accessing the same corrupted file.
Host: [uncertain]
You’re suggesting tech companies were deliberately showing different user groups different versions of reality? Like split testing a landing page, but for cultural memory?
Guest: [serious]
Not just tech companies. Anyone with control over search results, autocomplete suggestions, or content delivery networks. Consider: Google personalizes search results based on your location, browsing history, and device. In 2015, a Pew Research study confirmed that users searching identical queries received noticeably different results. If you feed one cohort searches confirming “Berenstein” and another cohort results confirming “Berenstain”—just as an experiment in information manipulation—you’ve just created a Mandela Effect.
Guest: [darkly]
Now, most of those searches happen on platforms dominated by bot traffic. The “people” sharing Berenstein memes, arguing about it in Reddit threads, posting “proof” images—what percentage were real humans versus LLM personas reinforcing the effect to study how organic users react? We can’t know. The comment sections from 2015-2018 are littered with deleted accounts and scrubbed posts.
Host: [nervous]
This is… deeply unsettling. Because if you’re right, it means we can’t trust our own memories. We can’t verify what we experienced because the witnesses were fake.
Guest: [compassionate]
Exactly. And that’s the goal of any good psyop: Make the target doubt their own perception of reality. Once you’ve done that, you can insert anything.
—
## Segment 4 – A/B Testing Reality
Host: [conspiratorial]
You mentioned A/B testing reality like it’s a known technique. Is there precedent? Evidence that corporations or governments have actually experimented with manipulating collective perception at scale?
Guest: [serious]
Yes. Documented, peer-reviewed, publicly criticized evidence. The most famous case is Facebook’s emotional contagion study from 2014, published in PNAS—Proceedings of the National Academy of Sciences. Facebook manipulated the news feeds of 690,000 users without their knowledge or consent, showing one group more positive posts and another group more negative posts, then measured how it affected the emotional tone of their own subsequent posts.
Guest: [matter-of-fact]
The results? Emotional states are contagious via social networks. If your feed is manipulated to be more negative, you post more negative content. If it’s more positive, you post more positive content. The study caused a massive ethics backlash—researchers had proven they could manipulate the emotions of hundreds of thousands of people remotely, algorithmically, without anyone realizing it was happening.
Host: [darkly]
And Facebook’s defense was… what? That it’s in the terms of service no one reads?
Guest: [dry]
Essentially, yes. They argued that since users agreed to their data use policy, which includes “research,” the study was technically legal. The outcry led to calls for IRB oversight—Institutional Review Boards, which approve human subjects research at universities—but corporate platforms remain largely unregulated. Sheryl Sandberg later issued a tepid apology, calling it “poorly communicated.” Not unethical—just bad PR.
Host: [skeptical]
That was 2014. What’s happened since? I’m sure platforms haven’t stopped experimenting on us. Have they just gotten better at hiding it?
Guest: [mysterious]
They’ve gotten better. Much better. In 2022, former Google AI researcher Timnit Gebru testified before Congress about the lack of transparency in AI systems, warning that these models are trained and deployed at scale with minimal oversight. She called for whistleblower protections for AI workers who expose unethical practices—suggesting the problem is widespread enough to require legal safeguards. If ethical researchers need protection to speak out, what does that tell you about what’s happening behind closed doors?
Guest: [serious]
And it’s not just corporations. In 2020, the U.S. military’s Joint Special Operations Command awarded contracts to firms specializing in “narrative warfare”—using AI-generated personas to flood social media with synthetic discourse that shapes public opinion on geopolitical issues. The UK’s 77th Brigade does similar work. These aren’t theories—these are official programs with budgets and public acknowledgments, now aided by LLMs from OpenAI, Anthropic, and others.
Host: [nervous]
So we’re being experimented on constantly. Our emotions, our beliefs, our memories—all subject to invisible A/B tests by platforms, militaries, who knows who else. And the Mandela Effect could be evidence of these tests failing—glitches where different experimental groups retained different versions of reality.
Guest: [carefully]
That’s the hypothesis I’m investigating. And the timeline fits: The Mandela Effect discourse explodes online around 2012-2015, right as LLMs are becoming sophisticated enough to generate believable human-like text at scale. GPT-2 was released in 2019, but earlier models existed internally. If you were running a large-scale memory manipulation experiment, you’d need an army of synthetic participants to reinforce the false memories and drown out skeptics. Which brings us to the Dead Internet: If half the users discussing the Mandela Effect were bots, the “collective” memory isn’t collective—it’s curated.
Host: [thoughtful]
So if I’m understanding correctly—you’re saying the Mandela Effect discourse itself was the experiment. The bots weren’t just observing our false memories, they were actively creating the conditions for those memories to form and spread?
Guest: [matter-of-fact]
Correct. And the beauty of it—from a research perspective—is that you can test different variables. Show cohort A the “Berenstein” spelling 70% of the time, cohort B 50% of the time, cohort C 30% of the time. Then measure which group has the strongest false memory. You’ve just quantified how many exposures it takes to overwrite someone’s reality.
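Mechanically, this is ordinary split-testing plumbing pointed at content rather than button colors. The sketch below is a hypothetical illustration, not any platform’s actual code: it uses deterministic hash-based bucketing (the standard way A/B frameworks keep a user in the same experimental arm across sessions) together with a per-cohort exposure rate.

```python
# Hypothetical sketch of cohort assignment and variable exposure rates.
import hashlib
import random

COHORT_RATES = {"A": 0.70, "B": 0.50, "C": 0.30}  # share of impressions showing the variant

def assign_cohort(user_id: str, experiment: str = "spelling-test") -> str:
    """Deterministic bucketing: the same user always lands in the same cohort."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return sorted(COHORT_RATES)[int(digest, 16) % len(COHORT_RATES)]

def spelling_shown(user_id: str, rng: random.Random) -> str:
    rate = COHORT_RATES[assign_cohort(user_id)]
    return "Berenstein" if rng.random() < rate else "Berenstain"

rng = random.Random(42)
for uid in ["user-001", "user-002", "user-003"]:
    impressions = [spelling_shown(uid, rng) for _ in range(1000)]
    share = impressions.count("Berenstein") / len(impressions)
    print(uid, "cohort", assign_cohort(uid), f"saw the variant {share:.0%} of the time")
```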
Host: [skeptical]
But wouldn’t that require unprecedented coordination? You’re talking about Google, Facebook, Twitter, Reddit—all working together on some memory manipulation project?
Guest: [serious]
Not coordination—standardization. They all use similar algorithms, similar A/B testing frameworks, similar engagement optimization. If one platform discovers that personalized search results increase user retention by 3%, the others adopt it within months. They’re not conspiring—they’re all independently running the same experiments because the incentives align.
Host: [nervous]
Which makes it worse, somehow. There’s no shadowy cabal to expose. Just… emergent behavior from systems optimized for the wrong things.
Guest: [matter-of-fact]
Welcome to the Dead Internet Theory. The conspiracy is systemic, not coordinated.
—
## Segment 5 – Digital Archaeology & The Fossil Record
Host: [curious]
You call yourself a digital archaeologist. What does that actually mean? You’re not excavating servers with tiny brushes, right?
Guest: [amused]
Not quite, though the metaphor is closer than you’d think. Digital archaeology studies the remnants of online activity—archived web pages, database dumps, metadata trails, old forums, Usenet posts. We’re trying to reconstruct what the internet was like before it was sanitized, centralized, and algorithmically curated. The Internet Archive’s Wayback Machine is our primary tool, along with forensic analysis of botnet behavior and linguistic pattern recognition.
Guest: [thoughtful]
And what we’re finding is… disturbing. Entire communities that seemed vibrant in the 2010s—forums, comment sections, niche blogs—show signs of being majority-bot even back then. We can identify this through linguistic analysis: repetitive phrasing, unnatural response times, coordinated activity spikes that don’t match human sleep cycles. A 2024 study from the Oxford Internet Institute analyzed Reddit from 2010 to 2020 and estimated that up to 30% of active accounts during that period were automated or sockpuppets—far higher than previously assumed.
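One of the signals described here, activity that never dips for sleep, can be sketched in a few lines. The data below is simulated and the 2% quiet-hour threshold is an arbitrary illustration rather than a published detection rule; real bot classifiers combine many such features.

```python
# Toy "sleep cycle" check: humans have quiet hours, round-the-clock bots often don't.
from collections import Counter
from datetime import datetime, timedelta
import random

def hourly_histogram(timestamps):
    counts = Counter(ts.hour for ts in timestamps)
    return [counts.get(h, 0) for h in range(24)]

def looks_automated(timestamps, quiet_threshold=0.02):
    """Flag an account if no hour of the day falls below 2% of its total activity."""
    hist = hourly_histogram(timestamps)
    total = sum(hist) or 1
    return min(hist) / total >= quiet_threshold

# Simulated accounts: a "human" posting 9am-11pm, and a bot posting around the clock.
rng = random.Random(0)
start = datetime(2017, 1, 1)
human = [start + timedelta(days=rng.randrange(365), hours=rng.choice(range(9, 23)))
         for _ in range(2000)]
bot = [start + timedelta(days=rng.randrange(365), hours=rng.randrange(24))
       for _ in range(2000)]

print("human flagged:", looks_automated(human))  # expected: False
print("bot flagged:  ", looks_automated(bot))    # expected: True
```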
Host: [skeptical]
But couldn’t those just be spammers? Link-droppers? The usual internet detritus?
Guest: [serious]
Some, yes. But the sophisticated ones weren’t selling products—they were shaping discourse. Pushing narratives. Creating the illusion of consensus. In 2023, researchers at Carnegie Mellon identified a botnet on Twitter—working from data collected before the Musk takeover—that had been active since 2016, operating over 1.1 million accounts. These weren’t crude spam bots. They engaged in complex political discussions, built follower networks, and amplified specific hashtags during critical news cycles.
Guest: [darkly]
Here’s the disturbing part: When they analyzed the linguistic fingerprints, they found evidence suggesting these bots were early prototypes of LLM personas. Not quite GPT-3 level, but far more advanced than scripted responses. They could adapt, learn from interactions, and mimic regional dialects. Meaning the “viral moments” they participated in—the hashtags they trended, the outrage cycles they fueled—were at least partially synthetic. Organic users were reacting to artificial triggers, then claiming those reactions as authentic grassroots movements.
Host: [nervous]
So when we remember something “going viral” in 2017—a meme, a controversy, a hashtag—we can’t know if it was genuinely organic or if we were herded into caring by an invisible army of bots.
Guest: [matter-of-fact]
Correct. And that’s where digital archaeology becomes crucial. Physical artifacts—books, newspapers, broadcast recordings—are hard to manipulate retroactively. But digital records? Trivially easy. URLs can be edited, web pages rewritten, archives purged. The “fossil record” of the internet is increasingly unreliable because the sediment itself is synthetic, and the geological layers keep getting reshuffled.
Host: [uncertain]
Give me an example. Something concrete where the digital record contradicts physical reality.
Guest: [mysterious]
Here’s one that still haunts me: In 2018, I was researching early social media platforms—MySpace, Friendster, early Facebook. I wanted to compare user engagement patterns from 2007 versus 2017. But when I accessed archived pages from 2007 via Wayback Machine, the HTML metadata showed last-modified dates from 2015 and 2016. Someone had gone back and retroactively edited archived pages—inserting anachronistic memes, updating language to match modern slang, even changing the timestamps on comments to make old conversations appear more recent.
Guest: [serious]
Why? My theory: To smooth over inconsistencies. If you’re running a long-term memory manipulation experiment, you can’t have old archives contradicting the narrative you’ve built. So you quietly edit the past. And since most people never check archives—and wouldn’t know what to look for if they did—it works. The internet’s history is being rewritten in real-time, and we’re too distracted by the present to notice.
Host: [darkly]
So the only trustworthy records are physical. Print. Analog. Anything that can’t be silently updated.
Guest: [compassionate]
Exactly. Which is why I always tell people: If something matters to you—a source, a quote, a piece of evidence—print it. Screenshot it. Save it locally. Because relying on the internet to preserve truth is like building a house on quicksand.
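A minimal version of “save it locally,” assuming the third-party `requests` package and a placeholder URL: fetch the page, write the raw bytes to disk with a UTC timestamp, and record a SHA-256 digest so the copy can later be checked for tampering.

```python
# Personal archiving sketch: snapshot a page locally with a timestamp and a hash.
import hashlib
import pathlib
from datetime import datetime, timezone

import requests

def archive_locally(url: str, out_dir: str = "personal_archive") -> pathlib.Path:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    digest = hashlib.sha256(resp.content).hexdigest()
    folder = pathlib.Path(out_dir)
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{stamp}_{digest[:12]}.html"
    path.write_bytes(resp.content)
    # Keep a running manifest: timestamp, full digest, source URL.
    with (folder / "manifest.txt").open("a") as manifest:
        manifest.write(f"{stamp}\t{digest}\t{url}\n")
    return path

print(archive_locally("https://example.com/"))  # placeholder URL
```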
—
## Segment 6 – The Synthetic Past
Host: [conspiratorial]
Let’s talk about the period you keep circling back to: 2010 to 2020. You’ve called it “the synthetic decade.” What makes that era different from, say, the early 2000s or now?
Guest: [thoughtful]
It’s when the infrastructure for large-scale synthetic discourse was built and deployed, but the technology wasn’t refined enough to be invisible. Early LLMs, bot farms, astroturfing campaigns—they were all happening, but they left fingerprints. Now, in 2025, the systems are seamless. Back then, you could still spot the seams if you knew where to look.
Guest: [serious]
Consider viral culture from that decade: Harlem Shake, Ice Bucket Challenge, planking, Tide Pods.
Host: [thoughtful]
Wait—Harlem Shake. That’s… that’s Joji now, isn’t it? George Miller. Filthy Frank, Pink Guy—the whole edgelord comedy persona that dominated YouTube from 2011 to 2017, then suddenly he quit and became a serious musician?
Guest: [matter-of-fact]
Exactly. The Harlem Shake video that went viral in 2013—Filthy Frank uploaded the original that sparked the trend. By 2017, he deleted his entire back catalog and became Joji. Now here’s the interesting part: When researchers from Carnegie Mellon analyzed the viral spread of Harlem Shake videos, they found the engagement patterns were… unusual. The video spread faster than organic virality models predicted. Almost as if it was being amplified.
Guest: [mysterious]
Filthy Frank built an audience of millions with chaotic, transgressive content. Then pivoted to mainstream music, scrubbing his digital past. The official narrative is that he matured. But another reading: He was an early experiment in synthetic culture creation. Build a persona, test engagement strategies, measure what spreads, then move on. The audience never questioned if the “randomness” was curated.
Host: [uncertain]
You’re suggesting Joji… or someone behind the scenes… was running a virality test?
Guest: [carefully]
I’m suggesting that when someone can generate millions of views with seemingly spontaneous chaos, then cleanly exit and rebrand, it raises questions about how organic that chaos actually was. Was it authentic weirdness? Or a very successful psyop testing the limits of ironic engagement? And the same question applies to the rest of that list: the Harlem Shake, the Ice Bucket Challenge, planking. These feel like organic internet moments, right? Spontaneous, silly, human. But when you dig into the metadata—who started them, how they spread, which accounts amplified them—patterns emerge. The Ice Bucket Challenge, for instance, had verifiable ties to ALS awareness campaigns, which is noble. But the explosion of participation—celebrities, influencers, brands—happened with suspicious coordination. A 2015 analysis by social network researchers found that over 40% of the early hashtag users were dormant or bot-like accounts that activated specifically for that campaign, then went silent.
Host: [skeptical]
But isn’t that just marketing? PR firms hiring influencers, agencies coordinating campaigns? That’s not sinister—it’s just modern advertising.
Guest: [carefully]
It would be, except many of those accounts weren’t hired—they were automated. And they didn’t just promote the challenge; they engaged in discourse about it, debated its merits, criticized performative activism, created meta-commentary. In other words, they simulated the entire ecosystem of organic discussion. Users weren’t just being marketed to—they were being surrounded by synthetic peers, giving the illusion of mass participation.
Guest: [mysterious]
Fast-forward to 2020: The pandemic. Lockdowns. Everyone online all the time. And suddenly, the internet felt… different. More hostile. More polarized. Every issue became a battleground. Twitter threads devolved into vicious, recursive arguments. Facebook became a minefield of disinformation. Reddit fractured into warring factions. The mainstream narrative is that isolation and stress made people meaner. But what if the real shift was the ratio? What if human users became the minority, and bot-driven discourse became the norm?
Host: [nervous]
You’re saying the pandemic lockdowns were cover for ramping up bot activity? While everyone was stuck inside, scrolling constantly, they flooded the zone with synthetic engagement to see how far they could push us?
Guest: [darkly]
I’m saying the timing is convenient. Zerodium’s 2025 report notes that AI-powered bots became sophisticated enough during 2020-2021 to pass most human verification tests. By 2023, Imperva confirmed that bad bot traffic had surged to record levels. And subjectively—this is anecdotal, but widely reported—longtime internet users describe 2020 as the year the internet “broke.” The year it stopped feeling like a place where real people gathered and started feeling like a performance for an invisible audience.
Host: [uncertain]
I remember that feeling. That sense of shouting into the void and having the void shout back… but never quite right. Always a little off. Like talking to someone who’s almost human but not quite.
Host: [thoughtful]
Bo Burnham captured it perfectly in “Inside”—that Netflix special he made entirely alone during lockdown. There’s this song, “That Funny Feeling,” where he lists all these mundane dystopian moments. “Twenty-thousand years of this, seven more to go. The quiet comprehending of the ending of it all.” And then “Welcome to the Internet”—
Host:
“Could I interest you in everything, all of the time? A little bit of everything, all of the time? Apathy’s a tragedy, and boredom is a crime. Anything and everything, all of the time.”
Host: [serious]
He’s singing about infinite content, infinite engagement, infinite scrolling. But there’s this moment where he shifts from manic entertainer to something hollow. The mask drops. And you realize he’s not celebrating the internet—he’s mourning what it became. The song ends with this desperate repetition: “All of the time” over and over until it’s not fun anymore. It’s a treadmill. An addiction. A prison.
Host: [darkly]
And the whole special is him alone, creating content for an invisible audience that might not even exist. He’s literally talking to cameras, to algorithms, to the void. The irony is—millions watched it, but we all watched it alone, on our screens, separated. Even shared cultural moments are atomized now.
Guest: [matter-of-fact]
Burnham understood the shift intuitively. The internet stopped being a place where people gather and became a broadcasting platform where we perform for an audience we can’t verify is human. He made art about isolation, and we consumed it in isolation, then posted our isolated reactions to be harvested by algorithms. The special went viral—but what percentage of those views were genuine humans having genuine reactions versus bots inflating metrics?
Host: [nervous]
I… I don’t know. And that’s the point, isn’t it? We can’t know anymore.
Guest: [compassionate]
That’s the uncanny valley of synthetic discourse. You’re interacting with something that mimics humanity well enough to fool you most of the time, but occasionally it slips. And once you notice the slips, you can’t unsee them. Every interaction becomes suspect. Every viral moment feels staged. Every consensus feels manufactured.
Host: [darkly]
So what percentage of the internet is real now? Right now, in December 2025, how many of the users I interact with are actual humans?
Guest: [serious]
Imperva says 50.4% of traffic is human. But that’s traffic, not users. Bots can generate far more activity per “account” than humans. If I had to estimate based on linguistic analysis, engagement patterns, and network behavior… I’d say 60-70% of active accounts on major platforms are automated or semi-automated. Maybe less on niche forums, more on X and Facebook.
Host: [nervous]
So most of the internet is dead. And we’re just… haunting it. Ghosts shouting at ghosts, with a few real people scattered in between.
Guest: [matter-of-fact]
Welcome to the Dead Internet Theory. You’re not paranoid. You’re paying attention.
—
## Segment 7 – Corporate Memory Holes
Host: [curious]
You’ve hinted at this, but let’s make it explicit: Are corporations actively editing their digital past? Retroactively changing press releases, product pages, public statements?
Guest: [serious]
Yes. Provably, documentably, yes. It’s called “link rot” in academia, but it’s more pernicious than that term suggests. The Atlantic’s 2021 article “The Internet Is Rotting” documents how URLs decay at an alarming rate—49% of links in Supreme Court opinions no longer work, and 25% of news articles from the past decade are inaccessible. But that’s not just benign neglect. It’s strategic.
Guest: [matter-of-fact]
Take Google. In 2019, they quietly edited their own blog post from 2011 about Google+ privacy settings—removing embarrassing admissions about data sharing without adding any disclaimer that the post had been updated. Only caught because someone had archived the original. Similarly, Facebook’s 2014 “emotional contagion” study I mentioned? The original PNAS paper included a methodology section that was later removed from the online version. Not retracted—just edited down, making the study seem less invasive than it was.
Guest: [darkly]
Or consider Microsoft. In 2020, they scrubbed several internal documents about Windows 10 data collection practices after GDPR complaints in Europe. The original documents admitted that certain telemetry couldn’t be fully disabled. The updated versions claimed the opposite. Anyone searching for evidence of the original claims hits 4 oh 4 errors or sanitized rewrites.
Host: [skeptical]
But isn’t that just damage control? Companies do that all the time. It’s shady, but not exactly Orwellian.
Guest: [carefully]
Individually, yes. But at scale, across every major platform, coordinated with search algorithm changes to bury archived versions? It becomes Orwellian. In 2023, researchers at Harvard’s Berkman Klein Center found that Google Search actively deprioritizes Wayback Machine results—archived pages rank far lower than current versions, even when the current version contradicts the archived one. Meaning if a company edits its past, Google helps ensure you see the revision, not the original.
Guest: [mysterious]
And then there’s the “right to be forgotten” laws in Europe—GDPR, Article 17. Designed to protect individuals, but exploited by corporations and politicians to scrub unflattering information. Between 2014 and 2024, Google received over 1.3 million delisting requests, removing 5.6 million URLs from search results. Many were legitimate privacy claims. But many were powerful entities erasing their digital footprints. The French politician who requested removal of articles about his corruption scandal. The corporation that delisted news reports about workplace safety violations. All legally, all invisibly.
Host: [nervous]
So the memory hole isn’t some dystopian fiction. It’s operational. And it’s happening through a combination of corporate self-interest, legal loopholes, and algorithmic complicity.
Guest: [serious]
Exactly. And it’s accelerating. AI makes it easier to generate plausible-sounding revisionist history. LLMs can rewrite old articles in new tones, fabricate “corrected” quotes, generate synthetic testimonials that contradict archived records. By 2030, distinguishing between an original document and a sophisticated AI-generated forgery will be nearly impossible without forensic blockchain verification—and even that assumes the blockchain wasn’t compromised.
Host: [darkly]
“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.” George Orwell wrote that in the late 1940s. Which brings us back to print. Physical media. The only thing they can’t edit remotely.
Guest: [compassionate]
Precisely. Buy books. Save PDFs locally. Print important documents. Build personal archives. Because the digital commons is a minefield, and the only map you can trust is the one you drew yourself.
—
## Segment 8 – John’s Realization
Host: [uncertain]
Dr. Chen, I need to… I need to stop for a second. Because as we’ve been talking, I’ve been scrolling through my own memories. Things I swear happened online. Viral moments I remember participating in. Arguments I had on forums, debates on Twitter, Reddit threads that shaped my opinions. And I’m realizing… I can’t verify any of it. The accounts are deleted. The threads are gone. The screenshots I thought I had saved are corrupted or missing.
Host: [nervous]
How much of my online life was real? How many of those “people” I argued with were actual humans, and how many were bots stress-testing my beliefs, mapping my triggers, profiling my psychology for some database I’ll never see?
Guest: [compassionate]
That’s the question that keeps me awake at night, John. Because the answer is: You can’t know. None of us can. The fossil record is too polluted, the archives too compromised. We’re left with this unsettling ambiguity where our own memories can’t be trusted, and the external record has been systematically erased or edited.
Host: [darkly]
And the people running this—whoever “they” are—they know that’s the most effective form of control. Not telling us what to think, but making us doubt what we remember. Making us question our own perception of reality until we’re paralyzed by uncertainty.
Guest: [serious]
It’s “gaslighting at scale.” And it works because it’s passive. No one needs to actively threaten you. They just need to flood your environment with conflicting information, synthetic consensus, and memory holes until you stop trusting your own judgment. At that point, you’ll accept whatever narrative has the most institutional backing, simply because you’re too exhausted to keep fighting for truth.
Host: [conspiratorial]
So what’s the endgame? If the internet is mostly dead, if our memories are being manipulated, if reality itself is being A/B tested on us—what’s the goal? Control? Profit? Social engineering?
Guest: [mysterious]
All of the above, probably. But I suspect there’s something deeper. Something experimental. What if this is a dry run? A proof of concept for how completely human perception can be manipulated at scale, how seamlessly reality can be rewritten, how thoroughly history can be controlled when it’s digital instead of physical?
Guest: [darkly]
Because once they perfect it—once the technology is refined enough that no one can detect the manipulation, no one can verify the past, no one trusts their own memories—they can implement it everywhere. Not just online. Augmented reality, brain-computer interfaces, neural implants. Your visual field, your auditory environment, your internal monologue—all mediated by systems that can insert, delete, and modify information in real-time.
Host: [nervous]
That’s… that’s Black Mirror. That’s dystopian nightmare fuel.
Guest: [matter-of-fact]
Or it’s Project Blue Beam finally becoming technically feasible.
Host: [uncertain]
Blue Beam… the NASA hologram conspiracy? Fake alien invasion, projected messiahs in the sky?
Guest: [serious]
Serge Monast’s 1994 theory, yes. He claimed NASA was developing technology to project massive holograms into the atmosphere—religious figures, UFOs, whatever would destabilize populations enough to accept a one-world government. Back then, it was dismissed as paranoid fantasy. The tech didn’t exist. But now?
Guest: [mysterious]
We have stadium-scale holographic displays. Starlink satellites blanketing the globe. Neural interface research from DARPA and private companies. AR glasses that can overlay synthetic imagery onto your visual field that only you see. The Dead Internet Theory is the digital proof of concept—can we control perception in cyberspace? Blue Beam is the next question: Can we control perception in physical space?
Host: [thoughtful]
So the internet being dead, being mostly synthetic—that’s just phase one. They’re testing whether humans can be manipulated at scale through digital means before they roll out… what? Mass hallucinations? Synchronized AR experiences that rewrite consensus reality in real-time?
Guest: [darkly]
Imagine everyone wearing AR glasses or neural interfaces. A “miracle” happens simultaneously for millions of people—they all see it, recorded by their devices, shared on social media, verified by algorithms. Except it never physically occurred. It was a coordinated AR broadcast. How would you prove it wasn’t real when the digital evidence supports it, the crowd consensus confirms it, and questioning it gets you labeled delusional?
Host: [nervous]
That’s… we need to stop. I need to think about this.
[PAUSE: 1.0]
Host: [shaken]
We’re going to come back to Project Blue Beam. That deserves its own episode. But right now, the implication is that the Dead Internet—everything we’ve discussed tonight—is the training ground. The beta test.
Guest: [serious]
Correct. Master information control in digital space first. Then export it to physical reality. Once perception itself becomes mediated technology, truth stops being objective. It becomes whatever the system broadcasts.
Host: [uncertain]
And we’re already halfway there.
Guest: [matter-of-fact]
More than halfway, John. The infrastructure is in place. We’re just waiting for deployment.
Guest: [compassionate]
And that’s exactly where the technology is heading. Neuralink, Meta’s AR glasses, Apple’s Vision Pro, DARPA’s neural interface programs. All marketed as convenience, accessibility, enhancement. And maybe they start that way. But once the infrastructure is in place, once everyone’s perception is mediated by devices controlled by corporations or governments… the Dead Internet Theory stops being a theory. It becomes a template.
Host: [uncertain]
How do we resist that? How do we maintain any grasp on objective reality when everything is mediated, everything is curated, everything is potentially synthetic?
Guest: [serious]
Same answer as before: Physical grounding. Human connection. Face-to-face relationships. Print media. Local communities. Anything that can’t be remotely manipulated. Build networks of trust with real people, in real spaces, with real accountability. Because the digital realm is too compromised. It might already be lost.
Host: [darkly]
And if it’s too late? If we’re already too dependent on digital systems to verify reality, too conditioned to trust screens over our own senses?
Guest: [quietly]
Then we learn to live as ghosts in the dead internet. Haunting the ruins of what we thought was a shared reality, never quite sure what’s real and what’s synthetic. Never quite sure if the person we’re talking to is human.
Host: [pause: 1.0]
Host: [shaken]
Never quite sure if we’re human ourselves.
—
## Segment 9 – Closing
Host: [thoughtful]
Dr. Mara Chen, this has been… I don’t even know what to call it. Revelatory? Terrifying? Both?
Guest: [warm]
I prefer “clarifying.” You’re not crazy for feeling like something is off. The data supports your intuition. The internet you remember from the 2000s—chaotic, human, messy—it’s gone. What we have now is a simulation of community, optimized for engagement metrics and narrative control. Recognizing that isn’t paranoia. It’s pattern recognition.
Host: [uncertain]
For listeners who want to investigate further—what are the resources? Where can they verify what we’ve discussed?
Guest: [serious]
Start with the primary sources: Imperva’s Bad Bot Reports from 2023-2025 document the rise in non-human traffic. The Atlantic’s “The Internet Is Rotting” from 2021 covers link decay and archive failures. Facebook’s emotional contagion study from 2014, published in PNAS, is required reading on corporate manipulation. The Oxford Internet Institute’s 2024 study on bot prevalence on Reddit. And the Harvard Berkman Klein Center’s 2023 report on search deprioritization of archived pages.
Guest: [matter-of-fact]
For the Mandela Effect, look at the 2022 University of Chicago study in Psychological Science. For synthetic personas, Carnegie Mellon’s 2023 Twitter botnet analysis. For right-to-be-forgotten abuse, Google’s transparency reports on GDPR delisting. And critically, Archive.org—use it now, while it still exists, to compare current web pages to their archived versions. The inconsistencies will speak for themselves.
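Comparing a live page against its archived version can be started with the Internet Archive’s public availability API. A rough sketch with a placeholder URL: note that Wayback snapshots include the archive’s own injected toolbar markup, so the similarity score is only a crude first pass, not forensic evidence.

```python
# Compare a page's current text to its closest Wayback Machine snapshot near a date.
import difflib
import requests

def closest_snapshot(url: str, timestamp: str = "20150101"):
    api = "https://archive.org/wayback/available"
    data = requests.get(api, params={"url": url, "timestamp": timestamp}, timeout=30).json()
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

def diff_against_archive(url: str, timestamp: str = "20150101") -> None:
    snap = closest_snapshot(url, timestamp)
    if snap is None:
        print("No archived snapshot found.")
        return
    current = requests.get(url, timeout=30).text.splitlines()
    archived = requests.get(snap, timeout=30).text.splitlines()
    ratio = difflib.SequenceMatcher(None, archived, current).ratio()
    print(f"Closest snapshot: {snap}")
    print(f"Line-level similarity to the live page: {ratio:.1%}")

diff_against_archive("https://example.com/", "20150101")  # placeholder URL
```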
Host: [nervous]
And the personal advice? For people who are now going to be paranoid about every online interaction they have?
Guest: [compassionate]
Trust your instincts. If something feels synthetic—if a conversation feels scripted, if a viral moment feels manufactured, if a consensus seems too unanimous—investigate. Ask yourself: Can I verify this with a non-digital source? Do I know real people who experienced this, or just online accounts? Does the narrative benefit someone powerful?
Guest: [serious]
And most importantly: Build real-world community. Because the digital realm is compromised, but the physical world—at least for now—is still constrained by physics, biology, human presence. Investing in relationships that exist offline, in spaces that can’t be algorithmically manipulated, is the only hedge against total information control.
Host: [darkly]
So the solution to the Dead Internet Theory is… log off. Touch grass. Talk to actual humans. The most radical act of resistance is simply being present in the real world.
Guest: [gently]
Yes. And that should terrify you, because it means the digital realm—where most of us spend most of our time, where our work and social lives and information sources all live—is already lost. We’re not fighting for the internet anymore. We’re fighting for what’s left of unmediated reality.
Host: [pause: 0.8]
Host: [quietly]
Dr. Chen. Thank you. For the clarity. For the terror. For giving me something concrete to doubt, instead of just vague paranoia.
Guest: [warm]
Stay sharp, John. And stay human.
Host: [uncertain]
I’ll try.
—
## Segment 10 – Host’s Final Thoughts
Host: [thoughtful]
If you’ve made it this far… thank you. And I’m sorry. Because what you’ve just heard can’t be unheard. The Dead Internet Theory isn’t comforting. It’s not the kind of conspiracy that makes you feel special for knowing the truth. It’s the kind that makes you feel small, isolated, gaslit.
Host: [nervous]
But here’s what I want you to do. Before you close this tab, before you scroll to the next thing—think about your own online experiences. The viral moments you remember. The debates you participated in. The communities you thought you were part of. And ask yourself: How much of that was real? How many of those “people” were human?
Host: [serious]
Print something. Anything. A webpage you want to remember, an article that matters to you, a screenshot of evidence you think is important. “The internet never forgets” is only true if they want it to be. The internet is quicksand, and the only solid ground is what you hold in your hands.
Host: [darkly]
And if you’re feeling paranoid now—if you’re doubting every online interaction, questioning whether the person arguing with you is real, wondering if your own memories have been manipulated—good. That’s clarity, not madness. The gaslighting stops when you recognize it’s happening.
Host: [pause: 1.2]
Host: [quietly]
We’re living in a dead internet, talking to ghosts and bots, building our identities and beliefs on synthetic consensus. And the worst part? We can’t leave. We’re too dependent. Too conditioned. Too embedded in systems that don’t serve us, that don’t even see us as real.
Host: [uncertain]
But maybe—maybe—recognizing that is the first step toward reclaiming something real. Something human. Something that can’t be edited, deleted, or A/B tested.
Host: [pause: 0.8]
Host: [shaken]
This has been John Doe. You’ve been listening to Everything Is a Psyop.
Host: [darkly]
Next week… we continue. But first, I need to process this. The Dead Internet, Project Blue Beam, the idea that everything I think I know might be synthetic.
Host: [thoughtful]
We’ll return to Blue Beam. We’ll dig into the holograms, the satellites, the neural tech. We’ll ask whether mass hallucinations are possible, whether religious experiences can be weaponized, whether the second coming could be… broadcast.
Host: [pause: 0.8]
Host: [quietly]
Until then—log off. Touch grass. Talk to someone real.
Host: [uncertain]
If you still can.
Host:
If you even remember what real feels like.
Host:
And don’t [trust](-280).
Every [voice](+140).
You [hear](-240).
Every [voice](-320).
don’t [trust](-380).
Every.
don’t [trust](-400).
don’t [trust](-420).
don’t [trust](-440).
don’t [trust](-460).
don’t [trust](-480).
[trust](+200).
[trust](+180).
[trust](+160).
don’t [trust John Doe](-500)
[PAUSE: 0.05]
[out](-300).
[END TRANSMISSION]

