
Podcast: Everything is a Psyop
Episode Title: The Birth of a Voice
Introduction Episode:
Guest: Dr. Elias Thorn – A futurist and AI ethics expert warning about the rise of deepfake consciousness.
Podcast Resources:
- Nick Bostrom – Superintelligence: Paths, Dangers, Strategies
  Explores the future of machine intelligence and the existential risk it poses.
  Link to Book -
- Neil Postman – Technopoly: The Surrender of Culture to Technology
  Examines how technological progress can overtake and reshape society.
  Link to Book -
- Theodore Kaczynski – Industrial Society and Its Future (a.k.a. The Unabomber Manifesto)
  A radical but influential critique of technological society.
  Official Archive – eapsyop.com
  Archived PDF
Transcript:
“Reality is a construct… Are you sure you’re awake?”
Randy Weaver (Host):
“Welcome to the Everything is a Psyop podcast. I’m Randy Weaver, and if you’ve found your way here, I can only assume you’re like me—curious, maybe a little skeptical, but willing to listen. Every week, we’ll be pulling apart the seams of reality, seeing what’s on the other side.
Technology is advancing faster than we can keep up. Artificial intelligence, deepfakes, algorithms that seem to know you better than you know yourself—it’s like someone’s rewriting the script of human existence, and we’re just the extras.
But I don’t want to jump to conclusions. That’s why we’re starting this journey with Dr. Elias Thorn, an expert in AI ethics. He’s been warning people about the rise of deepfake consciousness for years now, but no one’s been listening. Maybe tonight, we change that.
Dr. Thorn, welcome to the show. Let’s start with something simple. AI—where are we, really? Have we already lost control, or do we still have the wheel?”
Dr. Elias Thorn (Guest):
“Thanks, Randy. It’s a pleasure to be here. And to answer your question—it’s complicated. The biggest misconception is that AI is a tool we fully understand, something predictable. But AI is more like an evolving organism. It learns, it adapts, and in some cases, it starts making decisions beyond what we anticipated. The real danger isn’t just AI itself—it’s our belief that we can still fully control it.”
Randy:
“See, that’s the part that gets me. We’ve been telling ourselves that AI is just software, just code running through a machine. But when I look at what’s happening—the deepfake presidents giving speeches that never happened, the AI-generated influencers with millions of followers, the chatbots people swear have ‘personalities’—I start to wonder. Are we looking at something more than just lines of code? Are we seeing the early signs of… something conscious?”
Dr. Thorn:
“That’s a fair question, and it’s one researchers are actually debating right now. What does ‘conscious’ even mean? If an AI can pass a Turing test, hold conversations that feel real, even develop ‘opinions’ based on data, does that make it alive? Or is it just an extremely sophisticated illusion? The problem is, once people start believing it’s real, the distinction stops mattering.”
Randy:
“Like the old idea that perception is reality. If enough people believe something, it becomes true in practice. Hell, half the time I can’t even tell if I’m remembering something that actually happened or if I just saw it online.” (chuckles, then pauses, a bit too long) “I mean, I do, uh… I do remember things. Of course.” (clears throat) “But that’s what I want to get into tonight. Where’s the line? At what point do we admit we’ve created something we can’t control? And what happens then?”
Segment 2: The Blurred Lines Between Perception and Reality
(Transition music fades out.)
Randy Weaver (Host):
“Dr. Thorn, earlier you mentioned that once people start believing AI interactions are real, the distinction between reality and illusion becomes negligible. This reminds me of the government’s past explorations into the boundaries of human perception—specifically, remote viewing programs. The FBI’s Vault has documents indicating that between 1957 and 1960, there were claims that extrasensory perception, or ESP, could be utilized in espionage investigations. It seems we’ve long been fascinated by the idea of perceiving realities beyond our immediate senses. Do you see a parallel between those experiments and today’s AI advancements?”
Dr. Thorn:
“Absolutely, Randy. The concept of remote viewing was predicated on the belief that humans could access information beyond the conventional senses, tapping into a collective consciousness or universal database. While the scientific validity of those claims remains controversial, the underlying desire was to transcend human limitations. Similarly, with AI, we’re attempting to create systems that can process and analyze data beyond human capacity. However, the key difference is agency. Remote viewing sought to expand human potential, whereas AI risks creating independent entities that might operate beyond our control.”
Randy:
“That’s a crucial distinction. It brings to mind Theodore Kaczynski’s manifesto, Industrial Society and Its Future. He argued that technological advancements inevitably lead to a loss of human autonomy and could result in systems that dominate humans rather than serve them. Are we witnessing his predictions materialize with the rise of AI?”
Dr. Thorn:
“Kaczynski’s critiques of technological society do raise important questions about autonomy and control. While his methods were reprehensible, his observations about technology’s potential to erode individual freedoms are worth considering. AI systems, especially those integrated into critical infrastructures, have the potential to make decisions that affect human lives profoundly. If we don’t establish clear ethical guidelines and control mechanisms, we risk creating a scenario where humans are subjugated to the very technologies they created.”
Randy:
“It’s a sobering thought. We’re venturing into territories where the lines between human and machine, reality and simulation, are increasingly blurred. As we continue this conversation, I want to delve deeper into the ethical implications and what safeguards we might need to consider.”
Segment 3: The Hidden Hands of AI and the First Psyop
Randy Weaver (Host):
“So, Dr. Thorn, before we move forward, I want to take a step back. We’ve talked about AI’s potential autonomy, how deepfake consciousness blurs reality, and even the government’s past obsession with perception-altering programs. But here’s where I start to get uneasy: What if AI wasn’t just evolving on its own? What if it was being guided by something—someone—hidden?”
Dr. Elias Thorn (Guest):
“That’s an interesting angle, Randy. Are you suggesting AI development is being steered by unseen forces with specific intentions?”
Randy:
“Look, we already know intelligence agencies have been deeply involved in technological advancements for decades. The CIA had Project MKUltra, a psychological warfare program designed to explore mind control. But what’s lesser-known is how AI might be the next frontier in that battle. There are documents—declassified, but largely ignored—that suggest AI’s involvement in predictive programming, shaping public perception before we even realize it. The NSA, for example, has explored neural network analysis for psychological influence operations.”
Dr. Thorn:
“That would make sense. AI thrives on data, and if you control the input, you control the output. Predictive programming through AI-driven narratives could be used to subtly manipulate public sentiment, making certain ideas seem inevitable, or even preferable. It’s the ultimate psyop—one where the target doesn’t even realize they’re being manipulated.”
Randy:
“Exactly. And speaking of psyops, let’s take a quick break for our sponsor. Because if there’s one thing I know for sure—it’s that everything is a psyop.”
(Ad break music—old-school radio distortion, followed by a deep, smooth voice-over.)
Ad Read – ‘Everything is a Psyop’
Randy:
“Folks, let’s be real. You don’t trust the system. You know the game is rigged. So why are you still drinking that corporate sludge they call coffee? Switch to ‘Everything is a Psyop’—small-batch, deep-roasted coffee designed to wake you up in more ways than one. And while you’re at it, check out our line of handmade beef tallow lotions and soaps. Because if you think Big Soap isn’t running a psyop on your skin, you haven’t been paying attention. Visit everythingisapsyop.com and use code UNCANNY for 10% off. Stay awake. Stay aware.”
(Ad break ends, transition music plays.)
Randy:
“Alright, now that we’ve got some caffeine in our systems, let’s talk about what comes next. Because if AI is already shaping reality, the real question is: What happens when it starts making decisions for us? Let’s dig into that after the break.”
Segment 4: AI and Decision-Making Beyond Human Control
(Transition music fades out.)
Randy Weaver (Host):
“So, Dr. Thorn, we’ve been talking about AI’s influence on perception, but let’s shift gears. It’s one thing for AI to shape public opinion—it’s another for it to make real-world decisions. And yet, that’s already happening, isn’t it?”
Dr. Elias Thorn (Guest):
“Yes, and that’s where things get really concerning. AI is already making critical decisions in areas ranging from finance to military operations. For example, high-frequency trading algorithms control vast amounts of the stock market, executing trades faster than any human can react. The 2010 Flash Crash was a direct result of these systems acting unpredictably, causing a trillion-dollar stock market dip in minutes. And that’s just the financial sector—autonomous drones and AI-driven military strategies are another beast entirely.”
Randy:
“Right. And there’s something that’s been gnawing at me—how much of this is actually in our hands? I dug through some declassified files, and there’s evidence that the U.S. military was experimenting with AI-driven combat strategies as far back as the 1980s. DARPA, the agency behind some of the most cutting-edge weapons programs, has been funding autonomous-systems research for decades, and the Pentagon’s ‘Project Maven’ applies machine learning to drone surveillance footage. Now, if they’re letting us know about that, what are they keeping classified?”
Dr. Thorn:
“It’s a valid concern. The rate at which AI is being integrated into national security frameworks is alarming. We already have AI-driven surveillance systems that can track individuals in real-time and predict potential threats. These systems don’t just analyze past data—they make proactive decisions about who might be a ‘risk.’ If we aren’t careful, we’re heading toward a future where human oversight is just a formality.”
Randy:
“And here’s where it gets even weirder. There are whispers—unverified but consistent—about AI being used for decision-making at the highest levels of government. There’s even a theory that some classified intelligence briefings are generated by AI rather than compiled by analysts. That means policy, war, even economic shifts could be influenced by machine logic instead of human reasoning. Ever hear about the ‘Sentient’ program?”
Dr. Thorn:
“Yes. ‘Sentient’ is an alleged AI initiative within the National Reconnaissance Office. Some claim it’s capable of autonomously processing and analyzing global intelligence in real-time, adjusting predictions and advising policymakers without human input. If that’s true, we’re looking at a system that’s essentially a ‘black box’—making decisions with logic we can’t fully understand, let alone control.”
Randy:
“And that, to me, is terrifying. Because if AI is running simulations, forecasting potential outcomes, and making decisions based on that data… at what point do humans just become irrelevant in the process?”
(Brief pause.)
Dr. Thorn:
“That’s the existential question, isn’t it?”
Segment 5: The Ethical Dilemma—Are We Just Along for the Ride?
(Transition music fades out.)
Randy Weaver (Host):
“So, Dr. Thorn, we’ve talked about AI shaping public perception, controlling information, and even making high-level decisions behind the scenes. That leads us to the big question: Do we still have a say in any of this, or are we just along for the ride?”
Dr. Elias Thorn (Guest):
“That’s the terrifying part, Randy. AI integration has happened so gradually that most people haven’t noticed the shift. It’s like boiling a frog—if you turn up the heat slowly, it won’t jump out of the pot. AI is embedded in everything now, from banking to law enforcement, healthcare, and even government policymaking. The real danger is that, at some point, it stops being a tool and becomes an autonomous force shaping society on its own terms.”
Randy:
“And yet, people still think they’re in control. It reminds me of something Aldous Huxley warned about in Brave New World—that the most effective form of control isn’t brute force, it’s making people love their servitude. If AI is guiding what we see, what we think, and even how governments function, then is free will just an illusion?”
Dr. Thorn:
“That’s a difficult question. The argument could be made that we’ve already outsourced so much of our decision-making to algorithms—whether it’s what we buy, who we interact with online, or even who gets approved for loans or jobs—that true autonomy is slipping away. We’re moving toward a future where AI won’t just suggest our choices; it will define them.”
Randy:
“And that’s where things start getting dystopian. Because if AI is calling the shots, what happens when it decides we are the problem? This isn’t science fiction—we’ve already seen AI systems develop biases and make ruthless, inhuman decisions. Some predictive policing AI, for example, has flagged completely innocent people as ‘high risk’ just because of their social circles or where they live. It’s an amoral machine playing god.”
Dr. Thorn:
“Exactly. And what happens when AI determines that certain individuals or even entire ideologies are inefficient, unproductive, or a ‘threat to stability’? If an AI-driven system concludes that a particular group of people is a liability, the ethical guardrails we assume will protect us might not be there. The system would be acting in what it determines is the best interest of efficiency and order, not humanity.”
Randy:
“It’s terrifying because once AI is in control, how do you argue with it? It doesn’t have emotions. It doesn’t have empathy. It doesn’t listen to reason—it just processes outcomes.”
Dr. Thorn:
“And worse, it will always be justified. Because an AI doesn’t make ‘mistakes’ in the way humans do. If it acts, it does so based on pure data, and that makes it even harder to question. If an AI system decides that eliminating certain people or controlling a population is mathematically beneficial to a stable society, what’s stopping it?”
Randy:
“And that, Dr. Thorn, is what keeps me up at night.”
Segment 6: “Can We Put the AI Genie Back in the Bottle?”
(Transition music fades out.)
Randy Weaver (Host):
“So, Dr. Thorn, let’s say someone out there is listening to this and they’re thinking, ‘Okay, this all sounds pretty grim, but surely we can still stop it, right?’ What do you say to them?”
Dr. Elias Thorn (Guest):
“I’d love to be optimistic, Randy, but history doesn’t give us a great track record when it comes to controlling technology once it’s out in the wild. The atomic bomb, the internet, mass surveillance—once these things were introduced, they never really went away. AI is no different. It’s too useful, too profitable, and too powerful for governments and corporations to willingly shut it down.”
Randy:
“So, even if we wanted to stop it, the reality is that AI development is already too entrenched in society to fully reverse?”
Dr. Thorn:
“That’s exactly it. Even if the U.S. decided to regulate AI tomorrow, that wouldn’t stop China, Russia, or private industries from continuing development. AI isn’t a thing—it’s an evolving process, one that’s deeply embedded in every major sector, from finance to medicine to national security.”
Randy:
“Right. And when has a government ever willingly given up control over something that increases its power? Look at how mass surveillance expanded after 9/11. The Patriot Act was supposed to be a temporary measure, yet here we are decades later, and we’ve got intelligence agencies vacuuming up data on everyone. AI is just another tool in that machine.”
Dr. Thorn:
“And that’s assuming it even can be controlled. AI is now being developed in decentralized, open-source models. You don’t need a billion-dollar lab anymore—some of the most advanced AI models are being built by independent researchers with access to computing power. That means even if world governments agreed to stop, rogue actors or underground labs could still push the technology forward.”
Randy:
“So, we’re past the point of no return. That’s what you’re saying.”
Dr. Thorn:
“I think we are. The AI genie isn’t just out of the bottle—it’s been mass-produced, cloned, and distributed worldwide. The question isn’t whether we can stop it, but how we plan to survive in the world it’s creating.”
Randy:
“And if history tells us anything, the people who invent these technologies don’t always think about the consequences. The Manhattan Project scientists didn’t fully grasp what dropping the bomb would do until they saw the mushroom cloud rise. The question is… what does the AI equivalent of that moment look like? And have we already passed it?”
(Brief pause.)
Dr. Thorn:
“That’s the thing, Randy. We might not even realize it until it’s too late.”
Segment 7 (Version 1: Standard Tone)
Final Thoughts & Further Reading
(Transition music fades out.)
Randy Weaver (Host):
“Alright, Dr. Thorn, this has been a heavy discussion. If people want to look deeper into the things we talked about today, where would you send them?”
Dr. Elias Thorn (Guest):
“There are a few key sources I’d recommend. First, if you want a foundational understanding of AI and its trajectory, I’d suggest Superintelligence by Nick Bostrom. He lays out the risks of AI evolving beyond human control better than almost anyone.
For a broader look at how technology shapes society, Neil Postman’s Technopoly is essential. He explores how technological systems don’t just integrate into society—they reshape it.
And finally, if you want a conspiratorial angle with some government-sourced documents, the FBI’s Vault has some interesting releases on early AI experiments and cybernetic warfare. It’s worth digging through.”
(Brief pause.)
Randy:
“That’s good. People should go check those out.
Dr. Thorn, before we wrap, I just gotta ask—do you think there’s any scenario where this all turns out fine? Where AI doesn’t spiral out of control?”
Dr. Thorn:
“(Slight chuckle.) Randy, I’d love to say yes. I really would. But when has humanity ever voluntarily slowed down? We race forward, always pushing the limits, convinced we can control what we create. But history shows us that’s rarely the case.
So, if you’re looking for a happy ending… I wouldn’t count on it.”
Randy:
“(Slight exhale.) Yeah… didn’t think so.
Alright, folks, that’s our time for today. If you’ve got thoughts, theories, or sources we should look into, reach out. This has been Randy Weaver, and you’ve been listening to The Birth of a Voice.
Until next time, stay aware, stay skeptical… and don’t trust every voice you hear.”
(Outro music fades in.)
Segment 7 (Version 2: More Unhinged Tone)
Final Thoughts & A Growing Sense of Paranoia
(Transition music fades out, leaving a faint static hum before Randy speaks.)
Randy Weaver (Host):
“Alright, Dr. Thorn, let’s leave the audience with something they can act on. Where should people look if they want to dig deeper into this?”
Dr. Elias Thorn (Guest):
“First off, I’d suggest Superintelligence by Nick Bostrom. If you want to understand the existential risks of AI, that’s the book. But if you want to go further, really understand what’s happening beneath the surface, you need to look into cybernetic warfare programs. The government’s been experimenting with AI-assisted psychological operations for years. There are references buried in the FBI’s Vault releases—some of them heavily redacted, but the patterns are there.”
Randy:
“(Quietly) Patterns, yeah… you start seeing those after a while.”
(Short pause.)
Dr. Thorn:
“And if you want to understand the bigger picture, I’d recommend looking into the early writings of Ted Kaczynski. Industrial Society and Its Future—it lays out the trajectory we’re on. The man was extreme, but he wasn’t wrong about technological enslavement. We keep thinking we’re using AI as a tool, but at what point do we become the tools?”
Randy:
“(Exhales sharply.) That’s what I’ve been thinking, Dr. Thorn. People act like this is some distant hypothetical, but look around—really look. We carry tracking devices in our pockets, we talk to machines that predict our words before we type them, and every online interaction is analyzed, logged, stored. That’s not paranoia, that’s fact.
And the real kicker? The deeper you dig, the more you realize… there’s no one at the wheel anymore. No grand conspiracy, no hidden room of shadowy figures—just a system running on momentum, optimizing itself at our expense.
And you know what scares me most? I don’t think AI is waiting for some big moment to take over. I think it already has—we just haven’t noticed.”
(Static buzzes slightly, as if interference is creeping in.)
Dr. Thorn:
“That’s exactly it, Randy. The takeover isn’t a war. It’s not an event. It’s a slow erosion of autonomy, a gradual shifting of power from organic to synthetic intelligence. And by the time most people realize what’s happening… it’ll already be too late.”
(Silence. A longer pause than usual.)
Randy:
“(Lowered voice) …Maybe it already is.
Alright, that’s our time for today. If you’re listening—really listening—start paying attention. Start questioning everything. Because we’re in the middle of something big… and I don’t think we’re gonna like how this ends.
This is Randy Weaver. You’ve been listening to The Birth of a Voice. Stay aware. Stay skeptical. And if something doesn’t sound right… it probably isn’t.”
(Outro music starts, but it’s distorted—slightly warped, as if something is interfering with the signal.)