I saw something online the other day. I wish I hadn’t.
It wasn’t violent. It wasn’t political. It was… something else.
A video. AI-generated. Surreal but hyper-realistic. Picture this: a man walks backward into a glowing portal. He has four legs. No arms. A rotary phone rings on an old desk as if it’s 1920, but there’s tech in the background that looks like it’s from the year 5000. Everyone’s dressed like they’re attending a funeral that spans three centuries.
I stared at it. I rewatched it. I couldn’t stop thinking about it.
And weeks later? Still stuck in my head.
This isn’t just a “weird AI video.” This is something new. Something dangerous. A digital virus made of unresolved curiosity—and our brains are the perfect hosts.
The Curiosity Exploit
Here’s the thing: humans are built for stories, patterns, and puzzles. We need things to make sense. If they don’t, we keep spinning, subconsciously, trying to solve them. That’s called an open loop—and it’s mental quicksand.
AI is learning how to trigger that loop on purpose.
Surreal imagery. Dream logic. Strange juxtapositions. Things that almost make sense—but don’t. It’s not random. It’s sticky. And your brain will chew on it like a glitchy drive endlessly retrying a corrupt file.
That’s the new weapon: attention theft by curiosity trap.
This Isn’t Art. It’s Psychological Warfare.
Now ask yourself: what if that video wasn’t random?
What if it was dropped by a hostile government or group—not to persuade, not to lie, but just to distract, disorient, and destabilize?
Think about it:
• You’re not being told what to believe.
• You’re not being threatened.
• You’re just… spiraling. Thinking about something strange and unresolved.
Meanwhile, your focus is gone. Your mental clarity? Fractured. Your subconscious? Busy processing surreal garbage.
Multiply that by millions of people watching millions of these little “mind bombs.” That’s how you break a society without firing a shot.
Attention Is the New Battlefield
You don’t need to hijack someone’s beliefs if you can hijack their focus.
That’s what makes this so scary. There are no flags. No warnings. No obvious signs. Just eerie videos, viral images, uncanny animations—and your mind does the rest.
We used to worry about fake news. Deepfakes. Lies. Now we need to worry about dreamfakes—stuff that isn’t trying to convince you of anything. It’s just trying to pull you in and never let go.
So What Do We Do?
We need to start treating this like the real threat it is.
• Digital provenance tools should be mandatory. If it’s AI-made, it should say so—loudly.
• AI literacy has to be taught like driver’s ed. You need to know how to navigate this world without crashing your brain.
• Mental hygiene isn’t optional anymore. Detox your feed. Protect your attention like it’s your last uncorrupted asset.
• And maybe—just maybe—we need to slow down and decide which kinds of AI-generated content we actually want in our culture.
Because if we let the machines keep hacking our curiosity for clicks, control, or chaos…
We won’t even notice when reality breaks.