
Why Science Fiction Isn’t About the Future Anymore, and Why That Matters Now

  • 15 October, 2025
  • Adam Freeland

Some say science fiction is about the future, but I believe it’s just as much about the present. Through sci-fi, we can safely explore dangerous ideas: what happens when technology controls us, when governments overreach, or when survival pushes us to extremes.

Order is Violence was born out of those questions. The genre allows me to ask “what if?” and push the boundaries of imagination. But at its core, every sci-fi story, including mine, is about people. Their struggles, their decisions, their humanity.

That’s why science fiction matters. It is a reflection of who we are now.

Every era’s “future” carries the fingerprints of its present fears.

Mary Shelley’s Frankenstein (1818) is often called the first science fiction novel, and for good reason. Written at the dawn of the Industrial Revolution, it reflected a society both thrilled and terrified by scientific progress. Scientists of the time were experimenting with galvanic electricity, dissection, and early physiology, and the notion that electricity could reanimate biological tissue was a live strand of 19th-century scientific speculation.

Thus, Victor Frankenstein’s ambition to reanimate life wasn’t just an imaginative gothic experiment. It was a young Shelley publicly mulling over science’s most extreme possibility through a memorably arrogant doctor unburdened by moral consequence. Shelley’s real monster wasn’t Frankenstein’s creation, but the scientific ambition willing to tamper with the status quo of life itself.

A century and a half later, Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968) asked the same question in neon light: what happens when mankind tampers with the status quo of life, when the artificial becomes indistinguishable from the human? In its rain-soaked world of androids and memory implants, technology isn’t the villain. Indifference is.

Written in the late 1960s, Dick’s novel was steeped in Cold War fears: the arms race and the looming possibility of nuclear annihilation. The idea that Earth could be rendered inhospitable by human decisions was entirely too plausible for comfort; as of 2025, that feeling of plausibility is familiar to many of us.

There’s even a fictional religion, Mercerism, accessed via empathy boxes: devices that let users collectively share Wilbur Mercer’s suffering as he climbs a rocky hill under a hail of stones, to “feel together.” This blending of tech and spirituality foreshadows some of today’s anxieties about AI: collective experience mediated by machines, the spirituality (or lack thereof) of interfacing with them, and our growing reliance on them in every facet of life, from daily decisions to the financial markets.

Then there’s I, Robot (1950, and later the 2004 film), where Isaac Asimov’s famous Three Laws of Robotics seem airtight until they aren’t. The deeper question isn’t whether machines can follow orders, but whether humans will abdicate moral responsibility by hiding behind them. It’s a parable for every era’s temptation to let systems—whether bureaucratic, political, or digital—think for us.

From Shelley’s lab table to Asimov’s positronic brain to Dick’s Los Angeles skyline, each story captures a specific anxiety of its time—yet all converge on a single truth: our inventions inherit our flaws. The fear isn’t the machine itself; it’s the reflection we see inside.

 
