Resisting the Machine: A Human Solution to the Cyber Dystopia

“Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters. Technology is a system.”

– Ursula Franklin, The Real World of Technology

The use (and abuse) of technology to reshape the individual, society, and the world is at the beating heart of every good cyberpunk story. The flashy augmented limbs and flying vehicles get all the love on the big screen, but the underpinning systems that stratify society and turn it into one huge profit machine are more interesting to me. Such systems are already present in our world today, and are a far greater force propelling us into a cyber dystopia than the drones encroaching on our streets and skies or the latest and greatest in surveillance technology. 

I’ve written before about what Ursula Franklin calls “prescriptive technology” and corporations as slow-moving AIs, so instead of retreading old ground, I want to take some time to explore, briefly, a different paradigm for how we think about technology.

Designed for Compliance

As a quick refresher, prescriptive technology involves looking at your final product, breaking it down into its component parts, and then figuring out the most efficient way to manufacture those parts as quickly and cheaply as possible. Humans within such a system function as parts of the machine, and are seen almost exclusively as the problem rather than the solution. Prescriptive technologies are, by necessity, systems of compliance. We have reached a point as a society where most of the systems you encounter and inhabit on a daily basis are prescriptive technologies, to the extent that it can be difficult to imagine a system of technology that isn’t prescriptive. As Franklin points out:

“While we should not forget that these prescriptive technologies are often exceedingly effective and efficient, they come with an enormous social mortgage. The mortgage means that we live in a culture of compliance, that we are ever more conditioned to accept orthodoxy as normal, and to accept that there is only one way of doing ‘it’.”

If a world engineered towards compliance — one that by its very nature limits our ability to imagine new solutions — doesn’t give us pause, then its failure as a practical solution should. Prescriptive technologies demand products that can be broken down into discrete parts, are replicable on command, and result in the exact same thing every time. That’s wonderful if you’re building toasters, but when you are designing systems aimed at human welfare, the idea that people are best served by prescriptive systems would be laughable if it didn’t result in so much harm to the very beings those systems purport to help.

Growth Technology

Instead of centering our efforts on systems that favor technology over the human, Franklin proposes a growth model over a production model.

“Growth occurs,” she explains. “It is not made. Within a growth model, all that human intervention can do is to discover the best conditions for growth and then try to meet them.” 

If that seems a little abstract, imagine a plant, say a tomato plant. The gardener doesn’t get tomatoes by assembling roots and a stalk and leaves and flowers and finally the tomatoes. Instead the gardener drops a seed in the ground and then creates an environment of good dirt, plenty of sunshine, and water. The seed handles the rest, and in the end you get not only delicious, sun-ripened tomatoes, but also more seeds than you could ever need to plant.

I find growth technology to be a fascinating solution to the cyber dystopia because it requires us to consider the wellbeing of the subject and the environment. It demands a holistic approach. It demands that we reckon with the many interacting systems that make up our world. 

So for example, if I want to create an environment in which to grow healthy humans, I can’t simply mandate that they walk 10,000 steps a day. Instead I have to start asking difficult questions such as “Is there good space for walking?”, “Is the air fit for breathing?”, and “Is there time in the day for walking?”, and above all, “If not, why?” Prescriptive technologies rip their subjects out of their contexts, whereas growth technology doubles down on understanding the space. This is especially critical in our modern world, because we need technologies that have environmental consideration built into them.
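If it helps to see that contrast sketched out, here’s a toy example in Python. To be clear, this is entirely my own illustration; the Environment class and both functions are invented names, not anything from Franklin’s book:

```python
# A purely illustrative sketch of prescriptive vs. growth thinking.
# Every name here is hypothetical, invented for this post.
from dataclasses import dataclass

@dataclass
class Environment:
    walkable_space: bool   # Is there good space for walking?
    breathable_air: bool   # Is the air fit for breathing?
    free_hours: float      # Is there time in the day for walking?

def prescriptive_plan(env: Environment) -> str:
    # The prescriptive version ignores context entirely:
    # one mandate, identical for every person in every environment.
    return "Walk 10,000 steps a day."

def growth_plan(env: Environment) -> str:
    # The growth version asks about conditions first, and when a
    # condition fails it asks "why?" instead of blaming the walker.
    missing = []
    if not env.walkable_space:
        missing.append("good space for walking")
    if not env.breathable_air:
        missing.append("air fit for breathing")
    if env.free_hours < 1.0:
        missing.append("time in the day for walking")
    if missing:
        return "Fix the environment first: " + ", ".join(missing)
    return "Conditions are met; walking can grow on its own."

smoggy_city = Environment(walkable_space=False, breathable_air=False, free_hours=0.5)
print(prescriptive_plan(smoggy_city))  # same answer regardless of context
print(growth_plan(smoggy_city))        # names what's actually missing
```

The prescriptive function hands back the same mandate no matter what the world looks like. The growth function spends all of its effort describing conditions, which is exactly where Franklin says human intervention belongs.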

Rethinking Our Present

There is so much more to be said about growth technology (I cannot recommend Franklin’s The Real World of Technology highly enough) but for now, consider the space around you. Think about the systems that drive how, where, and when you work. Think about the food you eat: how it’s prepared and how it gets to you. Think about where you live, the architecture and the city planning. If you’re feeling really brave, think about your healthcare system. What was done in those spaces to make the product easier to fit into a prescriptive system? Did that make it better or worse for you, the human at the center of it all?

Prescriptive technologies are not in and of themselves bad, but they cannot be our first and only solution to a living, growing world.

Prescribing the Problem

“The act of rebellion left to us in a cyber dystopia is that we give a damn about people, even if it profits us nothing. Especially when it profits us nothing.”

I was living in LA at the time, commuting to a cubicle job down in Irvine on the 5 every morning, reading “The E-Myth Revisited”, and teaching myself to code on and off when I had the brain cells to spare. When you look at it that way, it seems almost inevitable that I started writing cyberpunk. The Glitch Logs was almost a foregone conclusion. What took me by surprise, however, was stumbling across a description of The Glitch Logs thought-project in a book written the same year I was born, by a Canadian experimental physicist whom I will tragically never get to meet; she passed away in 2016.

Ursula M. Franklin’s “The Real World of Technology” takes on the formidable task of examining how technology has shaped and continues to shape the world in which we live. She does so clearly and eloquently, and there’s a lot I have to learn not only from what she says, but also from how she says it. One of the book’s most interesting ideas is the effect of what she calls “prescriptive technology” on society.

Prescriptive Technology

In order to understand this idea, first consider technology not exclusively as a collection of electronic gadgets or lines of code, but rather as ways of doing things. These systems sometimes involve tools, and sometimes do not.

Prescriptive technology is a system in which the doing of a thing is broken down into clearly identifiable steps. Each step is carried out by a different worker or group of workers who are required to have only that one particular skill. The goal of such a system is a well-defined, replicable product that reliably gives you the same result over and over again. If you’re picturing a factory assembly line right now, then yes, that’s a good touch-point.

Some hallmarks of prescriptive technology are that it is a-contextual and that its necessary effect is to enforce compliance. Prescriptive technologies also have a tendency to replicate themselves by designing fresh prescriptive technologies for any problems they encounter along the way.

(Quick note before we go any further: I don’t hate prescriptive technology. It’s really useful in some cases. It does, however, create significant, systemic problems when applied too broadly or incorrectly.)

Building Your Business Like a Factory

To anyone who’s read “The E-Myth Revisited” by one Michael E. Gerber, this should sound very familiar. To anyone who hasn’t, let me give you a quick rundown. The point of the book is pretty well summed up in its delightful tagline, “Why Most Small Businesses Don’t Work and What to Do About It”.

I’ll spare you the suspense by giving you the “why” and the “what” right now: they don’t work because they aren’t well defined or systematized, and the way to fix them is to define and systematize them.

Specifically, the author wants you to find out what makes your product unique, break down every single facet of your business into its modular components, and then quantify, refine, and categorize those components until a literate monkey with a three-ring binder could do any given task. Do some vision-casting for your employees about how they really-super-are making the world a better place by being a cog in your machine so that they’ll comply with what you tell them to do. After that, automate what you can, hire what you can’t, and boom, you’re in business, baby.

Lather, rinse, repeat.

If you think that sounds suspiciously like a franchising model, you’d be right.

[There, I just saved you the $19.10 (on sale!) it’s going for on Amazon at the time of the writing of this blog. Also, in case that glowing recommendation prompts you to pick up the book, be warned that it’s both condescending and misogynistic. I recommend watching Mad Max: Fury Road in the background or something similar, for balance.]

In all fairness, there are some useful ideas in the book. Read it if you want to understand how most businesses in America are run right now. Come to think of it, the book’s condescending tone might be a pretty good indicator of that too. But anyway.

The E-Myth Re-Examined

The difference between me and the author of The E-Myth Revisited is that Mr. Gerber is clearly delighted with the process he designed, whereas I shifted from animated interest to slowly growing horror the longer I considered what he was in effect proposing.

You’ll recall that I was attempting to teach myself to code during this time, and the process Gerber described seemed suspiciously similar to the logic of code. Mr. Gerber proposes that all businesses design aggressive prescriptive technologies for themselves and their customers.

In effect, such businesses would become giant computer programs, run on the code of their carefully crafted policies and handbooks, which describe in minute and painstaking detail exactly how a thing is done. What human elements remain after automation are still interchangeable parts in a machine.
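To make that analogy concrete, here’s a toy business-as-program, again in Python. Every name in it is hypothetical and invented for this post; nothing below comes from Gerber’s book or any real company:

```python
# A toy rendering of the business-as-program analogy.
# The "handbook" is the program: each step defined down to the letter,
# and each slot requires exactly one skill.
HANDBOOK = [
    ("take_order",   "Greet with script A. Enter the order exactly as spoken."),
    ("make_product", "Follow recipe card 7. No substitutions."),
    ("hand_off",     "Deliver with line B. Smile."),
]

def run_business(workers):
    # Workers are looked up by skill, never by name: parts in a machine.
    # Any deviation from the instructions is, by design, a defect.
    for skill, instructions in HANDBOOK:
        worker = workers[skill]   # anyone (or anything) with the skill will do
        worker(instructions)

# A person and a machine are interchangeable, as long as the slot is filled.
run_business({
    "take_order":   lambda step: print("human executes:", step),
    "make_product": lambda step: print("human executes:", step),
    "hand_off":     lambda step: print("machine executes:", step),
})
```

Notice that the slots don’t care what fills them; that indifference is both the point of the design and the problem with it.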

This is very good for the machine, and very bad for the human.

Such a system assumes that people are the problem, and the system is designed to weed out problems.

As Franklin puts it, “Many technological systems, when examined for context and overall design, are basically anti-people. People are seen as a source of problems while technology is seen as a source of solutions.”

It is only a matter of time until we build a machine that can do “it” — whatever tiny incremental part “it” is in the system — harder, better, faster, stronger. We are engineering ourselves out of the systems we designed, and historically, that has not worked out well for the majority of us as humans.

When machines take over the job a human previously held in our society, the human is not sent home to rest, or make art, or contribute in some way to society. Instead, the human is out of a job and viewed as a drain on society. The extra resources are not put to work for the general good; they are hoarded by the system’s owners, who sometimes also happen to be the designers.

The AI of Business

I don’t think I’m spoiling anything if I tell you that one of the premises of The Glitch Logs is that corporations run the world. It’s one of the big markers of the cyberpunk genre, and doesn’t seem that unrealistic to me, given the current push to treat “government as business”. 

(Incidentally, Franklin also has a lot to say on how prescriptive technologies have reshaped modern governments, but that’s another blog post for another time.) 

I conceived of these corporations first as behemoths of our own making, vying for dominance in a world becoming rapidly inhospitable to humanity. As my thought project developed, I came to think of them instead as slow-moving AIs following their prime directive — to amass wealth — regardless of consequence, scale, or scope. The two are not particularly dissimilar.

No one wants to live under such a system — those who say they do actually want to live just outside it, where they profit from it but its rules do not apply to them.

The pressing question is “what are we going to do about it?”

First Steps

Systemic problems aren’t easy to unravel, but there are a few places we can start.

First, we can rebel against the premises that got us to this point. We can stop assuming from the outset that people are the problem.

Second, we can act in opposition to the mindset that a person’s value is determined by the work they produce. The act of rebellion left to us in a cyber dystopia is that we give a damn about people, even if it profits us nothing. Especially when it profits us nothing.

Beyond that, we can stop building for the goal of what Franklin calls “divisible goods”, such as money, and start designing instead for “indivisible goods” like clean air and justice. I’m also extremely interested in prescriptive technology’s counterpart, “growth technology”. There’s a really interesting case study going on right now that I’m privileged to be a part of, and it’s showing some very promising results. More on that next time.

In the meantime, go out there and think about technology. Think about the systems and who or what they benefit. And the next time you’re put in a position to design a system, big or small, for a huge company or for a child’s daily routine, think about who it’s good for. Reach out. Help someone even if it doesn’t benefit you.

After all, we’re all in this together.

This post is a follow-up to “In Service to Technology”, which you can read here.

In Service to Technology

Changing the way we think about technology as a solution.

I spend a lot of my time researching/thinking about systems and technology and how they impact us on both a global and a personal level. There’s been a lot written about what’s wrong with existing structures, but we’re a bit starved for practical solutions for those without political power or tremendous social clout. I want to talk about a shift in how we think about problems that yields some interesting results.

The People Problem

There is a tendency in modern society to treat people as the problem and technology as the solution. In education, for example, an under-performing class is more often “treated” with the purchase of more computers, iPads, or learning software than with the hiring of additional teachers, aides, and specialists.

In healthcare, instead of giving doctors more time to dedicate to patient care, hospitals are more likely to invest in updating databases to reclassify ailments and the people who have them. Very often patients themselves will be blamed for their illness. The prevailing view is that it is not the technology of drugs or diets or workout routines that has failed the patient, but the patient who has failed the technology.

Not-So-Adaptable Technology

One of the results of this thinking is that we have adopted a mindset which dictates that humans adapt to technology, instead of creating technology properly adapted to humans, their context, and their environment.

(This paradigm is also a natural consequence of what Ursula Franklin called “prescriptive technology” and our modern concepts of scalability, but that’s another discussion for another time.)

One famous example of humans-adapting-to-tech is the keyboard layout. Ever wonder why the letters are laid out in a fairly nonsensical manner? Early typewriters suffered from a problem where the keys and hammers would jam if a person typed too quickly. The typewriter manufacturer did a study to find out which keys were most often struck in sequence and then split those keys up as much as possible.

This made typing slower, harder to learn, and required more effort on the part of the user, but it solved the technology’s problem. Ironically, the problem of keys and hammers has long since vanished, but we are still stuck with the same nonsensical keyboard. 

The modern office space is another example of this. It is not good for a human to sit at a desk for eight-plus hours a day, staring at a screen in a constant 70-to-73-degree environment, carefully isolated from any change in season or sunlight. It is, however, ideal for the computer, and especially for the corporate system profiting from the human’s labor.

(There’s a lot more to be said about the office space example, but if I get going on corporations, I will never stop. I write cyberpunk for a reason.)

A Practical Approach

I propose to you that when we are trying to solve problems, we instead consider the human, with as much contextual specificity as possible, and then begin to build our solutions around that.

Obviously the way we do work in the broader sense is long overdue for an overhaul, but I mean this specifically for our own lives — the things we do have control over.

Let me give you an example and then we’ll wrap up this post. A friend of mine hated doing the dishes. Her husband did too, so the dishes piled up. Dealing with the clutter was stressful, took up space, and drained their energy. They tried setting up schedules, taking turns, looking into better storage options, etc. Money was tight, so buying off the problem (capitalism’s favorite solution) wasn’t an option.

In an act of frustration and brilliance, my friend looked at herself, her problem, and the dishes. She didn’t want to do the dishes. So, instead of trying to force herself into another energy-consuming system, she simply got rid of the dishes. They kept two plates, four cups, and a few pieces of silverware.

The solution struck me as brilliant on several fronts. 

1) When it was time to eat, there were at most two plates that needed washing.

2) The extra plates were donated, so instead of new plates being made — consuming resources and encouraging a giant plate company somewhere to make more — the plates (resources that had already been consumed) were reused.

3) The plates went to a family with few resources, freeing up money they would otherwise have spent on plates. In other words, the plates helped out the local community.

4) When friends came over, they brought their own plates, which made everyone feel closer and like they were more a part of the evening.

5) My friend had the extra energy to deal with everything else that came her way in the day. More spoons, if you will (hah hah).

The Point

The point is not that everyone should embrace minimalism. Quite the opposite. This is not a good solution for everyone, maybe not even for most people. The point is this: when faced with a problem, consider the human in the context of the problem. Do not assume the human is the thing that needs to be fixed or sidelined. New tech, or conformity to existing tech, is not always a good solution. People first.

There’s so much more to be said, but I’m gonna wrap this up here. If you have further examples of the principle in action, I want to hear them. Also, if there’s a particular application of this idea to your field of expertise, tell me about it.

Seriously.

I’m still deep in the research trenches pulling everything into a coherent, useful whole. Fresh insights from outside my fields of expertise are incredibly welcome.

This blog post is a slightly-more-organized version of a Twitter thread I did. You can find the original thread here: https://twitter.com/rachelthebeck/status/1194476507931955200