Updated on 3/25/2021: After this article was originally published, Meguey Baker graciously provided some additional insight into the PbtA engine. The article has been updated to include those insights in the “Enter the Apocalypse” section below.
One afternoon in summer, a second grader I’d been working with for a while plopped down at the table. She’s usually a pretty cheerful kid, but today she was in a bad mood. A little bit of gentle prodding revealed that she was upset about the summer tutoring program she’d been enrolled in during the morning. The following conversation took place:
Student: They’re the most strictest in the whole world. And they make us do standards.
Me: Oh yeah? What standards?
Student: You have to do a lot of standards. One boy, he did 80 standards.
Me: That sounds like a lot. What are they teaching you?
Student: Mostly, they tell us to be quiet.
It was one of the more scathing indictments of an educational program I’d ever heard. I was also, unfortunately, not surprised.
Education vs. Paperwork
It’s no secret that California has a teaching shortage, but what a lot of people might not know is that one of the side-effects is a thriving after-school tutoring industry. Parents who can afford it are increasingly seeking out supplemental education options. Where there is demand, there is money to be made. Businesses and franchises have sprung up everywhere with a sudden desire to ensure the education of the next generation (or at least high scores on their SATs). There is a killing to be made on parents’ desire for their children to have a good future.
If I sound upset about this, it’s because I am.
I have a pretty low opinion of the average tutoring franchise. Based on my students’ stories, they largely consist of putting as many students as possible into a room with tables. The teachers either lecture them for 45 minutes with no check for student comprehension or (more often) simply tell them to be quiet and hand them work packets. Student success is measured by paperwork produced rather than learning achieved, if it’s measured at all. Students hate it, teachers I’ve spoken to hate it, and the educational value is about what you’d expect, except maybe worse because it teaches kids to hate learning. Parents grit their teeth against the complaining and keep sending their kids because they’ve been sold the story that this will give their children, whom they love, better lives. Also some of them need childcare and this is cheaper (and more educational!) than hiring a nanny.
It’s a textbook example of what happens when you double down on a system built on prescriptive technology and try to apply it to a human.
We’ve already spent time talking about prescriptive technology and its downfalls, so instead today I’m going to tell you a story about a tutoring company that’s doing things differently, and the curious case of how a tabletop RPG helped shape that change.
In January 2019 the after-school tutoring company “Ready Study Go” (RSG) was acquired by one Karyn Keene. As a former private teacher and tutor herself, Keene gave an actual damn about student education. The program she inherited ran on paperwork and packets with a staff that had largely been stripped of its agency. It was your quintessential prescriptive tech system: the curriculum broke the subject matter down into work packets, tutors ensured that the kids worked on the packets, and success was measured by how many packets were completed. There was also a homework assistance program, which was good — kids need help with their homework sometimes — but success was still measured in paperwork. If students finished their homework early they got, you guessed it, more packets. This is not uncommon among tutoring companies, but I can say with confidence that the only people who loved this system were the ones not subject to it.
Keene wanted better for the students that attended the program. She and I had worked well together when we were both teachers, and I’d done some freelance work for her in the past. I was hired as a freelancer to help design a training curriculum that would support student education and wellbeing on a systemic level, and then build it.
Freedom to Thrive or to Fail?
Since the system that needed to be designed was focused on humans, I set about exploring it through the lens of growth technology, balanced with the practical limitations of Ready Study Go’s existing resources and structures. If you haven’t been following along with this series of blog posts, growth technology is a term coined by Ursula Franklin. It describes a system which creates an environment where the outcome you are trying to create happens organically rather than trying to assemble the product piece by piece. This is especially important when dealing with human-centric systems.
Several interesting challenges presented themselves for reimagining RSG. Chief among them were the tutors themselves. There was a huge range of teaching experience across the staff. Some had been teaching in classrooms for years, and others were just starting their undergrads. One of the key design questions that needed to be answered was this: “How do you design a system that teaches people how to be good teachers and doesn’t get in the way of teachers who are already good at what they do?”
Good teaching is highly contextual. There is no one-size-fits-all for kids’ education. There are consistent underlying principles, but the application varies. Some days, students need to be pushed. Some days they won’t be able to learn anything until they explain why they were upset that their friend didn’t pick them first at recess. They struggle with a basic concept one day, and then race through three milestones the next. Kids are not machines systematically imbibing and outputting information. They grow in fits and bursts. Understanding this on a fundamental level is core to the patience, value judgments, and tenacity of a good teacher.
In order to enable good teachers, I needed a system with a lot of freedom for judgment calls. On the other hand, if I left the system too vague and wide open, the inexperienced tutors wouldn’t know what to do. They would default to what they had been trained on: paperwork. Without concrete guidance, they were liable to continue handing out packets and telling kids to focus and sit still.
Enter the Apocalypse
I discussed the problem with my husband in the car on the way to tutoring my students one day. More specifically, I was bemoaning the lack of growth-tech systems for me to use as a pattern. “Huh,” my husband commented, “sounds like an RPG.” He was right, and that basically amounts to the plot twist of this story. Apocalypse World was right there and I’d been missing it that whole time.
In case you’re not as deep in the weeds as I am in indie tabletop RPG nerdom, I’ll back up for a moment. Apocalypse World, designed by Vincent Baker and Meguey Baker, is a game set in a post-apocalyptic world. Imagine playing Mad Max: Fury Road in all its grit and glory, except the story is new and no one knows what’s going to happen. The setting is implied rather than strictly defined and there is no preset narrative. Instead, players flesh out the world together and the story writes itself during play. In addition to being a lot of fun, it’s also a great example of growth technology at work.
Rather than instructing players to follow the deconstructed beats of a narrative (first write a hook, then some rising action, next a climax, etc.), Apocalypse World creates an environment where good stories happen. It does this using the Powered by the Apocalypse (PbtA) engine. Each player manages one of the story’s characters. They make Moves to interact with the fiction of the world and roll dice to find out if their efforts are successful or not. Moves include things like “Act Under Fire”, “Read a Sitch”, or “Go Aggro”. One of the players, referred to as the Master of Ceremonies (MC) has the job of managing the apocalyptic world and its non-player inhabitants. The MC follows three Agendas, which are like goals for the story. They are also given a set of Principles to help them figure out which Move would make for the most compelling story. The system reliably gets people to be good storytellers and say interesting things without reverting to prescriptive systems.
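For readers who think in code, the dice mechanic itself is simple enough to sketch in a few lines. The thresholds below (10+ is a full hit, 7–9 a partial hit, 6 or less a miss) are Apocalypse World’s standard move resolution; the function name and return values are my own illustration:

```python
import random

def resolve_move(stat_modifier: int) -> str:
    """Resolve a PbtA-style move: roll two six-sided dice and add a stat.

    10+ is a full hit, 7-9 a partial hit (success with complications),
    and 6 or less is a miss, which typically hands the spotlight to the MC.
    """
    roll = random.randint(1, 6) + random.randint(1, 6) + stat_modifier
    if roll >= 10:
        return "full hit"
    if roll >= 7:
        return "partial hit"
    return "miss"

# A character with +2 in the relevant stat tries to "Act Under Fire".
print(resolve_move(2))
```

Notice how little of the game lives in the math. Everything interesting — what a partial hit costs you, what the MC does with a miss — is a judgment call guided by Agendas and Principles, not a lookup table.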
Meguey Baker also has this to say about PbtA:
The underpinnings of the design work in Apocalypse World is acknowledging the consent and agency of the participants in the conversation, treating people with courtesy, decency and respect, and understanding participants, in this case students, as the people best able to articulate their own experience. From this, we understand the roles in a productive conversation, one that creates new understanding or a shared fiction or space for authentic learning, are not all the same, and that structuring these roles in concrete and actionable ways produces better and more repeatable results than unstructured conversation.
This also takes into account different communication styles and learning modalities, as more out-going participants gravitate towards roles – called “playbooks” in AW – that are more action driven or performance-oriented, and quieter participants gravitate more towards observational or analytical roles. The affordances and constraints of each role allow the structure of the design to put conversational and psychological “guard rails” in place, giving each role its own space and tasks, allowing the participant to focus on their piece while being a contributing part of the whole. And by allowing participants to choose their own role, with more agency and control over their own degree of performance vs observation vs analysis, they can choose the role that most fits their comfort level or method of learning in that moment. Their engagement is higher vs roles or tasks being assigned that they feel less motivated to perform or complete. This also creates an increased comfort with trying out other roles in the future, as they experience positive results.
Meguey Baker, March 2021
For teacher training purposes, the Agendas, Principles, and Moves of PbtA interested me most. They were very clearly defined boundaries, and yet the system ran on judgment calls. It was a system, a technology, that didn’t try to sideline humans whenever possible. Instead, it depended on them and supported them. It both drove players in a very particular direction and gave them a staggering amount of freedom.
Armed with this working model of growth technology, I built a training curriculum aimed at equipping tutors with the skills of seasoned teachers without disrupting the good work and judgment of teachers already skilled at their craft.
I want to say very clearly here that what was done was not “gamification” of education. Gamification as it’s currently used implies the addition of external, artificial rewards (and sometimes artificial obstacles) in an effort to make the task at hand more “fun”. Attaching extrinsic rewards to learning is a devil’s bargain. It garners short-term gains and compliance at the expense of the long-term learning process. It 1) implies that the task is not worth doing for its own sake, 2) makes people reluctant to do the task when the reward is not present, and 3) has diminishing returns. Instead, this curriculum was designed more like a toolkit. It both defined what was acceptable in a learning environment, and helped tutors figure out what to do when they weren’t sure how to proceed.
In Apocalypse World, when a player doesn’t know what to do they look at their character sheet to see what moves are available to them. In this teaching toolkit system, tutors who didn’t know how to proceed looked at the list of moves and picked one.
For example, one of the Agendas the tutors were given was “Build a learning environment”. Some of the associated moves were “Limit a distraction”, “Set an expectation”, and “Give specific praise”. Another Agenda was “Help students achieve learning goals” and some of the associated moves were “Teach a hard skill”, “Teach a soft skill”, “Design and play a game”, and “Model work”. They weren’t referred to as Agendas and Moves within the training curriculum, but for simplicity’s sake, that’s what I’ll refer to them as here.
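If it helps to see the toolkit’s shape concretely, here’s a rough sketch in code. The Agenda and Move names come straight from this post; the data structure and lookup function are purely my own hypothetical illustration:

```python
# Hypothetical sketch: each Agenda maps to the Moves a tutor can
# fall back on when they're not sure how to proceed.
AGENDAS = {
    "Build a learning environment": [
        "Limit a distraction",
        "Set an expectation",
        "Give specific praise",
    ],
    "Help students achieve learning goals": [
        "Teach a hard skill",
        "Teach a soft skill",
        "Design and play a game",
        "Model work",
    ],
}

def moves_for(agenda: str) -> list[str]:
    """Return the moves available to a tutor under a given agenda."""
    return AGENDAS.get(agenda, [])

print(moves_for("Build a learning environment"))
```

The point isn’t the code, it’s the pattern: a short, clearly bounded menu of options, with the actual judgment call — which move fits this kid, right now — left entirely to the human.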
[As an aside, I’m prepared to put together a separate post detailing each move and what it accomplishes, but this post has already stretched long. If there’s enough interest, I’ll do a follow up. I’m also looking at submitting a paper about this project to the GENeration Analog Games and Education Conference, so we’ll see if anything comes of that.]
The new training curriculum was implemented in September of 2019, at the start of the school year. In December of that year, I met up with Keene to see how things were going. It had only been a few months, but there were already strong indicators for success. Staff turnover was way down, and student grades were on the rise. The program gave weight to both hard skills, such as adding fractions, as well as soft skills, such as organizing a backpack, and it was paying off. Homework wasn’t just being done: it was also being turned in. Enrollment was climbing then and has continued to climb since, primarily through word-of-mouth referrals.
For me though, the biggest victory wasn’t in the metrics. The tutors, Keene told me, weren’t referring to the kids as “the students” so much anymore. They were calling them “my students”. That, to me, is winning.
It’s been more than a year since the program launched, and Ready Study Go continues to grow and thrive to this day, though the arrival of the COVID-19 pandemic in early 2020 pretty much destroyed any reasonable kind of data collection for the project. Even so, the project gives me a deep, aching feeling whenever I think about it, in a good way.
I write cyberpunk, and I spend a lot of time looking at how technology is systematically being used to control and destroy.
It was nice to set a system loose in the world that made it a better place for a change.
“Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters. Technology is a system.”
– Ursula Franklin, The Real World of Technology
The use (and abuse) of technology to reshape the individual, society, and the world is at the beating heart of every good cyberpunk story. The flashy augmented limbs and flying vehicles get all the love on the big screen, but the underpinning systems that stratify society and turn it into one huge profit machine are more interesting to me. Such systems are already present in our world today, and are a far greater force propelling us into a cyber dystopia than the drones encroaching on our streets and skies or the latest and greatest in surveillance technology.
As a quick refresher, prescriptive technology involves looking at your final product, breaking it down into its component parts, and then figuring out the most efficient way to manufacture those parts as quickly and cheaply as possible. Humans within such a system function as parts of the machine, and are seen almost exclusively as the problem, rather than the solution. Prescriptive technologies are, by necessity, systems of compliance. We have reached a point as a society where most of the systems you encounter and inhabit on a daily basis are prescriptive technology, to the extent that it can be difficult to imagine a system of technology that isn’t prescriptive. As Franklin points out:
“While we should not forget that these prescriptive technologies are often exceedingly effective and efficient, they come with an enormous social mortgage. The mortgage means that we live in a culture of compliance, that we are ever more conditioned to accept orthodoxy as normal, and to accept that there is only one way of doing ‘it’.”
If a world engineered towards compliance — one that by its very nature limits our ability to imagine new solutions — doesn’t give us pause, then its failure as a practical solution should. Prescriptive technologies demand products that can be broken down into discrete parts, are replicable on command, and result in the exact same thing every time. That’s wonderful if you’re building toasters, but when you are designing systems aimed at human welfare, the idea that people are best served by prescriptive systems would be laughable if it didn’t result in so much harm to the very beings the systems purported to help.
Instead of centering our efforts on systems that favor technology over the human, Franklin proposes a growth model over a production model.
“Growth occurs,” she explains. “It is not made. Within a growth model, all that human intervention can do is to discover the best conditions for growth and then try to meet them.”
If that seems a little abstract, imagine a plant, say a tomato plant for example. The gardener doesn’t get tomatoes by assembling roots and a stalk and leaves and flowers and finally the tomatoes. Instead the gardener drops a seed in the ground and then creates an environment of good dirt, plenty of sunshine, and water. The seed handles the rest, and in the end you get not only delicious, sun-ripened tomatoes, but also more seeds than you could ever need to plant.
I find growth technology to be a fascinating solution to the cyber dystopia because it requires us to consider the wellbeing of the subject and the environment. It demands a holistic approach. It demands that we reckon with the many interacting systems that make up our world.
So for example, if I want to create an environment in which to grow healthy humans, I can’t simply mandate that they walk 10,000 steps a day. Instead I have to start asking difficult questions such as “Is there good space for walking?”, “Is the air fit for breathing?”, and “Is there time in the day for walking?”, and above all, “If not, why?” Prescriptive technologies rip their subjects out of their contexts, whereas growth technology doubles down on understanding the space. This is especially critical in our modern world, as we need technologies with environmental consideration inherent to them.
Rethinking Our Present
There is so much more to be said about growth technology (I cannot recommend Franklin’s The Real World of Technology highly enough) but for now, consider the space around you. Think about the systems that drive how, where, and when you work. Think about the food you eat: how it’s prepared and how it gets to you. Think about where you live, the architecture and the city planning. If you’re feeling really brave, think about your healthcare system. What was done in those spaces to make the product easier to fit into a prescriptive system? Did that make it better or worse for you, the human at the center of it all?
Prescriptive technologies are not in and of themselves bad, but they cannot be our first and only solution to a living, growing world.
One of the big features of the cyber dystopia is that everything has a price. Every market is eaten up by giant corporations and any scrappy underdogs who set out to make a new market quickly get gobbled up or put out of business by the corporate titans that run the world.
It’s because of this that free tools for creators bring me deep and abiding joy. They’re like independent acts of rebellion against the belief that spending power is the only power that matters. These tools help level the playing field. They’re a testament to the good out there in the world. They represent a whole ARMY of people who put in work and hours and resources and asked nothing in return so that other creators could be empowered.
I will grant you that many “free” tools out there are just lead generators to try and get you to spend money later on down the road. It’s also worth noting that I’m not saying every tool out there should be free. People gotta eat, and pay rent, and get sick sometimes, and take breaks, and all the other things that cost money in our society.
So with those disclaimers in mind, let’s get busy with our rebellion. Here’s a collection of free tools and resources for creators new and experienced alike to make use of.
NASA has released a large collection of space photos to the public, many of which are accompanied by a short explanation of the phenomenon in the picture. Check the individual license on each picture before using it.
That’s all the free tools I have for now. Obviously, this is just the tip of the iceberg in terms of what’s available out there. If there’s a tool you’d like to see added to this list, shoot me a message on social media and I’ll add it when I get a chance. Happy creating, everyone!
This is the story of a small act of kindness that made a difference.
I was maybe seven or eight years old, living in Brussels, and I was taking the metro with my mom and little sister. We were little, so naturally, we both suddenly took off running. I don’t remember why. I think we were racing to get to the platform first, or maybe we just wanted to go fast.
So there we were, two little girls running as fast as they could down an escalator with my mom chasing after us. There was a metro just leaving the station as we came pounding down. It started to pick up speed and then, to our collective surprise, it slowed down again.
The metro driver made eye contact with my mom and did the universal hand gesture for “This your train?” I guess it looked like we were all running to make the metro.
My mom shook her head, smiled a thank you, and the metro pulled out of the station. If I ever saw that driver again, I didn’t recognize him.
I remember my mom saying, “I have never seen anyone stop like that before.” I don’t think it ever happened again, but I think about it a lot. It was a formative moment for little me.
It made me believe that every now and again, you catch a break. An un-looked-for bit of goodness. Every day there is a chance, no matter how small, that someone might be kind, instead of just doing their job.
It made me a little less afraid and by proxy, a little more willing to bet on people. It was a really small thing, and I bet you that metro driver doesn’t remember it, but I still do.
“The act of rebellion left to us in a cyber dystopia is that we give a damn about people, even if it profits us nothing. Especially when it profits us nothing.”
I was living in LA at the time, commuting to a cubicle job down in Irvine on the five every morning, reading “The E-Myth Revisited”, and teaching myself to code on and off when I had the brain cells to spare. When you look at it that way, it seems almost inevitable that I started writing cyberpunk. The Glitch Logs was almost a foregone conclusion. What took me by surprise, however, was stumbling across a description of The Glitch Logs thought-project in a book written the same year I was born, by a Canadian experimental physicist who I will tragically never get to meet, due to her passing in 2016.
Ursula M. Franklin’s “The Real World of Technology” takes on the formidable task of looking at how technology has shaped and continues to shape the world in which we live. She does so clearly and eloquently, and there’s a lot I have to learn not only from examining what she says, but also how she says it. One of the interesting ideas of the book is the effect of what she calls “prescriptive technology” on society.
In order to understand this idea, first consider technology not exclusively as a collection of electronic gadgets or lines of code, but rather as ways of doing things. These systems sometimes involve tools, and sometimes do not.
Prescriptive technology is a system in which the doing of a thing is broken down into clearly identifiable steps. Each step is carried out by a different worker or group of workers who are required to have only that one particular skill. The goal of such a system is a well defined, replicable product that reliably gives you the same result over and over again. If you’re picturing a factory assembly line right now, then yes, that’s a good touch-point.
Some hallmarks of prescriptive technology are that it is a-contextual and that its necessary effect is to enforce compliance. Prescriptive technologies also have a tendency to replicate themselves, designing fresh prescriptive technologies for any problems they encounter along the way.
(Quick note before we go any further: I don’t hate prescriptive technology. It’s really useful in some cases. It does, however, create significant, systemic problems when applied too broadly or incorrectly.)
Building Your Business Like a Factory
To anyone who’s read “The E-Myth Revisited” by one Michael E. Gerber, this should sound very familiar. To anyone who hasn’t, let me give you a quick rundown. The point of the book is pretty well summarized in its delightful tagline “Why Most Small Businesses Don’t Work And What To Do About It”.
I’ll spare you the suspense by giving you the “why” and the “what” right now: they don’t work because they aren’t well defined or systematized, and the way to fix them is to define and systematize them.
Specifically, the author wants you to find out what makes your product unique, and then break down every single facet of your business into its modular components, then quantify, refine, and categorize those components until a literate monkey with a three-ring-binder could do any given task. Do some vision-casting for your employees about how they really-super-are making the world a better place by being a cog in your machine so that they’ll comply with what you tell them to do. After that, automate what you can, hire what you can’t, and boom, you’re in business, baby.
Lather, rinse, repeat.
If you think that sounds suspiciously like a franchising model, you’d be right.
[There, I just saved you the $19.10 (on sale!) it’s going for on Amazon at the time of the writing of this blog. Also, in case that glowing recommendation prompts you to pick up the book, be warned that it’s both condescending and misogynistic. Recommend watching Mad Max: Fury Road in the background or something similar, for balance.]
In all fairness, there are some useful ideas in the book. Read it if you want to understand how most businesses in America are run right now. Come to think of it, the condescending tone might be a pretty good indicator too. But anyway.
The E-Myth Re-Examined
The difference between me and the author of The E-Myth Revisited is that Mr. Gerber is clearly delighted with the process he designed, whereas I shifted from animated interest to slowly growing horror the longer I considered what he was in effect proposing.
You’ll recall that I was attempting to teach myself to code during this time, and the process Gerber described seemed suspiciously similar to the logic of code. Mr. Gerber proposes that all businesses design aggressive prescriptive technologies for themselves and their customers.
In effect, such businesses would become giant computer programs, run on the code of their carefully crafted policies and handbooks which describe in minute and painstaking detail exactly how a thing is done. What human elements remain after automation are still interchangeable parts in a machine.
This is very good for the machine, and very bad for the human.
Such a system assumes that people are the problem, and the system is designed to weed out problems.
As Franklin puts it, “Many technological systems, when examined for context and overall design, are basically anti-people. People are seen as a source of problems while technology is seen as a source of solutions.”
It is only a matter of time until we build a machine that can do “it” — whatever tiny incremental part “it” is in the system — harder, better, faster, stronger. We are engineering ourselves out of the systems we designed, and historically, that has not worked out well for the majority of us as humans.
When machines take over the job a human previously held in our society, the human is not sent home to rest, or make art, or contribute in some way to society. Instead, the human is out of a job and viewed as a drain on society. The extra resources are not put to work for the general good; they are hoarded by the system’s owners, who sometimes also happen to be the designers.
The AI of Business
I don’t think I’m spoiling anything if I tell you that one of the premises of The Glitch Logs is that corporations run the world. It’s one of the big markers of the cyberpunk genre, and doesn’t seem that unrealistic to me, given the current push to treat “government as business”.
(Incidentally, Franklin also has a lot to say on how prescriptive technologies have reshaped modern governments, but that’s another blog post for another time.)
I conceived of these corporations first as behemoths of our own making vying for dominance in a world becoming rapidly inhospitable to humanity. As my thought project developed, I later came to think of them instead as slow-moving AI following their prime directive — to amass wealth — regardless of consequence, scale, or scope. The two are not particularly dissimilar.
No one wants to live under such a system — those that say they do actually want to be living just outside it, where they profit from it, but the rules of the system do not apply to them.
The pressing question is “what are we going to do about it?”
Systemic problems aren’t easy to unravel, but there are a few places we can start.
We can act in opposition to the mindset that a person’s value is determined by the work they produce. The act of rebellion left to us in a cyber dystopia is that we give a damn about people, even if it profits us nothing. Especially when it profits us nothing.
Beyond that, we stop building for the goal of what Franklin calls “divisible goods” such as money, and start designing instead for “indivisible goods” like clean air, and justice. I’m also extremely interested in prescriptive technology’s counterpart, “growth technology”. There’s a really interesting case study I’m privileged to be a part of going on right now that shows some very promising results. More on that next time.
In the meantime, go out there and think about technology. Think about the systems and who or what they benefit. And the next time you’re put in a position to design a system, big or small, for a huge company, or a child’s daily routine, think about who it’s good for. Reach out. Help someone even if it doesn’t benefit you.
Changing the way we think about technology as a solution.
I spend a lot of my time researching/thinking about systems and technology and how they impact us both on a global and personal level. There’s been a lot written about what’s wrong with existing structures, but we’re a bit starved for practical solutions for those without political power or tremendous social clout. I want to talk about a shift in how we think about problems that yields some interesting results.
The People Problem
There is a tendency in modern society to treat people as the problem and technology as the solution. In education, for example, an under-performing class is more often “treated” by the purchase of more computers, iPads, or learning software than by the hiring of additional teachers, aides, and specialists.
In healthcare, instead of giving doctors more time to dedicate to patient care, hospitals are more likely to invest in updating databases to reclassify ailments and the people who have them. Very often patients themselves will be blamed for their illness. The prevailing view is that it is not the technology of drugs or diets or workout routines that has failed the patient, but the patient who has failed the technology.
One of the results of this thinking is that we have adopted a mindset which dictates that humans adapt to technology, instead of creating technology properly adapted to humans, their context, and their environment.
(This paradigm is also a natural consequence of what Ursula Franklin called “prescriptive technology” and our modern concepts of scalability, but that’s another discussion for another time.)
One famous example of humans-adapting-to-tech is the keyboard layout. Ever wonder why the letters are laid out in such a nonsensical manner? Early typewriters suffered from a problem where the keys and hammers would jam if a person typed too quickly. The typewriter manufacturer studied which keys were most often struck in sequence with one another and then split those pairs up as much as possible.
This made typing slower and harder to learn, and it required more effort on the part of the user, but it solved the technology’s problem. Ironically, the problem of jamming keys and hammers has long since vanished, yet we are still stuck with the same nonsensical keyboard.
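The kind of frequency study described above is easy to sketch: tally which ordered letter pairs appear most often in running text. Here's a minimal, illustrative version in Python (the sample sentence is made up; a real study would use a large corpus):

```python
from collections import Counter

def digram_counts(text):
    """Count how often each ordered pair of letters appears in sequence."""
    letters = [c for c in text.lower() if c.isalpha()]
    return Counter(zip(letters, letters[1:]))

sample = "the quick brown fox jumps over the lazy dog then the fox rests"
for pair, count in digram_counts(sample).most_common(3):
    print("".join(pair), count)
```

Run over enough English, a count like this surfaces pairs such as “th” and “he” — and, as the story goes, the designers then arranged the machine so the type bars for the most frequent pairs wouldn't sit next to each other.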
The modern office space is another example of this. It is not good for a human to sit at a desk 8+ hours a day, staring at a screen at a steady 70–73°F, carefully isolated from any change in season or sunlight. It is, however, ideal for the computer, and especially for the corporate system profiting from the human’s labor.
(There’s a lot more to be said about the office space example, but if I get going on corporations, I will never stop. I write cyberpunk for a reason.)
A Practical Approach
I propose to you instead that when we are trying to solve problems, we consider instead the human, with as much contextual specificity as possible, and then begin to build our solutions around that.
Obviously the way we do work in the broader sense is long overdue for an overhaul, but I mean specifically the way we look at our own lives — at the things we do have control over.
Let me give you an example, and then we’ll wrap up this post. A friend of mine hated doing the dishes. Her husband did too, so the dishes piled up. The clutter was stressful to deal with, took up space, and was a huge energy drain. They tried setting up schedules, taking turns, looking into better storage options, etc. Money was tight, so buying off the problem (capitalism’s favorite solution) wasn’t an option.
In an act of frustration and brilliance, my friend looked at herself, her problem, and the dishes. She didn’t want to do the dishes. So, instead of trying to force herself into another energy-consuming system, she simply got rid of the dishes. They kept two plates, four cups, and a few pieces of silverware.
The solution struck me as brilliant on several fronts.
1) When it was time to eat, there were at most two plates that needed washing.
2) The extra plates were donated so instead of a new plate being made — consuming resources and encouraging a giant plate company somewhere to make more — the plates (the resources that had already been consumed) were re-used.
3) The plates went to a family with few resources, freeing up money they would otherwise have spent on plates. In other words, the plates helped out the local community.
4) When friends came over, they brought their own plates, which made everyone feel closer and like they were more a part of the evening.
5) My friend had the extra energy to deal with everything else that came her way in the day. More spoons, if you will (hah hah).
The point is not that everyone should embrace minimalism. Quite the opposite. This is not a good solution for everyone, maybe even most people. The point is this: when faced with a problem, consider the human in the context of the problem. Do not assume the human is the thing that needs to be fixed or side-lined. New tech, or conformity to existing tech, is not always a good solution. People first.
There’s so much more to be said, but I’m gonna wrap this up here. If you have further examples of the principle in action, I want to hear them. Also, if there’s a particular application of this idea to your field of expertise, tell me about it.
I’m still deep in the research trenches pulling everything into a coherent, useful whole. Fresh insights from outside my fields of expertise are incredibly welcome.
Beyond that, we stop building for the goal of what Franklin calls “divisible goods” such as money, and start designing instead for “indivisible goods” like clean air and justice. I’m also extremely interested in prescriptive technology’s counterpart, “growth technology”. There’s a really interesting case study going on right now that I’m privileged to be a part of, and it’s showing some very promising results. More on that next time.

In the meantime, go out there and think about technology. Think about the systems and who or what they benefit. And the next time you’re put in a position to design a system, big or small, whether for a huge company or for a child’s daily routine, think about who it’s good for. Reach out. Help someone even if it doesn’t benefit you.