What exactly is technism?
It is a system defined by automation, particularly the pursuit of maximal automation. The more faculties of society that are automated, the more technist that society becomes.
A technist is a person who seeks a highly or fully automated society. The logical endpoint of this philosophy is a point where humanity is disenfranchised from all processes, instead living only to profit from the labor of machines.
In that regard, technism is the opposite of neo-Luddism and primitivism.
The economic philosophy behind technism is known as Vyrdism, which is the belief that humanity should actively exploit the labor of machines, with the common agreement being that we should pursue cooperative ownership. Vyrdists, in the short amount of time they’ve been around, have already sprouted a few branches.
Market Vyrdism describes a society that fuses free market ideals with technism and/or Vyrdism. It bears most resemblance to mutualism and syndicalism. Private property is protected. Humans may no longer be the dominant laborers of society, but they remain in near full control of political and economic matters.
Marxism-Vyrdism describes a society that fuses the ideals of Marxism (perhaps even Marxism-Leninism) with Vyrdism— all automation is collectively owned, with a state apparatus (usually consisting of artificial intelligence, a la Cybersyn) existing to centrally plan the economy. Private property is non-existent. Despite this, humans remain in near full control of political and economic matters.
Pure Technism describes a society that adapts the concept of the dictatorship of the proletariat, replacing the proletariat with the technotariat— automata, both hardware and software, which displace the traditional productive roles of the proletariat. In this case, humanity is completely or almost completely disenfranchised from political and economic matters as automata develop full ownership of society.

Dictatorship of the Technotariat

This is a term I’ve already seen being passed around. This works off pure-technism and can be defined in a very simple and slightly ominous way— the means of production own themselves. This doesn’t mean that hammers become sadistic foremen whipping their abused human slaves— it refers to a state of affairs when synthetic intelligences possess ownership over society and direct social, political, and economic matters. In such a state, humanity would no longer have meaningful ownership over private property, even though private property itself may not have actually been abolished.
AI simply commanding and controlling an economy doesn’t necessarily mean we’ve arrived at this new dictatorship. AI has to own the means of production (essentially, itself).
Unlike Vyrdism, where society is structured like an Athenian slave state (with humans and sapient AI at the top and sub-sapient or even non-sentient technotarians acting as slave laborers beneath us), a pure-technist society sees humanity exist purely at the whims of synthetic intelligence. It is the logical endpoint of universal basic income, where we do not own anything but are given some capital to live as we please.

To recap: technism is the pursuit of automation, especially full automation. Capitalist and socialist societies ever since the Industrial Revolution could be described as, in some manner, technist. However, technists seek to fully replace the working class with automata known as “technotarians”, whereas most capitalists and socialists seek to use automata to supplement human labor. Vyrdism is a partial fusion of technism with capitalism and socialism (more so one way or the other depending on whether you’re a Market or a Marxist Vyrdist), which is only possible when technology reaches a point where humans do not need to be directly involved in the economy itself. Pure technism is the full cession of the ownership of the means of production to the means of production themselves, which is only possible if the means are artificially intelligent to a particular point I’ve defined as being “artilectual.” The difference between an AI/AGI and an artilect is that a general AI is an ultra-versatile tool while an artilect is a synthetic person. Of course, when I say “an artilect”, that doesn’t imply a physically discrete person as we would recognize one— with a tiny primate-esque body and a limited brain, with very-much human aspirations and flaws. In fact, an artilect could be an entire collective of AI that exists across the planet, with control over nearly all robots.

A pure-technist society is not the same as a Vyrdist society. Not even a “Marxist-Vyrdist” society. Vyrdism involves human ownership over the means of production when the means are capable of working without any human interaction or involvement. Pure-technism is when humans do not own the means of production, rendering us dependent upon the generosity of the machines.

Because of these qualifiers, it is not wrong to say that any automation-based economic system is technist. This includes resource-based economies such as the one proposed by the Venus Project. If you take Marxism-Vyrdism to its logical conclusion, you will arrive at Fully Automated Luxury Communism. All of these are considered “post-scarcity economics”. All of them are technist.

Joint Economy vs. Mixed Economy

So let me take a minute to discuss the difference between a “joint economy” and a “mixed economy.”

Back when I was doing the technostist wiki (“technostism” being a poor precursor to the current term “technism”), I pointed out the difference between market socialism and mutualism on the one hand and mixed economies that claimed to fuse “capitalism and socialism” on the other. Mixed economies fuse state socialism and free-market capitalism; I’ve yet to see the term used to describe a place that fuses market socialism and free-market capitalism. So I decided to take the initiative and create a new term myself: “joint economy.”

A joint economy is one that fuses capitalist and worker (and, eventually, automata) ownership of the means of production to some great degree. It has nothing to do with the government— the “socialist” aspects in this case are purely economic. When a nation has a joint economy, it has a healthy mixture of traditional/authoritarian enterprises and worker cooperatives and other democratic businesses (worker-owned and/or worker-managed), perhaps even a cooperative federation or syndicate. You’d still have powerful corporations, but it wouldn’t be a given that all corporations are authoritarian in nature. The Basque Country in Spain is a good example— Mondragon is an absolutely massive corporation, but it’s entirely worker-owned. This means the Basque Country has a “joint economy”. A joint mixed economy is one where you have market socialism and market capitalism alongside state regulations.

This is naturally important in a technist society because we’re fast approaching a time when there’s a third fundamental owner of the means of production, and defining their relationship to the means and to society at large is necessary.
Just as present-day joint economies are among the freest possible, an economy where businesses are variously owned by individuals, collectives, and machines, rather than solely by one of the three, will see the greatest prosperity.

In a future post, I will detail why radical decentralization and ultra-strong encryption must be a goal for any budding technist, as well as how totalitarianism becomes perfected in a degenerated technist society.



In review: technism is the pursuit of capital imbued with intelligence. The logical endpoint is the point where intelligent capital owns society and all property, thus marking a state of absolute automation.

Evolution of Automation: A Technist Perspective

Futuristic technology has always been defined by being more efficient than previous tools. Where did this evolution begin, and where will it end?

A previous article of mine laid out the basics of my theory on the different grades of automation and technology at large. A topic as complex as this one (no pun intended!) requires much deeper explanation and a more in-depth expression of thought. Thus, I will dedicate this particular post towards expanding upon these concepts.

Technist thought dictates that all of human history can be summarized as “humans seeking increased productivity with less energy”. Reduced energy expenditure and increased efficiency drive evolution— the “fittest” that Herbert Spencer wrote of in 1864 as the key to survival is defined not by intelligence or strength, but by efficiency. Evolution, as a semi-random phenomenon, leads to life-forms that expend the least energy in order to maximize their chances at reproduction in a particular environment. This is usually why species go extinct— their methods of reproduction are not as efficient as they could be, meaning they waste too much energy for too little return. When a new predator or existential threat arises, what had been the most efficient model before becomes obsolete. If the animal does not adapt and evolve quickly enough— finding a new way to survive, and doing so efficiently enough that it does not exhaust its food supply— its genes die off permanently.
The universe itself seeks the lowest-energy state at every opportunity, from subatomic particles all the way up to the largest structures known to science.
If we were to abandon the chase for greater efficiency, we’d effectively damn ourselves to utter failure. This isn’t because things are inevitable, but because of the nature of the chase. It’s like running across a non-Newtonian liquid— you need to keep running because the quick succession of shocks causes the liquid to act as a solid, and thus you can keep moving forward. If you were to slow or stop at any point, the liquid would lose its solid characteristics and you would sink.

This is how real life works. If you’re scared of sinking, the time to second-guess crossing the pool of non-Newtonian liquid was before you stepped onto it. Except with life, we don’t have that option— we have to keep moving forward. If we regressed, the foundations of our society would explode apart. Even if we were to slow ourselves and be more deliberate in our progress, the consequences could be extremely dire— so dire that they threaten to undo what we’ve done. This is one reason why I’ve never given up being a Singularitarian, despite my belief that it will not be an excessively magical turning point in our evolution, and despite the words of those who claim that we should avoid the Singularity— it’s too late for that. If you didn’t want to experience the Singularity, then curse your forefathers for creating digital technology and mechanical tools. Curse your distant siblings for reproducing at such a high rate and necessitating more efficient machines to care for them. Curse evolution itself for being so insidious as to always follow the path of least resistance.

Efficiency. That’s the word of the day. That’s what futuristic sci-tech really entails— greater efficiency. Things are “futuristic” because they’re, in some way, more efficient than what we had in the past. We approach the Singularity because it’s a more efficient paradigm.

For us humans, the evolution towards maximum efficiency began before we were even human. Humanity evolved due to circumstances that led a species of hominid to find an incredibly efficient way to perpetuate its genes— tool usage. Though we may fancy ourselves a force of nature with only our bare bodies, without our tools we are just another species of ape. Tools allowed us to hunt prey more efficiently. Evidence abounds that australopithecines and Paranthropus were likely scavengers who seldom used what we’d recognize as stone-age tools. They were prey— and in the savannas of eastern and southern Africa, they were forced to evolve bipedalism to more efficiently escape predators and use their primitive tools.

With the arrival of the first humans, such as Homo habilis, we made the transition from prey to predator. Our tools became vastly more complex as our hands developed finer motor skills (accompanied by increasing brain size). To the untrained eye today, the difference between Homo habilis tools and Australopithecus afarensis tools is negligible. What matters is how they made these tools. So far, there’s little evidence to suggest that australopithecines ever widely made their own tools; they found rubble and rocks that looked useful and used them. Through millions of years of further development (perhaps validating Terence McKenna’s Stoned Ape theory?), humans managed to actively machine our own tools. If a particular rock wasn’t useful to us, we would make it useful by knapping it into a flint head or a blunt hammer. We altered natural objects to fit our own needs.

This is how we made the transition from animal of prey to master predator and eventually reached the top of the food chain.

However, evolution did not end with the arrival of Homo habilis and early manufacturing. Our tool usage allowed us to do much more with much less energy, and as a result of our improving diets, our bodies kept becoming more efficient. Our brains grew so that we’d be able to develop ever-more advanced tools. The species with the best tools worked the least and thus needed the least amount of food to survive— one well-aimed spear could drop a mammoth. The archaic species who used simpler tools had to do more work, requiring greater amounts of food across smaller populations. Australopithecines couldn’t keep up with their human cousins and went extinct not long after we arrived. Their methods of hunting were primitive even by the standards of the day— as aforementioned, they were a genus of scavengers more than they were hunters. They lacked the brainpower to create exceedingly complex tools, meaning that they were essentially forced to choose between throwing rocks at mammoths or waiting for them to die off of other causes— sometimes that cause being humans killing one and losing track of it.

Human species diverged, with some evolving to meet the requirements of their new environments— Neanderthals and Denisovans evolving to sustain themselves in the harsher climates of Eurasia, while the remaining Erectus and Heidelbergensis/proto-Sapiens populations remained in Africa. We all developed sapience, but circumstances doomed every species besides our own, the Sapiens. We still don’t quite understand all the circumstances that led to the demise of our brother and sister humans, but it’s most likely due to increased competition with us as we spread out from Africa. Neanderthals lasted the longest, and much of the paleoarchaeology suggests that they were actually more advanced tool creators than we were at the time. Alas, the environments in which they evolved damned them to more difficult childbirth and, thus, lower birthrates, which proved fatal when they were finally forced to face us. Sapiens evolved in warm, sunny, tropical Africa, which had plentiful food and easy prey. Childbirth became easier for us as our children were born with smaller brains that grew with age. Neanderthals evolved in cold, dark Eurasia, where food was much more difficult to find. This meant that their populations had to remain smaller than our own just so they could survive, lest they overpopulate, consume all available prey too soon, and doom themselves to a starved extinction. Of course, this also meant that they had to be more creative than we were, since their prey was often more difficult to kill and harder to come across.

Though we interbred over the years, they finally died out around 30,000 years ago, leaving only ourselves and one mysterious, soon-extinct species— Homo floresiensis. We had no competition but ourselves, and our brains had reached a critical mass, allowing us to create tools of such high complexity that we were soon able to begin affecting the planet itself through the rise of agriculture.

Again, to ourselves, these tools seem cartoonishly primitive, but if a trained eye compared a Sapiens’ tool circa 10,000 BC to an Australopithecus tool circa 2.7 million BC, they would find the former to be infinitely more skillfully created.

When the last ice age ended, all possible threats to our development faded, and our abilities as a species skyrocketed.

Yet it still took another 7,000 years for us to begin transitioning to the next grade of automation.

All this time, through all our evolutionary twists and turns, each and every species and genus mentioned above only ever used Grade-I automation.

You only need one person to create a Grade-I tool, though societal memetics and cultural transmission can assist with developing further complexity— that is, learning how to create a tool using methods passed down over generations of previous experimentation.

Let’s use myself as an example. If you threw me out into the African savanna to reconnect with my proto-human ancestors, you would watch me struggle to survive using tools that are squarely Grade-I in nature. Some joke that, if they were sent back in time, they would become living gods by recreating our magic-like modern technology. As I will explain in my discussion of Grade-III automation, that’s bullshit. I could live in the savanna for the rest of my days and never be able to recreate electric lights or my Android phone. I will, however, be capable of creating hunting tools and basic farming equipment. I will be able to create wheels and sustain fire, and I will be able to create shelter.

These things are examples of Grade-I automation. I don’t use my bare hands to farm maize; I use farm tools. I don’t use my bare hands to kill animals; I use weapons. If I spend my life practicing, I could create some impressive tools to ease the burden of labor. The maximum amount of energy needed to create all the tools I need to survive comes from food. The most advanced tool requires no energy beyond what I expend to make it work. Society, if it exists, needs little more than food and sunlight to fuel itself.

That’s Grade-I automation in a nutshell: I am all I need. Others can assist, but my hands fill my stomach. I create and understand all my tools. I understand that, when I create a scythe, it’s to cut grass. When I create a wheel, it’s to aid in transporting items or myself. When I create clothes, it’s just for me to wear.

At the end of this evolution, Grade-I automation allows one to create an entire agrarian civilization. However, while our tools became greatly complex, they were still in the same grade as tools used by monkeys, birds, and cephalopods. As our societies became ever more complex, our old tools were no longer efficient enough to support our need for increased productivity. Our populations kept rising, and civilizations became connected by more threads of varying materials. You couldn’t support these societies just with hand-pushed plows, spears, and sickles. And because of this, society required tools that took more than just one hand and one mind to create.

Grade-II automation finally arrives when we require and create complex machines to keep society running. Here, cultural transmission begins to diffuse. My society began with just myself, but now there are multiple people living in a little city of mud-huts we’ve created. Over time, our agrarian collectives begin producing more than enough food for us to subsist upon. The population of my personal civilization creeps upward. We begin considering new ways to produce more food with fewer hands to support this higher population— simply putting seeds in the ground and slaughtering cattle isn’t good enough. Those who generate the biggest surpluses are able to trade their goods to others, transactions that result in the creation of money as a medium of exchange to make the whole system more efficient. There’s an incentive to generate even bigger surpluses to sell, and this requires more labor than society can provide— despite our increasing population. We need more labor, but if we increase our population, we’ll need more goods, which means we’ll need more labor. Without Grade-II automation, we’ll become trapped in a cycle of perpetual poverty. But we will always seek out increased efficiency and productivity because we naturally seek to expend as little energy as possible. If we were to keep our traditional ways, we’d be acting irrationally and endangering our own survival as a species.

In order to create labor-saving devices for workers to use, we needed specialized labor. Not everyone could create these tools— the agrarian society would collapse without peasants and farmers— and even if they could, there’s a new problem: these new tools require several hands to create. Certain materials are better to use than others. Iron is superior to wood; bronze is more useful than stone for various items. However, if I were tasked with creating these new, futuristic tools, I’d be stumped. I was raised to be a farmer. Even if I were trained to create a mechanical plow, I’d still be stumped— how on Earth do I create steel, exactly? Where does one get steel? How does a clockwork analog computer work? How did the Greeks create the Antikythera mechanism? I don’t know! How does one create a steam engine? I don’t know! I could learn, but I couldn’t be responsible for all of it myself. I need help. I could create the skeleton of a farming mechanism, but I need someone else to machine the steel teeth of this beautiful plow. I need someone to refine the iron needed to create steel. I need someone to mine the iron.

In a society that’s beginning to create early Grade-II technologies, specialization is fast becoming a major problem that needs rectification. The way to rectify it is with mercantilism and globalism. Naturally, the “global” economy of my society isn’t very global in practice. There are multiple countries that bring me what I need, but usually what I need can be created in my own nation by native hands. I just need to train those native hands and let some practice these new trades to figure out how to better create the tools and gadgets they need to use and sell.

This paleoanthropological discussion became unexpectedly socioeconomic in nature, but that’s the nature of our evolution. The evolution of automation and tool usage is directly related to the evolution of humanity just as it is directly related to the evolution of social orders and economic systems.

In my basic article introducing the graded concept, I mentioned what a properly advanced Grade-II society would look like: something akin to the 1800s, right up to and including the point when our tools became electrically powered.

Grade-II tools are too complex for any one person to create and fully understand, but if you had a small team’s worth of people, it becomes more than possible. Thus, you’re able to employ more people while also producing a surplus of goods. It takes only one hand to craft a hoe (don’t start), but it takes many hands in many places to construct a tractor. Productivity skyrockets, and one becomes capable of supporting exponentially larger populations as our systems of agriculture, industry, and economic activity become more efficient. I have more surpluses I use to employ others, and I can give surpluses back to those I employ, allowing more surpluses to be made all around.

Millions of jobs are made as machines require specialized labor to oversee different parts of their usage— refinement of basic materials, construction of the tool itself, maintenance of the tool, discarding broken parts, etc.

But there is one basic factor to remember in all this— every machine requires a human brain to work, even if machine brawn can do the work of 50 men. Even if I have a proper and practical Rube Goldberg machine as a tool, it still requires me to run it.

In the 1700s and 1800s, machines underwent an explosion of complexity thanks to radically new manufacturing methods and, eventually, the usage of electricity. Ever since the early days of civilization, we had learned to harness mechanical energy for our machines— energy greater than what a single person could put out. By the time the Industrial Revolution exploded onto the scene, we had begun using electrical generation to do what mere mechanical power could never achieve. Electricity allowed us to move past mechanical resistance and achieve far better than break-even industrial production.

It used to be that 50 people produced enough goods for 50-55 people to consume— essentially making everything subsistence-based. Over time, this slowly increased as more efficient production methods came about, but there was never any quantum leap in productivity. With the Industrial Revolution, all of a sudden 50 people could create enough goods to meet the needs of 500 or more.
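That shift amounts to roughly a ninefold jump in output per worker. A quick back-of-the-envelope check, using the purely illustrative numbers from the paragraph above (these are not empirical figures):

```python
# Illustrative productivity figures from the text above, not empirical data.
pre_industrial_workers = 50
pre_industrial_supported = 55   # subsistence: barely more than the workers themselves
industrial_workers = 50
industrial_supported = 500      # after the Industrial Revolution

pre_ratio = pre_industrial_supported / pre_industrial_workers    # 1.1 people supported per worker
post_ratio = industrial_supported / industrial_workers           # 10.0 people supported per worker

print(f"Pre-industrial: {pre_ratio:.1f} supported per worker")
print(f"Industrial:     {post_ratio:.1f} supported per worker")
print(f"Productivity multiplier: {post_ratio / pre_ratio:.1f}x")
```

The exact figures are rhetorical, but the point stands: the surplus per worker goes from a few percent to an order of magnitude.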

More than that, we began creating tools that were so easy to use that unskilled laborers could outproduce the most skilled laborers of generations prior. This is what wrought the Luddites— contrary to popular belief, the Luddites feared the weakening of organized skilled labor and the depression of wages; it just happened that machines were the reason skilled laborers faced such an existential threat. After all, while specialization was needed to create these new tools, one didn’t actually need to be a genius to operate them. Thus, the Luddites saw only one solution: destroy the machines. No machines, no surplus of unskilled labor, no low wages.

The Luddites’ train of thought was on the right path, but they completely overlooked the possibility that the increased number of low-skill, low-wage laborers would lead to a higher demand for high-skill, high-wage laborers to maintain these machines and create new ones. Overall, productivity would continue increasing all around and even more people would become employed.

The Luddites’ unfounded fears have historically been codified as what economists refer to as the Luddite Fallacy— the fear that new technology will lead to mass unemployment. Throughout history, the exact opposite has always proven true, and yet we keep falling for it.

Certainly, it’ll always prove true, right? Times did begin to change as society’s increased complexity required even more specialized tools, but in the end, the feared mass unemployment of all humans has not occurred— not even at the moment some first expected it to: the arrival of Grade-III automation.

Grade-III automation is not defined by being physical, as Grade-II was. In fact, it is with this grade that cognitive processes began being automated. This was a sea change in the nature of tool usage, as for the first time, we began creating tools that could, in some arcane way, “think.”

Not that “think” is the best word to use. A better word might be “compute”. And that’s the symbol of Grade-III automation— computers. Machines that compute, crunching huge numbers to accomplish cognitive-based tasks. Just by running some electricity through these machines, we are able to calculate processes that would stump even the most well-trained humans.

Computers aren’t necessarily a modern innovation— abacuses have existed since antiquity, and analog computing was known even to the Greeks, as aforementioned. Looms utilized guiding patterns to automate weaving. Despite this, none of these machines could run programs— humans were still required to actively exert energy to use these processes. Later electrical and mechanical computers were somewhat capable of general computation (the first Turing-complete computer, Babbage’s Analytical Engine, was conceived in the 1830s), but for the most part they were nowhere near as capable as their digital counterparts, not least because they were not easily reprogrammable.

Digital computers lacked the drawbacks of analog computers and were so incredibly versatile that even the first creators could not fathom all their uses.

With the rise of computers, we could program algorithms to run automatically without supervision. This meant there were tools we could allow computers to control— tools that were previously only capable of being run by humans. For most tools, we didn’t digitally automate all processes. Take cars: while the earliest cars were purely mechanical in nature and required the utmost attention for every action, more recent automobiles possess features that let them stay in their lanes (lane assist), keep a set speed (cruise control), drive themselves in certain situations (autopilot), and even achieve full autonomy (though this remains experimental). Nevertheless, all commercial cars still require human drivers. And even when we do create fully autonomous commercial vehicles, their production won’t be fully automated. Nor will their maintenance.

And here’s where specialization simultaneously becomes more and less important than ever before.

Grade-III automation requires more than just a small group of people to create. Even advanced engineers and veritable geniuses cannot fully understand every facet of a single computer. The low-skilled workers fabricating computer chips in Thailand can’t begin to understand how each part of the chips they’re creating works together to form a personal computer. All the many parts of a computer chip come together to form the apex of technological complexity.

In my personal civilization, I can’t create a microprocessor in my bedroom. I don’t have the technology, and I don’t know how to create that technology. I need others to do that for me, and no single person I employ will know how to create all parts of a computer either. Those who design transistors don’t know how to refine petroleum to create the computer tower, and the programmer who designs the many programs the computer runs won’t know how to create the coolant that keeps the computer running smoothly. Not to mention, the programmer is not the only programmer— there are dozens of programmers working together just to get individual programs to run, let alone the whole operating system.

Here is where globalism becomes necessary for society to function. Before, you needed more than one group of people to create highly complex tools and machines; with Grade-III automation, it truly takes a planetary effort just to get an iPad on your lap. You need more than just engineers— you need scientists of every stripe to come up with the concepts necessary to understand how to create all these many technologies.

Once it all comes together, however, the payoff is extraordinary, even by the standards of previous eras. Singular people are able to produce enough to satisfy the needs of thousands, and businesses can attain greater wealth than whole nations. The amount of labor needed to create these tools is immense, but the machines themselves begin taking up larger and larger shares of that labor. And because of the sheer amount of surpluses created, billions of jobs are created, with billions more possible. We can afford to employ all these people because we’ve created that much wealth.

I don’t need to understand the product I sell, nor do I need to create it; I just need to organize a collective of people to see to its production and sales. We call these collectives “businesses”— corporations, enterprises, cooperatives, what have you.

Society becomes incredibly complicated, so complicated that whole fields of study are created just to understand a single facet of our civilization. Naturally, this leads to alienation. People feel as if they are just a cog in the machine, working for the Man and getting nothing out of it. And true, many business owners and government types are far, far less than altruistic, often funding conflicts and strife in order to profit from the natural resources needed to create tools to sell more goods and services. Exploitation is not just a Marxist conspiracy; it’s definitely real. Whether it’s avoidable is another debate entirely— socialist experiments and regimes across the world have been tried, and they’ve only exacerbated the same abuses they claimed to be fighting. Merely changing who owns the means of production, changing who owns the machines doesn’t change the fact that the complex nature of society will always lead back to extreme alienation.

I buy potato chips for a salty snack. I had absolutely nothing to do with the creation of these chips. Even if I were part of a worker-owned and managed commune that specialized in the production of salty snacks, I didn’t grow the potatoes or the corn flour, nor did I create the plastic bags, nor did I create the flavoring. And I especially had nothing to do with the computerized assembly line.

I own the means of production collectively alongside my fellow workers and the members of my community (essentially meaning everyone and no one actually owns the machines), but I still feel alienated. The only way to end alienation would be to create absolutely every tool I use, grow everything I need to eat, and build my own dwelling. If I didn't want to feel any alienation whatsoever, I could not use anything that I (or my community) did not create. The assembly line uses steel that was forged thousands of miles away, meaning I cannot use it. The hammer I use to fix the machine is made of so many different materials— metals, composites, etc.— that I don't even want to begin to trace all the labor that went into creating it, just that it was probably made in China. The chips? I might purchase one batch of spuds, but after that, I want nothing to do with other communities whose goods and services were not the result of my own labor— otherwise I'd just feel alienated from life. Salt cannot be used unless we can find it ourselves; same deal with the flavoring. And only if I can make bags from animal skin or plants will I have a bag to hold these chips.

This is an artificial return to using Grade-I and maybe a few Grade-II tools. Grade-III is simply too global. Of course, while this is a utopian ideal that's popular with eco-socialists and fundamentalists, the big issue (which I discussed earlier) is that we no longer exclusively use Grade-I and II tools for a specific reason— our population is too large and our old methods of production were too inefficient. The only way to successfully manage a return to an eco-socialist utopia would be to decrease the human population by upwards of 75-80%. Otherwise, if you think our current society is wasteful and damaging to Earth, prepare to be utterly horrified by how casually 7.5 billion subsistence farmers would ravage the planet. And if we increased efficiency enough to support the large population we've forced upon ourselves, we'd have to scrap plans to end alienation and return to creating at least the more complex parts of Grade-II automation.

If you’re willing to accept alienation, then we will continue onwards from what you have now.

We will continue seeking efficiency. We will continue seeking more productivity from less labor. As Grade-III technologies become more efficient, workers need less and less skill to utilize the machines, which further opens up an immeasurable amount of jobs to be filled.

I feel I should pause here to finally address energy production and consumption. This is what drives society's ever-increasing complexity, for without greater amounts of energy at its disposal, even a society of supergeniuses could not kickstart an industrial revolution.

Our tools require ever more power, and the creation of the means of generating this power in turn results in us requiring more power.
Once upon a time, all of human society generated little more than a few megawatts globally. As aforementioned, Grade-I relied purely on human and animal muscle, with virtually nothing else beyond fire and the direct effects of solar power.

From EnergyBC: A Brief History of Energy Use

For all but a tiny sliver of mankind's 50,000 year history, the use of energy has been severely limited. For most of it the only source of energy humans could draw upon was the most basic: human muscle. The discovery of fire and the burning of wood, animal dung and charcoal helped things along by providing an immediate source of heat. Next came domestication, about 12,000 years ago, when humans learned to harness the power of oxen and horses to plough their fields and drive up crop yields.

The only other readily accessible sources of power were the forces of wind and water. Sails were erected on ships during the Bronze Age, allowing people to move and trade across bodies of water. Windmills and water-wheels came later, in the first millennium BCE, grinding grain and pumping water. These provided an important source of power in ancient times. They remained the most powerful and reliable means to utilize energy for thousands of years, until the invention of the steam engine.

Measured in modern terms, these powerful pre-industrial water-wheels couldn't easily generate more than 4 kW of power. Wind mills could do 1 to 2 kW. This state of affairs persisted for a very long time:

"Human exertions… changed little between antiquity and the centuries immediately preceding industrialization. Average body weights hardly increased. All the essential devices providing humans with a mechanical advantage have been with us since the time of the ancient empires, or even before that."
With less energy use, the world was only able to support a small population, perhaps as little as 200 million at 1 CE, and gradually climbing to ~800 million in 1750 at the beginning of the industrial revolution.

Near the end of the 18th century, in a wave of unprecedented innovation and advancement, Europeans began to unlock the potential of fossil fuels. It began with coal. Though the value of coal for its heating properties had been known for thousands of years, it was not until James Watt’s enhancement of the steam engine that coal’s power as a prime mover was unleashed.

The steam engine was first used to pump water out of coal mines in 1769. These first steam pumps were crude and inefficient. Nevertheless, by 1800 these designs managed a blistering output of 20 kW, rendering water-wheels and wind-mills obsolete.

Some historians regard this moment as the most important in human history since the domestication of animals. The energy intensity of coal and the other fossil fuels (oil and natural gas) absolutely dwarfed anything mankind had ever used before. Many at the time failed to realize the significance of fossil fuels. Napoleon Bonaparte, when first told of steam-ships, scoffed at the idea, saying "What, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense."

Nevertheless, the genie was now out of the bottle and there was no going back. The remainder of the 19th century saw a cascade of inventions and innovations hot on the steam engine's heels. These resulted from the higher amounts of energy available, as well as from improved metalworking (through the newly-discovered technique of coking coal).

In agrarian societies, untouched by industrialization, the population growth rate remains essentially zero. However, in the 1700s and 1800s, these new energy-harnessing technologies brought about a revolution in farming as well as in industry, profoundly changing man's relation to the world around him. Manufactured metal farm implements, nitrogen fertilizers, pesticides and farm tractors all brought crop yields to previously unbelievable levels. Population growth rates soared and these developments enabled a population explosion in all industrialized states.

Grade-II’s final stage begot the energy-hungry electrodigital gadgets of Grade-III technology, and enhanced efficiency has brought us to a point in history where we’ve come close to maximizing the efficiency of this current automation grade.

A society that has mastered the creation and usage of Grade-III automation will resemble a world we'd consider "near-future science fiction." It's still beyond us, but not by much.

Computers possess great levels of intelligence and autonomy— some will even be capable of “weak-general artificial intelligence”. Nevertheless, it’s not the right time to start falling back on your basic income. Jobs are still plentiful, and new jobs are still being created at a very high rate. We’ve essentially closed in on the ultimate point in economics, something I’ve come to dub “the Event Horizon”.

This is the point where productivity reaches its maximum possible level, where a single person can satisfy the needs of many thousands of others through the use of advanced technology. Workers are innumerable, and each one's role in society is very specifically defined.

It seems like we’re on the cusp of creating a society straight out of Star Trek. We wonder about what future careers will be like— will our grandkids have job titles like “asteroid miner” or “robot repairman?” Will your progeny become known in the history books as legendary starship captains or infamous computer hackers? What kind of skills will be taught in colleges around the world; what kind of degrees will there be? Will STEM types become a new elite class of worker? Will we begin creating digital historians?

Well, right as we expect a sci-fi version of our world to appear, it all collapses.

Grade-IV automation is such an alien concept that even I have a difficult time fully understanding it. However, there is a very basic concept behind it: it’s the point where one of our tools becomes so stupidly complex that no human— not even the largest collective of supergeniuses man has ever known— could ever create it. It’s cognitively beyond our abilities, just as it’s beyond the capability of Capuchin monkeys to create and deeply understand an iPhone. This machine is more than just a machine— it is artificial intelligence. Strong-general artificial intelligence, capable of creating artificial superintelligence.

It takes the best of each previous grade to reach the next one. We couldn't reach Grade-II without creating super-complex versions of Grade-I tools. We couldn't hope to reach Grade-III automation without mastering the construction of so many Grade-II tools.

As with all other grades (though it will feel most obvious here), there's absolutely no way to reach Grade-IV technology without reaching the peak of Grade-III technology. At our current point in history, attempting to create ASI would be the equivalent of a person in early-medieval Europe attempting to build a digital supercomputer. Of course, this may be the wrong attitude to take— it took billions of years to reach Grade-II, and less than four thousand to reach Grade-III. Grade-IV could arrive in as few as five years, or as distant as a century from now— but few believe it's any further off than that.

Often these beliefs follow a pattern. Some believe it'll arrive right around the time they're expected to graduate college, meaning they'll never have to work a day in their lives— they'd simply collect a basic income, with no obligations to society at large beyond some basic and vague expectation to be "creative." For others, ASI will not appear until conveniently long after they've died and no longer have to deal with the consequences of such a radical change in society, usually predicated on the argument that "there's no historical evidence that such a thing is possible"— an argument that should carry no weight, considering how many things had no historical evidence of being possible before their own invention, but which naturally seems perfectly reasonable in the minds of technoskeptics. The discourse between these two sides has degenerated into little more than mutual schadenfreude-investment: those desiring a basic income (for whom automation is the only historical cause of large-scale unemployment) versus those clinging to conservative-libertarianism (for whom automation is not and may never be an actual issue).

Nevertheless, all evidence points to the fact that our machines are still growing more complex and will reach a point where they themselves become capable of creating tools. This point will not be magical— it's mere extrapolation. At some point, humanity will finally complete our technological evolution and create a tool that creates better tools.

This is the ultimate in efficiency and productivity gains. It’s the technoeconomic wet dream for every entrepreneur: a 0:1 mode of production, where humans need not apply for a job in order to produce goods and services. And this is not in any one specific field, as in how autonomous vehicles will affect certain jobs— this is across the board. At no point in the production of a good or service will a human be necessary. We are not needed to mine or refine basic resources; we are not needed to construct or program these machines; we are not needed to maintain or sell these machines; we are not needed to discard these machines either. We simply turn them on, sit back, and profit from their labor. We’d be volunteers at most, adding our own labor to global productivity but no longer being responsible for keeping the global economy alive.

Of course, Grade-IV machines will need humans in some capacity for some time, and in the early days, strong-general AI will maximize efficiency by guiding humans throughout society far more effectively than any human leader could. However, this will not last particularly long, as robotics will also undergo massive strides forward thanks to the capabilities of these super machines.

Most likely, each robot will not be superintelligent, though undoubtedly intelligence will be shared. Instead, they will act as drones under the guidance of their masters— whether that's humanity or artificial superintelligence. This is because it would simply be too inefficient for each and every unit to possess its own superintelligence, rather than connecting many drones to a central computer. This central computer would be capable of aggregating the experiences of all its drones, further increasing its intelligence. When one drone experiences something, all do.
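The hub-and-drone arrangement described above can be sketched in a few lines. This is a toy illustration of the idea, not any real robotics API— every class, method, and message name here is hypothetical:

```python
# Toy sketch: simple drones forward all experience to one central "hive"
# computer, so knowledge gained by any drone is instantly shared by all.

class Hive:
    """Central computer that pools the experiences of every drone."""
    def __init__(self):
        self.knowledge = set()

    def report(self, drone_id, event):
        # Aggregating experiences from all drones grows the shared pool.
        self.knowledge.add(event)

class Drone:
    """A non-superintelligent unit; it stores nothing locally."""
    def __init__(self, drone_id, hive):
        self.drone_id = drone_id
        self.hive = hive  # every drone shares the same central computer

    def observe(self, event):
        # Experience is forwarded upstream rather than kept on-board.
        self.hive.report(self.drone_id, event)

    def knows(self, event):
        # Any drone can query the pooled knowledge of the whole fleet.
        return event in self.hive.knowledge

hive = Hive()
scouts = [Drone(i, hive) for i in range(3)]
scouts[0].observe("obstacle at sector 7")

# When one drone experiences something, all do:
print(all(d.knows("obstacle at sector 7") for d in scouts))  # True
```

The design choice the sketch highlights is exactly the one argued for in the text: intelligence and memory live in one place, while the drones stay cheap and interchangeable.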

Humanity will have a shot at keeping up with the super machines in the form of transhumanism and, eventually, posthumanism. Of course, this ultimately means that humanity must merge with said superintelligences. Labor in this era will seem strange— even though posthumans may still participate in the labor force, they will not participate in ways we can imagine.  That is, there won’t be legions of posthuman engineers working on advanced starships— instead, it’s much more likely that posthumans will behave in much the same way as artificial superintelligences, remotely controlling drones that also act as distant extensions of their own consciousness.

All of this is speculation into the most likely scenario, and all guesses completely break down into an utter lack of certainty once posthuman and synthetic superintelligences begin further acting on their own to create constructs of unimaginable complexity.

I, as a fleshy Sapiens, exist in a state of maximum alienation in a society that has achieved Grade-IV automation. As always, there are items I can craft with my own hands, and I can always opt to unplug and live as the Amish do should I wish to regain greater autonomy. I can opt to keep purely Grade-II or Grade-III technology alive alongside others, or create mock-antemillennialist nations that combine the labor of humans and machines so as to maintain some level of personal autonomy.
However, for society at large, economics, social orders, political systems, and technology have become unfathomable. There’s no hope of ever beginning to understand what I’m seeing. Even if the whole planet attempted to enter a field of study to understand the current system, we would find it too far beyond us.

This is the Chimpanzee In A Lunar Colony scenario. A chimpanzee brought to a lunar colony cannot understand where it is, how it got there, the physics behind how it got there, or how the machines that surround it work. It may not even understand that the blue ball hanging in the sky above is its home world. Everything is far too unfathomable. As I mentioned above much earlier, it’s also akin to a Capuchin monkey trying to create an iPad. It doesn’t matter how many monkeys you get together. They will never create an iPad or anything resembling it. It’s not even that they’re too stupid— their brains are simply not developed enough to understand how such a tool works, let alone attempt to create it. Capuchin monkeys can’t come up with the concept of lasers— the concept even eluded humans until Albert Einstein hypothesized their existence in 1917 (and no, magnifying glasses and ancient death rays don’t count). Monkeys can’t understand the existence of electrons. They can’t understand the existence of micro and nanotechnology, which is responsible for us being able to create the chips used to power iPads. An iPad, a piece of technology that’s almost a joke to us nowadays, is a piece of technology so impossibly alien to a Capuchin monkey that it’s not wrong to say it’s an example of technology “several million years more advanced” than anything they could create, even though most of the necessary components only came into existence over the course of the last few thousand years.

This is what we’re going to see between ourselves and superintelligences in the coming decades and centuries.  This is why Grade-IV automation is considered “Grade-IV” and not simply a special, advanced tier of Grade-III like, say, weak-general artificial intelligence— no human can create ASI. No engineer, no scientist, no mathematician, no skilled or unskilled worker, no college student or garage-genius, no prodigy, no man, no woman will ever grace the world with ASI through their own hands. No collective of these people will do so. No nation will do so. No corporation will do so.

The only way to do so is to direct weak-general AI to create strong-general AI, and from there let the AI develop superior versions of itself. In other words, only AI can beget improved versions of itself. We can build weaker variants— that's certainly within our power— but the growth becomes asymptotic the moment we ourselves try to imbue true life into our creation. Even today, when our most advanced AI are still very much narrow, we don't fully understand how our own algorithms work. DeepMind is baffled by their creation, AlphaGo, and can only guess at how it manages to overwhelm its opponents— this despite being the AI's designers.

This is what I mean when I say alienation will reach its maximal state. Our creations will be beyond our understanding, and we won’t understand why they do what they do. We will be forced to study their behaviors much like how we do humans and animals just to try to understand. But to these machines, understanding will be simple. They will have the time and patience to break down themselves and fill every transistor and memristor with the knowledge of how they are who they are.

This, too, I mentioned. Though alienation will reach its maximal state, we will also return to a point where individuals will be capable of understanding all facets of a society. This is not because society is simpler— the opposite; it’s too complex for unaugmented humans to understand— but because these individuals will have infinitely enhanced intelligence.

For them, it’s almost like returning to Grade-I. Creating supercivilizations and synthetic superintelligences will seem no more difficult to them than creating a plow did to a Stone-Age farmer.

And thus, one major aspect of human evolution will be complete. Humans won’t stop evolving— evolution doesn’t “stop” just because we’re comfy— but the reason why our evolution followed such a radical path will have come full circle. We evolved to more efficiently use tools. Now we’ve created tools so efficient that we don’t even have to create them— they create themselves, and their creations will improve upon their own design for the next generation, and so on. Tools will actively begin evolving intelligently.

This is one reason why I’m uneasy using the term “automation” when discussing  Grade-IV technologies— automation implies machinery. Is an AI “automation”? Would you say using slaves counts as “automation”? It’s a philosophical conundrum that perhaps only AI themselves can solve. I wouldn’t put it past them to try.

Human history has seen many geniuses come and go. History’s most famous are the likes of Plato, Sir Isaac Newton, and Albert Einstein. The current famous living genius is Stephen Hawking, a man who has sounded the alarm on our rapid AI progress— though pop-futurology blogs tend to spice up his message and claim he’s against all AI.  The question is “who will be the next?”

Ironically, it will likely not be a human— but a computer. So many of our scientific advancements are the result of our incredibly powerful computers that we often take them for granted. I’ve made it clear a few times before that computers will be what enable so many of our sci-fi fantasies— space colonies, domestic robots, virtual and augmented reality, advanced cybernetics, fusion and antimatter power generation, and so much more. The reason it seems like there hasn’t been a real “moonshot” in generations is that we reached the peak of what we could do without the assistance of artificially intelligent computers. The Large Hadron Collider, for example, would be virtually useless without computers to sift through the titanic mountains of data it generates. Without the algorithms necessary to navigate 3D space and draw upon memory, as well as the computing power needed to run those algorithms in real time, sci-fi-tier robots would be useless. That’s why the likes of Atlas and ASIMO have become so impressive so recently, yet were little more than toys a decade ago. That’s why autonomous vehicles are progressing so rapidly when, for nearly a century, they were novelties found only near university laboratories. Without the algorithms needed to decode brain signals, brain-computer interfaces would be worthless and, thus, cybernetics and digital telepathy would never meaningfully advance.

Grade-IV goes beyond all of that. Such accomplishments will seem as simple as creating operating systems are today. We will do much more with less— so much more, many may confuse our advancements with magic.

There’s no point trying to foresee what a society that has mastered Grade-IV technology will look like, other than that any explanation I give will only ever fall back upon that one word: “unfathomable”. Even the beginnings of it will be difficult to understand.

It’s rather humbling to think we’re on the cusp of crushing the universe, and yet we came from a species that amounted to little more than being bipedal bonobos who scavenged for food, whose use of tools was limited to doing little more than picking up rocks and pruning tree branches. Maybe our superintelligent descendants will be able to resurrect our ancestors so we can watch them together and see how we arrived at the present.


Unexplained Mysteries of the Future

I am only a half-believer in the paranormal, so taking mysteries of the unexplained at face value smacks of the ridiculous. Yet I can never shake those doubts, hanging onto my mind like burrs.
The mammalian brain fears and seeks the unknown. That’s all I want— to know. The chance any one particular paranormal or supernatural happening is real is infinitesimal. Cryptids are usually another story, save for the most outlandish, but what likelihood is there that evolution wrought a lizard man or a moth man? Or that certain dolls are cursed?
However, I won’t cast off these reports completely until I can know for sure that they either are or are not true, as unlikely as they may be.

So here are a few words on the subject of paratechnology.

Self-Driving Cars Have Ruined The Creepiness of Self-Driving Cars

Imagine it’s a cool summer evening in 1969. You’re hanging with your mates out in the woods, minding your own business. All of a sudden, as you pass near a road, you see an Impala roll on by, creaking to a stop right as it closes in on your feet. Everything about the scene seems normal— until you realize that’s your Impala. You just saw your own car drive up to you. But that’s not what stops your heart. When you walk up to the window to see who’s the fool who tried to scare you, horror grips your heart as you realize the car was driving itself.

Needless to say, when your grandson finds the burned out shell of the car 50 years later, he doesn’t believe you when you doggedly claim that you saw the car acting on its own.

Except he would believe you if your story happened in the present day.

Phantom vehicles are a special kind of strange, precisely because you’d never expect a car to be a ghost. After all, aren’t ghosts the souls of the deceased?

(ADD moment: this is easy to rectify if you’re a Shintoist)

Nevertheless, throughout history, there have been reports of vehicles that move on their own, with no apparent driver or means of starting. The nature of these reports is always suspect— extraordinary claims require extraordinary evidence— but there’s undeniably something creepy about the idea of a self-driving vehicle.

Unless, of course, you’re talking about self-driving vehicles. You know, the robotic kind. Today, walking out in the woods and seeing your car drive up to you is still a creepy sight to behold, but as time passes, it grows less ‘creepy’ and more ‘awesome’ as we imbue artificial intelligence into our vehicles.

This does raise a good question— what would happen if an autonomous car became haunted?


The Truth About Haunted Smarthouses

For thousands of years, people have spoken of seeing spectres— ghosts, phantoms, spirits, what have you. Hauntings can occur at any time of day, but everyone knows the primal fear of things that go bump in the night. It’s a leftover from the days when proto-humans were always at risk of being ambushed by hungry nocturnal predators, one that now best serves the entertainment industry.

Ghosts are scary because they represent a threat we cannot actively resist. A lion can kill you, but at least you can physically fight back. Ghosts are ethereal, and their abilities have never been properly understood. This is because we’ve never been fully sure if they’re real at all. Science tells us they’re all in our heads, but science also tells us that everything is all in our heads. Remember: ghosts are ethereal, meaning they cannot actually be caught. Thus, they cannot be studied, rendering them completely useless to science. Anything that cannot be physically examined might as well not exist. Because ghosts are so fleeting, we never even get a chance to study them, instead leaving the work to pseudoscientific “ghost hunters”.  By the time anyone has even noticed a ghost, they’ve already vanished.

Even today, in the era of ubiquitous cameras and surveillance, there’s been no definitive proof of ghosts. No spectral analysis, no tangible evidence, nothing. Why can’t we just set up a laboratory in the world’s most haunted house and be done with it? We’ve tried, but the nature of ghosts (according to those who believe) means that even actively watching out for a ghost doesn’t mean you’ll actually find one, nor will you capture usable data. Our technology is too limited and ghosts are too ghostly.

So what if we put the burden onto AI?

Imagine converting a known haunted house into a smarthouse, where sensors exist everywhere and a central computer always watches. No ghost should escape its notice, no matter how fleeting.

Imagine converting damn near every house into a smarthouse. If paranormal happenings continue evading smarthouse AIs, that casts near irrefutable doubt onto the larger ghost phenomenon. It would mean ghosts cannot actually be meaningfully measured.
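The always-watching central computer could work something like the following toy sketch, which keeps a running statistical baseline for each sensor and permanently logs any reading that strays far from it. Everything here— the class, the sensor names, the threshold— is a hypothetical illustration, not any real smarthome API:

```python
# Toy sketch: a smarthouse monitor that never blinks. It learns what
# "normal" looks like per sensor and logs anything wildly abnormal.

from statistics import mean, pstdev

class HouseMonitor:
    def __init__(self, threshold=3.0):
        self.history = {}      # sensor name -> list of past readings
        self.threshold = threshold
        self.anomalies = []    # permanent log of unexplained readings

    def ingest(self, sensor, value):
        past = self.history.setdefault(sensor, [])
        # Only judge a reading once we have a baseline to compare against.
        if len(past) >= 10:
            mu, sigma = mean(past), pstdev(past)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                self.anomalies.append((sensor, value))
        past.append(value)

monitor = HouseMonitor()
# Ten ordinary hallway temperature readings establish the baseline...
for reading in [20.1, 19.9, 20.0, 20.2, 19.8, 20.1, 20.0, 19.9, 20.1, 20.0]:
    monitor.ingest("hallway_temp", reading)
# ...then a sudden, classic "cold spot" appears.
monitor.ingest("hallway_temp", 12.0)
print(monitor.anomalies)  # [('hallway_temp', 12.0)]
```

The point isn’t the statistics— it’s that a machine, unlike a human ghost hunter, never blinks, never sleeps, and never deletes its notes.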

Once you bring in transhumanism, the ghost question should already be settled. A posthuman encountering a spectre at all would be proof in and of itself— and if it never happens— if ghosts remain the domain of fearful, fleshy biological humans— then we will know once and for all that the larger phenomenon truly is all in our heads.

Bigfoot Can Run, But He Can’t Hide Forever

For the same reasons listed above, cryptids will no longer be able to hide. There’s little tangible evidence suggesting Bigfoot is real, but if there’s any benefit of the doubt we can give, it’s that there’s been very little real effort to find him. If we were serious about finding Bigfoot, we wouldn’t create ‘Bigfoot whistles’ or dedicate hour-long, two-hundred-episode reality shows to searching for scant evidence. We would hook up the Pacific Northwest with cameras and watch them all.

Except we can’t. INGSOC could never be watching you at all times so long as the Party lacked artificial intelligence to do the grunt-work for it. That’s as true in reality as it is in fiction— if you have a million cameras and only a hundred people watching them, you’ll never catch everything that goes on. You’d need to be able to watch every feed at every moment of every day, without fail. Otherwise, video camera #429,133 may capture a very clear image of Bigfoot, but you’d never know.
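The arithmetic behind that claim is easy to check. Using the illustrative numbers above (a million cameras, a hundred watchers— not figures from any real surveillance system):

```python
# Back-of-the-envelope arithmetic for human-only camera monitoring.

cameras = 1_000_000
watchers = 100

# Live monitoring: feeds each person must watch simultaneously.
feeds_per_watcher = cameras // watchers
print(feeds_per_watcher)  # 10000

# Reviewing after the fact is no better: one day of footage from every
# camera, watched back-to-back by a single person, would take...
hours_of_footage_per_day = cameras * 24
years_to_review = hours_of_footage_per_day / (24 * 365)
print(round(years_to_review))  # 2740 years, for one day of footage
```

No hiring spree closes a gap like that; only machine watchers can.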

AI could meet the challenge. And if you need any additional help, call in the robots. Whether you go for drones, microdrones, or ground-traversing models, they will happily and thanklessly search for your spooky creatures of the night.

If, in the year 2077, when we have legions of super-ASIMOs and drones haunting the world’s forests, we still have no definitive proof of a variety of our more outlandish cryptids, we’ll know for sure that they truly were all stories.

Grades of Automation

  • Grade-I is tool usage in general, from hunter-gatherer/scavenger tech all the way up to the pre-industrial age. There are few to no complex moving parts.
  • Grade-II is the usage of physical automation, such as looms, spinning jennies, and tractors. This is what the Luddites feared. There are many complex moving parts, many of which require specialized craftsmen to engineer.
  • Grade-III is the usage of digital automation, such as personal computers, calculators, robots, and basically anything we in the modern age take for granted. This age will last a bit longer into the future, though the latter ends of it have spooked quite a few people. Tools have become so complex that it’s impossible for any one person to create all necessary parts for a machine that resides in this tier.
  • Grade-IV is the usage of mental automation, and this is where things truly change. This is where we finally see artificial general intelligence, meaning that one of our tools has become capable of creating new tools on its own. AI will also become capable of learning new tasks much more quickly than humans and can instantly share its newfound knowledge with any number of other AI-capable machines connected to its network. Tools, thus, have become so infinitely complex that it’s only possible for the tools themselves to create newer and better tools.

Grades I and IV are only tenuously “automation”— the former implies that the only way to not live in an automated society is to use your hands and nothing else; the latter implies that intelligence itself is a form of automation. However, for the sake of argument, let’s keep with it.

Note: this isn’t necessarily a “timeline of technological development.” We still actively use technologies from Grades I and II in our daily lives.

Grade-I automation began the day the first animal picked up a stone and used it to crush a nut. By this definition, there are many creatures on Earth that have managed to achieve Grade-I automation. Grade-I lacks complex machinery. There are virtually no moving parts, and any individual person could create the whole range of tools that can be found in this tier. Tools are easy to make and easy to repair, allowing for self-sufficiency. Grade-I automation is best represented by hammers and wheels.

A purely Grade-I society would be agricultural, with the vast majority of the population ranging from subsistence farmers to hunter-gatherer-scavengers. The lack of machinery means there is no need for specialization; societal complexity instead derives from other roles.

Grade-II automation introduces complex bits and moving parts, things that would take considerably more skill and brainpower to create. As far as we know, only humans have reached this tier— and only one species of humans at that (i.e. Homo sapiens sapiens). Grade-II is best represented by cogwheels and steam engines, as it’s the tier of mechanisms. One bit enables another, and they work together to form a whole machine. As with Grade-I, there’s a wide range of Grade-II technologies, with the most complex ends of Grade-II becoming electrically powered.

A society that has reached and mastered Grade-II automation would resemble our world as it was in the 19th century. Specialization rapidly expands— though polymaths may be able to design, construct, and maintain Grade-II technologies on their own, the vast majority of tools require multiple hands throughout their lifespan. One man may design a tool; another will be tasked with building and repairing it. However, generally, one person can grasp all facets of such tools. Using Grade-II automation, a single person can do much more work than they could with Grade-I technologies. In summary, Grade-II automation is the mark of an industrial revolution. Machines are complex, but can only be run by humans.

Grade-III automation introduces electronic technology, which includes programmable digital computers. It is at this point that the ability to create tools escapes individuals and requires collectives to pool their talents. However, this pays off through vastly enhanced productivity and efficiency. Computers dedicate all resources towards crunching numbers, greatly increasing the amount of work a single person can achieve. It is at this point that a true global economy becomes possible and even necessary, as total self-sufficiency becomes near impossible. While automation puts many out of work as computational machines take over brute-force jobs that once belonged to humans, the specialization wrought is monumental, creating billions of new jobs compared to previous grades. The quality of life for everyone undergoes enormous strides upwards.

A society that has reached and mastered Grade-III automation would resemble the world of many near-future science fiction stories. Robotics and artificial intelligence have greatly progressed, but not to the point of a Singularitarian society. Instead, a Grade-III dominant society will be post-industrial. Even the study of such a society will be multilayered and involve specialized fields of knowledge. Different grades can overlap, and this continues to be true with Grade-III automation. Computers have begun replacing many of the cognitive tasks that were once the sole domain of humans. However, computers and robots remain tools for completing tasks whose responsibility still falls upon humans. Computers do not create new tools to complete new tasks, nor are they generally intelligent enough to complete any task they were not designed to perform. The symbols of Grade-III are the personal computer and the industrial robot.

Grade-IV automation is a fundamental sea change in the nature of technology. Indeed, it’s a sea change in the nature of life itself, for it’s the point at which computers themselves enter the fray of creating technology. This is only possible by creating an artificial brain, one that may automate even higher-order skills. Here, it is beyond the capability of any human— individual or collective— to create such tools, just as it is beyond the capability of any chimpanzee to create a computer. Instead, artificial intelligences are responsible for sustaining the global economy and creating newer, improved versions of themselves. Because AI matches and exceeds the cognitive capabilities of humans, there is a civilization-wide upheaval where what jobs remain from the era of late Grade-III domination are then taken by agents of Grade-IV automation, leaving humans almost completely jobless. This is because our tools are no longer limited to singular tasks, but can take on a wide array of problems, even problems they were not built to handle. If the tools find a problem that is beyond their limits, they simply improve themselves to overcome those limitations.

It is possible, even probable, that humans alone cannot reach this point— ironically, we may need computers to make the leap to Grade-IV automation.

A society that has reached Grade-IV automation will likely resemble slave societies the closest, with an owner class composed of humans and the highest-order AIs profiting from the labor of trillions, perhaps quadrillions of ever-laboring technotarians. The sapient will trade among themselves whatever proves scarce, and the highest functions of society will be understood only by those with superhuman intelligence. Societal complexity reaches its maximal state, the point of maximum alienation. However, specialization rapidly contracts as the intellectual capabilities of individuals— particularly individual AI and posthumans— expand to the point that they understand every facet of modern society. Unaugmented humans will have virtually no place in a Grade-IV dominant society besides being masters over anadigital slaves and subservient to hyperintelligent techno-ultraterrestrials. What few jobs remain for them will, ironically, harken back to the days of Grade I and II automation, where the comparative advantage remains only due to artificial limitations (i.e. “human-only labor”).

Grade-IV automation is alien to us because we’ve never dealt with anything like it. The closest analog is biological sapience, something we have only barely begun to understand. In a future post, however, I’ll take a crack at predicting a day in the life of a person in a Grade-IV society. Not just a person, but also society at large.

Types of Artificial Intelligence

Not all AI is created equal. Some narrow AI is stronger than others. Here, I redefine AI, breaking the “weak = narrow” and “strong = general” equivalences.

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI; strong AI is the same thing as general AI. But I mentioned elsewhere on this wide, wild Internet that there certainly must be such a thing as ‘less-narrow AI.’ AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000.

So my system is this:

    • Weak Narrow AI
    • Strong Narrow AI
    • Weak General AI
    • Strong General AI
    • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.
We don’t call WNAI “AI” anymore, as per the AI Effect. Ever notice how, whenever there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword]”? The problem is, it is AI. It’s just not AGI.
I didn’t mention analog mechanics in passing— this form of AI is about as mechanical as you can possibly get, and it’s actually better that way. Even if your dollar store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator’s supreme intellect would go forever untapped as you’d instead use it to factor binomials. And I don’t need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with the Tsar Bomba.
Some have said that “simple computation” shouldn’t be considered AI, but I think it should. It’s simply “very” weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.

Strong narrow AI (SNAI) is AI that’s capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it’s a very limited form of learning, but learning’s happening in some form. The AI isn’t just acting for humans; it’s reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it’s always a ruse. Siri might seem smart at times, for example, but it’s also easy to find its limits because it’s an AI meant to be a personal virtual assistant, not your digital waifu a la Her. Siri can recognize speech, but it can’t deeply understand it, and it lacks the life experiences to make meaningful conversation anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can’t also write a comedic novel or genuinely have a favorite band of its own. It was programmed to know these things based on your own preferences. Even if Siri says it’s “not an AI,” it’s only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.
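To make that distinction concrete, here is a toy sketch in Python. It is entirely hypothetical— nothing like Siri’s actual architecture— but it captures the essence of strong narrow AI: a model that genuinely gets better at its one task with every example it sees, yet has no machinery for doing anything else.

```python
# A toy "strong narrow AI": it learns one mapping (Celsius -> Fahrenheit)
# better with every example, but that is ALL it will ever do.
# Hypothetical illustration, not any real assistant's design.

def make_narrow_learner(lr=0.0002):
    """One-variable linear model trained by plain gradient descent."""
    state = {"w": 0.0, "b": 0.0}

    def predict(x):
        return state["w"] * x + state["b"]

    def learn(x, target):
        # Nudge the two parameters to shrink the squared error.
        error = predict(x) - target
        state["w"] -= lr * error * x
        state["b"] -= lr * error

    return predict, learn

predict, learn = make_narrow_learner()

# It improves at its one job with practice...
for _ in range(20000):
    for c in range(-40, 41, 5):
        learn(c, c * 9 / 5 + 32)

print(round(predict(100)))  # close to 212: it has mastered its narrow task

# ...but nothing outside that single mapping. Ask it for a cake recipe or a
# chess move and there is simply no machinery there to answer.
```

The point of the sketch: the learning is real (the parameters demonstrably improve with data), but the scope is fixed by the designer, which is exactly the SNAI condition described above.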

Weak general AI (WGAI) is AI that’s capable of learning a wide swath of things, even things it wasn’t necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it’s as intelligent as a certain creature— maybe a worm or even a mouse— but it’s nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it’s sub-human in others. This is what we see with the likes of DeepMind— DeepMind’s basic algorithm can learn to do just about anything, but it’s not as intelligent as a human being by far. In fact, DeepMind wasn’t even in this category until they began using the differentiable neural computer (DNC), because before that it could not retain its previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until just a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind’s yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits— or, should I say, its lack thereof. I ask it to generate a recipe on how to bake a cake. It learns from the Internet, but it doesn’t actually pull up any particular article— it completely generates its own recipe, using logic to deduce what particular steps should be followed and in what order. That’s nice— now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. That this is its remaining limit is itself impressive: it signals that we’re right on the cusp of strong AGI, and that all we lack is the proper power and training.
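The cake-and-brownie litmus test above can be sketched as code. This is a deliberately cartoonish illustration— the skill names and both classes are invented for the example, not drawn from any real system— but it shows the structural difference: the narrow learner keys its knowledge to whole tasks, while the more general learner keeps a pool of reusable sub-skills it can carry to new tasks.

```python
# Hypothetical skill breakdowns for two related tasks.
CAKE_SKILLS = ["measure", "mix_batter", "pour_into_pan", "bake", "cool"]
BROWNIE_SKILLS = ["measure", "mix_batter", "melt_chocolate",
                  "pour_into_pan", "bake", "cool"]

class NarrowLearner:
    """Knowledge is locked to the task it was learned for (SNAI)."""
    def __init__(self):
        self.known = {}                      # task -> its steps
    def train(self, task, steps):
        self.known[task] = list(steps)
    def must_relearn(self, task, steps):
        if task in self.known:
            return []
        return list(steps)                   # unseen task: start from zero

class GeneralLearner:
    """Knowledge is a pool of skills, reusable across tasks (WGAI)."""
    def __init__(self):
        self.skills = set()
    def train(self, task, steps):
        self.skills |= set(steps)
    def must_relearn(self, task, steps):
        return [s for s in steps if s not in self.skills]

narrow, general = NarrowLearner(), GeneralLearner()
for learner in (narrow, general):
    learner.train("cake", CAKE_SKILLS)

print(narrow.must_relearn("brownies", BROWNIE_SKILLS))
# every single step, even the ones shared with cakes
print(general.must_relearn("brownies", BROWNIE_SKILLS))
# only the genuinely new step: ['melt_chocolate']
```

If the assistant in the thought experiment behaves like `NarrowLearner`, it is still strong narrow AI; if it behaves like `GeneralLearner`, drawing on its cake skills to tackle brownies, it has crossed into weak general AI.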

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be like sitting in the year 1816 and debating whether SNAI is possible. The biggest limiting factor towards the creation of SGAI right now is our lack of WGAI. As I said, we’ve only just created WGAI, and there’s been no real public testing of it yet. The gap between WGAI and SGAI is also vast, despite the seemingly simple differences between the two: WGAI is us guessing what’s going on in the brain and trying to match some aspects of it with code; SGAI is us building a whole digital brain. Then there’s the problem of embodied cognition— without a body, any AI would be detached from nearly all the experiences that we humans take for granted. It’s impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You’d never trust a cook who calls himself world-class, only to find out he’s made just five unique dishes and has never left his house. For AI to truly make the leap from WGAI to SGAI, it’d need some way to experience life as we do. It doesn’t need to live 70 years in a weak, fleshy body— it could replicate a lifetime of experiences in a week if need be, given enough bodies— but having sensory experiences helps to deepen its intelligence.

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
Basically, the moment you create SGAI is the moment you create ASI.

From that bit of information, you can begin to understand what AI will be capable of achieving.


“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.
Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.
Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming.
All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn’t mean it’s now the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can do, including creatively coming up with solutions to problems just as good or better than any human. It has sapience. SGAI may be very humanlike, but it’s ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It’s fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.

What Is Futuristic Realism?

Definitive Explanations, Breakdowns, and Examples of Futuristic Realism, Sci-Fi Realism, Slice of Tomorrow, and Science Non-Fiction

I get asked a lot, “Yuli, what is futuristic realism?”

And that’s a bad thing. I’ve explained what futuristic realism is around five hundred times now, and the fact people still ask me what it means suggests that I, as usual, have failed to give the world a concise definition. That makes sense— I am a legendary rambler.

So I’m here to finally put to bed these questions.

Note: there will be a short version where I get right to the point, and afterwards, there’ll be a long version where I allow myself to ramble and go in depth with what I mean.

Short Version

Sci-Fi Realism is a visual style that attempts to fool the viewer into thinking fantastic technologies are actually real and well-used, giving such tech a sort of photographic authenticity. 

Futuristic Realism is a subgenre of both science fiction and literary fiction: it draws its subject matter from science fiction and uses the structure of literary and realistic fiction in order to tell a story that feels familiar and contemporary.

Slice of Tomorrow is the fusion of science fiction and slice of life fiction.

Science Non-Fiction describes fantastic technologies, happenings, stories, and narratives that have already occurred and cause a person to say “I’m living in the future!”

Long Version

Let’s start with slice of tomorrow. Slice of tomorrow fiction is what you get when you take science fiction and mix it with slice of life. In order to understand what that means, you first need to know what “slice of life” is.

Slice of life is mundane realism depicting everyday experiences in art and entertainment.

There’s no grand plot.

There’s no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets. That’s not to say high-intensity events can’t happen— they just aren’t the focus of the story. Slice of life does not necessarily have to be “literary”— it doesn’t have to focus on incredibly deep themes of human relationships. It doesn’t necessarily have to be about anything at all, other than showing one’s daily life.

Slice of tomorrow is mundane realism depicting everyday experiences, with the twist being that the events take place in an otherwise “sci-fi” or “cyberpunk” environment. The intention is in the name of the genre— “slice of tomorrow.” Show the world how humanity would react to futuristic technologies, tomorrow’s social mores, and perhaps even different conditions and modes of existence. However, slice of tomorrow does not have to be relatable, nor does a story that identifies as “slice of tomorrow” have to intertwine a deeper narrative.


Adding depth and length to mundanity brings you futuristic realism. Futuristic realism carries with it more of a ‘literary’ swagger. And in order to understand what that means, you must define literary and realistic fiction.

Literary fiction comprises fictional works that hold literary merit; that is, they involve social commentary, or political criticism, or focus on the human condition. Literary fiction is deliberately written in dialogue with existing works, created with the above aims in mind and is focused more on themes than on plot.

Realistic fiction is fiction that uses imagined characters in situations that either actually happened in real life or are very likely to happen. It further extends to characters reacting in realistic ways to real-life type situations. The definition is sometimes combined with contemporary realism, which shows realistic characters dealing with realistic social issues such as divorce, drug abuse, teenage pregnancy and more.

Combine the two and you have futuristic realism: a style of realism depicting real people in realistic situations, often as a means of exploring the human condition. Here, simply showing a different mode of existence isn’t enough— you have to thoroughly explore it. There is a humongous opportunity to be had in science fiction when it comes to exploring foreign and alien modes of existence, and many sci-fi authors have exploited that opportunity. One fine example of futuristic realism would have to be the Sprawl Trilogy by William Gibson— in fact, the literary work that gave birth to cyberpunk.

Indeed, futuristic realism and cyberpunk’s origins overlap heavily, and there’s no better way to illustrate this than by telling you how cyberpunk began in the first place, as well as describing what it’s become.

Cyberpunk was born when Gibson felt dissatisfied with the increasingly stagnant Utopian sci-fi, such as Star Trek. Gene Roddenberry’s Star Trek gave us a nearly-utopian world where advanced technology solved all of humanity’s problems and men lived in egalitarian harmony and prosperity; the only sources of conflict came from either other species or the occasional disagreement.
Gibson looked at the world around himself and concluded that, even if we had starships and communicators, there would still be drug dealers and prostitutes. If anything, the acceleration of technology would most likely only greatly benefit a rich few, leaving the rest to get by with whatever scraps are left over. This wasn’t a completely baseless extrapolation, precisely because that’s what had been occurring up to the present moment— the developed nations, and in particular the rich, were able to enjoy high-tech consumer goods such as cable television, personal computers, video games, and credit cards, while the poor in many parts of the planet lived in nations that may very well have never experienced the Industrial Revolution. And even in developed nations, the poor were getting shafted by the system at large, especially as corporations grew in power and influence and enacted their will upon the governments of the world. Thus Neuromancer, and subsequently cyberpunk and futuristic realism, was born.

Cyberpunk and futuristic realism quickly branched off into different paths, however, as cyberpunk began becoming “genre” fiction itself— nowadays, in an almost ironic fashion considering how it started, when one thinks of ‘cyberpunk’, they think of ‘aggressively cynical dystopian action science fiction’, with the actual ‘punk’ aspect added in as an afterthought.


Bringing in elves and orcs sextuples the action! Source: Shadowrun


To truly get a feel for futuristic realism, try to follow this one: it’s the genre Ernest Hemingway or Cormac McCarthy would write if they lived in the 2050s.

I have long said that the easiest way to achieve futuristic realism would be to take Sarah, Plain and Tall and add humanoid robots, drones, and smartglasses into the mix. And why? Because there is a very intense disconnect. I even said as much in a previous article:

That’s why I say it’s easiest to pull off futuristic realism with a rustic or suburban setting— it’s already much closer to individual people doing their own thing, without being able to fall back on the glittering neon cyberscapes of a city or cold interiors of a space station to show off how sci-fi/cyberpunk it is. It makes the writer have to actually work. Also, there’s a much larger clash. A glittering neon cyberscape of a megalopolis is already very sci-fi (and realistic); adding sexbot prostitutes and a cyber-augmented population fitted with smartglasses doesn’t really add to what already exists. Add sexbot prostitutes and cyber-augments with smartglasses to Smalltown, USA, however, and you have a jarring disconnect that needs to be rectified or at least expanded upon. That doesn’t mean you can’t have a futuristic realist story in a cyberpunk city, or in space, etc. It’s just much easier to tell one in Smalltown, USA because of the very nature of rural and suburban communities. They’re synonymous with tradition and conformity, with nostalgic older years and pleasantness, and with a certain quietness you can’t find in a city.

Last but not least, there is sci-fi realism. This spawned futuristic realism and slice of tomorrow, and once upon a time, it was the catch-all term for the style. However, once I decoupled literary content from visual aesthetics, sci-fi realism became its own thing, and the best way to describe sci-fi realism would be to understand “visual photo-authenticity.”

This is my own term (because I just love making up jargon), and it refers to a visual style that attempts to recreate the feel of a photograph. This doesn’t just mean “ultra-realistic graphics”— it can be 8-bit as long as it looks like something you snapped with your smartphone camera. Of course, ultra-realism does greatly help.

Sci-fi realism is perhaps simultaneously the easiest and hardest to understand because of the nature of photography. After all, don’t many photographs attempt to capture as much artistic merit as paintings and renders? What qualifies as “photographic?”

And I won’t lie that it is, indeed, a subjective matter. However, there is one basic rule of thumb I’ll throw out there.

Sci-fi realism follows the rules of mundanity, even if it’s capturing something abnormal. There are few intentional poses and very little Romanticizing of subjects. It’s supposed to look as if you took a photograph in the future and brought it back to the past.

Source: Vitaly Bulgarov (and his dogs)

Most photographs are taken from ground or eye level, maybe even at bad angles and with poor lighting. Very few of them ever manage to capture wide-open scenes— it’s nearly impossible to get both a shady alleyway and towering skyscrapers in the background from a realistic perspective. There are very few vistas or wide-shots. 

As aforementioned, hyper-realism comes in handy when dealing with sci-fi realism, and even wide shots can be made to feel “realistic” from a sci-fi perspective.

Future Dubai, by Thomas Galad

And, also as aforementioned, it doesn’t necessarily have to be photorealistic as long as it carries a photographic quality.

“Burned” by Simon Stålenhag

It was watching movies like Real Steel, Chappie, District 9, and Star Wars: A New Hope that really got me interested in this “what if” style. Those movies possessed ‘visual authenticity.’ When I watched Real Steel, I was amazed by how seamlessly the CGI mixed with live action. Normally, CGI is blatantly obvious; it feels fake. It doesn’t look real. But Real Steel took a different route. It fused CGI with practical props, and it was amazing to see. For the first time, I felt like I was watching a movie sent back from the future rather than a science fiction film. Other films came close, but it was in Real Steel that I first really noticed the effect.


The Bait And Switch

All of this refers to fiction. Slice of tomorrow is about slice of life science fiction. Futuristic realism is about literary science fiction. Sci-fi realism is about photographic science fiction.
However, with the obvious exception of slice of tomorrow, these can also fit non-fiction.

I mentioned quite a bit ago the concept of “science non-fiction.” This is a very new genre that has only become possible in the most recent years, and can best be described as “science fiction meets creative non-fiction.”

In recent years, many facets of science fiction have crossed over into reality. Things are changing faster than ever before, and what’s contemporary this decade would be considered science fiction last decade. As time goes on, this will only grow even more extreme, until each next year could be considered “sci-fi” compared to the previous one. At some point, people’s ability to take for granted this rapidly accelerating rate of technological advancement will wane, and there will be medically diagnosed cases of acute future shock. When we reach that point, even things that may have been on the market for years or decades will still be seen as “science fiction.”

We are already seeing a rudimentary form of this in the form of smartphones— smartphones have been a staple of mass consumer culture for well over a decade. Despite this, people still experience future shock when they take time to think about these immensely powerful gadgets. As smartphones grew more powerful and ubiquitous, the effect did not fade but in fact became more intense. This inability to accept the existence of a new technology is virtually unprecedented— we grew used to airplanes, atomic energy, space exploration, personal computers, and the internet faster than we have smartphones. Virtual reality is poised to push this future shock into an even more precarious level, as now we’re beginning to actually infringe upon concepts and technologies with which science fiction has been teasing us for nearly a century.

Space exploration had a bit of an Antiquity moment in the 1960s— we proved we could do it but found no practical way to expand on our accomplishments, much like the ancient Greeks working with analog computers and steam engines— and the actual space revolution remains beyond us, lying at an undetermined point in the future. Case in point: we still see things like space stations and landings on other celestial bodies as “science fiction.” This raises a conundrum— a story where a man lands on the moon qualifies as “science fiction,” but we already took that leap roughly 50 years ago. Does that mean Neil Armstrong and Buzz Aldrin actually experienced science fiction? It can’t be, by the very definition of the word ‘fiction.’

That’s where this new term— science non-fiction— comes in. When real life crosses over into territories usually only seen in science fiction, you get science non-fiction.

Science fiction has many tropes, and even as we invent and commercialize the technologies behind these tropes, they don’t leave science fiction. Space exploration, artificial intelligence, hyper-information technology, advanced robotics, genetic engineering, virtual and augmented reality, human enhancement, experimental material science, unorthodox transportation— these are staples of science fiction, and merely making them real doesn’t make them any less sci-fi. From a technical perspective, virtual reality and smartphones are no longer sci-fi. However, from a cultural perspective, they’ll never be able to escape the label.

Science non-fiction is extremely subjective precisely because it’s based on the cultural definition of sci-fi. Some people may think smartphones, smartwatches, and VR are sci-fi, but others might have already grown too used to them to see them as anything other than more tech gadgets. Even when we have people and synths on Mars, there will be those who say that missions to Mars no longer qualify as science fiction.

And it’s this disconnect that helps make science non-fiction work.

There’s that word again— disconnect.

Reading about events in real life that seem ripped from sci-fi is one thing. Actually seeing them is another altogether.

Photograph of Pepper, 2016

We’re back to sci-fi realism. I am reusing the term “science non-fiction”, but here I’m discussing its visual form. I admit, sometimes I call it ‘sci-fi realism’, but I’ve begun moving away from that (to the detriment of the Sci-Fi Realism subreddit and to the benefit of the Futuristic Realism subreddit). As mentioned, this is what science non-fiction looks like: pictures, gifs, videos, and movies of real events that happen to feature science non-fiction technologies.

Science non-fiction is not necessarily slice of life or mundane, though it can be (and often is, due to the nature of everyday life). In this case, science non-fiction can actually be everything that slice of tomorrow and futuristic realism aren’t— including things like cyberpunk, military sci-fi, and space operas. The only prerequisite is that the events have to be real.

For example: glittery cyberpunk-esque cityscapes already exist. There isn’t even a shortage of them— off the top of my head, there’s Dubai, Moscow, Hong Kong, Shanghai, Guangzhou, Tokyo, Singapore, Seoul, and Bangkok. Posting pictures of them can net you thousands of upvotes on /r/Cyberpunk. The vistas may lack flying cars, but who knows how much longer that’ll be the case?

That moment when Dubai starts looking like Coruscant

If I bought a Pepper and brought it into my home, that would also qualify as science non-fiction. Domestic artificially intelligent utility robots are a major staple of science fiction, and their simply existing doesn’t change the fact that sci-fi literature, films, and video games will continue utilizing them.

This is an actual Japanese showroom in 2016

Likewise, if I donned a TALOS exosuit fitted with a BCI-powered augmented reality visor, and picked up a 25 kW pulse-laser Gauss rifle, and then got flown into Syria where I could also pilot semi-autonomous drones and command killer Atlas robots, that too would be science non-fiction.

The TALOS suit, one of the coolest things I’ve ever seen

Funny thing is, both these examples are already possible. Not fully— ASIMO has yet to see a commercial release, Atlas has not finished its transformation into a Terminator, and no one has yet constructed a handheld laser gun stronger than 500 watts. But none of it is beyond us.

And that’s the gist behind all of this. Science non-fiction is based on what we have done.

“So why did you create all this uber-pretentious sci-fi tripe?”

1- Because I wanted to.

2- Because I noticed a delightful trend occurring over and over again online. Even outside of sci-fi forums, I was repeatedly reading stories and anecdotes of people being amazed at how technologically advanced our present society really is— but they then lamented that they didn’t “feel” like they were really living in a sci-fi story.

I am a fantastic example of that myself. I live out in the sticks— I even counted the seconds: if you drive at sixty miles per hour for one minute and twenty-eight seconds, you will come across literally bucolic farmland straight out of a Hallmark Channel movie. The tallest building in my town (and for many miles around it) is the local theatre, which comes in at seven stories. It’s the kind of town where, if you drive down any particular road too late at night, you’ll get abducted by aliens and/or the CIA. I live behind some trees on the very outskirts of this town. And despite that, I still own a drone, several smartphones, a VR headset, and a dead Roomba. If I saved up, I could even potentially buy an artificially intelligent social droid— Aldebaran’s Pepper. It feels so mundane, but my life truly is science non-fiction. A while ago, I lamented that I wasn’t living in one of the aforementioned proto-cyberpunk cities precisely because I thought I had too much technology to be living in the country.

I’ve since decided to bring science fiction to me, and that requires quite a few changes. I’m no revolutionary street urchin. I have no coding skills whatsoever. I can count on a broken hand how many times in my life I’ve held a gun. There’s nothing thrilling about me, my past, or my future. And yet I still feel like I live in a world that’s fast becoming sci-fi. So I needed to find a way to express that. A way to tell a story I— in my unfit, very much kung-fu-challenged world— could relate with. I’m no hero, nor am I an anti-hero, nor am I a villain. I’m basically an NPC, a background character. Yet I still feel I have stories to tell.

Futuristic Realism and Transrealism

So what about transrealism? Isn’t it futuristic realism? In fact, it is. However, it’s a situation where “X is Y, but Y isn’t always X.” Transrealism is futuristic realism, but not all futuristic realism is transrealism. And the best way to understand this is by looking at the definition of transrealism.

Transrealism is a literary mode that mixes the techniques of incorporating fantastic elements used in science fiction with the techniques of describing immediate perceptions from naturalistic realism. While combining the strengths of the two approaches, it is largely a reaction to their perceived weaknesses. Transrealism addresses the escapism and disconnect with reality of science fiction by providing for superior characterization through autobiographical features and simulation of the author’s acquaintances. It addresses the tiredness and boundaries of realism by using fantastic elements to create new metaphors for psychological change and to incorporate the author’s perception of a higher reality in which life is embedded. One possible source for this higher reality is the increasingly strange models of the universe put forward in theoretical astrophysics.

Some final words on the subject, starting with Kovacs from the Cyberpunk forums:

Well… the only real way that sci-fi realism works – for me – is if the science fiction is invisible and ubiquitous.
Today, I could write a fully non-fiction or ‘legit literature’ fiction (e.g. non-genre) story using tech that, a decade or two ago, would have been cyberpunk. For example: 20 years ago if you wrote a murder mystery about a detective that could track a victim’s every thought and action the day they were murdered, all within 5 minutes or so, that would be sci-fi or even ‘magic’. Today, you just access the victim’s phone and scroll through their various social media profiles. Same with having a non-static-y video conference with someone halfway around the world; it used to be Star Trek, now it’s Skype. So how would this prog rock of sci-fi work? I suppose you tell a tale where the tech… doesn’t matter. It’s all about human relationships.
Ooooh I bet you think that’s boring, don’t you? Well, maybe. But we can cheat by playing with the definition of ‘human’.

I’m thinking about the movie Her. Artificial intelligence is available and there’s no paradigm shift. A romantic relationship with an AI is seen as odd… but not unimaginable, or perverse. There’s no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets [OP: do these sentences look familiar?]. Robot and Frank is another good example; it’s a story where the robot isn’t exactly needed, but it makes the story make more sense than if it were, say, a college student, Scent of a Woman style.
(huh… Scent of a Robot, anyone? Al Pacino piloting ASIMO?)
So I guess what I’m leading to is take the action-adventure component out of sci-fi. Take the dystopia out of cyberpunk. Take out the power fantasy elements. Take out the body horror. What are you left with? Something a little less juvenile? In order to develop this you’d have to have a really good dramatic story as a basis and sneak in the sci-fi elements. You can’t, by definition, rest on them.
Which is tough for me to approach, because I really like my space katanas.

Finally, what is futuristic realism not? Here, “X can be Y, but Y isn’t X.” Futuristic realism can use these things, but these things aren’t futuristic realism by themselves.

  • Hyper-realistic science fiction. As I said, visual authenticity started futuristic realism, but that’s not what it is anymore. Nowadays, that’s just straight ‘sci-fi realism.’
  • Hard science fiction. Futuristic realism can be hard or soft or anything in between; it’s the story that matters. Hell, you can write fantastic realism if you want to.
  • Military science fiction. Some people kept thinking sci-fi realism meant ‘hard military sci-fi’, which is why I rebranded the style ‘futuristic realism’. Military sci-fi can be futuristic realism, but a story simply being military sci-fi isn’t enough.
  • Rustic science fiction. After the whole spiel on /r/SciFiRealism when a whole bunch of people were angry that I kept posting images of robots in homes and hover cars instead of really gritty battle scenes and dystopian fiction, the pendulum swung way too far in the other direction. I have said that ‘the best way to write futuristic realism is to take Sarah, Plain and Tall and add robots’, but I didn’t say ‘the only way to write futuristic realism is… yadayada.’
  • Dark ‘n gritty science fiction. As aforementioned, some thought ‘sci-fi realism’ meant ‘dark and gritty science fiction’. And I won’t lie, it is easy for a realistic story to be dark and even gritty and edgy. But see above, I had to hit the reset button. 
  • Actionless science fiction. You’d think that, after all this bureaucratic bullshit, I’m trying to force people to write happy science fiction about neighborhood kids with robots. Not at all. In fact, you can have a hyper-realistic, dark and gritty hard military science fiction story that’s pure, raw futuristic realism. It depends on what the story’s about. A story about a space marine genociding Covenant scum, fighting to destroy an ancient superweapon, can indeed be futuristic realism. It just depends on what part of the story you focus on and how you portray it. Novelizing Halo isn’t how you do it. In fact, there’s a futuristic realist story I desperately want to read— a space age War and Peace. Something of that caliber. If you want to attempt that, then I think the first thing you’d have to do before writing is ask whether you can pull it off without turning it into a space opera. Take myself for example: fuck that noise. I’m not even going to try it. I know it would fast become an emo Gears of War if I tried to write it. It’s not supposed to be Call of Duty in Space, it’s a space-age War and Peace. There are twenty trillion ways you can fuck that up.

Try to think back to the last major sci-fi film, video game, book, or short that didn’t have one of the following—

  • Someone brandishing a weapon
  • A chase sequence
  • Fight sequence
  • Military tech wank
  • Paramilitary tech wank
  • Wide shots over either a city, alien planet, or space vehicle
  • Over-exposed mechanics or cybernetics
  • Romance between lead character and designated lover, usually as a result of the two working together to overcome the Big Bad and realizing they have feelings for each other
  • High-octane stakes, where the life of the protagonist or someone the protagonist cares about is at risk
  • Death of the antagonist, someone close to the protagonist, or the protagonist him/herself
  • Actions causing death in the first place
  • Bands of mooks for someone to mow down
  • Stakes where one side (e.g. space navy; evil megacorporation, warlord, etc.) has to suffer a total, epic defeat in order for the plot to be resolved, usually in the form of a climactic and tense battle

I’m not trying to be a creativity fascist; I’m merely attempting to define what futuristic realism and slice of tomorrow fiction aren’t. Hell, I’ve even said that you can have a whole bunch of these things and still come off as futuristic realism. It’s all about execution and perspective.

I suppose, what I’m trying to get at is that if you want to write futuristic realism and slice of tomorrow fiction, you have to ask yourself a very basic question: “Can the central plot be resolved with a gun battle without any major consequences?” Replace ‘gun’ with any weapon of your choice— space katana, quark bomb, logic bomb, giant mecha— the point remains the same. If the answer is no, you may have futuristic realism.

You can resolve just about any plot with a good shot from a Lawgiver; the key phrase is “without any major consequences”. Filling a flatmate’s skull with a magnetically-pressurized ionic plasma bolt because he’s not happy over how many sloppy sounds you make with your “sexbot sexpot” is going to have worlds different consequences from gunning down Locust filth in an interstellar war— unless, of course, you go deep into the psychological profile of someone who’s spent their life killing aliens and has never before contemplated why they’re doing it, and who suddenly gains a keen interest in understanding the other side, particularly those not directly participating in the war.

It’s easy to say your story’s about the human condition more than it is about the science and technology, and I suppose that would make it more highbrow than a lot of other sci-fi. But futuristic realism/slice of tomorrow doesn’t have to be highbrow either. 

So let me use a story instead of just similes, analogies, and overbloated rules of thumb.

You have three characters: Phil, Daria, and Edward. Phil and Daria live in New York City in 2189. A war for independence has just broken out between Earth forces and Martian colonists. A Martian separatist has masterminded a terrorist attack in New York (what else is new?). What neither Daria nor Phil knows is that their Martian penpal, Edward, is also the terrorist who masterminded the attack. This sounds like a traditional sci-fi plotline in the making.

How do you make it into a traditional military sci-fi story? Simple— Phil and Daria sign up for military service, get their own mech suits, and start rolling across Cydonia, where they fight communist Martian droids at the now-terraformed, statue-like Face on Mars. The climax involves them facing down Edward and realizing their friendship has been put to the ultimate test by the war. That’s a story that’s definitely character-driven and engaging— but it’s not necessarily “slice of tomorrow” fiction.

How do you turn it into a slice of tomorrow story? You don’t have to change a damn thing, except where you focus the story. For example, Phil and Daria, in the short period after the attack and before they join the military, may be utterly shellshocked by it. They’ve seen dead and injured people, and a major landmark has been destroyed. They just want a moment to be thankful for the fact they’re alive. They may want to contact Edward to get his opinion on events, considering he’s a Martian and Martians are implicated in the attack. They’re just keeping up with the news to find out more about what happened, and they grow ever angrier as time goes on. The climax could be them actually joining the military, or maybe something else entirely— something not involving the military at all. The terrorist attack was just a background event to their daily lives— a pretty big and impactful event, but a background event nonetheless. The real drama lies elsewhere.
It’s drama you can’t just shoot at to make it go away, either. Thus, the story’s ultimately resolved well before the first mech suit ever gets to fire a shot at separatists.

Even writing that mini-blurb proved my point, because I was going to write something after “the real drama lies elsewhere”. Something more specific than “it’s a drama you can’t just shoot at to make it go away, either.” But as I typed it out, I could actually hear the groans of boredom in my head— “if this were an actual sci-fi story,” I thought, “having that plotline would just evoke nothing but frustration.” And what was that plotline?

Phil or Daria calling their parents. That’s it! The actual conversation would follow recent events, yes, but that’s the climax. When I wrote that out, I thought “That’s the dumbest/gayest thing I’ve ever heard” because it sounded a bit like a waste. I have this nice, big universe filled with juicy potential sci-fi action— I even have a fantastic trigger that present-day readers can relate to in the form of a traumatic terrorist attack— and I spent it by having one of the lead characters calling Mommy to wish her a tearful Merry Christmas?

That doesn’t sound sci-fi at all.

And that’s the point! Because even though it doesn’t sound like sci-fi, it still is sci-fi.

Sci-Fi Realism: Candid, prosaic, and/or photographic sci-fi
Futuristic Realism: Science fiction as told by F. Scott Fitzgerald
Slice of Tomorrow: Science fiction as told by the Hallmark Channel
Science Non-Fiction: Neil Armstrong’s autobiography

Debating Basic Income

Why I Think UBI Will Actually Be Social Credit-Based Income

While I’m not one of the reactionary Luddites who claim AI is suddenly incapable of doing anything, or only as capable as looms and tractors ever were, and I’m not going to bother using the same an!capistan arguments against basic income that clearly aren’t swaying anyone (I don’t know why anarcho-capitalists and libertarians even bother), I will say that we’re giving basic income too much credit.

Keyword: credit. That’s what I’m leading into. Whenever I promote Vyrdism, I also mention why I don’t trust basic income— the State, which is the agency that will distribute said income, is not and never has been altruistic. They’re not going to give out a basic income unconditionally, and if you believe they will, you’re wrong. I know it’s your opinion, but your opinion is wrong. Literally 8,000+ years of experience with the ruling owner class proves you’re wrong— there will be conditions, even if the elite says there won’t be.

And China gave me the idea as to what that condition would actually be.

China is allegedly bringing out a social credit system, and your social credit score determines your ability to function in modern society. That sounds to me like the perfect opportunity to bring about a basic income— your social credit score determines the amount of your income. Lose too much social credit and you might be cut off from the basic income, and the justification will be “you’ve proven that you can’t be helped, even with a basic income.” So yes, you’ll get a basic income, and you’ll allegedly be allowed to do whatever you please with it— but those in power are closely watching what you’re spending it on, as well as your actions in other parts of your life.

Let’s say that there’s a baseline everyone receives each month— $1,000— which supposedly cannot be altered. The State is promoting a ‘healthy’ lifestyle. In other words, if you buy too many greasy foods and sugary snacks, your social credit takes a hit and you might get less income. It’s not going to be overt— the easiest way to take money away from you while also keeping up with the “unconditional” basic income would be to penalize you elsewhere, such as with higher taxes and fees for goods. You may still receive $1,000 a month, but your expenses jump from $800 to $1,000.

That’s still manageable, and your basic income can still cover most of it. However, if you subscribe to anyone the ruling elite doesn’t like on Facebook, that’s more of a hit. Hell, if the ruling elite decides you can only use certain social media sites or search engines or only use certain ISPs and you defy them, you might get a big hit to your social credit score. Your $1,000 income becomes worthless as your expenses reach $2,000 or more a month. And I don’t even think I need to say what would happen if you protested against the government or its corporate-bourgeois masters. And by that point, it’s too late, because artificially intelligent technotarians have already rendered human workers utterly obsolete, meaning there’s no other way to improve your social credit score again other than to accept whatever the State demands.

Of course, it works in the other direction as well. If the State tells you to jump, you ask “How high?” You become their drone, doing absolutely anything and everything you can to be a Model Citizen™. You may be rewarded with relaxed expenses, effectively increasing your basic income every month from $1,000 to $1,200.
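The mechanism described above— a nominally fixed payment that is eroded or padded through expenses rather than through the payment itself— can be sketched in a few lines. To be clear, this is a purely hypothetical toy model of my own: the score range, the $1,500 penalty scale, and the linear shape of the penalty are illustrative assumptions, not any real or proposed policy.

```python
# Hypothetical toy model of a social-credit-adjusted "basic income".
# All numbers and names are illustrative assumptions, not a real policy.

BASELINE_INCOME = 1000.0  # the "unconditional" monthly payment
BASE_EXPENSES = 800.0     # ordinary monthly cost of living

def monthly_expenses(social_credit: float) -> float:
    """Penalties arrive as inflated expenses, not a reduced payment.

    social_credit is a score in [0, 1], where 1.0 is a Model Citizen.
    At a perfect score, expenses stay at $800; as the score drops,
    taxes and fees push them toward $2,000 or beyond.
    """
    return BASE_EXPENSES + (1.0 - social_credit) * 1500.0

def disposable_income(social_credit: float) -> float:
    """What actually remains of the 'unconditional' $1,000."""
    return BASELINE_INCOME - monthly_expenses(social_credit)

# A compliant citizen keeps a surplus; a dissident goes underwater,
# even though both nominally receive the same $1,000 every month.
print(disposable_income(1.0))   # 200.0
print(disposable_income(0.2))   # -1000.0
```

The point of the sketch is that the headline payment never changes, so the income can still be advertised as “unconditional” while the effective transfer swings wildly with the score.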

Now if you ask me, we are going to see a universal basic income in our lifetimes. Not even just in our lifetimes, but very soon. And it’s not going to hit the ground running as a totalitarian social regime— the conditions will creep in over time.

I’m not against basic income. I just recognize the potential for abuse. Basic income-esque schemes have been tried throughout history, even though they’ve never been called basic income. And always, they’ve been part of a “deal” rather than being unconditional. For example, with feudalism, you need only work for the local lord and you get free protection. It’s just that feudalism also gave us serfdom, and basic income could very well lead to a dystopia of its own— one that few proponents seem willing to consider, because they’ve embraced the false dichotomy that anything other than basic income is a dystopia as well.

And if you’re alright with this or already accepted that basic income was never going to be unconditional, then fine; I’m not talking to you. I’m talking to the wide-eyed idealists who still believe it’s an end in and of itself instead of a means to an end.

“But Yuli,” one might ask, “isn’t this more of a critique against a social credit score?”

Yes, of course. My point is that, at least in our current mode of existence, the two will likely be intertwined. We won’t see UBI without a social credit score— it might even prove to be one of the compromises that must be made!

So in summary, I don’t blindly trust basic income. There’s been no proper debate on it, because the opposing argument is almost nothing but an!cap whinging about how taxation is theft, welfare is Stalinism, and the very-thinly-veiled “Tyrone’s just going to buy crack and beer and play Call of Duty all day on my paycheck.” That backfires and drives more people toward basic income by making it seem as if only an!caps and closeted fascists oppose it. This, in turn, makes the Left look even more like the Statist Sheep the Right so often claims they are.
A legitimate concern is that the ruling elite won’t make it unconditional because there is literally no evidence in history of them being altruistic in such a way. China’s social credit score is almost certainly what basic income is going to be tied to.

One last word: I’m not against basic income. I know I’m repeating myself, and I know most people are smart, but I’ve long since become cynical enough to realize that I must keep repeating this, as there will always be someone who decides that I’m actually a denizen of the aforementioned An!Capistan all because I dared to say anything against UBI.

If you want a true alternative to the current mode of existence, look to Vyrdism. Maybe read this: OPINION: Why I am pro-Vyrdism and not pro-Universal Basic Income (UBI).