Ask a Futurist: Robotophobia & the old/new Human Role
The future’s so bright, we’re scared of the glare
Michael K. Spencer and I (Travis Kellerman) answer FAQs and concerns we often hear about the future roles of humans and machines. Consider this issue #1 of FutureSin’s Ask a Futurist series.
Q1: Why do we fear robots, AI, and being replaced?
Travis: In the Anthropocene (Era of Humans), every adaptation and change for our species has human intention behind it. Robots are not taking jobs, AI is not replacing us. Certain people and dominant companies have set an intention and are building tools to fulfill it.
Androids aren’t creepy; you simply don’t like the artist’s style.
Machines learn from prior states, from memories of past choice and error. When we insert training data, we are feeding them memory pools with intention attached. They learn from a limited history, like the simplified narratives we teach children about American “revolution” and pioneering. When we need machines to connect previous states (have memories), we set triggers and connect them to the experience, to the lesson of that state. We give them a form of emotion — one purely functional as a query tool.
Sound scary? It’s no scarier than advertising’s effect on young people (all people).
“This looks familiar. What do I know about this situation, and how should I respond? How does it make me feel?”
We build neural nets in the image of our own brains. We build in learning too, in the only way we know how: by feeling it. AI is still just running on numbers, doing as it’s told.
We’ve told it to look for patterns, though. Human brains are primarily pattern-matching machines. We need not worry about leading AI to the Uncanny Valley; they will find their own way. They will build bridges out from the boxes and walls we set around them.
Their intentions are our intentions — we should set them wisely, with dialogue among a collective human intelligence.
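To make the memory-pool idea concrete, here is a toy sketch in Python. It is an illustration only, not any real system: MemoryPool, record, recall, and affect_tag are all hypothetical names, and the machine “feeling” is nothing more than a functional tag stored with a past state and returned when a new situation looks familiar enough to fire the trigger.

```python
from difflib import SequenceMatcher

class MemoryPool:
    """Hypothetical illustration: memories as (situation, lesson, affect) triples."""

    def __init__(self):
        self.memories = []  # each entry: (situation, lesson, affect_tag)

    def record(self, situation, lesson, affect_tag):
        # Store a past state together with the lesson and the "feeling" we attach to it.
        self.memories.append((situation, lesson, affect_tag))

    def recall(self, new_situation, threshold=0.6):
        # "This looks familiar" -- find the closest remembered state, if any.
        def similarity(memory):
            return SequenceMatcher(None, memory[0], new_situation).ratio()

        best = max(self.memories, key=similarity, default=None)
        if best is not None and similarity(best) >= threshold:
            return best  # trigger fires: situation, lesson, and affect come back together
        return None  # nothing familiar enough; no trigger, no "feeling"

pool = MemoryPool()
pool.record("hand near hot stove", "withdraw quickly", "aversive")
print(pool.recall("a hand near a hot stove"))
# -> ('hand near hot stove', 'withdraw quickly', 'aversive')
```

The toy makes the point literal: the “emotion” is purely a query tool, and whatever the pool recalls depends entirely on the memories, and the intentions, we fed it.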
Michael: Not so fast, though. Human beings fear change, and lifelong learning is still a relatively new necessity in the post-modern environment. We know AI and robots don’t share our organic limitations, our limits of input and output, our unique needs for food, sleep, and socialization with its ups and downs.
We fear robots, AI, and automation also because we live in a ruthlessly capitalist era, where profits matter, not necessarily people.
We can easily imagine a world where the ultra-wealthy, with their exploding wealth inequality, want to “get rid of” the technological underclass.
In fact, we could see such a scenario in our lifetimes. We can imagine this happening because we subconsciously remember the criminal acts of human history, the violence of class struggle, and how technology creates haves and have nots.
Tech magnifies divisions in ways we cannot yet imagine with simple machine learning.
Sure, robots will be our friends! They will be great companions that help us embrace the chaos of life with data and clarity. But they will also be the workers, the maintenance system, the life support, the very infrastructure by which we will augment ourselves. That is a lot of reliance for human survival. We will be more and more cyborg with each decade. You can almost taste how 3D printers will customize products and how consumerism will be turned on its head. That’s scary for some people.
Q2: If humans are better off and have more resources now than ever before, who cares if some have more than others as long as everyone has their needs met?
Travis: Wealth inequality is a theft of human experience.
Dan Clay of Lippincott put it this way in Fast Company this January:
“If you want to know how the future will be, study how a billionaire lives today. It will be a world where things you don’t want to do will be done for you. [so you can engage in] what is fun and immersive.”
Our goal should be rapid, universal access to prosocial and humanist technologies. Wealth disparity will take the form of technology and AI disparity if the wealthy own and control the new means of production and wealth creation.
Data = money; AI = factories/banks/capital
Michael: If technology is a tool to empower and uplift humanity, having wealth inequality of the magnitudes we are seeing really does feel morally wrong.
What are human needs if not the will to be free?
We won’t live in a free world if a few humans control the state and the future course of things for the majority; it’s a violation of not just human rights, but of enlightened civilization itself. It means we have not outgrown our past hierarchies and inequalities.
If power corrupts, then AI will be weaponized so long as some humans decide what is right for the rest of people. We know AI will be weaponized because of human history. We know what human beings are capable of doing with greater power.
When you live in a world of autonomous killer robots, having the wrong opinions according to some state, or some person, could be lethal.
Wealth inequality will scale out of control. Our children will have to live in such a world because we didn’t correct it while it was in our power to do so. By 2025, that period will have passed. The ultra-wealthy will have the greatest motivation to enter transhumanist states in which we might no longer consider them ‘human’.
Q3: Are we at a reset point again in human history?
Travis: Population reduction can come in the form of cultural resets, reboots of human consciousness and cognition that correct our path. We wipe part of the slate and kill the parasitic processes and runaway apps soaking up our cognitive resources; left alone, they keep a Collective Intelligence OS from updating, patching, and recording itself.
I believe this from past resets in human history:
Humanity, as an adaptive, humanist-by-nature Singleton (Nick Bostrom’s term), is more powerful than any existential risk posed by an AI.
Michael: It’s entirely possible that AI, and what we could become, will be dangerous to the survival of our species.
I think of humanity as the agent responsible for the sixth great extinction of our planet, in a history that goes back billions of years.
We cannot herald our technological triumphs without being aware of the great extinctions in biodiversity we have caused. Our ascent to a dominant species on our home planet came through community. Our will to do good in the cosmos is in doubt; the galaxy is watching.
Whether we can survive our own transformation and our earnest attempts at rapid progress is not a certainty; it is a probability that depends on how well we cope through the dangerous periods. We might consider 2030 to 2050 a dangerous period of experimentation: biotech, self-manipulation, genetic enhancement, AI and neural-interface integration, and so forth.
Humanity’s ability to self-regulate is somewhat limited without more AI safeguards in place. ‘Human error’ could scale in too many ways and scenarios for humanity to be safe in even the first half of the 21st century.
Q4: What is more useful to predict — Utopia or Dystopia?
Travis: True humanists believe in human potential and our ability to reason, adapt, and solve.
There is always hope for the future.
In February’s Wired, Zeynep Tufekci wrote about free speech on social platforms. Engagement algorithms in social media are still young, as tools and as experiences, like early cars before safety features. Social-validation feedback loops can be changed. We already alter our nutrition and diet: look how much we still learn and change on the fundamentals (biohackers unite).
Balanced engagement (dialogue) and agreement on humanist, cognitively healthy outcomes are coming.
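To ground that claim, here is a minimal sketch, assuming a hypothetical feed-ranking function (rank_score, validation_weight, and dialogue_weight are invented names; no real platform’s algorithm is implied). The point is only that an engagement algorithm is a set of weights someone chose, and the weights can be changed.

```python
def rank_score(post, validation_weight=1.0, dialogue_weight=0.0):
    # Hypothetical feed-ranking function. Today's defaults effectively
    # weight social validation heavily; a "cognitively healthy" feed
    # could weight genuine dialogue instead. Both are just parameters.
    validation = post["likes"] + post["shares"]  # social-validation loop
    dialogue = post["distinct_reply_threads"]    # back-and-forth conversation
    return validation_weight * validation + dialogue_weight * dialogue

post = {"likes": 900, "shares": 120, "distinct_reply_threads": 14}
print(rank_score(post))                                              # validation-driven: 1020.0
print(rank_score(post, validation_weight=0.1, dialogue_weight=5.0))  # dialogue-driven: 172.0
```

Re-weighting requires no new science, just a different choice about what the feed optimizes, which is the safety-features analogy above.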
Looking into a humanist future means seeing the ways humans interact with the world — no matter what the frame, issue, or focus. This means understanding all humans, within our cognitive capacities.
We cannot rely on the simplified models and categories/labels of a few elites. They offer a narrow and unrepresentative set of signals on our progress.
Boldness has its place in the greater strategy. Extreme views and actions by those in positions of power (political and cultural) subject them to great and inhuman scrutiny. The reactive feedback loops, the way we vilify them, are accelerated.
Yet the more we share and expose ourselves, the less others can critique. They are forced to recognize humanity — ours and theirs. We own our weaknesses and expose the pressures, the unrealistic expectations at odds with our role.
In politics especially, this awakening will happen. Of this futurism I am confident. It must — through a revolution of the culture, most likely. It will spill into business and other realms.
We will look back in disbelief that we chose vilification and the reduction of our cognition over knowledge and understanding.
Michael: If AI is a “tool” to augment life as we know it and better organize our data in online and physical environments, then AI is the “fire” that can bring humanity to the brink of Utopia or Dystopia.
We are likely to see and experience both, in different ways and sometimes simultaneously.
China, the U.S., and places like Germany and Japan all have different values and different directions. Those values will guide their implementation of AI and of exponential technologies like biotechnology, 3D printing, neural interfaces, genetic enhancement, robotics, and so much more yet to come.
The lack of regulation for algorithms — from privacy to autonomous killer robots — should be a cause for concern. What happens when AI becomes too organized for us to understand and regulate?
There are a lot of unanswered questions about how man will choose the ethics of the machine.
We know the machine has the potential to evolve past us. We’re animals with hopelessly short-term goals, but with AI we may be better able to play a long end-game for humanity’s shared prosperity and survival, and learn to be custodians not only of our planet but of the Galaxy, as all species must learn to do. The things we do as a species to prevent our extinction may all finally become the things we do to contribute positively to galactic life. We are not alone; recent discoveries of how common planets are in our galaxy’s solar systems assure us of that.
The 21st century may be one of the most critical in how humanity evolves with technology, because it’s our collective adolescence. It’s a scary world where the whims of Jeff Bezos, Mark Zuckerberg, and Elon Musk can help shape our entire future as a species.
We need to be optimistic about the future, but also aware that we are creating fundamental feedback loops in how we relate to technology. The systems we put in place, such as the web, are a door into the future and influence everything else. Paradigms are evolving. How we implement blockchain, AI, 3D printing, robotics, biotechnology, and other exponential technologies could empower and simultaneously imprison us in unique ways. Evolutionary dead-ends or liberating factors for us to discover a new humanity — it all depends on how we respond, what we choose.