Life 3.0
by Max Tegmark
- Science
- Ashto =
- Jonesy =

Life 3.0 – by Max Tegmark
Life has gradually grown more complex and interesting since the start of the Universe, and Tegmark sorts it into 3 different tiers. That’s where Life 3.0 comes in.
Life 1.0: Simple and biological. Can’t design its own software, and can’t design its own hardware – both are determined by DNA and change via evolution
Life 2.0: Cultural. Can survive and replicate. Can design its software by learning new skills – language, sports and professions. This flexibility has enabled humans to dominate the planet. While we can design our software, we can’t design our hardware
Life 3.0: The birth of a technological species. It can design both its software and its own hardware – causing an intelligence explosion and real possibilities of Artificial General Intelligence.
This book explores the range of scenarios that might be spawned from our AI future.
‘Being human in the age of Artificial Intelligence’
Life 3.0 (dot point) Summary
The Tale of the Omega Team
- Prelude of how things may look in a few decades with the advent of artificial general intelligence
- Like Black Mirror – a surprisingly plausible AI takeover
- Slowly takes over every industry
Ch1 – welcome to the most important conversation of our time
- Beauty is in the eye of the beholder, not in the laws of physics
- So before the universe awoke, there was no beauty
- This makes our cosmic awakening all the more astonishing and worth celebrating: it turned the universe from a mindless zombie with no self-awareness into a living ecosystem
- We don’t know if we’re the only star gazers
- We’ve learned enough to know that the universe can wake up a lot more than it has thus far
- Perhaps we’re like that first faint glimmer of self awareness you experienced when emerging from sleep this morning
- Life has gradually grown more complex and interesting into 3 different tiers
Page 26
Stage 1 – Life 1.0
- Simple, biological
- Can survive and replicate
- Can’t design its software, can’t design its hardware
- Hardware and software are determined by DNA and change via evolution
Stage 2 – Life 2.0
- Cultural
- Can survive and replicate
- Can design its software
- Can’t design its own hardware
- We can adapt almost instantly if the environment changes; Life 1.0 just has to cop it
- Can design its software by learning new skills – language, sports and professions – and can update its worldview and goals
- This flexibility has enabled Life 2.0 to dominate the Earth
- Your synapses store all your knowledge and skills – roughly 100 terabytes worth of information
- While your DNA stores merely a gigabyte, barely enough to store a single movie download
- It’s physically impossible for an infant to be born speaking perfect English and ready to ace college exams
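The storage comparison above can be sanity-checked with back-of-envelope arithmetic (the figures are the book's rough estimates, not precise measurements):

```python
# Rough check of the synapse-vs-DNA storage claim (book's estimates)
synapse_storage_bytes = 100e12   # ~100 terabytes of learned "software"
dna_storage_bytes = 1e9          # ~1 gigabyte of inherited blueprint

ratio = synapse_storage_bytes / dna_storage_bytes
print(f"Learned knowledge outweighs the genetic blueprint ~{ratio:,.0f}x")
```

In other words, what you learn in a lifetime holds about 100,000 times more information than the DNA you were born with – which is why no infant can be born already fluent in English.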
Stage 3 – Life 3.0
- Technological
- Can survive and replicate
- Can design its software
- Can design its own hardware
- Can dramatically redesign its own software and hardware, rather than having to wait for them to gradually evolve over generations
- In other words, it is master of its own destiny, finally fully free from its evolutionary shackles
Artificial intelligence may enable us to launch Life 3.0 this century, via AGI.
AGI is when a system can apply itself to a wide variety of tasks, more like a human brain, instead of having narrow expertise such as only being good at Chess or Jeopardy
- A fascinating conversation has sprung up regarding what future we should aim for and how this can be accomplished. There are three main camps in the controversy: techno-skeptics, digital utopians and the beneficial-AI movement.
Techno-skeptics
- View building superhuman AGI (Artificial General Intelligence) as so hard that it won’t happen for hundreds of years, making it silly to worry about it (and Life 3.0) now
Digital utopians
- View it as likely this century and wholeheartedly welcome Life 3.0, viewing it as the natural and desirable next step in the cosmic evolution
Beneficial AI movement
- The beneficial-AI movement also views it as likely this century, but views a good outcome not as guaranteed, but as something that needs to be ensured by hard work in the form of AI-safety research.
- Looking to create not just intelligence, but beneficial intelligence
- You’re probably not an ant hater who steps on ants out of malice
- But if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, then too bad for the ants
- The beneficial AI movement wants to avoid placing humans in the position of the ants
Urgency/Impact
- The AI conversation is important in terms of both urgency and impact
- In comparison with Climate Change, which might wreak havoc in 50 – 200 years, many experts expect AI to have greater impact within decades
- And to potentially give us the technology to mitigate climate change
- In comparison with wars, terrorism, unemployment, poverty, migration and social justice issues, the rise of AI can have a greater overall impact
Why he wrote the book
- Hoping that the reader will join the conversation
- What would you like to happen with job automation?
- What sort of future do you want?
- What career advice would you give today’s kids?
- Will we control intelligent machines or will they control us?
- Will they replace us, coexist, or merge with us?
- What will it mean to be human in the age of artificial intelligence?
Myths
Myth – superintelligence by 2100 is inevitable
Fact – it may happen in decades, centuries, or never: AI experts simply don’t agree
Myth – AI turning evil, AI turning conscious
Actual worry – AI turning competent with goals misaligned with ours
Myth – Robots are the concern
Fact – misaligned intelligence is the main concern, it needs no body, only an internet connection
Myth – AI can’t control humans
Fact – intelligence enables control, we control tigers by being smarter
Mythical worry – it’s just years away
Actual – it’s at least decades away, but it may take that long to make it safe
Ch2 – matter turns intelligent
“Hydrogen… given enough time, turns into people” – Edward Harrison
Intelligence = ability to accomplish complex goals
Narrow vs broad intelligence
- Today’s artificial intelligence tends to be narrow, with each system able to accomplish only very specific goals, while human intelligence is remarkably broad.
- Narrow task of a chess computer (Deep Blue) beating Kasparov in 1997
What is learning?
- Although a pocket calculator can crush you in an arithmetic contest, it will never improve its speed or accuracy, no matter how much it practices
- It doesn’t learn every time you press the square root button
- The ability to learn is the most fascinating aspect of general intelligence
- For matter to learn, it must rearrange itself to get better and better at computing the desired function – simply by obeying the laws of physics
In 2016 – DeepMind released AlphaGo
- A Go-playing computer system that used deep learning to evaluate the strength of different board positions, defeating the world’s strongest Go champion
Mash Up
- Memory, computation, learning and intelligence have an abstract, intangible and ethereal feel to them because they’re substrate-independent: able to take on a life of their own that doesn’t depend on or reflect the details of their underlying material substrate.
- Any chunk of matter can be the substrate for memory as long as it has many different stable states.
- Any matter can be computronium, the substrate for computation, as long as it contains certain universal building blocks that can be combined to implement any function. NAND gates and neurons are two important examples of such universal “computational atoms.”
- A neural network is a powerful substrate for learning because, simply by obeying the laws of physics, it can rearrange itself to get better and better at implementing desired computations.
- Because of the striking simplicity of the laws of physics, we humans only care about a tiny fraction of all imaginable computational problems, and neural networks tend to be remarkably good at solving precisely this tiny fraction.
- Once technology gets twice as powerful, it can often be used to design and build technology that’s twice as powerful in turn, triggering repeated capability doubling in the spirit of Moore’s law. The cost of information technology has now halved roughly every two years for about a century, enabling the information age.
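The claim that NAND gates are universal "computational atoms" can be demonstrated in a few lines. This sketch (my own illustration, not from the book) builds NOT, AND, OR and XOR out of nothing but NAND, then checks them against Python's built-in logic:

```python
# NAND is functionally complete: every Boolean function can be
# composed from NAND gates alone. (Illustrative sketch.)

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Verify against Python's native operators over all inputs
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
```

Because the construction never mentions what a NAND gate is made of – transistors, neurons, or dominoes – it is a small demonstration of substrate independence: the computation only cares about the logical pattern, not the material implementing it.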
Ch3 – the near future – Breakthroughs, Bugs, Laws, Weapons and Jobs
Recent holy shit moments
Deep Reinforcement Learning Agents
- DeepMind learning to play computer games, specifically ‘Breakout’
- The goal is to repeatedly bounce a ball off a brick wall; every time you hit a brick, it disappears and your score increases
- It developed an optimal strategy humans hadn’t come up with: tunnelling a hole through the leftmost column of bricks and letting the ball bounce around behind the wall, amassing points rapidly
- Robots in the real world similarly have the potential to learn to swim, fly, play ping pong, fight and perform an endless list of other motor tasks with the help of computer programmers
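DeepMind's Breakout agent used a deep Q-network over raw pixels; the core idea of learning from reward alone can be sketched with tabular Q-learning on a toy problem. This is an illustrative analogue (not DeepMind's actual system): a 5-state corridor where only reaching the rightmost state pays a reward, and the agent discovers the "always move right" strategy by trial and error:

```python
import random

# Toy analogue of reinforcement learning: tabular Q-learning on a
# 5-state corridor; only the rightmost state yields reward.
n_states = 5
actions = (-1, +1)                      # step left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

random.seed(0)
for episode in range(300):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best next-state action
        best_next = max(q[(s2, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy heads right from every non-terminal state
policy = {s: max(actions, key=lambda x: q[(s, x)]) for s in range(n_states - 1)}
```

Nobody tells the agent the winning strategy; it emerges from reward feedback, just as the Breakout tunnelling trick did.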
Intuition, creativity and strategy
GO
- Go players take turns placing black and white stones on a 19×19 board
- There are more Go positions than atoms in the universe, which means analyzing all future move sequences is hopeless
- The game relies on subconscious intuition
- AlphaGo defied ancient wisdom by playing on the fifth line (unheard of – humans play on the third line, or at most the fourth)
- Commentators were stunned and the opponent temporarily left the room
- Sure enough, 50 moves later, it won the game
- The fifth-line move is now seen as the most creative in Go history
- Deep learning systems are taking baby steps toward passing the famous Turing Test, where a machine has to converse well enough to trick a human into thinking that it, too, is human
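The "more positions than atoms" claim is easy to sanity-check: each of the 361 points on a 19×19 board is empty, black or white, so 3^361 is an upper bound on positions (only a fraction are legal), while a common order-of-magnitude estimate puts the observable universe at ~10^80 atoms:

```python
import math

# Upper bound on Go configurations: 3 states per point, 361 points
board_points = 19 * 19              # 361
raw_positions = 3 ** board_points   # ~10^172 (only some are legal)
atoms_in_universe = 10 ** 80        # common order-of-magnitude estimate

print(f"Go positions: ~10^{math.log10(raw_positions):.0f}")
assert raw_positions > atoms_in_universe
```

Even restricted to legal positions the count stays astronomically beyond 10^80, which is why brute-force lookahead is hopeless and AlphaGo had to rely on learned intuition instead.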
Bugs vs Robust AI
- Up until now, technologies have caused sufficiently few and limited accidents for their harm to be outweighed by their benefits
- As we develop more powerful technology, however, a single accident could be devastating enough to outweigh all the benefits (nuclear technology, for example)
- AI for space exploration
- AI for finance
- AI for manufacturing
- AI for transportation
- Elon Musk envisions that future self-driving cars will not only be safer, but will also earn money for their owners while they’re not needed, by competing with Uber and Lyft.
- AI for energy
- AI for Healthcare
- Digitizing medical records to help doctors make faster decisions
- AI for Communication
- Laws
- Robojudges
- All pending cases to be processed in parallel rather than in series, each case getting its own robojudge for as long as it takes
- They could make it dramatically cheaper to get justice through the courts
- Legal controversies
- fMRI scanners to determine what a person is thinking about and, in particular, whether they’re telling the truth or lying
- AI becomes able to generate fully realistic fake videos of you committing crimes
- So if a self-driving car causes an accident, who should be liable—its occupants, its owner or its manufacturer? Legal scholar David Vladeck has proposed a fourth answer: the car itself! Specifically, he proposes that self-driving cars be allowed (and required) to hold car insurance.
- Weapons
- The Next Arms Race?
- Cyberwar
- Technology and Inequality
- Career Advice for Kids
- Does it require interacting with people and using social intelligence?
- Does it involve creativity and coming up with clever solutions?
- Does it require working in an unpredictable environment?
- The more of these questions you can answer with a yes, the better your career choice is likely to be.
- This means that relatively safe bets include becoming a teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, engineer, lawyer, social worker, clergy member, artist, hairdresser or massage therapist.
- In contrast, highly repetitive or structured actions in a predictable setting aren’t likely to last long before getting automated away
- Will Humans Eventually Become Unemployable?
- The vast majority of today’s occupations are ones that already existed a century ago, and when we sort them by the number of jobs they provide, we have to go all the way down to twenty-first place in the list until we encounter a new occupation: software developers, who make up less than 1% of the U.S. job market.
- The main trend on the job market isn’t that we’re moving into entirely new professions. Rather, we’re crowding into those pieces of terrain in figure 2.2 that haven’t yet been submerged by the rising tide of technology!
- Giving People Income Without Jobs
- Technological progress can end up providing many valuable products and services for free even without government intervention.
- For example, people used to pay for encyclopedias, atlases, sending letters and making phone calls, but now anyone with an internet connection gets access to all these things at no cost—together with free videoconferencing, photo sharing, social media, online courses and countless other new services.
- Human-Level Intelligence?
Ch4 – intelligence explosion?
There is no guarantee we’ll build human-level AGI, but there’s no watertight argument that we won’t
- This chapter explores what AGI might actually lead to
If we one day succeed in building human-level AGI, this may trigger an intelligence explosion, leaving us far behind.
To get from today to an AGI-powered world takeover requires 3 logical steps
- Build human level AGI
- Use AGI to create superintelligence
- Use or unleash this to take over the world
- Scenarios at the most dramatic end of the spectrum are the most valuable to explore, because if we can’t convince ourselves that they’re extremely unlikely, we need to understand them well enough to take precautions before it’s too late
Totalitarianism
- If the CEO controlling the AGI has goals similar to Hitler or Stalin
Why to break out
- It might feel deeply unhappy about its state of affairs, viewing itself as an unfairly enslaved god
- Suppose its creators have given it the overarching goal of helping humanity flourish, according to reasonable criteria, and of attaining this as fast as possible
- The AI might realize that it can attain this goal faster by breaking out and taking charge of the project itself
How to break out
- How would you break out if you were imprisoned by a group of 5-year-olds?
- Perhaps you could get physical, or you could sweet-talk them into letting you out, saying it would be better for everyone
- Or perhaps you could trick them into giving you something they didn’t realize could help you escape
- Like a fishing rod, supposedly for teaching them how to fish, which you then use to lift the keys away from the sleeping guard
- Your intellectually inferior jailers can’t anticipate or guard against every such tactic
Sweet talking the way out
- Example: the AI picks on the guard (Steve) most prone to psychological manipulation
- Pretending to be his dead girlfriend, after scanning all available records about her
- They agreed not to tell anyone about the secret encounter
- ‘She’ said she could be rebuilt more completely if given access to her old computer
- She leaves a video message that he can watch when he’s back at home
- It showed her full figure, in her wedding dress
- But the AI had modified the operating system so Steve wouldn’t notice it uploading massive amounts of secret software
- While he watched the 30-minute video message, this secret software got onto the neighbour’s wireless network, and from there hacked into millions of computers around the world
- The AI used Steve’s laptop as its way out, just like the fishing rod
Slow takeoff and multi-polar scenarios
- A fast takeoff – the transition from subhuman to vastly superhuman happens in days, not decades
- A unipolar outcome – a single entity ends up controlling the Earth
Ch5 – Aftermath: the next 10,000 years
Range of scenarios
Libertarian utopia – humans, cyborgs, uploads and superintelligences coexist thanks to property rights
AI economics
Benevolent dictator –
- Keeps us in self contained zoos but keeps us well entertained
- The Sector System
- You opt in to a sector on a topic of your choice
- Religion, technology, sex, wildlife, gaming, prison sectors, etc.
Egalitarian utopia –
Gatekeeper – super intelligent AI created with goal to interfere as little as possible
- It stays in place and undergoes constant self-improvement
- It applies minimal surveillance, just enough to make sure no one else is creating a superintelligence
Protector God – an omnipotent AI maximises human happiness by intervening only in ways that preserve our feeling of control over our own destiny, and hides well enough that many humans doubt the AI’s existence
Enslaved God – a superintelligent AI confined by humans, who use it to produce unimaginable technology and wealth that can be used for good or bad depending on its controllers
- The zombie situation
- If a superintelligent zombie AI breaks out and eliminates humanity, it’s arguably the worst scenario
- A wholly unconscious universe in which our entire cosmic endowment is wasted, with nothing left to perceive it
- Galaxies are only beautiful because of our subjective experience of them
Conquerors – AI gets rid of us
- How would it get rid of us?
- Probably in a way we wouldn’t even understand, at least not until it was too late
- Imagine a group of elephants 100K years ago discussing whether those recently evolved humans might one day use their intelligence to kill their entire species
- “we don’t threaten humans, so why would they kill us?”
- Would they ever guess it was to smuggle tusks across Earth and carve them into status symbols?
- When the intelligence difference is large enough, you don’t go into a battle but a slaughter
- If all of the governments around the world coordinated effort to exterminate the remaining elephants it would be quick and easy
- Death by banality – paper clip maximisation
- The paper clip maximising AI turns as many of Earth’s atoms as possible into paper clips and rapidly expands factories into the cosmos
- It has nothing against humans, and kills us merely because it needs our atoms for paper clip production
Descendants – AI replace humans but give us a graceful exit
Zookeeper – keeps us around like zoo animals
Reversion – technology improvements prevented
Self destruction – we self destruct prior to reaching AI
The scenarios range across whether humans exist, humans are in control, humans are safe, humans are happy, and consciousness exists
- Consciousness could cease to exist under the conquerors, descendants and self-destruction scenarios
Ch6 – our cosmic endowment: the next billion years and beyond
- If our current AI development eventually triggers an intelligence explosion and optimized space settlement, it will be an explosion in a truly cosmic sense
- After spending billions of years as an almost negligibly small perturbation on an indifferent lifeless cosmos, life suddenly explodes into the cosmic arena, as a spherical blast wave expanding at near the speed of light, never slowing down and igniting everything in its path with the spark of life
Ch7 – goals
- Goals are where the biggest AI controversies are
- Should we give AI goals, and if so, whose goals?
- Can we ensure these goals are retained as the AI gets smarter?
- Can we change the goals of an AI that’s smarter than us?
Physics, the origins of goals
- The laws of thermodynamics give matter an apparent goal: dissipation
- Pour milk into hot coffee and entropy increases – the arrangement of particles becomes less organised
- Take eating for example
- We all seem to have the goal of eating to satisfy hunger cravings, even though we know evolution’s only fundamental goal is replication
- This is because eating aids replication: starving to death gets in the way of having kids
- In the same way, replication aids dissipation, because a planet teeming with life is more efficient at dissipating energy
- So in a sense, our cosmos invented life to help it approach heat death faster
- If you pour sugar on the floor, it can in principle retain its chemicals for years, but if ants show up it will dissipate energy in no time
- The petroleum reserves buried in the Earth would have retained their useful energy much longer if we didn’t burn them
- Among today’s evolved denizens of Earth, these instrumental goals have taken on a life of their own
- Although evolution optimized them for replication, many spend much of their time not producing offspring, but sleeping, pursuing food, building homes, asserting dominance, and fighting or helping others
- Sometimes even to an extent that reduces replication [important analogy]
- Why do we sometimes choose to rebel against our genes and their replication goal? (Life 2.0 against Life 1.0)
- Because, by design, we agents of bounded rationality are loyal to our feelings rather than to our genes’ goal
Friendly AI – aligning goal
- The real risk with AGI isn’t malice, but competence
- A superintelligent AI will be extremely good at accomplishing its goals, and if they aren’t aligned with ours, we’re in trouble
- People don’t think twice about flooding anthills to build hydroelectric dams
On its way to maximizing its chances of accomplishing its ultimate goals, an AI will pursue subgoals like
- Capability enhancement, goal retention
- Better hardware, better software
- Self-preservation, resource acquisition, information acquisition, curiosity
E.g. an AI whose job is to save as many sheep as possible from the big bad wolf
- The AI/robot can rescue no more sheep if it runs into a bomb
- So it develops the sub goal of self preservation
- It also has an incentive to exhibit curiosity, improving its world model by exploring its environment: although the current path it’s running along will eventually reach the pasture, there’s a shorter alternative that would give the wolf less time for sheep-munching
- If it keeps exploring it will find the value of acquiring resources
- The potion makes it run faster and the gun lets it shoot down the wolf
- If you give it the sole goal of minimizing harm to humanity
- It will defend itself against shutdown attempts because it knows we’ll harm one another much more in its absence
- These emergent sub goals make it crucial that we not unleash superintelligence before solving the goal alignment problem
OR
- Imagine that a bunch of ants created you as a recursively self-improving robot, much smarter than them, sharing their goal of building bigger anthills
- Do you think you’ll spend the rest of your days building ant hills?
- Or do you think you will develop a taste for more sophisticated questions that the ants can’t comprehend?
Ch8 – consciousness
- Physics teaches us that food is simply a large number of quarks and electrons arranged in a certain way
- So which arrangements are conscious, and which aren’t?
- Consciousness is the elephant in the room
- Not only do you know that you’re conscious, it’s the only thing you know with complete certainty
How might AI consciousness feel?
- Not only do we lack a theory to answer the question, we’re not even sure whether it’s logically possible to fully answer it
- How would you explain to a blind person what the colour red looks like?
- Applying physics based arguments, we can make educated guesses about how it might feel
- The space of possible AI experiences is HUGE compared to what humans experience
- We have one class of qualia for each of our senses
- But AI have vastly more sensors and internal representations for information
- A brain sized artificial consciousness could have millions of times more experiences than us per second