Life 3.0 and the Path to AI Consciousness

I just read a section of Life 3.0 that discusses the three stages of life. Life 1.0 is life that can only overcome problems and obstacles through the slow evolution of its hardware. Life 2.0 is Life 1.0 plus the ability to rewire its own software, to learn new behaviors and overcome problems almost instantly. Life 2.0 is where humans currently stand. We are the first species that can learn languages to communicate and collaborate with each other, and impose culture and traditions on ourselves to live in ways that maximize our survivability.

Despite our intellect, however, our biggest shortcoming is that we still depend on centuries of evolution to overcome threats to our hardware. We live short lives and are afflicted with many diseases, some curable and some not, including conditions like cancer. But many people believe that humans will eventually figure out how to rewire our own hardware just as we can rewire our software. Once we reach that point, we become Life 3.0, and most people think AI is what will get us there.

Personally, after having used AI extensively for a year now, in development as well as research, I agree with the notion that AI will get us to Life 3.0. I don't speak from having done any groundbreaking research on the future of AI, nor do I have any bold predictions about what comes after transformer models. I only say this after observing Copilot at work in the CI/CD pipelines of GitHub repositories.

A GitHub Copilot Revelation

Although it might sound silly to use GitHub Copilot running in a CI/CD pipeline to justify why I think AI will bring about Life 3.0, hear me out. I recently got into open-source contributions on GitHub, so I found a handful of issues across some big open-source projects and decided to take them on. I pushed a couple of PRs, which were reviewed by the projects' maintainers and merged. In both PRs, the human maintainers left comments on my changes, either asking me to fix something or asking a simple question about a change they didn't understand. The comments were fine for the most part, just a little difficult to understand at first.

I moved on to pushing PRs for the next few issues, and on one of them I got comments almost instantly. When I went to check them, I realized they came from an automatic review by Copilot. I expected the comments to be mediocre, but I was shocked to find that every concern Copilot raised about my changes was completely valid and easy to understand. It even gave me the suggested changes I should make. It had done a better job than the human maintainers, who commented later asking for exactly what Copilot had already requested.

The Self-Improving Loop

What really struck me about this experience is how it demonstrates AI's ability to improve itself. Copilot was essentially debugging code that could have been written by another AI. Couple this with AI agents that write code autonomously, and you have a full development cycle in which AI constantly writes and reviews its own code: a fully automated loop of intelligence that continuously improves itself. The reason I think this leads to Life 3.0 is that I eventually see us becoming cyborgs who continuously upgrade ourselves, or being replaced by AGIs with their own self-improving robotic bodies.
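That write-and-review loop can be sketched in a few lines of Python. To be clear, this is a toy illustration of the control flow only, not any real Copilot or agent API: `generate_patch` and `review_patch` are hypothetical names, implemented here as deterministic stubs standing in for calls to an AI coder and an AI reviewer.

```python
def generate_patch(task, feedback=None):
    """Stub for an AI coding agent: produce a 'patch' for the task,
    incorporating the reviewer's feedback if there is any."""
    patch = f"fix for {task}"
    if feedback:
        patch += f" (revised: {feedback})"
    return patch


def review_patch(patch):
    """Stub for an AI reviewer (a Copilot-like check): return
    (approved, feedback). Here it approves anything already revised."""
    if "revised" in patch:
        return True, None
    return False, "handle the edge case"


def develop(task, max_rounds=5):
    """Alternate writing and reviewing until the patch is approved.
    This is the fully automated loop: no human in between."""
    feedback = None
    for _ in range(max_rounds):
        patch = generate_patch(task, feedback)
        approved, feedback = review_patch(patch)
        if approved:
            return patch
    raise RuntimeError("no approved patch within the round limit")


print(develop("the reported issue"))
```

With real models behind those two stubs, the same loop is what an autonomous agent plus an automated reviewer amounts to: each round's review feeds the next round's code, with no human required to keep it turning.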

The Only Missing Piece

It was at that point I realized that the only thing stopping us from being completely replaced by AI is its lack of consciousness: its dependence on being "activated" by a CI/CD pipeline or a human. Imagine an AI that didn't have to be turned on through some orchestration layer, or shut off right after it finished streaming a block of text; an AI that knew when to switch itself on or off. The excitingly scary thing is, the more you read up on it, the more possible it seems. All I'm waiting for is some cracked research team to come out with a jaw-dropping paper, or some geeked-out students to make AI conscious in a summer project of theirs. From there, there's really no going back.


These are my reflections after reading Life 3.0 by Max Tegmark. The book continues to challenge how I think about AI's role in our future.
