
AGI

Recently, Dario Amodei, the CEO of Anthropic (the company behind Claude), claimed that AGI would arrive in either 2026 or 2027.

Sam Altman, the CEO of OpenAI, claims AGI is expected sometime in 2025.

Regardless of whether you believe AGI is truly around the corner, or whether this is just an attempt to raise capital, I think it’s important to take a step back and ask a few questions:

What’s the correct balance between freedom and security?

An AGI that merely matches the performance of an average human would be an incredible tool, simply because you could theoretically run those same tasks at the speed of the computer rather than the speed of our fleshy neurons.

Things which used to take time could be carried out incredibly fast.

On the one side, it means that activists fighting corrupt governments would be able to distribute the truth, analyze corruption, and create incredibly sophisticated plans to bring the government back to the side of the people.

On the other side, it means that terrorists fighting to undermine public safety would be able to warp the truth via propaganda, develop hard-to-crack protocols that prevent their cells from being detected and squashed, and increase the daily threats that citizens have to face.

  • Would citizens benefit from AI being regulated? Or does that increase their risk?
  • Does regulation cause a centralization of power to the ideas of one ideological group?
  • How easy is it to access an unregulated version of AI?
  • Is unregulated AI or regulated AI more likely to stay competitive in the intelligence landscape?

What does it mean to be intelligent?

If an AGI can recite the facts from any history book, pass all college entrance exams, and correctly diagnose fractures from X-rays, does that make it intelligent?

There’s no question that AI is becoming increasingly capable at tasks humans have already solved.

Maybe it’s just “copium” on my part, but… I have yet to see true examples of “creativity”. And maybe it’s because creation is more than just pattern matching.

What is truly valuable?

Let’s make the following assumptions:

  • AGI, as predicted, will be able to carry out 90% of the tasks humans are currently employed to do faster and more effectively.
  • AGI, as predicted, will be able to discover and implement 100% of the follow-up tasks and value sources that are first- and second-order effects of this transformation.
  • Robots, as predicted, equipped with AGI will encroach on the remaining 10% of tasks.

What would be of value?

  • Intelligence / knowledge would mean nothing.
  • Production would be worth nothing.
  • Labor would be worth nothing.

As far as I can see, if production / intelligence is unbounded, then there are only a few valuable things that remain:

  • Social Influence
  • AGI Influence
  • Ownership over raw resources

Value in other things is only retained if production, intelligence, or access to either is bounded.

Who would make those decisions of limits?

What are our unalienable rights?

Consider the same assumptions as above.

  • Do we each have equal rights to AGI?
  • If not, what determines rights to AGI?
  • Do we each have equal rights to production?
  • If not, what determines rights to production?

Currently, the answer is no: rights are determined by cash, and cash is earned by creating value.

If AGI means we as humans can no longer create value, cash becomes more or less worthless.

Where does that leave us?

Where does this leave us?

Seriously, if you look back, it leaves us asking some of the most foundational philosophical questions we’ve been asking since the dawn of time.

And it terrifies me to think that we haven’t found an answer to those questions we can all agree on.

If AGI is truly around the corner, we need to get our act together and think.


Like this? Join the email list.

Micro-thoughts on operational strategy straight to your inbox.

* No, we don't spam. We hate spam. A lot.
