Intro
So that's Wise AI. But Wise AI is only part of the problem of getting AI and humanity working well together. In this chapter, I'll show three other parts of the problem, and what we propose for each.
Storytime
So: imagine we had some Wise AIs, dropped into the existing ecosystem of AI labs, hedge funds, militaries, the military-industrial complex, and so on. Probably some individuals would love to use the Wise AIs — the Wise AIs would help them live more meaningful lives, act as meaning coordinators, and so on. That’d be great.
But: lots of people won’t want to use the Wise AIs. Here are some examples:
- Imagine you run a hedge fund, and your job is to make the most money for your investors. You could get a Wise AI, but the Wise AI will recognize its moral situation as running a hedge fund and will refuse to do a lot of the things that’d actually make money. So, if you have the option of buying a not-so-wise AI instead, one that just does as it's told and makes money, you'd want that one, not the Wise AI.
- Same if you work for the Pentagon, or for the military of a country in a more precarious situation. You want to be defended by the most ruthless, sociopathic AI you can find!
- Let's say you're the campaign manager for a politician, in a country with intense ideological warfare. Your way of operating so far has been to get the voters angry and scared, then turn that into the idea that “the people have spoken” and they chose your candidate. Do you want to use a Wise AI to run such a campaign? To develop messaging for it? Even if the Wise AI tried to show the people a better path, using its reasons and values, the people are used to being scared and outraged. That’s their frame of mind, so neither the campaign manager — who’s used to winning a certain way — nor the people — who are used to thinking a certain way — would necessarily choose the better path right away.
So, because of all that, there'll be demand for non-wise AIs. The AI labs will continue to race to make those as well. They’ll race to super-intelligence, not super-wisdom. Super-wisdom might, at best, be a kind of a niche product for people who are already values-driven and who want to work with wise AIs.
Contents
So, clearly, wise AIs aren’t enough! At least three more things need doing.
- Popular change. People are used to thinking from fear and outrage in politics, and from goals and preferences in the market. They need an introduction to thinking in values and sources of meaning, for two reasons. First, it grows the market for Wise AI: as people understand their own flourishing in this more robust, less manipulable way, they’ll see that Wise AI helps them with this truer kind of flourishing. Second, it makes it possible to move away from the ideological warfare of current political systems. This popular change is the base layer of Full Stack Alignment.
- Race to superintelligence. Next is the race of the AI labs. Can we switch them from racing towards super-intelligence to racing towards super-wisdom? That's the middle layer of Full Stack Alignment.
- Geopolitics and finance. Finally, the top layer concerns those hedge funds that just want to make money, and the geopolitical actors who’d want to hire a sociopathic defense AI. Can we lessen or eliminate those dynamics?
In this chapter, I'll go through these three layers, and make some proposals, starting at the top. They’re a bit less worked out than Wise AI, but I hope they’re still useful.
Top Layer: Wise Collectives
Let’s turn first to those financial, geopolitical, and ideological actors. I want to start by pointing out how these actors compete.
Now, on the surface, financial actors win by making more money; geopolitical actors win wars; ideological actors win votes.
But there’s a deeper way to think about this.
See, you can think of any kind of organization, government, or company as a thing that's there to take care of certain people.