Intro

Hi. This is the first of three short video essays about how AI is currently on the wrong path, and how it's set to make some of the biggest problems of the 20th century even worse.

“Alignment” is one of the words people use for trying to fix this problem. There’s a lot of talk about “aligning AI with human values”, or “aligning AI with human flourishing”.

(examples)

So, what do these people mean by human values? What do they mean by human flourishing?

Mostly these terms are left undefined. But I believe that taking them seriously is actually where to start in addressing these concerns.

I'm Joe Edelman. I'm known for XYZ. So I'm the guy to clear this up. I'll be assisted in this series by Ellie Hain, Joel Lehmann, Oliver Klingefjord, and Ivan Vendrov.

Contents

This is a video essay in three parts.

Hope that makes sense. Let's go ahead with Chapter One.

It Matters How We Talk of Flourishing

So. In this first part, I want to show something that might sound super-abstract. But I'll try to show it in a concrete way.

The thing I want to show is that it matters how we talk about flourishing.

How we conceptualize what we want out of life, the vocabulary we use for talking about it, makes a huge difference.

This is where we have to start.

Many of you probably know that “AI Alignment” is a field, and that people in this field use vague terms like "aligning AI with human values" or "aligning AI with human flourishing."