Inspiration for this post:
Automatically downgrading every single thing SBF believed in is an error. It’s important to actually think and figure out which things contributed to the fraud and which things didn’t.
Don’t be the guy who would have tried to cancel vegetarianism in 1945.
— vitalik.eth (@VitalikButerin) November 15, 2022
(My obtuse brain needed to be reminded that he’s referencing the fact that Hitler was a vegetarian).
So. Here’s my catalog of some of the bad ideas I’ve found lurking
around in the SBF and Effective Altruism universe. I’m not going to
make a claim about causal direction – which came first, the idea or
the fraud – because I think they were commingled. The ideas enabled
the SBF story, and the SBF story amplified the ideas. In any event, I
hope we can use this moment to permanently exorcise these intellectual
horrors.
I’m intentionally using strong language commensurate with the scale of
damage that was done.
Horror 1: Hard Consequentialism
There’s a sort of folk interpretation of consequentialism that goes
like this:
Try to do the most good for the most people.
That’s fine. I mean, who would possibly argue with that? It’s so inane
as to be pretty much useless. Or if useful, perhaps as one folk
aphorism in a collection of many others. Or perhaps cast as the virtue
of compassion.
But I’m taking aim at the SBF/EA interpretation of
consequentialism. Let’s call it hard consequentialism. It goes
something like this:
You can quantify the amount of good that you create in the world.
You have a moral obligation to maximize that number. This is your
primary moral compass. All decisions (but especially the important
ones) should be put through the following equation: multiply the
amount of good any action will create times the probability that the
action will succeed. This is the Expected Value of the action. Your
goal is to constantly search for actions with the highest Expected
Value.
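In symbols, that recipe amounts to roughly this (my notation, not any official EA formula): for each candidate action, compute

EV(action) = P(action succeeds) × Good(action)

and keep hunting for whichever action has the highest EV.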
Here’s how SBF described his decision-making after leaving Wall Street.¹
Sam Bankman-Fried: This is looking at 2017, when I left Jane Street.
Again, I don’t want to portray this as being more confident than it
was, because it wasn’t super confident — this is all me just trying
to make the best decisions I could, given incomplete information.
But basically I just got out a piece of paper and forced myself for
the first time in three and a half years — basically the first time
since I joined Jane Street — to think quantitatively and moderately
carefully about what I could do with my life. I just got out a piece
of paper and wrote down what are the 10 things that seem most
compelling to me right now, and evaluate the Expected Value of each
of them, just ballpark it.
(I think it is clear from the context elsewhere in the interview that
when he says “expected value” he means utility – creating good in the
world – and not financial earnings. But in any event, this point
stands on its own.)
Now, to state a non-controversial point:
It is patently impossible to quantify the amount of good you create
in the world or to even “ballpark it.”
That doesn’t mean you can’t create goodness. You just can’t
quantify it. Nobody has a Utility Created Score.
Not to mention that you definitely can’t quantify the second-, third-,
and nth-order effects of your actions on “total goodness.” Say you
donate a kidney to save a stranger’s life, and that stranger ends up
having a grandchild who’s a white-collar criminal. Do you lose points
in the Utility Game because of your donation? There are endless
thought experiments like this happening at this very moment among
Philosophy 101 college students.
I find it helpful to compare this dilemma to Hayek’s Knowledge
Problem. Just
like it’s too hard to centrally plan an economy due to the explosion
of agents and preferences and possibilities, it’s impossible to
centrally account for utility in practice.
In practice, that means it’s delusional to make any actual decision on
a purely consequentialist basis. You simply Can Never Know How One
Action Will Impact The Sum of Happiness in the World. Period.
So, what do you do? You try to do your best. You listen to your
conscience. You try to think clearly. You compare to other situations.
You put yourself in other people’s shoes. You draw from something
you’ve read. But there sure as hell isn’t a formula.
Because while we know that you can’t possibly sum up the Utility
Impact of your actions, we do know that honesty is good. We do know
that being pro-social is good. We do know that solving widely-accepted
problems