9 Comments

Just on the beauty point, I find Berkson's paradox especially interesting, as in the phenomenon of models and the like thinking they're not beautiful enough. It applies elsewhere too.
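
For anyone who hasn't seen the selection effect spelled out, here's a minimal sketch of Berkson's paradox, with invented variables ("beauty" plus a stand-in second trait, "drive") and an invented selection cutoff: two traits that are independent in the general population become negatively correlated once you condition on being selected for their sum.

```python
import numpy as np

# Minimal Berkson's-paradox simulation (all quantities hypothetical):
# suppose getting selected into some profession depends on a combination
# of beauty and some other factor ("drive"), which are independent in the
# population at large.
rng = np.random.default_rng(0)
n = 100_000
beauty = rng.normal(size=n)
drive = rng.normal(size=n)

# Selection: only people whose combined score clears a high bar get in.
selected = (beauty + drive) > 2.0

# Uncorrelated in the general population...
print(np.corrcoef(beauty, drive)[0, 1])                      # ~0.0
# ...but negatively correlated among those who made the cut: within the
# selected group, scoring higher on one trait predicts scoring lower on
# the other.
print(np.corrcoef(beauty[selected], drive[selected])[0, 1])  # clearly negative
```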

The default human response to challenge is to defend or justify. It is interesting that we have fields that do not exhibit this behavior - math and science. Except, of course, that people working in math and science DO exhibit this behavior; it is just slightly mediated.

In the sciences, defensive behavior can be quite . . . heated.

Ad hominems are common, especially from elite tenured professors and Nobel laureates.

In my experience, defensiveness is somewhat lower in math and the sciences, if only because in these domains, when you're wrong (and everyone is at times), you tend to eventually get totally smacked in the face with your wrongness, and you'll just humiliate yourself trying to squirm out of it. (I speculate this tendency is stronger in fields where it's easier to be proven wrong in a way you can't escape.) But indeed, once you reach a stage where people stop pointing out when you're wrong, you can easily un-learn this lesson.

> Are you in a social group where it’s beneficial for you to look down on someone? You’ll soon find it easy to furnish yourself with reasons why this person sucks. If it’s beneficial to admire someone, you’ll soon find all sorts of reasons why that person is amazing. (See: Teenagers.)

This example works in two ways. Teenagers (or perhaps, more specifically, modern high school students) are famously concerned with popularity and reputation. Also, teenagers are a socially acceptable target for hostile generalizations; it can be socially beneficial to look down on them to show off one's superiority and maturity.

Ha, well played! Personally, I tend to look at my teenage self as a kind of clarifying glimpse of lots of instincts that remain with me to this day. It's just that as an adult I've somewhat moderated (or maybe just masked) them.

> Maybe right and wrong don’t “really” exist.

In my peer group (white-collar professionals), this is basically taken as a given nowadays.

What would it look like if right and wrong really _did_ exist, 'right' was effectively 'traverse the steepest gradient of the valence manifold' (i.e. make yourself feel good over long periods of time), but these were more or less impossible to compute directly?

I'd think the result would be that groups which believed in right and wrong, of some sort, could effectively outcompete other groups in numerous situations. Nobody would agree on what precisely was right and wrong, lots of groups would claim to have 'the answer', each answer would lead to really weird consequences in some situations, and lots of people would swear up and down the whole thing was meaningless.

> here’s cases like the ones we’ve examined here, where it’s straight-up beneficial to have a distorted view of the world.

This would only be true if the world were stable over long periods of time. If your environment doesn't really change, then the ideal worldview is essentially a map of the rewards available in that environment. But if you're in a constantly changing environment with unknown risks and rewards, whatever illusions you have that work well in one situation are going to eventually fail elsewhere.

So if you want to continue prospering even in a deeply unstable world, you really _do_ want your beliefs to line up with reality. And a good way to get there: continually seek out new experiences, set goals, fail at them, and then learn from the failures. If your experiences are broad enough to cover so many domains that no false heuristic reliably works, then the only real option left is the actual truth.

> What would it look like if right and wrong really _did_ exist, 'right' was effectively 'traverse the steepest gradient of the valence manifold' (i.e. make yourself feel good over long periods of time), but these were more or less impossible to compute directly?
>
> I'd think the result would be that groups which believed in right and wrong, of some sort, could effectively outcompete other groups in numerous situations. Nobody would agree on what precisely was right and wrong, lots of groups would claim to have 'the answer', each answer would lead to really weird consequences in some situations, and lots of people would swear up and down the whole thing was meaningless.

In that case then I agree that the world would look a whole lot like it does now. But then, I think it would also look a lot like it does now if right and wrong didn't exist. So either way, it's the fact that right and wrong help groups (and individuals) outcompete others that really explains what we see, and the rest is a sort of philosophical argument about what it means for something to "exist".

I like your point about the environment being unstable, but I wonder what you mean by "long" periods of time. Like, I guess we're pretty much stuck with whatever part of our moral instincts comes from genes, but it seems like our moral instincts have a pretty big cultural component, and culture can evolve a lot in just a few generations.

In general, I feel like always going for truth will be the way to go, assuming you're sufficiently good at it. But if there's a chance you're wrong, I guess the Bayesian thing to do would be to use a kind of blend?
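
One toy way to cash out that "blend" (all numbers here are invented purely for illustration): treat it as decision-making under model uncertainty, weighting each strategy's payoff by your credence that pure truth-seeking actually works for you.

```python
# Hypothetical sketch of a credence-weighted "blend" between strategies.
# p_truth and the payoff table are made-up numbers, not from the post.
p_truth = 0.7  # credence that "always go for truth" is the right call for you

# payoffs: (if truth-seeking is right, if the useful illusion is right)
payoffs = {
    "seek_truth":      (10, 2),
    "useful_illusion": (4, 8),
}

for strategy, (pay_if_truth_right, pay_if_illusion_right) in payoffs.items():
    expected = p_truth * pay_if_truth_right + (1 - p_truth) * pay_if_illusion_right
    print(f"{strategy}: expected payoff = {expected:.1f}")
```

A fuller Bayesian treatment would update p_truth as evidence comes in, but the weighting idea is the same.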

> In that case then I agree that the world would look a whole lot like it does now.

Hah! Yeah, I think we agree there.

> I wonder what you mean by "long" periods of time. Like, I guess we're pretty much stuck with whatever part of our moral instincts comes from genes, but it seems like our moral instincts have a pretty big cultural component, and culture can evolve a lot in just a few generations.

This is a great question. I don't know if you saw the relationship to "The Case Against Reality", but I think there's a similar argument to be made.

I think it's likely that when we are dealing with situations that have been consistent for > 1 million years, our perception is more likely a map of rewards. But for things that have existed for only ~50 years or so, I trust that our understanding of them is accurate.

For example, I don't think we evolved to build photonanolithography machines, so I'm pretty sure our understanding of how those work is accurate. I DO think we evolved to work in roughly hierarchical packs, so our perceptions of other people, of the beliefs of our tribes, etc., are likely to be much more 'maps of reward structures', as you describe in your article.

The stuff that gets really tricky is the time frames in between: say, things that have been consistently true for ~10,000 years. My intuition is to place those things more on the side of our evolutionary history (i.e., we perceive maps of rewards more than reality) than on the side of the photonanolithography machine.

One thing I do a lot of is learning about history and then trying to 'backtest' my worldview against places and times in history. This approach has ultimately made me much more conservative, after getting myself killed participating in basically every revolution everywhere. Where I ended up was: it's generally better to try to stay out of large-scale conflicts altogether, as much as possible.

> I feel like always going for truth will be the way to go, assuming you're sufficiently good at it. But if there's a chance you're wrong, I guess the Bayesian thing to do would be to use a kind of blend?

I've loved this conversation, FWIW. It helped me come up with the following perspective: I think truth and effectiveness of worldviews aren't orthogonal so much as skew. Whenever I find approaches that work, what I tend to do is assume that they work because they are approximations of some truth that I don't yet understand. I try to work out what the underlying truth is, under something like an immutable Bayesian prior that "without sacrificing effectiveness, a more accurate worldview is better, and there's never a _need_ to sacrifice effectiveness, because if a technique is effective, understanding precisely why it's effective will let me make the thing even more effective."

For example, if overestimating how good I look makes me appear more desirable to others, maybe the overestimation works because it lets me radiate confidence and joy, and if I can learn to radiate confidence and joy regardless of how other people feel about me, I'll be even _more_ attractive than if I simply think I look like a 9 but really I'm a 7.

In other words, I don't think delusions could outcompete _perfectly_ accurate worldviews, but I do think poor approximations of complex truths can outcompete accurate but _incomplete_ worldviews which miss some important but complicated truths. "I look like a 7" is more accurate, but less effective, than "I look like a 9", which in turn is both less accurate AND less effective than "I have achieved sufficient levels of self-authorship that I can reliably emit joy and charisma, and people love this because everyone wants to be happy and feels naturally inspired by seeing other people navigate the world in a manner which is both entirely authentic and at peace with who they are."

Thus, rather than focusing _purely_ on being less wrong, I think it ultimately makes more sense to focus on being _more_ effective, even if this means, at first, pretending to believe things you don't really believe, and then trying to understand _why_ the effective "delusional" approaches are effective. At some level, the easiest way to be less wrong is just to restrict the set of things you experience so narrowly that you can't possibly be wrong. You could spend a lifetime coming up with increasingly accurate models of some grass growing in a field; you'd be less and less wrong, but you'd be miserable.
