Monday, July 9, 2012

Toward reflectively consistent desires

The LW cluster is full of debates about what we really want, how to model it, meta-wants, whether we can be said to actually want anything, and what we would want if we were smarter, thought faster, and were more the sort of person we wished to be. I think there's an answer.


It pains me to see people declaring war on themselves and talking about "conscious wants" opposing "unconscious wants" with the implied "I am the conscious, fuck that other guy" attitude. Hell, I don't even like it with the "That subconscious guy is cool, we're friends and we're better off trading" attitude.

The reason it irks me is that it implies we have multiple terminal utility functions that disagree. Even if that were true, trading would obviously and immensely beat fighting - for both parties. But it doesn't look like it's true.

While you can always trivially fit a utility function to any set of behaviors, people are fairly bad at acting on a consistent one. You can catch a lot of people out with the Allais paradox. You can write that off as a simple error coming from overweighting small probabilities, but there are more worrying inconsistencies too: talking about things in far mode gives very different answers than thinking in near mode does.
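To make the Allais inconsistency concrete, here's a minimal sketch in Python - my own illustration, not anything from the original post. Only the gamble probabilities and payoffs are the standard Allais ones; the grid of candidate utilities is a hypothetical stand-in for "any utility function you like."

```python
# Why the popular Allais choices can't come from any single
# expected-utility function. Outcomes are $0, $1M, $5M.

def expected_utility(lottery, u):
    """Expected utility of a {outcome: probability} lottery under u."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# Experiment 1: most people prefer 1A (certainty) over 1B.
gamble_1a = {1: 1.00}                    # $1M for sure
gamble_1b = {0: 0.01, 1: 0.89, 5: 0.10}  # 1% nothing, 89% $1M, 10% $5M

# Experiment 2: most people prefer 2B over 2A.
gamble_2a = {0: 0.89, 1: 0.11}           # 11% chance of $1M
gamble_2b = {0: 0.90, 5: 0.10}           # 10% chance of $5M

# Grid-search candidate utilities with u($0) = 0 and
# u($5M) >= u($1M) > 0, looking for one that makes both
# popular choices come out as higher expected utility.
found = False
for u_1m in range(1, 101):
    for u_5m in range(u_1m, 501):
        u = {0: 0.0, 1: float(u_1m), 5: float(u_5m)}
        prefers_1a = expected_utility(gamble_1a, u) > expected_utility(gamble_1b, u)
        prefers_2b = expected_utility(gamble_2b, u) > expected_utility(gamble_2a, u)
        if prefers_1a and prefers_2b:
            found = True

print("Utility function matching the popular choices found:", found)  # False
```

The grid search is just for show - the algebra is two lines. Preferring 1A requires 0.11·u($1M) > 0.10·u($5M) + 0.01·u($0), while preferring 2B requires exactly the opposite inequality, so no single utility function licenses both choices.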

We also appear to optimize for something other than stated desires a lot of the time. We're a freaking mess.

But there is good news: when cornered and forced to be reflective, we do come up with answers. We don't reach impossible-to-break stalemates.

Our desires/preferences/wants/feelings/thoughts/emotions are all subject to change upon new information - all of them. It often seems like this isn't the case, when people don't "listen to reason," or when you fail to talk them out of their fears, or when you address the stated objection and their belief doesn't change.

These aren't cases of immunity to reason, they're cases of not getting to the right reasons - not getting to the reasons they're moved by. Either by not visibly taking into account the objections, or by not addressing the real concerns, or something similar. It sure as hell ain't always easy - but I've never seen a case that looked impossible.

For example, people often treat tastes as if they are determined by an immutable piece of cognitive machinery. It's just a Stimulus-Response, not a reflectively coherent value, let alone my reflectively coherent value.

And though it is an S-R, some values are better than others. It affects my actions, and it is part of my map, so I want it to make sense. Liver is incredibly nutritionally dense. And it's delicious. If your tastes say otherwise, your tastes are wrong. Fix them.

Communicate with that piece of cognition by keeping the health benefits in mind, in near mode. As you eat liver, or imagine eating it, associate it with all the wonderful nutrients you're giving yourself. I didn't always like liver, but now that I know it's good for me, it tastes good to me - because I did the work needed to prevent passive compartmentalization.

Sometimes it's more complicated than that. Perhaps both 'parts' involve more cognition than a single S-R. When you remove the barriers to communication, it still tends to work out.

Maybe someone has a fear of dogs that refuses to go away with a simple Fast Phobia Cure designed to remove a simple S-R. It's probably because that fear is being reinforced from the 'top down' by a belief that dogs are dangerous. And perhaps the trip to the hypnotherapist is motivated by a contradictory belief that they are not dangerous. All the hypnotist has to do is get him to make up his damn mind about whether they're dangerous. Don't let him say "they're safe" without committing to petting the dog. Don't let him say "they're dangerous" without kicking him out and declaring the problem solved and a mistake avoided. Force him into both thinking modes until there's an answer - and you're done.

It looks like we just don't know what we want - and that we've considered some pieces more than others. The apparent conflicts are just disagreements about instrumental goals that go away once you dig deeper. No reason to assume multiple utility functions, no reason to conspire against ourselves so that "our" instrumental goal wins out... and a hell of a good reason not to.

There was no war to be fought - I don't willpower my way through disgustingly nutritious foods. There are no compromises to be made - I'm not begrudgingly accepting shit. I just think things through until all the pieces fit. And in the end, they always do.

I mean, sometimes you have conflicting values - in a sense. Say you want the new toy, but you don't want to pay. So be it. It's not that you have two optimal worlds, it's that you have to choose between two sub-optimal worlds. It is what it is.

All this can seem like just one more way to frame things - no more or less valid than the others, just leading to different outcomes. But there's one important distinction: if you've declared war on your "subconscious" goals in favor of some more socially acceptable goals, I bet you're flinching away from answering "but what do I really want?" - in fact, you're probably doing it by jumping to "there's no such thing as 'really' want."

So, um... Good news, you want stuff...I guess?
