[personal profile] brin_bellway
Today I was *trying* to determine whether a poem in an old birthday card was an Established Thing or something the sender wrote herself, and instead I stumbled across this hardcore hedonic utilitarian critiquing Brave New World and talking about what a *properly* done utopia would be like.

My first thought was "how is this dude coming across so badly? I know utilitarian arguments *can* be made well, I've seen them, so what exactly is the difference that's making him give off such creepy vibes?"

My second thought was "wow, you can really tell this dude's not autistic".

I tried pulling at that thread a little more, and I don't think it's that he's *allistic* necessarily--autism is a big umbrella--but that, specifically, he talks like someone who has *never been overstimulated*.

He thinks that if variety is good, then more of it is better; if evolution-in-the-colloquial-sense is good, then more of it is better; if pleasure is good, then more of it is better. This *absolutely does not* fit with my own experience: even *pleasure itself* can be bad if it is too intense. Not in some abstract Catholic-guilt way, but viscerally aversive.

(This does *not* seem to be a two-directional sign overflow: I am very not a masochist, and have never had sufficiently intense suffering wrap around and become enjoyable. Admittedly, there are absolutely forms of pain far worse than anything I've ever experienced, but I'm hardly going to test them out; besides, masochists of my acquaintance generally indicate it doesn't need to be that intense to start working.)

---

...honestly, I'm not sure I even value happiness that much? It's...nice, sure, in moderate quantities, but it hardly seems like something to base a value system around. Possibly part of it is that I'm so accustomed to operating at the second level of the Hierarchy of Needs that I can't wrap my head around the concept of wanting more than safety, but I don't think that's the whole story.

I look at that article and I think, why is *this* what he wants? He dismisses other desires as the legacy of "selfish DNA", but why latch on to *happiness* as the desire to be endorsed? What makes *that* special?

(I wonder if he would just sputter and go "It just...*is*! How could it *not* be?!". I know I tend to sputter at people who don't have a strong will to live.)

Date: 2020-12-21 08:51 am (UTC)
From: [personal profile] contrarianarchon
...I mean, it seems like a fair read of his statements about optimizing-subjective-experience-as-a-process that he'd be fine with people who don't prefer to feel some particular emotion constantly *not* feeling it constantly; that the ideal happiness drug is the one that stops perfectly just short of "too much" for any person for whom that is conceptually possible. (But also that your concern with the things of petty mortals, like "survival" or "values", is itself an outmoded thing of the past, and you won't need the impulses related to it in the glorious post-human future.)

He mostly just seems unbelievably naive to me WRT the entire notion of post-Darwinism as a concept, I think on net. The kind of person to whom bridges are sold. This is especially problematic given how many pitfalls there are between us and his dream. We would have to pass through the kind of utopia I tend to plan for (and consider dangerously optimistic itself), and probably several more grand cycles of world-fixing-and-healing-the-conception-of-what-good-we-can-do-for-the-great-work, to even kinda try something like that, but the technology will be there, and dangerous, much sooner.

How do you think of the thing-which-you-get-which-isn't-self-preservation, then, if not as happiness? I don't get the impression that you make literally every decision with survival as the sole element of utility?

Date: 2021-01-09 08:57 am (UTC)
From: [personal profile] contrarianarchon
This is a really cool insight, but I've been sitting here trying to formulate a response for a while, and like. This sounds like a really good model for why and how to do things; in terms of being-able-to-predict-yourself it might even predict me better than the conscious models I use for deciding how to follow my own preferences (which tend to be world-state-optimization and/or resource-optimization shaped).

"Why do people care about things" seems to be the hardest and the easiest question sometimes, but I guess it's not actually that useful to answer unless you're distinguishing first-order wants from nth-order derivative wants.
