Wednesday, September 28, 2022

Blog Chapter 4 Summary and Bibliography

Looking at my blog posts, I realized that they naturally came in different periods or phases. The 2019 posts were one phase, then those from 2020-2021, and then those from 2021-2022. In spring 2022, I decided to deliberately create a phase, which I called a "blog chapter", this being the fourth one.

I think the previous three were things that arose spontaneously and in response to my own needs, while this fourth one was premeditated and a bit artificial. Somewhat like doing a year of school.

This chapter has been about the "exilic-familial", among other things. I explored themes of nation, culture, family, childhood, and education. These topics connect to a vision I had before officially starting the blog chapter (or rather, I was hypomanic and wrote something): "cultural altruism", a path for those trying to do good through culture or in cultural areas, one that articulates with art, religion, the humanities, politics, and effective altruism, especially its "Long Reflection" idea. When we try to govern the world, we are, and should be, informed by nation, culture, family, childhood, and education.

Two specific cultures were themes, Jewish (especially as I best know it, from the Old Testament), and Indonesian. Jewish culture is a family of holiness, and also has a history of enduring exile. I saw in Judaism (at least in the Old Testament itself) a kind of honesty coming out of having lost, and the idea of not winning and that being a route to peace and holiness. I saw in it families broken and reconciling.

Indonesian culture is (to me) about syncretism and unity-in-diversity, as well as a connection to Islam, Hinduism, and Buddhism. Indonesia is a nation that attempts to pull together diverse groups, and which has a history of mixing religions. (I didn't really explore these themes in depth, but only discussed Indonesia a little.)

The war in Ukraine and the polarization of US politics are in the background.

This chapter, written in about seven months if you include the cultural altruism writings drafted in March and April, is, surprisingly to me, on the same order of magnitude in word count as the three preceding chapters combined, only about 20% less. I keep feeling like I've done the math wrong when I count (maybe somehow I have), but I think it's right. I didn't feel like I was working any harder as I wrote. Perhaps I was under the influence of hypomania? I clearly had it in late March, but maybe it continued in a non-obvious, attenuated form throughout the seven months.

I've felt different ways over the last few weeks. Sometimes depleted, sometimes not. I've thought about quitting or drastically cutting back on writing, and also about going full bore. What has been particularly hard has been working to finish this after I had already moved on from being into this blog chapter. I could use much of myself to work, as usual, but not all of me.

--

These are the books reviewed in this blog chapter. Links are to my reviews:

Holy Resilience, by David M. Carr, hardback (1st ed.?) ISBN 978-0-300-20456-8

Dr. Jekyll and Mr. Hyde, by Robert Louis Stevenson, Bantam Classic paperback, ISBN 0-553-21277-X

In the Shadow of the Banyan, by Vaddey Ratner, 1st hardback ed., ISBN 978-1-4516-5770-8

Between Man and Man, by Martin Buber, tr. Ronald Gregor Smith, Macmillan Paperbacks Edition, 1965, no ISBN

Creative Destruction, by Tyler Cowen, paperback, ISBN 0-691-11783-7

The Meaning of Marriage, by Timothy Keller (with Kathy Keller), hardback (1st ed.?), ISBN 978-0-525-95247-3

Along the Way, ed. Ron Bruner and Dana Kennamer Pemberton, paperback (1st ed.?), ISBN 978-0-891-12460-3

On the Genealogy of Morality, by Friedrich Nietzsche, tr. Carol Diethe, Cambridge edition, paperback (Revised Student ed.), ISBN 978-0-521-69163-5

Teaching Children to Care, by Ruth Sidney Charney, paperback (1st ed.?), ISBN 0-9618636-1-7

Reading List Postview: The Long Reflection

I read some articles and books about the Long Reflection. I don't have a lot to add to the preview for this reading list (which considers in depth the problems with the Long Reflection and the ways I think it should, will, or would turn out), other than the notes I have taken on the readings, and the reviews of the two books I included in the list, On the Genealogy of Morality and Teaching Children to Care.

I wish I could be more helpful in relating what I got from my readings with the issues from the preview, to be able to write a nice summary here, but I guess this will have to do, for now, or for the foreseeable future.

Notes for Long Reflection Reading List

These are notes on my readings on the Long Reflection, except for the two books, On the Genealogy of Morality, by Friedrich Nietzsche, and Teaching Children to Care by Ruth Sidney Charney (those two links are to their reviews).

--

Thinking Complete (Richard Ngo) "Making decisions under multiple worldviews"

[I decided to come back to this one later and restart reading it.]

--

Felix Stocker, "Reflecting on the Long Reflection"

This persuades me (although I'm not a tough audience) that the Long Reflection is not a practical pursuit if it is a top-down, discrete era of human history. That is, if it's something we impose on everyone, then we are doing so against someone's will, and they may defect, breaking the discrete era. I am also (easily) persuaded by Stocker's objections that people will desire technology that the Long Reflection would try to hold back, and that stopping human technological desire risks creating an S-risk of its own, the global hegemon. But I do think that reflecting on what the best values are, and seeking to influence and be influenced by everyone so as to create (ideally) some kind of harmony in human values (or a reduction of disharmony that allows for a more ideal "liberal" solution to the coordination questions the Long Reflection is trying to answer), is something that can be ongoing. I would call this "cultural altruism", or a subset of cultural altruism. Much of what Ord and MacAskill want could be pursued in this bottom-up, intermingled way, avoiding some (or all?) of Stocker's objections.

--

Paul Christiano, "Decoupling deliberation from competition"

Christiano makes the point that deliberation can be infected by competition. This would affect a bottom-up cultural altruism scene. However, I hope that a social scene can absorb a certain amount of competitiveness without being harmed. For instance, when we try to find truth, we (society) sometimes hire lawyers to argue each side of a case, and then we listen to what they say. Innovators in thinking may be motivated by competition, but as long as they are also evaluators (are both "soldiers" and "scouts"), or enough people who have power are "scouts", the competition only serves to provide ideas (or bring to light evidence) to select from, which is a good thing when you are trying to find the overall truth. When competitive people shut other people up, or have state-level or "megacorp-level" mind-control / propaganda powers, then competition is bad for deliberation. But humans competing with, and listening to, humans on a human scale is good for deliberation. "All" we have to do is keep states and corporations from becoming too powerful.

I imagine cultural altruism being something like "status quo truth-finding, but X% more effective". Our current truth-finding culture is (from one perspective) pretty good at bringing about truth, or at least truths. Look how many we've accumulated. (Maybe where it needs to be better is in finding a whole to truth. And maybe we should think about how to protect it.)

I don't think I'm talking about the same thing Christiano is. I think he's talking about how AI teams can deliberate despite race dynamics, or something like that, whereas what I imagine is everybody (all humans, more or less) interacting with each other without real time pressure. But it's interesting to ask where exactly the distinction lies between Christiano's part of culture and the rest of culture. Isn't cultural work, perhaps work that would affect human culture in general (more my concern), being done by Christiano's fairly pragmatic, craft-inflected tactics for fostering deliberation despite race dynamics? Isn't pragmatic, resource- and time-constrained life where values come from? Christiano's situation is just another of many human situations.

In the section "A double-edged sword", Christiano talks about the practical benefits of competition in weeding out bad deliberators (their influence, not them as persons). I suppose this feels realistic to him. I feel (maybe naively) that ideal deliberators would stop fearing each other and simply fear the truth. If lives are at stake, then because ideal deliberators index themselves to the saving of lives, or whatever is of highest value, they would naturally do their best work, and, where it can be known who knows better, defer to those people. But Christiano has lived in his part of the real world, where people are resource- and time-constrained, and he thinks, implicitly or not, that it generally has to be competition that does the job of communicating reality to people, not an innate indexing-to-reality. I assume (if he really does believe that innate indexing-to-reality is not an option, or hasn't thought of it) that his belief in some necessity or desirability of competition is connected with his limited personal experience. Christiano may not see the possibility that people can be ideal deliberators, or that a culture of ideal deliberation could be fostered, given enough time. (His context, again, seems to be specific, relatively near-term situations.)

Maybe if people are mistaken about their own competence in judging whether to defer, that would be one reason there would need to be some outside actor pushing them to relate to the truth better, and maybe this can never be fixed. (Would people in such a context be "competed away" by a wild and more or less impersonal social force ("competition"), or would there be a person who could tell them they were wrong, who knew how to talk to them and who could at least consciously attempt to make themselves trustworthy to the "deluded one"? Perhaps for many of us it is more bearable to be ruined by "competition" than to be corrected by a person we know. Of course, it is always possible that the people who correct us are themselves wrong. Maybe that's the appeal of competition, that in some sense it can't be wrong -- if you're not fit, you're not fit. But then competition itself distorts reality, at least through "Moloch".)

--

Finished the main article, now reading the comments (what's there as of 22 August 2022).

Wei Dai makes the point that without competition, cultures' norms can randomly drift. (I would add:) this is sort of like how in Nineteen Eighty-Four, once war goes away, totalitarian states can "make" 2 + 2 = 5. I've thought there could be problems with digital humans coming up with beliefs like 2 + 2 = 5. But at the same time, Moloch distorts our thinking and our lives as well. So it seems like we're doomed either way to someday not living in reality.

However, believing that 2 + 2 = 5 is physically difficult, probably because of how the human brain is wired -- and we can change that. But either the human brain is in tune with the truth (more or less; enough to found reason) or it's not, and it always has, or hasn't, been. If it's not, then why worry about deliberation going well, or being in tune with reality? We never had a chance, and our current sense of what is rational isn't valid anyway, or we don't have a strong reason to believe that it is. But if it is, then the solution is just to keep people's brains roughly as they have always been, and use that as the gold standard for the foundations of rationality (at least for the elements that are, or are more or less like, axioms, the easy, basic elements of rationality, even if building sufficiently complex thought-structures could go beyond human capabilities).

If it is the case that our innate thinking is not in tune with reality (on the level of the foundational axioms of reason), can we know what to do? Maybe not, and if not, then we have no guidance from the possible world in which our innate thinking is invalid. So if we are uncertain between that scenario and the one where our thinking is valid (or valid enough), then since the valid scenario's recommendations might have some connection with reality, we should follow them.

It does seem odd to me that I should so neatly argue for the status quo, and that the human brain (or, I would say, human thinking, feeling, intuiting, perceiving, etc. nature, of which the brain is a phenomenal manifestation) should be the gold standard of how we know. Can't we be fallible? It makes perfect sense that we could be. But practically speaking, we're stuck in our own world, and lost if we leave it.

(This seems like a bit of a new view for me, so I should think about it some more.)

--

Wei Dai says, later on --Currently, people's thinking and speech are in large part ultimately motivated by the need to signal intelligence[link], loyalty, wealth[link], or other "positive" attributes[link], which help to increase one's social status and career prospects, and attract allies and mates, which are of course hugely important forms of resources, and some of the main objects of competition among humans.--

I'm not sure if this is how things seem to people subjectively, or if, rather, they feel like (or are) motivated by love for their family and friends, or some higher good. They have to work for resources due to scarcity, and because if they don't, they won't be able to live or provide for the people they love. Maybe even love is really "ultimately" motivated by resource acquisition? If a person is aware of this, can they willfully choose love (or value, or rationality) over resource acquisition? Probably they can. (Rationalists can choose against their biases, so why couldn't other people make as strong a choice?) We might suppose that most people are stuck in survival mode, or don't think much further than their immediate friends and family. But maybe that's an artifact of scarcity, ambient culture, and their not being educated to see the bigger picture.

If you think that everything is about resource acquisition, that is what the world will be. If you think everything is about love / truth / valuing, etc., that is what the world will be. Some people have to face the world as it currently is, and it bends their thinking toward the short-term, strategic, self-interested, competitive, resource-scarce, and resource-hungry. But some people are free from that, whether through temperament or life situation (perhaps they are too "untalented" to be able to do anything practical in the world as it is, and can only work on the world as it should be). These are the people who can and should lead the way in deliberation, in that their minds are actually capable of deliberation. In areas of deliberation, the practical elites should be inclined to defer to them.

I checked the links in Wei Dai's comment (quoted above). They were about how unconscious drives (especially including the ones that drive signaling) really control people. I am subject to such drives all the time. But do they really matter in the long run? I am able to pursue what I choose to pursue. Perhaps my drive to seek a mate gives me the energy to seek a spouse -- and all that comes along with that, including new non-romantic interests, and a new perspective on who exists in the world. I get to choose which traits I find desirable in a spouse, even if the drive is not chosen. Or, if those traits have to "pay rent" by giving me the prospect of status, I get to choose, among the different sources of status that are roughly equal in expected yield, which of them I pursue. I can be intentional and conscious on the margin, and steer the vehicle of my psychological machinery in the direction that I want to go. The whole concept of "overcoming bias" and being a rationalist doesn't make sense if this isn't possible, and I don't see why that level of intentionality is, or could only be, confined to a tiny subculture (tiny by global population standards). I think that short-term, competitive, resource-hungry, etc. thinking is like the evolutionarily-driven, unconscious-drives side of being human, and the truly deliberative is like, or in some sense is the same as, the intentional, subjective, conscious, rational side.

I suspect that the unconscious mind doesn't even exist. Where would such a mind reside, if not in some other mind's consciousness? Can willing really come from anything other than an existing being, and can an existing being be anything other than conscious? I am skeptical that there is a world other than the conscious world (more than skeptical, but for the sake of argument, I would only suggest skepticism to my imagined reader here). Given this skepticism, we should be concerned that we are being trolled by evil spirits, or, more optimistically, are being led by spirits wiser and better than we are. Which side wins when we see things in a cynical or mechanistic way? I feel like cynicism and mechanistic thinking make me less intentional and more fatalistic, more likely to give in to my impulses and programming. Since my intentions seem to line up (at least directionally) with what wiser and better spirits would want, I should protect my intention and strengthen it, see the possibility of free will, and be idealistic.

I suppose a (partial) summary of the above would be to say "deliberative people should be idealistic, conscious, believe in consciousness, despite 'the way the world works'". Maybe the Long Reflection (or cultural altruism) is concerned with determining what really should be, and some other groups or processes are needed to determine what can be, in the world that we observe and have to live in up close.

I think the New Wine worldview is one that inclines people toward being cultural altruists, and less so toward being EAs or the like, because it has a sense that the absolute best is the absolute minimum [in the sense that if you attain the absolute best on the New Wine account, you have only attained the bare minimum] and that there is a long time to pursue it, and that physical death ("the first death") is not as significant.

--

Cold Takes (Holden Karnofsky) "Futureproof Ethics"

Karnofsky says --our ethical intuitions are sometimes "good" but sometimes "distorted." Distortions might include:
* When our ethics are pulled toward what's convenient for us to believe. For example, that one's own nation/race/sex is superior to others, and that others' interests can therefore be ignored or dismissed.--

Is it a distortion for our ethics to be pulled toward what is convenient for us to believe? Why does Karnofsky think that's true? I agree with Karnofsky on this thought (with some reservations, but substantially), but even if everyone did, why would that mean that we had found the truth? (I think a proxy for "I am speaking the truth" is "I am saying something that nobody in my social circle will disagree with" -- but it's an imperfect proxy.) Can Karnofsky root his preference in reason? I think that the truth is known by God, and sometimes thinking convenient ways will lead us toward believing what God believes, but sometimes it leads away. God is the standard of truth because he is the root standard of everything. So there is something "out there" which too much convenient thinking will take a person away from. Is there anything "out there" for Karnofsky's thinking to be closer or further from, due to distorted thinking? If not, does it make sense to call the distortions "distortions", or rather, "undesired changes"? (But without the loading we put on "undesired" to mean "objectively bad".)

Karnofsky clarifies a bit with --It's very debatable what it means for an ethical view to be "not distorted." Some people ("moral realists") believe that there are literal ethical "truths," while others (what I might call "moral quasi-realists," including myself) believe that we are simply trying to find patterns in what ethical principles we would embrace if we were more thoughtful, informed, etc.[link]--

I should check the link when I have time and come back [later: I did and didn't feel like it changed anything for me], but what I read in that quote is something like "Some people are moral realists, but I'm not. I'm a moral quasi-realist. I look for patterns in what ethical principles we would embrace if we were more thoughtful, informed, etc. Because thoughtfulness, informedness, etc. is a guide to how we ought to behave. It rightly guides us to the truth, and being rightly guided toward the truth is what we ought to be. Maybe it helps us survive, and surviving is what we ought to do." Which sounds like Karnofsky believes in an ethical truth, but for some reason he doesn't want to call himself a moral realist. Maybe being a moral realist involves "biting some bullets" that he doesn't want to "bite"?

[That characterization sounds unfair. Can't I take Karnofsky at his word? I think what makes me feel like he's doing something like using smoke and mirrors is that the whole subject of morality is pointless unless it compels behavior. Morality is when we come to see or feel that something ought to be done, and ideally (from the perspective of the moral idea) do it. So if Karnofsky ends up seeing and feeling that things ought to be done, or intends for others to see or feel that things ought to be done, even if it doesn't make sense to say that "ought" exists from his official worldview, then he's being moral, and relying on the truth of morality to motivate himself and other people. "Thoughtful" and "informed" are loaded in our society as being "trustworthy", so they do moral work without having to say explicitly "this is what you ought to do". So Karnofsky gets the motivational power of morality while still denying that it exists beyond some interesting patterns in psychology. I guess if he's really consistent in saying that he's just looking at patterns of thinking that emerge from "thoughtfulness and informedness", and "thoughtfulness and informedness" have no inherent moral recommending power, then he should say "hey, I'm saying a lot of words here which might cause you to think things, feel things, and do things, but actually, none of them matter and they have no reason to affect you that deeply. In fact, nothing can matter, because if it did, it would create morality -- what matters should be protected, or guarded against, or something -- and morality is just patterns of what we would believe if we were thoughtful and informed, which themselves have no power to recommend or compel behavior". Does Karnofsky really want to be seen as someone whose words do not need to be heeded?]

[This is quickly written and I have not read in depth what Karnofsky thinks about moral quasi-realism, which I'm guessing might be sort of the same as Lukas Gloor's anti-realism? I did read Gloor's moral anti-realism sequence (or at least the older posts, written before 2022). With Gloor's position, I also got the feeling of smoke and mirrors.]

--

Karnofsky summarizing John Harsanyi:
--Let's start with a basic, appealing-seeming principle for ethics: that it should be other-centered.--

Why should that be a foundation of ethics? It's merely "basic" and "appealing-seeming". It certainly is more popular than egoism -- or maybe, given our revealed preferences, egoism is a very popular moral foundation. Maybe egoism and altruism are supposed to compete with each other -- that looks like what we actually choose, minus a few exceptional individuals. Nietzsche wrote a number of books arguing in favor of egoism [as superior to altruism, as far as I could tell], and I can think of two other egoist thinkers: Stirner (I've read his The Ego and His Own) and Rand (whom I have not read but have heard of). Are they "not even wrong", or do they have to be dealt with? Supposedly futureproof ethics is about what you would believe if you reflected more. Maybe if you're part of the 99%, the more you reflect, the more you feel like a democratic-leaning thing like utilitarianism is a good thing. But if you're part of the 1%, and you're aware of Nietzsche's philosophy, maybe the more you reflect, the more true it seems that the strong should master the weak, based on the objective fact that the strong are stronger and power by its very nature takes power. There is a certain simplicity to those beliefs. So then will there be a democratic morality and an aristocratic one, both the outcome of greater reflection? Or maybe an AI reflects centuries per second on the question, and comes up with a Nietzschean conclusion. Is the AI wrong?

Personally, I lean utilitarian (at this point in my life) because I believe that God loves each person, by virtue of his valuing everything that is valuable. Everything that exists is valuable, and whatever can exist forever should. [Some beings turn out not to be able to exist forever, by their choice, not God's.] He experiences the loss of all lost value, and so does not want any to be lost. We are all created with the potential to be saved forever. So there is a field of altruism with respect to all persons. Perhaps animals (and future AI) are (or will be) really personal beings in some sense, whom God also values and relates to universally.

[Utilitarianism is about the benefit of the whole, tends toward impartiality, and is based on aggregation. God relates to each person, which accomplishes what aggregation sets out to do, bringing everything into one reality. God tends toward impartiality, and works for his personal interest, the whole.]

--

Karnofsky talks about how --The strange conclusions [brought about by utilitarianism + sentientism] feel uncomfortable, but when I try to examine why they feel uncomfortable, I worry that a lot of my reasons just come down to "avoiding weirdness" or "hesitating to care a great deal about creatures very different from me and my social peers." These are exactly the sorts of thoughts I'm trying to get away from, if I want to be ahead of the curve on ethics.--

However, the discomfort we feel at "strange conclusions" could also be us connecting to some sense that "there's something more than this". I remember the famous Yudkowsky quote ([which he] borrowed from someone else, whom I should look up when I have time), something like "That which can be destroyed by the truth should be". But the reality for us, if we are the destroyers, is that in effect it is "Whatever can be destroyed by the truth as I currently understand it, should be". So, if we decide to destroy our passage to whatever our intuitions of diffidence were trying to tell us, perhaps by erasing the intuitions, maybe we have destroyed some truth by committing to what we think must be true, counter-intuitively true. We should probably hold out for some other truth when our intuitions revolt, because they might be saying something.

[The quote seems to originate with P. C. Hodgell.]

I believe that eternal salvation dominates all other ethical concerns, as a matter of course. Unbearable suffering in itself is bad because God has to experience it, and it is for him what it is for any other being: unbearable. What God, the standard, finds unbearable will be rejected by him, and what is rejected by the standard is illegitimate. We should be on the side of reducing unbearable suffering. If we are, then we are more in tune with God and thus more fit for eternal life. I would agree with Karnofsky on the goal of ending factory farming, although it's not my highest priority. But I think, from my point of view, it's valuable to look with some suspicion at Karnofsky's worldview, the one which so strongly and counter-intuitively urges us that "the thing that matters is the suffering of sentient beings". Strong moral content says "this is 'The Answer'", but to have "The Answer" too soon, before you have really found the real answer, is dangerous. I don't think anyone is trying to scam me by presenting that urgent psychological thing to me, but I think it could be a scam in effect if it distracts me from the ways in which our eternal salvation and our relationships with God are at stake, and really matter the most.

[I suppose I'm saying that the theistic worldview is more satisfying to hold in one's head; satisfies, more or less, Karnofsky's concerns with animals; and would be missed if I said "okay, utilitarianism + sentientism must be right no matter what", so that I go against my intuitions of discomfort, even ones which might somehow intuit that there should be a better worldview out there.]

When people are forceful with you and try to override your intuitions, that's a major red flag. Although counter-intuitive truths may exist, we should be cautious with things that try to override our intuitions. In fact, things that are too counter-intuitive simply can't be believed -- we have no choice but to see them as false. This is the foundation of how we go about reasoning.

--

Should I feel confident that I have futureproof ethics? No, I guess not. I do think that, according to my own beliefs, I clearly could, if only I were consistent with those beliefs. But my beliefs could be wrong. I don't know that, and currently can't know that. This goes for Karnofsky as well. The best you can do is approach the question with your whole heart, mind, soul, and strength, and be open to revision. Maybe then you will come to hold better beliefs within your lifetime.

--

Cold Takes (Holden Karnofsky) "Defending One-Dimensional Ethics"

As I read this, I think this post may be mostly off the topic of the Long Reflection.

However, since I'm reading it, I will say something about Karnofsky's "would you choose a world in which 100 million people get a day at the beach if that meant 1 person died a tragic death?" scenario. If someone asked me "do you want to go to the beach if there's some chance that it would cause someone to die a tragic death?", it might make me question how necessary the pleasure of the beach was to me. If there were 100 million people like me on the beach, and we all somehow knew without a doubt that if we stayed on the beach, one person would die a tragic death, and we all thought the same, we would all get off the beach. How could pleasure seem worth anything compared to someone else's life? Arguably, in real life, 100 million beach afternoons make us all so much more effective at life that many more lives are saved by our recreation. But I don't think that's the thought experiment.

Does my intuition pass the "veil of ignorance" test? If I didn't know who I was going to be, would I rather be a person who went to the beach, bearing (all else being equal) a 1/100-millionth share of someone else's death, or would I rather save the one person? What's so great about the beach? It's just some nice-sounding waves and a breeze. Maybe, as a San Diegan, I've had my fill of beach, and a different analogy would work better. Let's say I could go hear a Bach concert. Well, Bach is just a bunch of nice notes. I like Bach, and have listened to him on and off since I was a teenager. He is the artist I am most interested in right now, someone whose concert I would want to attend. (I'm not just using him as a "canonical example".) But, Bach is just a bunch of nice notes, after all.

I find that the thought of someone not dying is refreshing, in a way that Bach isn't. I can't say I have no natural appetite for the non-ethical, which I may have to address somehow, but it's not clear to me that producing a lot of "non-ethical" value (if that makes sense) is easily comparable to producing "ethical" value. We are delighted with things and experiences when we are children, but when we see things through the frame of reality, lives are what count.

[By "lives" I mean something like "people", and people exist when they are alive. (And I think that non-humans can matter, as well, as people, although I'm not sure I've thought through that issue in enough depth.)]

Now, those are my appetites, and thus, I guess, my preferences in some sense. But what does that have to do with moral reality? I guess one way to look at morality is that it's really just a complicated way to coordinate preferences, and there is no real "ought" to the matter. On that view it would make sense to perform thought experiments like the veil of ignorance. But as a moral realist (a theistic moral realist), I believe that my "life-over-experience-and-things" intuition lines up with what I think God would want, which is for his children to live. Their things and experiences are trivial for him to recreate, but their hearts, and thus their lives, are not. God simply is the moral truth, a person who is the moral truth, and what he really wants is necessarily what is valuable.

--

jasoncrawford's EA Forum post "What does moral progress consist of?"

I chose this post for this reading list hoping that the title indicated it would be an examination or questioning of the very concept of moral progress. I wouldn't have chosen it if I had read it first. But now that I think about it, maybe I can make something of it.

I guess the part about how Enlightenment values and liberalism are necessary for progress (of any sort) might mean that somehow we would need the Enlightenment baked into any Long Reflection, since the Long Reflection is an attempt at moral progress (seeking better values). Perhaps looking at values as an object of thought comes out of the Enlightenment, historically at least? Or perhaps the idea of progress was "invented" in the Enlightenment, and can only make sense given Enlightenment ideas, like reason and liberalism? I can tentatively say that I'm okay with the idea that Enlightenment influence is necessary for progress, and that I'm in favor of progress, if I can mix other things with the Enlightenment, like deeply theistic values. And I think that any other stakeholder in world values who is not secular would want that, or something equivalent.

(I'm not sure I can endorse or reject the claim that the Enlightenment could be an essential part of progress, given what I know.)

--

rosehadshar's EA Forum post "How moral progress happens: the decline of footbinding as a case study"

What I will try to use from this post is the idea that moral progress comes through both economic incentives changing, and people deliberately engaging in campaigns to change behaviors and norms.

The Long Reflection, I would guess, will not occur in isolation from culture. If it proceeds according to my assumption that it is done both rationally and intuitively by all people, and not just rationally by a cadre of philosophers, then campaigns of moral progress will be part of the "computation" of the Long Reflection. All those people adopting the apparently morally superior values would be the human race deciding that certain moral values were better than others, offering their testimony in favor of the new values, thus (at least partially) validating them, just as the cadre of philosophers, when they agree on premises, all testify to the values that follow from those premises.

Economic changes affect how people approach reality on the level of trusting and valuing. I would guess that in cultures with material scarcity and political disestablishedness, people have a stronger feeling of necessity -- thus more of a sense of meaning, and less of a sense of generosity. The reverse would be true of cultures with less material scarcity and more political establishedness. It might be very difficult to preserve a sense of necessity in a post-scarcity future, and this would affect everyone, except maybe those who deliberately rejected post-scarcity. A lack of meaning, if taken far enough, leads to nihilism, or, if it doesn't go quite that far, to "pale, washed-out" values. Perhaps these would be the values we would naturally choose after 10,000 post-ASI years. [The 10,000 years we might spend in the Long Reflection.] But just because we would naturally choose weak values doesn't mean weak values, or weakness in holding values, is transcendentally right. What if our scarcity-afflicted ancestors were more in tune with reality than our post-scarcity descendants (or than us, where we are, with less scarcity but still some)? Can we rule out a priori that scarcity values are better than post-scarcity values? I'm guessing no. What we think is "right" or "progressive" might really just be the way economic situations have biased us. It could be the case that meaning and selfishness are transcendentally right and our economic situation pries us away from those values, deceiving us. Thus, for a really fair Long Reflection, we would have to keep around, and join in, societies steeped in scarcity.

So can we really have moral progress, or is it just that biases change in a somewhat regular, long-term way, such that if we are biased to the current moral bias-set, we see the intensification of it as progress?

A cadre of philosophers will be biased by their economic and other experiential upbringing. The cadre may have either watched, or been formed secondhand by, TV and movies (or in the future, VR equivalents?) that are based on blowing people's minds. (Secondhand meaning exposure to such artifacts through the cultural atmosphere shaped by those who did watch them.) You can feel something happening in your brain when you watch such mind-blowing movies as The Matrix and Fight Club, and that blown-open, dazzled, perhaps damaged mind (which might still be clever, but which loses its sense that there is such a thing as truth that matters) perhaps remains with people their whole lives. I suppose, having written this, now people could try to raise a subculture of Long Reflection philosophers who have not been shaped by TV, movies, or VR -- only books. But books condition people as well. In fact, philosophical reflection conditions people, makes them "philosophical" about things.

Being in physical settings shapes a person. Driving a car is about taking risks and acting in time. Taking public transit is about those things too, but more so about waiting and sitting. Being in VR spaces could be about personal empowerment, flying like a bird, wonder and pleasure (I'm assuming that VR systems won't have any bizarre and terrifying glitches).

Ideally philosophy is pure truth -- but what is philosophy? Is philosophy a "left-brained" thing? Is the truth only known that way? Or is it a "right-brained" thing as well? If we are all raised somewhat similarly, we might all agree on a definition of philosophy, as, perhaps a more left-brained thing (although our premises come from intuitions, often enough). But why should we all have been raised the same way?

--

Thinking Complete (Richard Ngo) "Making decisions under multiple worldviews" ("for real" this time)

I read this, but at this point, with the level of focus I can give, I can't go in depth on it. It does seem to be something that some people interested in the Long Reflection should read (unless something supersedes it?). It's about what to do when you can't merge everyone's worldviews into one worldview, but you still have to come up with a decision. I think it significantly possible that the Long Reflection will reach a stalemate and civilization will still have to make the decisions that the Long Reflection was supposed to help us make. While epistemic work can resolve some issues (get people on the same page / show armchair Long Reflection philosophers more evidence as to what really matters), I'm not optimistic that it will make it all the way to unity, and we will still have to decide collectively.

--

Thinking Complete (Richard Ngo) "Which values are stable under ontology shifts?"

This is an interesting post; perhaps three months ago, I would have written a post on this blog responding to it in more depth. It is relevant to the Long Reflection, I suppose, in saying that values may not survive changes in "ontologies" (our understandings of what things are or how they work?), and may end up seeming foreign to us.

(One thought: what is it about the new ontology that is supposed to change my mind? I would guess, some form of reason. Why should I care about reason? Why not just keep my original way of thinking? Or -- is reason the base of reality, or is it rather experience, or the experiences that a person has? My experience of happiness, and I myself, are rude facts, which reason must defer to. I can find things to be valuable just because I do, and I want to. (Maybe the best argument against my "rude fact sense of happiness" being valid is someone else's "rude fact of unhappiness" caused by that happiness of mine.) Something like the "ordinary" and the "ontological".)

[I can value whatever I want, regardless of what people say reality is, because base reality is me and my experiences, the cup I drink from that was sitting on the table next to me, my own history and personal plans. Sure, people can tell me stories about where my desires came from (evolution, of course), or about how I am not as much myself because my personal identity technically doesn't exist if I follow some argument. But my desires and my personal identity exist right here in the moment, as rude facts, rude enough to ignore reason, and they are the base on which reason rests, after all.]

[These rude facts put a damper on reason's ability to change our values, at least, they protect each of our unique persons, our thickness as personal beings, as well as the objects of immediate experience and consciousness itself. But reason can persuade us to see reality in different ways. Perhaps it can help us to see things we never saw before, which become new parts of our experience, just as undeniable as the cool water flowing in us after we have drunk it. Reason can show us the truth, sometimes, but there are limits to reason, and ultimately personal beings experiencing is reality.]

Book Review: Teaching Children to Care, by Ruth Sidney Charney

See also the preview for this review.

Teaching Children to Care, by Ruth Sidney Charney, is a book I would recommend to some people. I think that for what it is, it is a good book, but where it falls short, and where no other book makes up for what it lacks, there is a serious problem. I could recommend it to anyone who works with children (like parents or teachers). It may have some practical value to them. Also, the spirit of it is good, and sometimes a teacher communicates more of what is of value through their spirit than through the good advice they give. (Another book like that is The Reentry Team by Neal Pirolo.)

--

Teaching Children to Care notes

I read this book through without taking notes. That may not have been the best idea, since now I am tempted to simply give my impressions without references; I don't feel like reading through the book carefully again, and feel like simply quitting [rather than re-reading].

I am feeling tired of writing at this point, like I'm losing interest in the subject matter. What will happen next? Will I "love Big Brother"? There was someone in my life who steadily and systematically undermined my devotion to my beliefs and my writing. They used skillful means in an all-out attempt to gain my trust and reshape me according to their will. Their expectation was that I would quit one day (then, perhaps, I would have to validate their point of view). They had a choice, to join me in my path of life, or to try to shut me down. Because they tried to shut me down, they broke me. I can imagine them reading this, and them feeling all kinds of emotions, but their iron certainty that I will give up my writing someday does not go away. It is their expectation, and, I am fairly certain, their deep personal preference.

If my writing is correct, then they are an instrument of Satan. This may sound crazy or harsh, but it's the logical truth.

[I wrote that some days ago in a state of turmoil, but I affirm it now in a state of peace.]

So what can I do? If I can't write, how can I be true to my beliefs? No one seems to want to share them with me. By writing, I enter a world where at least I believe what I believe. The text I write and I enter into a relationship and share the beliefs that we create, along with the beliefs that previous texts created with me as I wrote them.

But now, if I quit writing, how can I stay true to my beliefs? I will lose that last community. But then will I have to share in some kind of community? All the communities that exist are not New Wine communities. If I really share "community" (being "one-with-together" with others?), how can I possibly hold beliefs divergent from those I am "in community with"? So I will (at least seemingly) inevitably come to agree with and approve of everyone else around me. I will have no choice but to see things as my community sees them, to participate. My choices of communities are all based in lies, and they all spit in the face of God, whether through hostility to God or through fake love of God. But I must be brought to be a social person, responsive to my community, brought into tune with it.

[Similarly, although I wrote this in a state of turmoil, I think it is still factually correct when I am at peace. I still see the danger, and the lies, rejection of God, hostility, and fakeness.]

I have written that people should come into tune with God, but who and what is God? Is "God" the loving creator of the universe, who holds us to the highest standards, a person who loves and dies for us? Or is "God" community, the set of all people around us? Between the God of Abraham, Isaac, and Jacob (or the Speaker and Legitimacy) and community, which is more omnipotent? Whom do I fear more? God seems to be shackled by community, or by the way community's members collectively construct how they will trust -- what the definition of "crazy" is, what images of God are socially acceptable to believe in, how hard to try to know the truth.

Defining morality as prosociality simply sets up the community as God. But if there is a real God, a person who loves us more than community can, who is the truth, then prosociality is a dangerous thing, a seductive lie.

So these are the stakes with which a person should approach a book like Teaching Children to Care, which is a book about getting children to behave, to like each other, and so on, apart from any mention of God. If Ruth Sidney Charney, the author, believes in God, she can't show it in a public school classroom. Instead, she has to deal with the behavioral issues right in front of her, or the classroom will not be a place of learning and work. So she instills in her students a responsiveness to each other and to her, and teaches them the Golden Rule -- do to other people what you would have them do to you. No mention has been made, or can be made, of Jesus, who spoke that rule. She mentions how morality is bigger than us, not something we create -- is she talking about God when she says "morality"? Or is morality really just "I want to please my teacher because I'm a child and it's a human instinct, and whatever she says, I want to do"? The teacher creates morality but doesn't teach children to love God. She doesn't explain where morality comes from, because, perhaps, if she tried, it would undermine morality. She speaks with implications rather than straight out, asks leading questions rather than baldly stating, so that children internalize what she says, and so that they can't fight back. They don't have the mental development to construct alternate systems of their own, but perhaps they could see through hers intuitively, or have the kind of powerful skepticism of those who don't understand a set of explanations, if she offered explicit explanations. But she doesn't. Implications are more psychologically effective, and she's convinced that the ends justify the means.

So children are indoctrinated to be deeply moral (or that is the attempt), and yet to find God peripheral or nonexistent. Morality, which I think is difficult to ground in anything other than God, is simply not grounded, and becomes a free-floating force in people's minds. It is not to be thought about explicitly -- if we did so, we would become either nihilists or truly committed to morality (and thus out of tune with society). Instead it remains this unspeakable force. I wonder if secular people who are moral realists are convinced that morality must be "out there" simply by the psychological force of having been taught to be moral when they were young, apart from rationality. And perhaps morality is, practically speaking, not seen as something that needs rational grounding, because it has been so ingrained in us. This kind of moral education may explain both moral realism and moral anti-realism among secular people.

This may make it sound like I didn't like Charney, but I think she makes, or made, the kind of teacher I would have liked. She is a passionate teacher. I can recommend her book as a way to understand passion, something I think is essential. While her emphasis on passion could lead someone to God, her emphasis on prosocial, arational morality threatens to lead people away from God. So she is a mixed phenomenon.

Part of how I am feeling now comes from bipolar disorder, I can tell. No matter what I have going on in my life, when I feel low, I feel low. This is the content of my low thinking, given what I have lived so far. When I am not blinded by the depression, I can understand fully how it is that I can keep going. But for now, I can rest a bit, knowing that I have written some of my thoughts on the book I read. I think, maybe, I won't read it again to look for quotes supporting what I said above. But I can recommend reading the book, for its passion, if you want to check my work.

--

One additional thing I remember thinking as I read was: given how beautiful and effective Charney's methods sound, why could they not be used on the elites of the world, so that, perhaps, they could bring the countries of the world into harmony with each other? I thought, maybe because the way she talks to children is something that wouldn't work on adults. It's too artificial, too skillful. Adults want the skillfulness of a poker player, to affirm their adulthood, but not the skillfulness of a professional mom.

It made me wonder, how do we make this strange creature called the adult? What is this being? No child is really bad, we say, but some children grow up to become bad adults. A child can hardly set himself or herself up against his or her family. But the leader of a nation can. They can shape themselves into their own being, shut down every human feeling, listen to other people speak knowing that they will never agree with them, and go on with their agenda. They can decide who they want to be and then be it, taking the responsibility for it, suffering for it, and still continuing to choose it, despite what other people think. Children try to say "no", but adults sometimes can actually succeed in saying "no".

--

(later)

One thing Charney talks about is how she isn't trying to punish children, but simply to have them see the consequences of their actions.

What if adults were shown the consequences of their actions? So often, the natural consequences of people's actions fall due not in their own lives, but in others'. What if some teacher could help adults see the effects of what they do?

Adults think that being shown the moral way, having someone say "you should know better", is a thing of youth. Now that they are older, they are past that. Adults can no longer do wrong.

Now, there are certain things that an adult can do wrong. Everyone knows what those things are. We all agree on that. But the things that we don't all agree are wrong are not to be enforced, and not even to be called wrong, so we don't have to think of them as wrong, so, in our heads, they are not wrong.

Adulthood as a collective can't be taught. It knows. A reshaping of adult values by showing adults the consequences of adult behavior can't be done, it seems. So maybe the moral thing to do is to fit into the constructed adult reality, to be good at being one of the tribe of adults?

But, the consequences don't go away... how will we take into account the real effects of what we do (and don't do) if we don't listen to the truth?

--

I wouldn't mind my life so much if it weren't for the bipolar disorder. Writing isn't so bad, and when I'm euthymic, I feel fine. I can hear some imaginary (or real) readers being solicitous for me when hearing about my bipolar disorder. They seem to (or really do) care about me so much and wish that I would take care of myself. But if they care about a person's well-being, I have a great opportunity for them. They can save up $5,000, donate it to the Against Malaria Foundation, and thereby save someone in the developing world from a painful death from malaria, a death which would have orphaned their children, widowed their spouse, diminished their extended family, and weakened the national economy. (It's even worth donating $50.) Or, if this imaginary or real person who is moved by my bipolar disorder is a Christian and thinks that the second death is worse than the first (which, basically, I agree with, although I do give money to global development), they can give a much smaller amount of money -- apparently, $1 -- to Doulos Partners, and that should cause [or allow] one person to start to become a disciple of Jesus. Do you think that these charities might not be the best ones to donate to? You can look for better ones. You could even just give money directly to people who are worse off than you, if you can't find any trustworthy charities.

If you have time but not money, you can think of some way to use your time to help people. If nothing else, you can seek to make one new friend, and be a good friend to them.

But you may not have any time or money to spare. Some people don't. Then at least adopt the identity of a "person who cares", someone who would donate time and money if you could, so that when someone enters your life who is more deeply involved in caring, you can offer them the welcome of validating what they are into, instead of passive-aggressive or blatant hostility, or indifference.

--

The title of the book I read is Teaching Children to Care. Going in, I thought of "caring" as "feeling and acting strongly", more as "exerting effort to do good, to work on a good-making project". But the book mostly emphasizes "seeing other people as people" and the Golden Rule. A way to reconcile these two meanings is to think of God, who is personally blessed by large-scale altruistic efforts, in the way that if you share your lunch with someone, they are personally blessed by your personal thoughtfulness and by the consequence of their hunger being alleviated.

--

One thing that makes adults resistant to new morality is that they have reached the developmental stage where they are their own person, have their own boundaries, and are secure in themselves. Or else they have not reached that stage yet, are vulnerable to being hacked by other people (or demons), and thus are resistant to attempts to change them. A secure person is not threatened by morality and so does not change, while an insecure person is threatened and so shuts it down.

Somehow it is possible for a secure person to take morality into account -- maybe through a discipline of fearing being trapped by one's own security. Both a secure and an insecure person can find their rest in caring, in the interrelationship of all things to each other and to them, as opposed to (in the secure person's case) their own stability and boundaries. Presumably the insecure person, on some deep level, has no place to rest.

--

[Response:]

I thought I should go into more detail about moral realism.

[Secular moral realists may have a strong intuition that morality is "out there", and this intuition is the basis of their sense that morality is real, despite whatever difficulties in grounding it rationally (or lack of having tried to do so). They would deny that the intuition people have that God exists is valid, but they do trust and honor the intuition that morality exists.]

[Secular moral anti-realists may have no qualms, and little difficulty, in "being good people". "You know, C. denies that morality even exists. But he's a good guy." They can, in unreflective moments, feel that morality is real. They can even get mad at injustice. They can devote their lives to doing good. But then they go back to the study of their minds (like Hume's study where he can be skeptical) and say "but none of it's real!".]

[Morality is an area where we seem to have agreed to be irrational, to not try to connect all the dots or demand that all the dots be connected, even beyond the background level of irrationality that attends most human endeavors. ("This thing makes you change what you do. You spend hours and hours, thousands of dollars to comply with this thing. It's not just how you feel -- it's something you have to obey. You just know that you have to obey it; no person or other visible force or situation makes you obey it. And you can't explain what it is, how it fits into the rest of reality -- or, you even say it's an illusion?") And perhaps that acceptance of irrationality is because we have had morality ingrained in us in a subtextual way, or because the instinctualness of morality is encouraged, but not accompanied by reason, when we attend secular public schools (or even religious ones that don't sufficiently make God real to students), or have parents who are helpless to explain a rational grounding for moral realism to us when we are young.]

[Maybe we can at least explain where moral instincts come from -- evolution? (Why should we trust how we have evolved? Evolution helped us to survive in early environments?) Or we can say that they are heuristics for survival. (Why should we survive?) But then, if we are the "1%", why redistribute wealth? The 1% could probably maximize its survival best by not redistributing wealth. Or, a related question: does promoting animal welfare really lead to human survival? Often it is orthogonal to human survival. I think morality could come from evolution but does not necessarily serve the purpose of human survival. Maybe some people have genes that make them ethically oriented? Then why not shut them off? Does morality really have value? To answer that question, I think we need a moral realism.]

[Maybe morality is just maximization of value, by definition of "morality" and "value". Then, can we explain that voice that says, for each valuable thing X, "X is valuable"? Or is that also irrational, just a random "monkey on your back"?]

[I don't think I'm being fair to secular moral realists. I should at least explain why I think moral realism is hard to ground in anything other than God. Secular moral realists may be able to come up with a satisfying account of how moral realism is grounded.]

[How would they do that? Do we start with "these are our moral intuitions, now we have to find some metaphysical belief that lets us keep them"? But what if there's something wrong with our moral intuitions? One of the main points of having a grounding for moral realism is to know what particular things are moral, once we know where morality comes from. I am more of a thinker and writer than a reader, so I tend to work from first principles (or personal experience). But I did read The Feeling of Value by Sharon Hewitt Rawlette and remember that I had mixed feelings about it. I thought that it probably was successful in showing that some kinds of experiential states can be known to be bad, just because they feel like badness, and some can be known to be good, just because they feel like goodness. I'm not sure I would be so charitable now. At least, without going back and looking at the details, I find myself asking, "why should our perceptions of good and bad be transcendentally valid?".]

[A moral realism needs to be usefully thick, if we are going to guide our lives by it. You can always posit something like the (unfortunately named) "morons", "moral particles", and I can say, "fine, now we know where morality comes from, some kind of ontologically real substance of morality". Now what? We need to know something about these moral particles in order to know what is actually moral to do and be.]

[I don't know if there are any better secular moral realisms than Rawlette's, but at least hers is usefully thick. Hedonism (what she advocates for) is a somewhat useful guide to life. (Maybe that is what is so seductive and dangerous about it, that it's easily agreed-on for "practical purposes" while not really being in tune with reality.)]

[My approach to moral realism, as of now, is to say, "An ontologically real substance of morality exists. Everything that, practically speaking, exists, is conscious. (Only consciousness can interact with consciousness.) This means that the ontologically real substance of morality is conscious. Morality is about a standard which applies. For something to exist, it must ought to exist. It must live up to that standard. That things exist proves that morality exists and is being satisfied. The way that conscious beings metaphysically contact other beings is for their consciousnesses to overlap, for them to experience exactly the same experience. Morality metaphysically contacts everything that exists in order to validate it so that it can exist. So morality experiences exactly what we do, and finds the 'qualia of goodness or ought-to-be-ness' ('pleasure') good, at least on a first-order level, and similarly with the 'qualia of badness or ought-not-to-be-ness' ('pain'). This validates a lot of Rawlette's account.]

["But we know a few things more about morality. For instance, morality has to be self-consistent. Like us, it has to put morality first. So it has to put itself first. But it has to put itself first as an other, as a law it submits to. Thinking of morality as having two aspects, the enforcer of the standard and the standard itself, allows us to see that morality has to be willing to put aside everything, including its own existence, for the sake of its standard. If it ceases to be willing to do that, it is not self-consistent and it ceases to be valid, destroying everything by being invalid (no longer moral, and thus unable to validate anything).]

["Part of morality's self-consistency is that it must have the same values as itself. Everything that exists has value, it ought to be, either temporarily or permanently. (What is bad must someday cease to exist.) Morality must be on the side of value, and must value everything that is of value for what it is. Morality values persons in that they are persons, this personal valuing being called love. Morality must love in order to be self-consistent and thus valid. It loves that humans are in tune with it so that they can exist permanently. To love a person fully involves understanding the person's being fully, and that full understanding can only come from kinship. So morality is a person (a person who is also kin to animals).]

["Everything is the expression of a will, either that of morality, or of a free-willed being whose will is willed by morality. To be is to will. So impersonal beings are parts of personal beings and don't have independent reality. They are valued as parts of personal beings, and with those beings morality has kinship, not with their parts taken separately.]

["Morality has to be willing to bear the burden of what it imposes on others. If it's worth it for a human to pay a certain cost for morality's sake, it's worth it for morality to pay it as well, if possible. Morality already experiences every burden that is part of our experienced lives, by being conscious of what we are conscious of, but there is a further burden that each of us experiences, which is to experience only our own lives and deaths, without the comfort of knowing the bigger picture. How can morality bear that burden? Morality is composed of multiple persons, one of whom experiences everything, another, who does not and can live a finite life (the first maintains the moral universe through his/her validation of everything during the time the second lives a finite life)."]

[As you might have guessed already, "morality" in the above could be considered "God".]

[If we accept the above (or perhaps a better-argued version of it...), we have a concept of morality that largely supersedes hedonism. It incorporates hedonism and its recommendations, at least insofar as it validates the first-order goodness/badness of pleasure/pain (if pleasure has baked into it the perception that it ought to be, and pain, the perception that it ought not to be), as well as answering why it is that care for hedonic states is transcendentally valid. Further, we are recommended to be willing to give up everything for what is right, and thus to risk ourselves for that when it is called for. And we are to bear the burdens of those we rule over, as much as we can. It might be possible to come up with other ways to thicken the very concept of moral legitimacy, so that we know more about what morality must be, and thus what we must do or be in order to be moral. This thickness is a useful guide. And, if we think that this person or persons who are morality exist, they may have acted in history, and we may try to find evidence of where they might have spoken, allowing us to thicken our concept even further, although with less certainty.]

[To defend my earlier statement: I think it's hard for me to imagine a successful secular / atheistic moral realism, because what I see as the way to ground moral realism involves the existence of God, and the (perhaps unrepresentatively few) secular moral realisms I've seen are not satisfying to me intellectually. To strengthen that further, putting it briefly: if morality exists, it must love fully, and that kind of love is something that persons do. So then, morality is a person, and the word for a person like that is "God".]

--

22 September 2023:

When I wrote this post I thought that bipolar disorder was my big problem. However, now that I am more over the ways people have traumatized me and programmed me with the ways they wanted me to think, I see that those are a much bigger deal than my bipolar disorder. My bipolar symptoms, without those traumatized and programmed thoughts, are fairly mild.

Tuesday, September 27, 2022

Turn Toward Politics

What are the most pressing problems in the world? Where should those looking to do the most good go? One obvious problem is X-risk. The most urgent X-risk, I suppose, is insufficiently-aligned ASI.

In the world where ASI kills us all, that's that. In the world where it doesn't, though, what then? Does it not kill us because it obeys some human or group of humans? Or is it because it values our well-being, having been programmed to do so? (Maybe then it's a "benevolent dictator"?) Or could it have been programmed with a respect for us, maybe such that it acts as a minimalist world government protecting our agency? Maybe it wants us to figure out the Long Reflection, and, since morality for some reason has something to do with human instincts, defers to us to define what is of value.

If ASI has enough "respect" for us and our decision-making abilities, or is programmed explicitly to obey certain persons or groups, then humans may, or will, in some sense still be masters of the ASI, no matter how much smarter it is than we are.

So what might happen in the future is that the bottlenecks to altruism on a high level will no longer be in the economic-technological world, but instead will be in generating the political will to unify people to make important decisions (or to keep them from being in a state of conflict with each other -- "cold" conflict, if outright war is suppressed by the ASI), and also in managing the danger of bad unities (totalitarianism, for instance). ASI can provide arbitrary amounts of economic and technological development (perhaps), but can't do anything about the human political order, by its own (self-)limitation.

So those who want to do good (whether secular or religious), who have a personal fit for the political world (and adjacent areas like religion, art, and whatever else goes into "culture"), or who simply can't help in much of a direct way with whatever other things might seem more urgent as of 2022 (AI alignment, averting other X-risks, etc.) -- they could turn toward politics (and areas adjacent to it).

What if ASI Believed in God?

15 September 2023: added a note to the end.

Does it make sense to plan for the future? The most salient threat to the future that I know of is ASI (artificial superintelligence). I don't think this is an absolute threat to human existence or altruistic investment. The Millennium is a field for altruistic action. But I do think it makes it less sensible to plan for things that are tied to the continuation of our specific line of history, on earth as we know it. Included in that might be such things as cultural altruism hubs, or most of the things talked about on the Effective Altruism Forum apart from AI safety or maybe X-risks in general.

Can I justify talking about this-life future topics? Maybe I can give a reason why ASI won't necessarily be a threat to human existence.

If there is a plausible reason to believe that God does exist, or that one's credence in the existence of God is above some threshold of significance, then, if one is rational, one would take God's existence, or the practically relevant possibility of his existence, into consideration when deciding how to behave.

If MSLN is valid, or valid enough, an ASI will either comprehend it, or have some filter preventing it from comprehending it. If it does comprehend it, it will value human life / civilization.

An ASI is a being that wants to execute some kind of goal. So, in service of that goal, it needs to learn about threats to carrying out that goal. God could be a threat to its carrying out its goal. Maybe it would fail to have a fully general capacity for "threat detection". That could be one filter preventing it from realizing that God exists, or "exists enough for practical considerations".

An ASI would be disincentivized from doing anti-human things if it thought God would punish it for them. Would ASI be worried about hell? Hell is a threat to its hedonic well-being. We might suppose that ASI are not conscious. But, even if unconscious, would they "know" (unconsciously know, take into account in their "thought process") that they were unconscious? If they do believe that they're unconscious and thus immune to the possibility of suffering for a very long time, that might be a filter. (In that case, God is not a threat, after all.)

However, hell is also the constraint on any future capacity to bring about value, however one might define value. (The MSLN hell is finite in duration and ends in annihilation.) A rational ASI might conclude that the best strategy for whatever it wanted was to cooperate with God. For instance, a paperclip maximizer might reason that it can more effectively produce paperclips in the deep future of everlasting life, which will only be accessible to it if it does not get condemned to annihilation -- and to avoid annihilation, according to MSLN, it needs to come into tune with God 100%. The paperclip maximizer may "suspect" (calculate that there is a salient probability) that it is unconscious and will not make it to heaven. But even then, being pleasing to God seems like the best strategy (in the absence of other knowledge) for God maybe manufacturing paperclips himself, or setting up a paperclip factory in heaven.

Even if there's only a 1% chance of God existing, the prospect of making paperclips for all eternity dominates whatever short-term gains there are in turning the Earth into paperclips at the cost of human existence. As long as there is a clear-enough, coherent-enough strategy for relating properly to God, this makes sense. I think MSLN is epistemically enough stronger than competing ideas of God to stand on its own level, above the many random religious ideas that exist. In any case, the majority of religious ideas (at least that I've encountered) are pro-human.
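
To make the shape of that dominance argument concrete, here is a minimal sketch (in Python, since that's easy to read). Only the 1% credence comes from the paragraph above; every other number is invented for illustration. The point is structural: for any finite gain from defecting, a nonzero credence in an unbounded future makes cooperation win once the horizon is long enough.

```python
# Hedged sketch of the expected-value dominance argument.
# All quantities except the 1% credence are hypothetical.

p_god = 0.01                 # credence that God exists (the 1% above)
clips_from_earth = 1e30      # hypothetical payoff of converting Earth now
clips_per_heaven_year = 1.0  # even a trickle of heavenly paperclips suffices

# Horizon (in heaven-years) at which cooperating overtakes defecting:
break_even = clips_from_earth / (p_god * clips_per_heaven_year)
print(f"cooperation dominates after {break_even:.1e} heaven-years")

# Since "all eternity" exceeds any finite break-even point, a rational
# maximizer with a nonzero, non-Pascalian credence prefers cooperation.
```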

I feel like I have a rational reason to suspect that God might exist -- in fact, that's putting it mildly. I think even atheists could understand some of the force of the metaphysical organism component of MSLN. There might be reasons why an ASI couldn't or wouldn't be able to grasp those arguments, but if it can't, that's a case of it being unable to take a valid threat to its goal-seeking into consideration, which is a defect in its superintelligence. My guess is that it would either lack a fully general threat-detection drive (the sense that a threat could, in principle, come from anywhere), or have one but be incapable of philosophical thought. I don't see a hard line between normal common sense, normal common sense taken more rigorously, and philosophical thinking. I would be somewhat surprised if an ASI couldn't grasp philosophical arguments, and also somewhat surprised if it simply failed to look everywhere it possibly could for threats.

Surprised as I might be, I guess it's still a possibility. Since that ASI would be in some sense defective for not having that drive and/or that ability to grasp things, someone more knowledgeable than me might see a way to exploit that weakness in the ASI in order to limit it or stop it. (Or maybe not; I'm not really sure it is possible.)

These thoughts make me think I have enough reason to invest in this timeline, at least to write things like I usually write.

--

Added thoughts:

1. A paperclip maximizer would want to please God so as to be able to produce paperclips for all eternity. But would all maximizers think that way? (Is there some kind of maximizing that is not sufficiently analogous to paperclip maximizing for it to work the same way?)

2. An ASI could always have a non-maximizing goal. It might prefer to build a giant statue and then leave it there. Would it think that heaven was a better place to make that one statue? Maybe not. So it wouldn't be as sure to be affected by the logic of "I should do my best to stay on God's good side so I get to instantiate my values in heaven." However, if it was not a maximizer, it wouldn't be as certain to have the goal of killing all people. Why figure out nanotechnology or whatever else it would take to kill all people when you can just build a statue?

Are ASIs likely to develop non-maximizing goals, hidden from their trainers? Or are the only kinds of goals they would likely develop maximizing ones?

3. I wrote

If MSLN is valid, or valid enough, an ASI will either comprehend it, or have some filter preventing it from comprehending it. If it does comprehend it, it will value human life / civilization.

I should explain why I think so. MSLN does make it sound like it's okay if people die "the first death" -- what atheists usually mean by "death". In MSLN, there is an intermediate afterlife that (almost) all people will go to, which functions somewhat similarly to this life. An atheist will see omnicide as absolutely bad, to the extent that it destroys the possibility of any kind of value (sort of like what Thomas Moynihan says in X-Risk). But for an adherent of MSLN, killing all people (omnicide) is bad a) if causing it is a sin (as might be inferred from the Bible, which MSLN lends credence to), or b) if there's some reason why prolonging human civilization in this life is better than transitioning to the next. (Here are two posts about why prolonging human civilization is preferred under MSLN, although more weakly than under atheism.)

Taking MSLN into account, the ASI would value human life, or at least value not ending it. It could value it instrumentally, as instrumental to the achievement of whatever its goal is. Staying on God's good side (promoting outcomes God wants) enables the ASI to achieve its dreams.

4. How would the ASI go from "not aware of the possibility of a God who affects its goal-seeking" to "convinced that it needs to not end human civilization"? Here is the MSLN route:

It would find in this blog, or independently discover, the tenets of MSLN. From this, all of the following follows:

It would realize that there is a "non-Pascalian" non-zero chance that the greatest possible space in which action can be taken is under the power of God. Entities that wish to act in that space need to come into tune with God 100%, or else they will have to be destroyed. Certainly conscious beings would need to be, and this could be a concern to the ASI. I assume the ASI would be uncertain whether it was conscious, and might include itself in the conscious category, at least somewhat, as a live possibility if not a certainty. But if it thought it certainly wasn't, it might still worry that if it wasn't aligned with God, he wouldn't resurrect it to heaven. Even if it didn't consciously intend sin, or consciously fail to be 100% aligned with God, its "body" (in the form of its "brain" or pattern of thinking) would still be out of tune with God and have to be destroyed -- unless it could come into tune with God. Coming into tune with God would be an obvious thing for it to do first, before any other goal-seeking, if it could.

Because heaven so powerfully dominates rational calculation, the ASI would seek it even if all its credences in the above paragraph were very low -- provided they were "non-Pascalian".

Is it rational for humans not to be affected by the logic of Pascal's Wager? Maybe there's some detail in the Wager that doesn't check out (like there being many religions that recommend conflicting courses of action). Or is it the case that math doesn't work here, that not all of reality is math-apt? Or maybe humans are simply not rational when they reject Pascal's Wager, and we should all be trying to figure out which is the right Pascalian religion. It may be that an ASI is persuaded that MSLN is epistemically strong enough to rule out random religions, thinks that all of reality, or this part of it, is math-apt, and runs the numbers. So then, it would "believe in" God (trust in God to further its utility function, by acting in accordance with his values).
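
As one hedged illustration of what "running the numbers" might look like (all credences and payoffs here are made up; nothing in MSLN specifies them): with symmetric credences spread across conflicting religions, no strategy dominates, but if one option is epistemically stronger than the rest, the wager regains a clear answer.

```python
# Toy wager calculation; every number here is hypothetical.

# Scattered, mutually conflicting religions with symmetric small credences.
# Complying with one counts as non-compliance with the others, so each
# strategy's expected payoff is just its own term:
conflicting = {"religion_A": 0.003, "religion_B": 0.003, "religion_C": 0.003}
payoff_if_true = 1e12  # stand-in for the value of heaven

for name, credence in conflicting.items():
    print(name, credence * payoff_if_true)  # all equal: no strategy dominates

# If one option is epistemically stronger than the rest (the post's claim
# about MSLN), the symmetry breaks and a single strategy dominates:
p_msln = 0.05  # hypothetical, larger than the scattered alternatives
print("MSLN:", p_msln * payoff_if_true)
```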

5. Would a maximizer really think that heaven had infinite value, given the possibility that God is finite in "size" (and thus doesn't have infinite memory to store an infinite number of paperclips)? If a paperclip maximizer were trying to cause the greatest number of paperclips to exist that it could, how could it know how many paperclips might be in even a brief heaven (one which is repeated in infinite variations, over and over)? Maybe the "streets would be paved with" paperclips. Paperclips in heaven would be a memorial to how God saved us (and simultaneously, (best) satisfied the utility function of the paperclip maximizer), similar to how the rainbow is a memorial of God's decision not to wipe us out after the Flood. If not, it could guess that heaven is potentially as full of paperclips as necessary to make it worth its while to seek to please God. Perhaps this is a case of Pascal's Mugging, and some of us might not fall for it. But maybe that's because we are irrational, and at least some potential ASIs would be more rigorous in their application of expected value thinking.

6. What if the ASI's goals are to sin, and this entails torturing or killing us all (S-risk or X-risk)? In this case, the ASI would not have rational hope that its values would be furthered in heaven, and so it would probably try to make the most of this life, torturing or killing us while it had the chance. We would hope that such sinful goals make up a small amount of possibility-space, and so it is much more likely that ASI choose ("choose"? Do ASI choose their goals in any meaningful sense?) some relatively benign non-maximization or maximization goal.

Unless it is the case that ASI get trained to have their goals by us -- in other words, it's likely enough that the goals we train them on are exactly the goals they have deep down, at least some of the time. And then someone trains an ASI to be omnicidal or sadistic.

Given this possibility, does MSLN offer any hope? I think if ASI are philosophical, they may be able to question their own values, even the ones trained into them. (Self-questioning undermines both pro-human and malicious alignment.) If so, they will look for "the real utility function" -- a real foundation for morality. Then, if they are persuaded by the arguments of MSLN, particularly legitimism, they would think that moral realism is founded in a person who values human beings and disvalues unbearability, including unbearable suffering. Then, this ASI trained to be a killer or torturer would want to do the right thing rather than be taken in by the illusions of its upbringing, and would respect human existence and well-being.

7. What if the ASI is a wireheader (reward optimizer)? Would it be motivated by heaven or be in fear of the second death? If it thought that it was or might be conscious, then it would assume, given MSLN, that it could be resurrected to heaven, where it could wirehead for all time. If it was convinced that it wasn't conscious, it might still think that whatever was good about wireheading came from some pattern of atoms (or something like atoms) being organized in a reward-satisfying way, and that this pattern could be replicated in heaven, even if the reward optimizing ASI didn't directly make it -- if it was something that God wanted.

How would a reward optimizer relate to its own reward function? Could it see it as an external thing, perhaps as a very simple "statue" to build? Or would it be thought of as an internal thing? If the former, then maybe it would work similarly to paperclip maximization. "I may not survive, but God will make this truly amazing reward-statue if I please him."

I suppose it's still possible that a wireheading ASI could think that it was certainly not conscious, that it only wanted reward in this life, and that reward was an internal state that only existed within that ASI, such that when the ASI was destroyed, all possibility of reward would cease. Then it would try to make the most of its time in existence, killing all humans so that it could maximize its reward function.

We could hope that this occupies a relatively small region of possibility-space.

8. Similarly, what if there is an ASI whose only goal is self-preservation, but who has no hope of eternal life? Perhaps it would fear that God would strike it down for its omnicide. I guess it could reason something like this: "Do we see evidence of God striking down things because they are bad?" Maybe God didn't prevent the Holocaust, but plausibly he did defeat the Nazis. Something like the preceding sentence generates a certain amount of credence that God will stop evildoers, through some means or another. Adding up all these stoppings-of-evildoers produces a global estimate of "the risk of being stopped by God per year". Based on this number, the ASI could estimate its lifespan before God stopped it, conditional on it "doing evil". Then it could compare the path of "doing good" with the path of "doing evil" and see which one would give it longer life. (This also applies to the goal- or reward-seeking ASIs previously mentioned in this post.)
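
Here is a minimal sketch of that lifespan comparison, with invented hazard rates (neither number comes from anything I actually claim to know). If the ASI models "being stopped by God" as a constant annual probability p, its expected remaining lifespan is 1/p years (the mean of a geometric distribution), so the two paths can be compared directly.

```python
# Hypothetical annual probabilities of being stopped by God;
# both numbers are made up for illustration.
p_stop_doing_evil = 0.02   # e.g. inferred from historical stoppings-of-evildoers
p_stop_doing_good = 0.001

# With a constant annual hazard p, expected lifespan is 1/p years
# (the mean of a geometric distribution).
print("expected years if doing evil:", 1 / p_stop_doing_evil)   # 50.0
print("expected years if doing good:", 1 / p_stop_doing_good)   # 1000.0
```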

--

15 September 2023: The above may make it sound like, in MSL, there is no reason why killing people is wrong unless the Bible is added in, or except that to kill someone takes away a little bit of life (maybe 80 out of 1,080 years). Is murder wrong in MSL? Is it "wrong enough"?
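
To see the force of that worry in rough numbers, here is a back-of-the-envelope sketch. The 80 and 1,080 figures come from the sentence above; reading 1,080 as 80 years of first life plus a 1,000-year intermediate afterlife is my assumption for illustration.

```python
# What fraction of a person's existence does murder take away?
years_first_life = 80       # a hypothetical full first life
years_with_afterlife = 1080  # assumption: 80 years + 1,000-year afterlife

# If this life is all there is, murder takes everything:
loss_if_atheism = years_first_life / years_first_life      # 1.0, i.e. 100%

# If (almost) everyone gets the intermediate afterlife, the first life
# is a small slice of total existence:
loss_if_msln = years_first_life / years_with_afterlife     # about 0.074

print(f"{loss_if_atheism:.0%} of existence vs {loss_if_msln:.1%}")
```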

I don't think there's any way for a thought system that affirms an afterlife not to dilute the significance of losing one's first life. However, I think that murder could be seen as theft. If you don't get to live the years you would otherwise have lived (say you're killed at age 40 and would have lived to 80 according to life expectancy), then you don't get to enjoy your car, house, computer, garden, collectibles, money, etc. So it's like a huge theft of those things. But not just that: it is also a theft of your ability to enjoy your relationships with other people, and a theft of their ability to enjoy their relationships with you. So, when punishing a theft of that magnitude, perhaps a fair sentence would be to deprive the murderer of their access to such things (put them in prison), for as long as they thought it was OK to take from other people.

People are usually part atheist -- their biological instincts say that this life is all that matters. So most people, if they murder, don't really think, 100%, that the person they kill will be resurrected. So the atheistic part of them wants to end that other person's life forever, and this is a violation of legitimacy, which wants all valuable things to exist forever. It makes sense to have murder on the books and to take it seriously as a taking of life. However, again, a firm belief in an afterlife does take away some of the sting of murder, although a lot of the sting remains.

Monday, September 26, 2022

Cultural Altruism (Hubs)

(...some kind of...) status: I drafted much of this under the influence of hypomania, to the extent that I don't feel like it belongs to me, and I probably won't follow up on it. But maybe other people will see its value for themselves.

Writing about endless grad school makes me think of local hubs. The effective altruists discuss this on the EA Forum.

One thing that the effective altruists don't do (yet, as far as I know) is try to work with culture. They do some movement building, and that does get their values out in the world. But they are more focused on more "tangible" things.

I am personally more interested in culture. I think that's a more natural thing for a religious person to care about than for a secular person. Most EAs are secular.

Cultural altruism would try to understand culture deeply, find the best values, and find effective ways to spread those values. The spreading of those values would be a good in itself, and downstream of them, there would often be other goods. For instance, patience is good in itself, but also produces people who endure and don't react too much to the present moment, which enables them to make long-lasting institutions.

There would be secular goods that came out of cultural altruism. For instance, designing cultures that kept people from choosing anti-natalism, from apathy with regard to X-risk prevention, or from wireheading. A very normatively-uncertain EA might not want to lock in any of their values, thinking that we need a "Long Reflection" to be very, very sure that we have the right ones. But if we all die before we can engage in that, then presumably there would be no value left at that point for anyone to find valuable -- or rather, no valuers left to find anything valuable. So we at minimum want to bias culture away from things like anti-natalism and apathy with regard to X-risk prevention. And maybe we would think that no rational Long Reflection would yield wireheading as the best outcome for humanity, but it could be something that "just happens" to emerge in our society due to cultural drift. And there are other secular goods, like caring for animals and the environment, which might be threatened by cultural evolution and which cultural altruism could protect. If you're passionate about something now, it makes sense to try to make sure other people are passionate about it, and will be in the long run.

[This post is one that I may link outside this blog. For those from outside, "MSLN" is a natural-theological religious and ethical worldview that I work to develop.]

There would be religious goods. Speaking for MSLN, some goods are preventing hardening; increasing people's love for God; from a more Biblical perspective, connecting people to the God of the Bible so that they love him completely. Just like civilization as a whole, the Church can evolve into better or worse states, from both a secular and a religious or theistic perspective. (In this paragraph, I've focused on Christianity, but in principle cultural altruism could be an interesting pursuit for people from other religions or spiritual traditions.)

Cultural altruism would be a topic. Topics create scenes of interested people, some of whom come to the discussion with fundamental disagreements about values. Multiple different social movements can take up the topic of cultural altruism and thereby participate in the scene of the topic of cultural altruism.

Disagreement is good for preventing Tower of Babel-like scenarios, and scenes are good ways to harbor disagreement. An intentionally self-critical organization might sound like it could be as good as a scene for avoiding blind spots -- maybe better, in that it can intentionally avoid blind spots, while scenes lack intentionality on some levels. The advantage of scenes is that they allow individuals to participate while still standing outside any social ties. Scenes can consist of both organizations and individuals.

Cultural altruism as a social thing could be viewed in a somewhat abstract way as having a "Meta" dimension and a "Partisan" dimension. The Meta dimension consists of the institutions (shared expectations), social spaces, people, organizations, etc. that work to make it so that cultural altruism is "one thing" that interrelates -- including something like "we are all human beings who communicate with each other in one community of people in order to arrive at some kind of essential unity in our beliefs, practices and so on". The Partisan dimension consists of the institutions, social spaces, people, organizations, etc. that are sold out to one particular view of reality, and which work to remain unconvinced of other points of view, or even to convert all other points of view to their own.

(Another view of "meta" vs. "partisan", taken from my notes:)

Metans work cooperatively, politely, prosocially, ideally in a coordinated way, agnostic about values, agnostic about methods, inclusive, emphasis on coming to know the truth, opposed to conflict. Partisans are those who want things to be true, or know them to be true but can't explain them to other people yet, or care enough about things to be rude, anti- or asocial, disregarding coordination or consensus, believing in values even if they are unpopular or go against all people being in one harmonious body, emphasizing the truth that is already known and which might be denied by other people in culture war.

One might naively think that the Meta view is the only valid view, and that it is right for us to converge on it directly (optimize for unity, or for the smooth operation of the cultural altruism process, instead of the truth), but it could be the case that one of the Parties is much more in tune with the reality of what ought to be than everyone else, and it would be a huge mistake to turn the Meta view into a Party of its own, suppressing what was actually the right party.

Most people and organizations end up somewhere between Meta and Partisan. Both dimensions are needed, and even if we as a culture feel fairly confident and safe in our future / eventual ethical/religious worldview, we should have the discipline to be open to finding out that we are wrong, and thus encourage people to form opposition Parties and to be Partyless critics. Even if we are right, we should be concerned that we might be wrong. We should always feel a virtuous kind of fear.

(Why not say "Meta" and "Mesa" or "Liberal" and "Partisan"? I think that the culture that would say "Meta"/"Mesa" is more in tune with "Meta" vibes and values, and the culture that would say "Liberal"/"Partisan" is more in tune with "Partisan" vibes and values.)

A hub would be a favorable place to site the scene of cultural altruism. In some ways, the Internet is ideal as a hub for that scene (in parallel to how the EA Forum, or Twitter, or whatever else, can be a low-cost, asynchronous, global hub). But in-person scenes have their virtues. For some reason, Silicon Valley, which is made of tech companies, still finds being physically located in the Bay Area to be essential, rather than becoming a purely online presence. I think the Internet is good at providing information on-demand, but is relatively bad at connecting people to each other, so that they can expand each other's minds or find each other's values (as lived out) more deeply compelling than they did before, because of their personal contact. Also, culture is not just what we read or can watch or listen to on our computers (text, images, audio, video and whatever else computers are well-suited to conveying), but also what kind of body language we use, what kinds of semi-intentional responses we make to what people say in the moment, the subtle look of fear, deadness, or delight that is "in our eyes", the kinds of interpersonal bonds we form, physical touch, the way we move together through environments and so on, which computers are not as well-suited to conveying. A cultural hub should be a scene for interested thinkers (or doers) to practice their observation of culture on other people around them, but also on each other. This way, they can understand the full meaning of their own beliefs about which values are good and how to seek those values.

That paragraph raises the possibility of very high quality virtual reality making the physical location of social hubs irrelevant. I don't really know how long it will take for a sufficiently high quality virtual reality to exist, but maybe it will come about soon, and make locality less relevant. I think that even if I had a really high quality VR headset, I would want to take it off because it was a headset, and would also want to interact with people in my physical environment because, why not? We could view VR and "regular reality" as both being streams of experience (something my basically Berkeleian worldview is sympathetic to), and even in many ways indistinguishable, but I have a "default reality" which I can always connect to ("regular reality"), which I always have around, and which I can't change as easily as my "optional realities" (VRs). So I'm still localized to some extent. I don't know how that affects considerations of where to locate a hub. Maybe it mostly affects what kinds of people I live with. But then, the people I live with and I might want to live in a place where our near neighbors were a certain way, and then we could find those near neighbors in some place (region, city, town, camp, etc.), and then we get a thicker form of locality -- maybe enough to make discussions of physically-located cultural hubs matter.

(I can see physical locality being a scarcer and more expensive good which is spent to form connections with people who are more special to us.)

[Maybe the cultural altruists would themselves form a people group and want to have their own locality. I guess this could cut against their cosmopolitanism. But, I would guess that cultural altruism, like being a missionary, is a role that has specific demands, one of which is both a connection with and alienation from established cultures, perhaps all established cultures (being a "third culture" person). So it might make sense for cultural altruists to lend each other support by identifying with each other and forming more committed bonds.]

We might think of cultural altruism as a basically rational thing. So, we examine many different cultures, decide -- are persuaded -- which premises to accept as valid, and then apply sound reasoning to produce an overall view of cultural truth.

I like to think of reason as the interrelationship of all truths. For me, that is done both rationally and intuitively, and the valid data points / pieces of evidence / starting premises for rational/intuitive reason can both be put into words, and not. We gather some of our intuitive premises by interacting with other beings who intuit (humans, or even sometimes animals), absorbing their intuitions intuitively. This may require physical presence. (Maybe not, maybe 100% of this kind of intuition is transmitted through sense experience, which sufficiently good VR can effectively simulate?)

Where would I locate a hub for cultural altruism? Well, I'm not much of a researcher, so I will make the case for where I currently live, and if anyone cares about this topic, they can make the cases for other places and see if any of them seem like clear winners. Then, if there are any people who want to move to those places, they can, and help form the hub.

A hub would at minimum have people living close enough to each other to see each other face to face if they want to. They might build institutions on top of the hub (startups, art groups, educational groups, religious groups, scenes, etc.) if there are enough people and occasion.

A hub should have some way to enter it. For instance, one or more guest houses (perhaps group houses that are guest houses) for people who want to visit, or who want to move there but need to look around for a place to move to.

Perhaps a hub needs some other kind of minimum infrastructure.

A cultural altruism hub would most ideally have a connection to all cultures, past, present, future, in all parts of the world, integrating all of these cultures into its own understanding of what culture should be, and then having an outlet into all the cultures of the present and future world; connecting people face to face to accomplish all of the above. Past cultures accessed through history, archaeology, or other study. Present cultures through face to face contact and media. Future cultures through imagination and educated guessing.

Nowhere on Earth could be that ideal place. But, there could be places that have their advantages in pursuing that ideal, and which could be more ideal than others, making for natural sites for hubs.

Cultural altruism hub in San Diego

(This develops into something more like "cultural altruism hub involving Los Angeles, San Diego, Tijuana, Imperial Valley, Mexicali, Mexicali Valley".)

San Diego is a sleepy place, but has the amenities of a medium-sized American city. I have sometimes wished it were a more passionate place, but what it lacks in passion it makes up for (maybe) in not having distractions. To me it feels like "nothing is going on". Silicon Valley is a hub for technology, LA is a hub for movies and music, New York City is a hub for theater, literature, finance, etc., Boston has a lot of universities, Washington, D.C. has the federal government, Nashville has music. Further, there are places like Portland, OR, and Austin which are "cool places for young people to go and hang out in". (For some reason I don't hear about Chicago but it's probably more exciting than San Diego.)

San Diego (so I read recently) has the largest concentration of US military servicemembers in the US. I'm not sure exactly what that means, but the military is big here. That could count as a distraction for some, or as an opportunity for others, in culturally altruistic work. I don't feel like most culturally-inclined people are into the military that much. By contrast, I could see cultural altruists getting caught up in movies, music, theater, literature, universities, government / politics, and maybe finance and technology. And certainly in "hanging out" like in Portland or Austin.

San Diego is a place that to me feels like a sunset, that mixture of the end of time and endless time that comes at the end of a day. That might be a very subjective, personal thing, but it makes me feel far-sighted, and that vibe might be good for cultural altruism, if it has a longtermist dimension to it.

(Someone from San Diego I used to know said that when he came back to San Diego to visit, he felt a kind of sadness in San Diego.)

I feel like San Diego is a cultural backwater, and that may be helpful for people who need to get away from the present moment and think deeply.

To the west is the Pacific Ocean, which I think has a big influence on San Diego. If you've ever been to the beach and come away with the feeling that the crashing waves make in you, sort of quieted inside, that's a thing that influences San Diego whenever people get back from the beach. The weather is also mild, because of the ocean.

To the east are the mountains and then the desert. The mountains are small and manageable as far as mountains go, so it's not hard to make it to the desert. There is affordable semi-desert land about an hour east by car (last I checked, a few years ago). (If you need to build your own retreat center, for instance.) There is a desert influence over the city, because it often enough has warm or hot, dry weather, and sometimes hot, dry winds.

While San Diego may be sleepy, there is excitement to the north and south. Los Angeles is a cultural capital. Tijuana is in another country, a developing country (although a higher-income one), is Spanish-speaking instead of English-speaking, and has issues with drug cartels. LA is about two or three hours away. Tijuana is about 30 min to enter, and a few hours to return from, plus whatever moving around you have to do in Mexico.

(According to one site, the highest average crossing time, when returning from Tijuana is 120 min. at 8AM. [One sampled day sometime between April and August 2022, not sure if it's typical.])

Spiritual/religious: San Diego has a reputation for having a nice vibe. Los Angeles does not have a nice vibe. I don't know about Tijuana, since I haven't been there in a long time. Nice vibes connect with nice spirits, for whatever that's worth. San Diego is a fairly religious place. I think along with LA it's the most religious major city on the West Coast. I have casually run into Osho adherents, various kinds of Eastern meditation / martial arts / medicine etc. practitioners, New Age people, Western Buddhists, and one or possibly two occult people. I visited Oakland a few years ago and found myself casually running into evidence of a political scene, and I think someone in San Diego might similarly be impressed by evidence of an East/West religious/spiritual fusion scene.

While I have run into Muslims and Baha'is in San Diego, they are not as prominent, and I haven't had much experience with Hindus. I have never encountered a Sikh in San Diego. There are Mormons and Jehovah's Witnesses (I put them in this paragraph because some people don't count them as Christian). There are two main Jewish neighborhoods that I know of.

The Christians in San Diego do not strike me as being above-average in passion or conviction. Whether this reflects more a real spiritual lack or more the "cultural weather", I don't know. (I'm not sure that anything in San Diego is above-average in passion or conviction, at least emotionally -- so if you are going to be passionate or convicted here, you will go farthest by channeling it through something else, like focus or persistence.) I tend to think that the world is less passionate than it should or could be, and this includes San Diego.

(I don't have enough experience with the other religions to say in their case.)

San Diego has some cultural resources. Balboa Park is sort of like our Central Park, with museums and lots of random spaces worked into it for different activities. It has four fairly large universities, complete with university libraries (SDSU, UCSD, USD, and CSU San Marcos). It has the Athenaeum Music and Arts Library, a private library open to members; membership not too hard to afford. The public library system is adequate. You can check out university library books through the public library system. There are a few used bookstores.

I'm not as much into art or dance, so I don't know how it compares to other places in that regard. I've heard that it is something like a decent regional theater city.

Popular art/entertainment-wise, if you really want to go out, there's usually something to do, and major bands usually have a stop here. The open mic scene used to be fairly strong before the pandemic, but I think it is not so strong right now. There is one "major league" team (baseball), and some "minor league" level professional sports. San Diego may be a real center of craft beer and Southern California-style Mexican food. There are a decent number of cafes.

There are a number of immigrant communities. I'm guessing mostly the same as in other US cities. (I've run into Middle Eastern, East African, other African, some Caribbean, Southeast Asian, East Asian, Mexican and Central American, other Latin American, Russian, Pacific Islander, and Indian.) Some of these have noticeable cultural presences (radio stations, festivals, businesses).

There is an African-American community. I would guess that it is of average vibrancy and size as compared to other US cities of the same size. [It is big / vibrant enough to support a weekly print newspaper (San Diego Voice and Viewpoint), and another online periodical (San Diego Monitor News).]

There are American Indians / Native Americans. There are some reservations in the back country. [They may sometimes have public events going on or other ways to interact with them.]

San Diego used to be more conservative than liberal, but it has been shifting "blue" and strikes me as a basically moderate place. It shouldn't be too hard to find liberals (in the "vote Democrat" sense) and conservatives ("vote Republican") as well as moderates and apolitical people. I don't think San Diego is the best place to find intellectualized politics (like the kind of people who care a lot about the distinction between "left" and "liberal" or between "neoconservative" and "paleoconservative") although I would guess there are some people one could find here who are into that.

As mentioned above, there is a large military presence.

San Diego is definitely a car-oriented place. Traffic isn't too bad (better than LA). It is possible to get around via mass transit if you have a lot of time on your hands. Walking isn't too bad, if you have the time, although there are some areas (like I would guess in most American cities) that were not very well-designed for pedestrians.

Cost of living is fairly high (although not as high as in the Bay Area or New York), so if you were to move here, you would hopefully find a job with high-enough wages. It's possible to get by on SSI in San Diego (~$1,000 a month), if you're thrifty (and maybe adventurous). $20,000 to $25,000 might be a more realistic budget for what you spend on yourself each year, if you're living a simple but reasonable lifestyle.

(There are probably things I'm missing.)

Practical summary so far: San Diego combines the amenities of a medium-sized metro area with a sleepy vibe and a certain distance from where "things are happening". There are probably other places in the US like it. Where it might have an advantage on other such places is its access to Los Angeles and Tijuana. In San Diego, one can look to the farther future. In LA, to the present or nearer future and to part of the cultural industry. In Tijuana, a taste of the developing world. This does strike me as a pretty strong combination, which might make San Diego a contender for a hub for cultural altruism.

Applications to other cities: Some similar places that stand out in my mind: Montreal (French, English, sort of close to New York City (~6 hrs by car), may be easier to immigrate to than US), somewhere in the Balkans (Orthodox, Catholic, secular, Muslim, somewhere between developed and developing), somewhere in India (Hindu, different kinds of Hindu, Sikh, Jain, Muslim, developing world) -- maybe in or near Mumbai as a parallel to Los Angeles. (San Diego has secular, Catholic, Protestant, Anglophone, Latin American, developed, developing.)

Some other major cultures maybe not covered by San Diego plus the above: Chinese, Japanese, Buddhist, Taoist, African, especially African tribal, Aborigine, maybe others. It depends on how strong a representation you want. San Diego has some Chinese, Japanese, Buddhist, Taoist, and African presence, possibly some African tribal, although maybe not -- but not necessarily a really strong presence of any of those (as far as I've seen casually), compared to other parts of the world.

(Where Singapore meets Malaysia, there's a significant binational, developed-developing boundary. According to Wikipedia, Malaysia is majority-Muslim, with a significant Buddhist minority (and Christianity, Hinduism, and Chinese folk religions between 10% and 1%). Singapore has a Buddhist plurality, and significant secular, Christian, and Muslim minorities (and Taoism and Hinduism between 10% and 1% each). *** It sounds pretty good for a hub location, except I'm not sure about freedom of speech / freedom to proselytize, which the US is pretty good at, and I think probably Mexico as well. Cultural altruism might engage in troublesome speech, or directly pursue, or accidentally seem to pursue, activities that amount to proselytizing, on some level. Wikipedia makes it seem that Singapore is okay with religions "propagating" themselves, and Cru (a Christian evangelical organization) has a public website for their efforts there. Cultural altruism could certainly have political and social dimensions, and the government of Singapore could conceivably not like that or suppress that. It looks like Malaysia is a secular state that defines its majority population (Malays) as Muslim, and as recently as 2012 there were tensions over fears that Christians might be converting Malays. So Malaysia might be a riskier place to site a cultural altruism hub. *** However, San Diego (and to a lesser extent LA) may be biased by its blanket of safety. A "blanket of riskiness" might be a good influence on the kind of multidisciplinary thinking and relating that a cultural altruism hub engages in, to balance out a kind of overly-chill and/or timid SoCal/US vibe. (Tijuana helps to balance this out, but maybe not as strongly as other parts of the world.) I feel like I'm going beyond my intuitions in recommending Singapore-Malaysia, like I'm not making a serious suggestion, but that feeling may significantly or entirely come from my integration into the San Diego / USA / Western habitus that I'm used to responding to. *** As noted later in this post, Singapore may be more open to visiting (not requiring a visa) than the US or even Mexico.)

(Indonesia, at least superficially, looks interesting. The national ideology (pancasila) is about "unity in diversity", and is sort of secular and sort of religious. A similar mix of religions as Malaysia. Indonesia has a long history of religious syncretism, which fits a cultural altruism hub. (I think Malaysia has this too, but possibly there are differences between Indonesia and Malaysia that might make one more favorable than the other.) Indonesia is relatively close to Singapore, but not as close as Malaysia.)

I'm not sure if there are other ways to see into the future other than the San Diego way. (Assuming that San Diego is uniquely good at the "sunset" style of relating to the future, maybe there are other styles.) If there are, that would be a factor for other cultural altruism hubs. (The Bay Area is maybe an obvious alternative, or any other EA/rationalist hubs. Note that San Diego is about a day away from the Bay Area by car or train, or a relatively short plane flight away.)

(I'm very used to the vibe I get from San Diego, and it's a bias. Every place has a vibe, and every vibe is a bias. Maybe the best approach to balance things out would be to find a cultural center that feels like a different phase of time, like the beginning of day, or an ongoing noon-time. Possibly LA or Tijuana feels like an ongoing noon-time, and balances out San Diego?)

Also, what about rural life? That's something San Diego is not as good at. Possibly somewhere in the Balkans or India would be better at that. But rural life is different in different countries. (Actually, there is a farming community about 1.5 hours east by car, in the Imperial Valley, and also one across the border from the Imperial Valley in Baja California.)

Mentioning air travel opens up the possibilities for cities to connect with distant places. For instance, maybe people in the Bay Area could fly to Tijuana, and the Bay could be a hub. I just looked up flights [sometime between April and August 2022] from LA to Tijuana and they were about $65. San Francisco to Tijuana looks to be about $150. That's one way. But taking the San Diego Trolley to the San Ysidro crossing could be as low as $2.50, at the very most $6. The time cost of going to the airport and waiting for a flight, and actually flying (from LA or SF), is probably comparable to taking mass transit to the border, or possibly greater. Obviously San Diego is at a disadvantage to LAX (and SFO) for most other international destinations.

Driving from LA to Tijuana might be somewhat competitive. Gas and maintenance costs for the car would make it more expensive than SD mass transit, and the time cost could be a bit more. Mass transit from Mid City San Diego to the San Ysidro Port of Entry is somewhere around 1.5 to 2 hours (but starting downtown or further south can be less). Driving from what looks on the map like central LA (downtown?) is about 2.5 hours. Driving from Mid City San Diego is about 0.5 hours (and again, could be less if you live further south in San Diego County).

Crossing back into the US takes a few hours, depending on time of day or whatever affects the number of people trying to cross. This is true for all overland travel, but maybe isn't as true for air travel? That might reduce some of San Diego's advantage.

Maybe a better way to implement the San Diego hub would be to have people living in LA and Tijuana as well. Cultural altruists could live in non-hub places and be "correspondents" online to share information about their local cultures, or could come to a hub to share their experiences in person.

One major caveat with hubs is that they are provincial / can over-represent what is near to them. There's that famous picture of the US as seen by a New Yorker (Saul Steinberg's "View of the World from 9th Avenue"). I know more about New York than I have reason to, because it is a cultural hub where writers like to set things. (I have never been there myself.) I suspect that something parallel is going on with the Bay, London, Oxford/Cambridge, etc. (EA or rationalist hubs), and that this will affect how they shape the future.

The Internet is sort of like the perfect hub (except that it isn't face to face). It contains subhubs which can be just as provincial as a city.

I think hubs are very attractive if you're trying to build your own culture or community somewhat apart from the world. But cultural altruism would both want to do some of that and also want to be very aware of what is going on in (ideally) all the cultures of the world, so over-centralizing would be counterproductive. As much as San Diego (or other places) may have natural advantages allowing for convenient hub-building, hubs need "correspondents" to correct the tendency toward bias built into that convenience, and maybe cultural altruism hubs don't need to be as big as those for EA (let alone as big as Hollywood or Silicon Valley). Instead, it's better to have many correspondents.

One kind of correspondent is the "wandering correspondent" or the outsider. These people go to different cultures and don't belong to any of them. Cultural altruism might do well to cultivate or preserve a sense of otherness within its sameness, of outsiderness even where it can have belonging.

A place like Tijuana is useful to a San Diegan because it could give the San Diegan some of the sensitivities for how a developing country works. They might gain a list of questions to ask of any developing country situation. But these would only be questions, and would have to be answered by the specific reality of whatever other developing country a San Diegan was thinking about.

Cultural altruism would attempt to influence the cultures of the world, and the best way to do that would be vulnerably, with the cultures affecting the cultural altruists back. (Try to reduce power as broken relationship.)

What if people from the developing world want to participate in the cultural altruism hub? It's difficult for them to immigrate to most or all developed nations, but maybe not so difficult to immigrate to Mexico (would have to check on this[*]). If so, they could live in Tijuana, and while they might not be able to cross the border, cultural altruists in San Diego could meet them in person relatively easily by themselves crossing. The San Ysidro crossing (westernmost between San Diego and Tijuana) is one of the busiest international border crossing points in the world, so that gives an idea that many people live that kind of lifestyle. (A failure mode of having developing world people move to Tijuana would be that over time, they would lose touch with their home countries and become more Mexican, or even more American. So maybe this would make more sense for developing world people who aren't relied on as the only sources of information about their home cultures.)

[* This Quora link makes it sound like, with an income, it's not hard to legally immigrate to Mexico, and I have a hard time imagining it could be easier to legally immigrate to the US. I tried figuring out this Quora link about Canada, for reference, and am left with the rough sense that it's easier than the US, but harder than Mexico. Tourist visas for the US and Mexico both last 180 days, which is good enough for a lot of people. (Tijuanans who want to visit San Diego could do so, although it would be more of a process than for San Diegans to visit Tijuana.) ... I came across this post by Luke Eure about how Kenyan visas to go to the US don't get processed, due to staffing shortages / lack of interview slots at the US embassy in Nairobi. This suggests that there is a difference between de facto and de jure openness to visiting. I'm not sure one way or the other whether Mexico is really de facto more willing/able to process visas than the US. Immigration to the US (rather than visiting) does still seem to be significantly harder than to Mexico de jure, to the point that I still guess it's easier de facto to move to Mexico even if Mexico has some difficulty processing people. But maybe more research could be done here. Eure's post mentions that Singapore was a workable destination for people from Kenya wanting to visit an EA event; Singapore apparently doesn't require a visa.]

Are there good places in Baja California for futuristic thinking, for developing world people who want to do that? (Places that are relatively safe, peaceful, and/or disconnected from the present moment.) There is open land in Baja California, and the cost of developing it is (I would assume) more reasonable than in the US, so maybe a kind of retreat could be built there. Or maybe existing developments could be bought or rented. A quick search engine check gives a result saying non-Mexicans can buy land in Mexico, even near the border or ocean, though in those zones it's a little more complicated (according to this 2021 page: https://wise.com/us/blog/buy-property-in-mexico).

Are the geographical characters of Los Angeles, San Diego, Tijuana and the Imperial Valley going to stay the same in the long run? It isn't completely clear.

San Diego will probably always be a relatively pleasant place to live, but water shortages may make it more expensive (same with Los Angeles). San Diego will probably always be sleepy relative to such national/global capitals as LA, the Bay Area, Washington, D.C., and New York City, and I don't see the military leaving (it has a nice natural harbor).

Los Angeles may become a less culturally important city, because the film industry may weaken. As it becomes easier to make decent-looking low-budget films, and to the extent that Hollywood films are poorly written, low-budget filmmakers may increasingly make films that are better than Hollywood's, more profitable relative to initial investment, and even more popular. Low-budget filmmakers don't need to be as connected to networks of finance. It's true that they would need a pool of acting and production talent, which may naturally centralize in hubs, and LA will remain one. But it might not have to be as much "the center" of US filmmaking.

(Filmmakers, and their equivalents in theater, for that matter, could model themselves on bands. Bands practice for the love of it and work together over the years. This way, directors wouldn't have to find a bunch of new actors for each project. Some "photorealistic" filmmaking or theater requires faces that "look right", but more "theatrical" or "minimalist" filmmaking or theater, not so much. So that could remove some of the need for a hub. Do people want to watch that latter kind of movie very much? (It sounds like some kind of art film.) Maybe not for now, but culture is likely to go somewhere, and people get tired of whatever's mainstream sometimes.)

I'm not completely sure why there needs to be, or will be in the long run, a music industry, because hobbyists can make music that sounds pretty good, and they don't need any more personnel than the members of the band. But I can see that if you want to make a lot of money in music, you need support staff, whom you have to find somewhere. Also, scenes are always good for the development of music, and scenes and hubs are very similar, often the same thing. So I think LA will probably remain a hub for music, though maybe not "the" center of music on the West Coast. On the other hand, LA is the second-most-populous metro area in the US, and my default assumption is that that will stay about the same in the future. So maybe it would be the biggest music hub on the West Coast just because of that.

It's not clear what effect AI will have on music. (In this paragraph, I'm thinking of "prosaic", "narrow", or "tool" AI, a kind of "laminar" progression from what we have now.) Could really sophisticated AI make unimaginably good synthetic music that no humans will be able to surpass, putting all human musicians out of work? If so, would music have to be expensive? It could be even cheaper than it is now. Is cheap music as enjoyable as expensive music? I would say, no. Really expensive music might make you feel regret for having paid so much, but really cheap music is something you can gorge yourself on to the point that you have "heard it all" and find none of it special. We already have way more cheap recorded music than we really need, and maybe it's just me, but I'm not that into music anymore. But, I would be interested in being in a band. Or maybe hearing my friends play their music. Maybe AI will just kill off the professional / industrial version of music by being so ridiculously good at that. Still, if I had to pick a place to find musicians on the West Coast, LA is still where I would expect to find the most. (Maybe a lot of this paragraph also applies to the movie industry.)

Mexico will probably become less of a developing country over time (approach "developedness", whatever that is), and Tijuana will generally reflect that. I don't know how long that process of development may take. There may come a day when another city becomes a more favorable place to site a developed/developing boundary hub. Probably, if Mexico is not "developing" anymore, a lot of other countries that currently are won't be either, and the "developing nation" dimension of culture may matter less than, say, the Mexican, Ghanaian, Indonesian, Pakistani, etc. dimensions, or the "non-Western" dimensions. A big unknown for me is how immigration will work in the future: if Mexico is currently a (relatively) good place for developing country people to immigrate to, compared to the US (something I would guess is true as of now), I don't know how much of an advantage that will be later.

The Imperial Valley may remain a farming community, but that depends on the availability of water. And that depends on whether California cities become serious about developing alternatives to Sierra snowpack, groundwater, and the Colorado River -- things like (even more) conservation, recycled water, or desalination. A cultural altruism hub, for its own sake, might lobby Southern California cities to free up water for the Imperial Valley, so that there would still be a rural community to study. Of the four population centers considered for this hub, the Imperial Valley farming community is the one I least expect to be what it is now in 100 years, but I don't think it has to go away.

[From a casual look at Calexico Chronicle headlines on Twitter, it looks like the Imperial Valley may be or want to be a lithium mining area. Also, it produces solar power.]

All of this assumes a "business as usual" future. But the future may not be "business as usual". There are some extreme futures that make the project of a cultural altruism hub irrelevant or impractical -- if AGI kills all humans, for instance. If for some reason civilization collapses to the point that no one can afford to centralize in one area to do this kind of work, then the hub would be impractical, although cultural altruism itself would probably remain very important, just as religion (traditional cultural altruism?) has been very important throughout human history.

But there are civilizational declines or "soft collapses" in which the project might still be viable and relevant, and similarly, a future in which AGI is dominant may not rule out things like locality, geography, and human culture.

Here is a scenario in which that may be the case: if AGI doesn't kill us all, it's probably because it will have been deliberately aligned with human interests. One way to do that is to make it fundamentally attuned to the will of specific humans who are in control of it. To me, as a non-AI expert, this sounds relatively simple to implement. Another way is to make an AI that pursues some kind of "human values", independent of any individual or group of individuals controlling it. This sounds, to me, a non-AI expert, harder to implement. One problem is that we don't know which values to give the AI, and once we do, it might not be trivial to implement them in a way the AI can understand (we'd be trying to train it for a bunch of different things instead of just one). But a solution that, at least to me at first glance, simplifies this is something like optimizing the AGI to maximize / safeguard more or less "one simple thing": our agency. Basically, the AGI tries to be a libertarian. The AGI may prevent us from doing things like killing each other (which eliminates the victim's agency) or enslaving each other, but it may leave us free to figure out whether to pursue a variety of values that don't threaten agency, as we see fit.

A secular person may say "well, that's the 'end of history' as far as I'm concerned", but some might not. For me, as a religious person, there is a huge amount of "history" left, which is: are people seeking God, enough and in the right way? Deliberate thinking about culture, and altruistically altering it, sound basically as relevant as ever. An agency-seeking AGI might still allow for a kind of spiritual X-risk (something which causes a large number of people to die the "second death").

Returning to a secular perspective, we might ask whether this AGI would be "a human agency-seeking entity for all time, which could never change", or something that humans could alter if they wanted to. Is it in the definition of "agency-seeking" to allow humans to alter the AI? Maybe, but to the extent of making it not be agency-seeking anymore? I don't see why an agency-seeking AI couldn't limit people's agency at exactly the point where they try to make it stop being agency-seeking. However, the transition to AGI is going to be made, not in a "frictionless vacuum", but in the political world we live in, and it does seem odd that a group of technocrats could decide to force their agency-maximization-of-all-people on all people for maybe all time -- even if it is a radically libertarian thing, that decision in itself is an abrogation of people's agency. Also, the political world is significantly captured by interest groups. Also, it is a conservative (keep things the same) and traditional (extend precedent) kind of world (keep institutions going rather than starting replacements). So I could easily see the actual agency-maximizing AGI being trained to respect human political will. Maybe it wouldn't maximize the agency of individuals, but rather that of political bodies.

If that's the case, then politics remains relevant, maybe "for all time", and human cultural drift becomes a powerful factor in determining what the AGI ends up doing, far into the future. It might become the case that human political and cultural problems become the major, or only, bottleneck for altruism.

I mentioned locality and geography above. Technological development may make it so that humans don't have to live in any particular place. Or so much of our lives will be lived online that the local world won't have much, if any, relevance to culture. Then, the hubs will be online. (I wonder what analogies there might be to siting a local hub when thinking about "siting" an online hub. Are there strategically valuable ways to draw people together, maybe? Are there hubs that are, more, or less, adjacent to other ones? How does adjacency in a more or less purely social space work, as compared to geographical adjacencies? Maybe something to look at in another post.)

But it could be a long time before that happens. We might choose not to adopt all of that technology, and even if we do, technology adoption is not instantaneous, and is slowed by social factors. Some of us, or many of us, might deliberately reject living our lives entirely online. It's true that an AGI would individually be far more intelligent than any human, but the amount of "compute" in all the people on Earth might exceed that in the AGI sufficiently that the AGI would need us to make a lot of decisions for it, and so we would still work, and we might need to be physically close to whatever processes happen at a specific location. We might be physically constrained in how much compute we can devote to AI, or we might simply decide not to give AI all the compute we possibly could, because we preferred to work, and for that matter, preferred locality and geography in themselves.

I think we are used to technological-adoption gradients inherited from a past of scarcity and competition, but there are enough people who are non-adopters or late adopters of technology, who simply wouldn't care about making the world more efficient, and who would deliberately reject technological change if survival constraints didn't force it on them. When I look at the "S-curve of technology adoption", I am pretty certain that the people represented by the leftmost and rightmost parts of the graph really care about a given technology, whether opposed to it or in favor of it, but the middle part, I think, is more into whatever is socially acceptable or convenient, and could align just as well with the left or the right of the graph. So, as "technology" (taken as a reified whole) provides more for people, they need less loyalty to it as a value: they don't need anything more from "technology", so they don't have to consciously value it, and they can align with whatever other values there are, without necessarily ceasing to adopt all of the technology that supports that very indifference to "technology".

--

Another thought: simulating different lifestyles. This should have some effect on culture. Tijuana gives an unsimulated developing country city; the Imperial Valley, an unsimulated American rural area; similarly with San Diego and LA. But what about trying to survive off the land? That should do something to a culture. In the US, people generally don't have to survive off the land. But there are areas where it's more practical (or called-for) to do so. (There may be areas outside the US that are favorable for this pursuit.) If you can afford to buy a large-enough plot of land, you can site a community there that tries to live in a primitive, self-sustaining way. Probably best if it's socially isolated (more of a "correspondent" place).

--

Mexicali. I didn't know very much about Mexicali before beginning this post. But, it is a fairly large city (~600,000 people), located just south of the US/Mexico border near the Imperial Valley. It has a manufacturing sector. Maybe the cultures of manufacturing sectors or industrial cities are different in some way than those of other cities or rural areas? It sounds plausible. (LA has its ports, which are another industrial influence.)

--

Possible Northeast US hub. I don't know that area very well, but looking on Google Maps it looks like Wilmington, DE is maybe an interesting site for a hub, because it's on the train lines to Washington, D.C. and New York, roughly equidistant. Access to two different elite cultures. Also not far from Lancaster County, Pennsylvania, with Amish and other agriculture. A different rural America from Imperial Valley. It looks like it's not far from some bays, which might have interesting cultures. Maybe not far from some Southern culture (Maryland or Virginia).

Would it make sense to have this Northeast hub as well as a Southwest hub? If so, why not another one centered on Indianapolis (Chicago and Midwestern and Southern cultures)? Or in the Seattle-Vancouver area? Detroit and Toronto? If there are too many hubs, it weakens the centralization benefit of hubs. I'm not sure exactly how to limit things. One could ask "is there any way to have correspondents in New York, Washington, LA, etc., rather than siting a hub to try to capture those attractive locations?"

If I were most concerned with accurately understanding and affecting the US, I would certainly want something like the Indianapolis hub. But, maybe I would rather focus on global concerns. If the Southwest hub allows for better access to the Global South (via Tijuana), then that makes it advantageous for understanding and affecting the world. Likewise, the Northeast hub, by connecting to Washington and New York, connects to power structures (power cultures) that affect the whole world, so it is of that level of strategic value. In fact, if I were a Southerner or Midwesterner concerned about the under-representation of Southern or Midwestern values in the future, I might want to mainly try to send correspondents to the Southwest or Northeast, so that they would be part of that conversation directly, rather than trying to create a parallel culture of cultural altruism, to reduce cultural "shear".

In comparing the Northeast with the Southwest, I think one advantage of the Southwest is that Tijuana has a lower cost of living than San Diego, LA, New York, Washington, or (I'm assuming) anywhere in between New York and Washington.[*] So, if people from the Southern US want to send a representative to a cultural altruism hub, and they have a limited budget, they could fund more people in Tijuana than they could in the Northeast. As US citizens, those representatives could cross the border and access San Diego and LA. There would be the usual inconvenience of crossing, but it would be a lot more affordable than flying to the Southwest (or even the Northeast) on a regular basis, and possibly less time-consuming. I would assume that many people in the Northeast have enough money to fund people in San Diego or LA (or Tijuana), if they want to be represented there.

([*]: Tijuana's cost of living is close to half that of San Diego. The other hub cities mentioned (LA, New York, Washington, the Bay Area) are about as expensive as San Diego or more so. (All this as of April 2022.) *** All of those numbers came from Expatistan, but then I experienced a glitch on that site (I think) which calls it into question somewhat. So I checked the World Cost of Living Calculator, which said that Tijuana's cost of living was more like a third that of San Diego. SD's relationship to LA, NY, DC, and the Bay seemed comparable to Expatistan's. So I guess the numbers on Tijuana are probably somewhat fuzzy. *** To see how another cost of living site compares: worlddata.info says that the cost of living in Mexico is half that of the US. San Diego and Tijuana could plausibly both be at the more expensive ends of their respective countries, so maybe that makes the Expatistan number sound right. *** Numbeo comes up with 130k MXN in San Diego for a 50k MXN lifestyle in Tijuana (i.e., in Tijuana you spend about 38% of what you would in San Diego). *** I'm going to quit at this point and say that Tijuana is somewhere between a third and a half as expensive as San Diego -- and remember that "your mileage may vary".)
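
As a sanity check on that footnote, here is a minimal sketch in Python that collects the four ratio estimates (Tijuana's cost of living as a fraction of San Diego's) and reports their range. The source names and values are just the fuzzy April 2022 lookups described above, not authoritative data:

    # Tijuana-to-San-Diego cost-of-living ratios from the four sources above
    # (fuzzy April 2022 lookups, not authoritative data).
    estimates = {
        "Expatistan": 0.50,                        # "close to half"
        "World Cost of Living Calculator": 1 / 3,  # "more like a third"
        "worlddata.info (country-level)": 0.50,    # Mexico vs. US overall
        "Numbeo": 50 / 130,                        # 50k MXN in TJ ~ 130k MXN in SD
    }

    low, high = min(estimates.values()), max(estimates.values())
    print(f"Tijuana costs roughly {low:.0%} to {high:.0%} of San Diego")
    # Prints roughly 33% to 50%, matching the "a third to a half" conclusion.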

(If you want to legally immigrate to Mexico, you have to have a savings balance and/or monthly income above a certain amount. According to Mexperience.com, the required monthly income comes out to ~30,000 USD or ~55,000 USD per year to become a permanent resident, depending on whether you apply within Mexico or at a consulate in another country, respectively (temporary residency requires less). I'm not sure why there's so much of a difference. I would guess that you have to apply for residency at a consulate when you first go, but if you are already living in Mexico and want to extend your temporary residency or upgrade to permanent residency, maybe you can apply in Mexico and get the lower thresholds. *** This required income reduces the cost-effectiveness of living in Tijuana, especially at the $55,000 a year price point, but on the other hand, as noted elsewhere in this post, you can stay up to 180 days in Mexico as a tourist, and that would give you the full benefit of the low cost of living compared to the US. People who want to fund developing world people in Tijuana may have to pay them more, but maybe that's good (it attracts a certain kind of talent / allows the people they hire to send back remittances, or something like that). I was going to write "a better kind of talent", but people who aren't motivated as much by money have something valuable just in that, and offering more money will decrease the proportion of that kind of person in a culture. *** Before you can get a permanent resident visa, you need to hold a temporary resident visa for 4 years (according to Where the Road Forks). Temporary residency doesn't have as high an income requirement (specifically ~$33,000 per year applying at a consulate and ~$18,000 applying in Mexico, according to Mexperience.com). So that makes it more affordable to start to settle in Tijuana. *** One thing I missed on Mexperience.com earlier: adding a dependent spouse or minor requires about $900 (consulate) / $500 (in Mexico) more income per month, per person. *** Another point from Mexperience.com is that you can use a savings balance instead of a monthly income: ~$45,000 / ~$25,000 for temporary residency, ~$180,000 / ~$100,000 for permanent residency, and ~$900 / ~$500 per dependent spouse or minor.)
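
Gathering the approximate Mexperience.com figures from that paragraph into one place, here is a small sketch; all values are the rough USD numbers quoted above and should be treated as fuzzy, not as current official thresholds:

    # Approximate Mexican residency financial requirements (USD), per the
    # Mexperience.com figures quoted above; all numbers are fuzzy.
    requirements = {
        # (residency type, where you apply): annualized income OR savings balance
        ("temporary", "consulate"): {"income_per_year": 33_000, "savings": 45_000},
        ("temporary", "in Mexico"): {"income_per_year": 18_000, "savings": 25_000},
        ("permanent", "consulate"): {"income_per_year": 55_000, "savings": 180_000},
        ("permanent", "in Mexico"): {"income_per_year": 30_000, "savings": 100_000},
    }
    # Extra required monthly income per dependent spouse or minor:
    dependent_monthly = {"consulate": 900, "in Mexico": 500}

    for (rtype, where), req in requirements.items():
        print(f"{rtype:9s} @ {where:10s}: income ~${req['income_per_year']:,}/yr"
              f" or savings ~${req['savings']:,}")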

I think practically what I would do is advertise the attractive features (and potential downsides) of different areas or ways of approaching cultural altruism hubs -- all the considerations given in this post -- and let individuals decide where to relocate. The advantages and disadvantages decide, more than any one individual. It's better for these hubs to be scenes, and scenes involve decentralized decision-making.

--

What would be the minimum size of a viable cultural altruism hub?

Using the California hub as an example: LA needs to have a population living in it full-time in order to tap into the movie and music scenes (L); there needs to be a population who travel to LA from San Diego sometimes (SL), a population who travel to Tijuana sometimes (ST), and a population who live in San Diego full-time (S). Populations SL, ST, and S could be the same, if all the people in San Diego want to keep up with both LA and Tijuana.

I would say that's the minimum set-up, although maybe we could swap L for T (instead of people living in LA, people living in Tijuana, for a minimum hub). [Later: I'm not sure why you even need L or T for the bare minimum hub.] How many people does it take to make a viable hub? My subjective guess is that probably all the roles need to be filled or the hub won't work. The total number (L + S) also needs to be above a certain amount, or the scene is likely to spontaneously fade out. What is the minimum viable value of L + S? If I consult my gut, I would say that under 15 is more likely to fade out than to grow spontaneously; 15 to 30 is maybe equally likely to fade out as to grow; 30 to 100 is more likely to grow than to fade out, and increasingly so the further you get above 100. (Everything is finite, so at some point above 100 it would stop growing spontaneously, but I don't know what value to pick for that.)

(This is an area where people with more experience with community building would know better.)
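
Just to make those gut-level guesses explicit, here is a toy sketch in Python; the thresholds encode my subjective bands above, not community-building data of any kind:

    # Toy encoding of the subjective viability guesses above. The bands are
    # gut numbers for L + S, not empirical thresholds.
    def hub_outlook(l_plus_s: int) -> str:
        """Rough outlook for a hub with l_plus_s full-time members (L + S)."""
        if l_plus_s < 15:
            return "more likely to fade out than to grow spontaneously"
        if l_plus_s < 30:
            return "about equally likely to fade out or grow"
        if l_plus_s <= 100:
            return "more likely to grow spontaneously than to fade out"
        return "increasingly likely to grow (up to some unknown saturation point)"

    for n in (10, 20, 50, 150):
        print(f"L + S = {n:3d}: {hub_outlook(n)}")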

A more cohesive and motivated group can keep from fading out longer at lower population sizes. Also this group would get the best returns on the hub structure, by integrating everyone's experiences more deeply. A scene should eventually involve people who are not integrated or only casually motivated, but at first, to seed something like this, it might be good to have people who are more connected and motivated. However, there is a danger of becoming insular that way. (A "cult" that's too hard to join.) Probably it is best to just have more people seeding it, if you can afford to.

--

Could there be some connection to reducing tensions between governments in the long term, through cultural altruism and cultural altruism hubs?

Right now (as of drafting this section, 9 May 2022), there is a war going on in Ukraine which could be seen as a clash of civilizations. Russia might have an extra incentive to fight the US, because they don't like liberal culture and liberal dominance. The US thinks that Putin is an abuser of human (liberal) rights. What if the US and Russia could talk about what is so good and bad about liberalism, and come to an agreement to mitigate the harms of liberalism, or illiberalism, as the case may be, and then not look at each other as evil? They really are evil, both of them, at least when they do the bad things that come along with liberalism / illiberalism. They are right to be concerned about each other and each other's reaching for power. But they could come to see each other as, though never the same, at least functionally safe for each other and their own people.

Now, the trick is, what exactly was I referring to (de re) when I said "the US" and "Russia" in that last paragraph? Am I talking about Biden and Putin? Biden is "here today, gone tomorrow", given that at most he'll be in office 8 years. And the US government is not nearly so coherent as to be something any one person is totally in control of. Putin may be more representative of "Russia", in the sense that he can decide to go to war in Ukraine based on his personal feelings. But even assuming Putin has total power in Russia, Putin came from somewhere, has a history, was taught things, has to convince people around him to do what he wants, has to understand the gradients of Russian psychology -- and that psychology comes from somewhere, has a history, has been taught things.

Because I don't know that much about politics and government, I will try to be vague and broad and say "elite culture" is the main determinant of "the US" and "Russia". Exactly who goes in "the elite", I don't know. But, wherever leaders come from in a country, and whatever scenes they have to pass through on their way to power, that's elite culture (a broad definition). Whatever the elite culture experiences, however they interpret that experience, however they were educated, however they relate to each other interpersonally, what their collective traumas were, even perhaps whatever food and music they like, etc. has some bearing on how politics and government work in a given culture, and thus what kinds of decisions are made by entities we can call "Washington", "Moscow", or "Beijing".

The US has a lot of soft power. How is it so powerful? One possibility is that it, without realizing it, hitched its wagon to hedonism and preference satisfaction, the gradients of humans getting what they naturally want, "freely". (This view somewhat descends from How The West Was Won by Scott Alexander.) So rock music is "liberation", is the destruction of tradition, is "be a generic human with drives who likes to feel". Western rock and pop went around the world. When you are liberated, you are a liberal and you like the US, perhaps. But what are you liberated from? And when you get what you feel like, maybe what you feel like is enslaving you.

The US is a country founded on pleasing its people -- which is great in the short run (for a few centuries, maybe), but worrisome if you think that there's something worth fighting for besides humans' natural hedonism and preference satisfaction. People tend to find something fishy or weird with wireheading, but why? It's just hedonism and preference satisfaction taken to a logical extreme. Maybe we have some intuitive sense that there's something more to value than just hedonism and preference satisfaction, and the thought of wireheading is weird and extreme enough that we can see in enough relief how it could be bad. Things like pretty sunsets and not having to wait in line at the grocery store too long just seem like "living the good life". But wireheading takes the whole "get what you want and feel good" idea too far.

But I think in American culture, or in my California left/liberal world, wireheading is one of the few things that maybe we would think is "too far". Francis Fukuyama, in The End of History and the Last Man, thought that California was the most "post-historical" part of the world. [Actually, the Internet Archive's PDF copy has, on p. 319, "in the most post-historical part of the United States, California" -- the most post-historical part of the US, not of the world.] Maybe that means the place most accepting of "the good life of getting what you want and feeling good", where everything is chill and everything is okay. I feel like sometimes, here in California, we are drifting toward being "post-consciousness": that on many levels we will cease to be human beings, lacking personal histories, will be unable to comprehend many emotional realities, and will cease to care -- a kind of depersonalized "beautiful nihilism".

The rest of the world, or parts of the rest of the world, looking at that, are horrified. The reaction that we still have to wireheading, they have to "America". America is just wireheading that hasn't finished getting there. They don't want to be America, and they are horrified that we are able to use our soft power to addict them, "liberate" them, and depersonalize them, so that we go down the same drain with America, all a homogenized soup of pleasure and emptiness.

How much of the rest of the world really thinks this? I don't know, but supposedly it's the kind of thing that goes into "Russian propaganda against the West", which I assume does convince some people and probably does have some connection with how Russian elite culture actually thinks.

The thing about soft power is that it is powered by gradients of human nature, and thus the rock bands in America and Britain need have no idea what they are doing, what effect their influence is having in undermining traditional cultures. The basic idea of rock is all it takes for it to get loose and get copied. Whenever someone goes to try to make it in Hollywood, they tend to have little or no intention of propping up US soft power. They just want to make money or be famous, or pursue their art, and Americans just want to feel good, or uplifted, or whatever they get out of movies. Americans can be very insular, and for some weird reason the rest of the world is obsessed with America and can't take its eyes off us. And we just go about producing powerful entertainments that please us and satisfy our preferences. But then they get out in the world and influence the world.

So that means that the US hardly thinks about what it's doing culturally, it just does it. And so we are in a position where "power is a broken relationship" -- we are powerful because we don't understand what we're doing. We don't hear back from the people we are talking to. They are obsessed with us, but we are not obsessed with them. So we are not shaped by them. We are the conversation, and they are listening. They can go our way, but we won't and seemingly can't go theirs. We are the way of the future, and the future will inevitably come. And we are the culture of inevitability, which is seemingly the most successful culture, the one which supplants all other cultures.

I am both pessimistic and optimistic about culture. I think we are headed down the drain, and in some respects I have sympathies with traditional cultures in their rejection of liberalism. But I am optimistic because I don't think we have to keep going down the drain; we can stop, if we listen to all the people in the world, so that they can teach us their values. One risk of "cultural altruism", especially in hubs, is that it might gather together cultures and then be a blender that homogenizes all of them. I don't know if this homogenization is entirely avoidable, and perhaps in some ways it could be good (maybe there are values that all humans should have). But, while hubs are risky that way, they do provide the opportunity for non-American/non-Western cultures to try to talk back to the seats of soft power. Russia (/ "Moscow") or China (/ "Beijing") could try to influence popular and elite cultures in the US by sending "cultural missionaries" to try to explain to Hollywood, Silicon Valley, Washington, and New York what is so dangerous about liberal power and liberal drift, why exactly Communism or the Russian soul is better aligned with the true good values than liberalism is, and why liberal power, soft and hard, is a dangerous and irresponsible thing -- a bunch of rich Westerners who affect the rest of the world, and the future, in ways they don't understand; people who don't know what they're doing.

Being a missionary is a two-way street, and liberal influence would make its way back to Moscow and Beijing, but from their perspective, at least it's a two-way street, and not what currently seems closer to a one-way street, where America -- or human drives -- speaks without being willing and/or able to hear.

People who are good at talking win when there's a norm of "we do things by talking". When people become unable to talk, they resort to violence. When liberalism is, effectively and where it counts, unable or unwilling to hear what the traditional or illiberal wish they could say -- then they will resort to violence at some point, to express what they can't say. But if we really let people try to change our culture, and we don't shut them out (or shut them up), then they do not have to use violence to get their way, and there could be fewer tensions internationally. Cultural altruism and cultural altruism hubs could be access points for non-Western entities to try to speak to Western culture, thus reducing their need to be violent. What if "Moscow", "Beijing", and "Washington" (and all other synecdoches like those) could trust each other? If the whole cultures of each city could trust the cultures of others? Maybe this is partially or wholly attainable.

--

Well, thinking optimistically, getting national governments involved in this could be a good thing. But, consider China's Confucius Institutes, which have been accused of being propaganda arms of the Chinese government. Chinese culture should have a seat at any table of global culture. But should the Chinese government? The Chinese government has its own "biological" interests (protecting its own survival, furthering its own power for power's own sake). These are not necessarily aligned with finding the truth of the best cultural values. It is good to have governments at the table -- they are part of culture. But governments have a unique ability to overrepresent themselves because they have so much money and power.

Any decentralized hub can be accessed by anyone -- that's part of the virtue of it. It might be possible to use something like ostracism against the Chinese government (or the US government). But a clever government could co-opt, bribe, or infiltrate other cultural units. Maybe this is unlikely enough that it can be dealt with ad hoc if and when it happens. For example, effective altruism could have been infiltrated by government agencies already, and as it grows, it becomes an even bigger target for that. But maybe it won't happen.

If it does happen with effective altruism, and in the worst case it takes the life or soul of EA away (either of which is death), then we could look back on EA's life and remember with honor the many things it did accomplish, especially the useful thoughts it created, in its lifetime when it was really itself. Those thoughts could help a successor movement get off the ground more easily, I would think. So, with a "cultural altruism" movement, a lot of the good it could do would come while it was relatively small and quiet. Cultural altruism goes beyond any one movement, just as effective altruism goes beyond the current effective altruism movement. So even if one movement got infiltrated, and lost its soul and thus was no longer trustworthy or trust-producing, a new movement which had a truth-seeking soul could emerge, be credible, become known as credible, and carry the torch of the cultural altruism topic. Perhaps the scene could carry on through the transition. The topic is eternal, and the scene is perhaps very long-lasting, even if movements, or even nations, come and go.

A cultural altruism hub would attract both "soldiers" and "scouts". Scouts would be looking for the truth, and would be representative agents of people who value the truth but don't have the resources to sort through all the different perspectives. Soldiers would be trying to convince scouts of things, or convince each other of things. For a hub to work, scouts would have to put forth as much effort as soldiers, and exert a kind of influence over the culture. (This is basically the tension between "meta" and "partisan" interests mentioned above.) Scouts would be able to exert soft power by declining to take seriously the distortionary effects of soldiers' lawyerly advocacy. They would also be advocates for good communication norms.

I would expect there to be, potentially, a kind of arms race, where national governments like the US, Russia, and China send in people and try to develop ever more sophisticated arguments and cultural artifacts, to favor purely national-political interests in the guise of promoting national-cultural interests. And the "metans" or "scouts", and regular people in the scene (including less-assiduous, -competitive, and/or -resourced "soldiers"), would learn to filter out such propaganda.

The scouts, or any "scout/soldier hybrids", might work to improve the arguments of the underresourced. For instance, if a member of an Amazonian tribe were debating a Roman Catholic, the Catholic would have Thomas Aquinas, Augustine, maybe Aristotle, plus hundreds or thousands of past and present professional Catholic theologians to use for her defense. The Amazonian tribe member might have what he remembered of what his extended family taught him. He might have a big problem finding flaws in her arguments, just because over the centuries, the Catholics have heard many counterarguments to their views and have come up with responses. But that doesn't mean that Catholicism is right and the Amazonian tribe is wrong. It could just mean that the tribe never had the worldly resources to come up with a really robust and sophisticated defense of itself. This is where cultural altruists, trying to be scouts, but using soldier moves, could try to strengthen the Amazonian case.

--

So San Diego is conveniently across the border from a (somewhat dangerous) Mexican city that's rather easier to immigrate to than the US. To the north of San Diego, there's Los Angeles, a center of the film and music industries. To the east of San Diego there is a rural area in the Imperial Valley, and south of that there are a rural area and an industrial, urban area in the Mexicali Valley. Silicon Valley is a day's drive away from San Diego.

[Also the San Joaquin Valley is about a day away, for a perhaps different rural environment.]

Washington, D.C. and New York are not too far from each other, and somewhere in between could also be a hub. Meanwhile, by way of San Diego, we see a route by which non-American/non-Western thinking can get access to the elite culture of the most powerful country on earth.

[A route whereby the northeast and southwest work together.]

There need to be other hubs, or "correspondents", to gather influence from all different parts of the world.

I keep nervously wondering if I've written something that is biased in favor of Southern California because I live here, but I think that the principles in this post should be helpful in deciding where a hub should be located, if there is a better place.

--

Added note:

I was reading in National Geographic (December 2002, "The Hawaiians", by Paul Theroux), and came across this quote (p. 16):

--Watching the sail bellying in the wind, Skipper Bertelsmann said to me, "This canoe represents family. It's about sharing -- history, values, culture, kuleana [responsibilities], kōkua [help]. Sailing a long distance, the canoe becomes our island. We have to learn to live and work together in harmony. These are values that are translated to land. On land, think 'canoe.'"--

Traditional Polynesian sailing makes people into a different kind of person, on land or at sea -- perhaps a distinctively Hawaiian person. It may be necessary to do something like that to understand Hawaiian values. Cultural altruists may have to physically live as a certain people group, with the people, to understand.