Wednesday, October 26, 2022

News: 26 October 2022

For a few years I've felt like I was supposed to take a break from doing philosophy, but there never seemed to be a good time to do it. Now I think I will take that break. So unless there's something really important to address, I won't do philosophy for a while.

That accounts for a lot of what this blog is about, so maybe I won't post as often. But I will try to come up with other things to post. I still write on Twitter.

To keep my mind busy, I'm focusing more on learning Indonesian than I have so far. Also working on some music.

--

Update: I added a section on "domestic missions" to Things to Do.

Monday, October 17, 2022

Everything is Conscious Experience?

Epistemic status: I could delay posting this to try to perfect it more, but I'm going to err on the side of posting it now and fixing it later, as I become aware of problems.

I don't think I've ever directly explained why one should think that everything is conscious experience.

Some people look for proofs; others don't put much stock in that category of argument, or fall somewhere in between. I will attempt a proof, and then discuss things from the perspective of epistemic uncertainty.

--

Certainty?

--

Argument

1. What conscious experience is, is conscious experience. That is what it is in itself, and it is nothing else, and can't be anything else. If conscious experience experienced something unconsciously, it would not be conscious, and would not be conscious experience.

2. Conscious experience is that which is consciously experienced, and thus is the only thing which can be consciously experienced.

3. We have experience of conscious experience and thus know that it exists. The question is whether non-conscious beings exist.

4. We can posit non-conscious beings. According to the argument being developed here, we don't presuppose anything about them other than that they are beings and that they are not conscious experience. What they are in a more positive sense would depend on which non-conscious beings we were talking about. But this argument needs to take into account non-conscious beings in general.

5. What non-conscious being is in itself is not conscious experience. So conscious experience can't experience it.

6. For conscious experience to change or be causally affected is for there to be a change in what it experiences based on itself or some other being.

7. If conscious experience can't experience non-conscious being, it can't be changed or causally affected by it.

8. Conscious experience can't experience non-conscious being (by 5), so it can't be changed or causally affected by non-conscious being (by 7).

9. The world we live in, the world which can affect us in any way, is completely made up of conscious experience.
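In case it helps, here is one way to compress the logical skeleton of the argument into notation. The predicate letters are my own shorthand, and this is only a sketch of the structure, not a replacement for the prose:

    % CE(x): x is conscious experience; NCB(x): x is non-conscious being
    % Exp(x): x can be consciously experienced; Aff(x): x can affect us
    \forall x\,(\mathrm{Exp}(x) \to \mathrm{CE}(x))                    % step 2
    \forall x\,(\mathrm{NCB}(x) \to \lnot\mathrm{CE}(x))               % step 5
    \therefore\ \forall x\,(\mathrm{NCB}(x) \to \lnot\mathrm{Exp}(x))  % from 2 and 5
    \forall x\,(\mathrm{Aff}(x) \to \mathrm{Exp}(x))                   % steps 6-7
    \therefore\ \forall x\,(\mathrm{Aff}(x) \to \mathrm{CE}(x))        % step 9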

--

Objection: Consciousness Monism might not work

(Pluralism: conscious experience and non-conscious being both exist. Consciousness monism: only conscious experience exists. By "exist", perhaps I really mean "exists in such a way that it can affect us".)

What if a world consisting entirely of conscious beings is incoherent? Maybe the world isn't reason-apt in this area, and neither pluralism nor consciousness monism works? Then we are left making decisions under uncertainty, or based on other considerations.

What are some problems with consciousness monism?

Here's one: How can there be multiple conscious beings? I am conscious of my own experience, and nothing else. How is it that I am not conscious of, say, God's experience? My experience body has boundaries. Speaking metaphorically, how does my experience "know" which boundaries it belongs to? Does it belong to God's boundaries, or to mine? Do the boundaries exist apart from the contents? (Then, maybe, "boundaries" = Berkeley's "minds", and "contents" = Berkeley's "ideas"?) I have thought that experiences themselves have causal power ("wills"). So which "boundaries" produce the will that arises from an experience?

Two responses:
a) "Somehow or other" God's experiences contain within their boundaries multiple sub-experiences (God's experience is of multiple boundaried experiences). I find myself able to conceive of this, mostly. Still, I'm somewhat unsatisfied.
b) These problems attend thoughts of any beings that overlap with or are contained by any others, including any non-conscious beings that are not in their own "solipsisms". So maybe the problem is in thinking that when two beings relate, the metaphor is overlap or containment, and not something like contact?

What would it mean for two conscious experiences to contact each other? Let's say there is Experience A and Experience B. What Experience A is, in itself, is Experience A, and what Experience B is, in itself, is Experience B. Since they touch, they participate in the edges of each other. But an experience body is a unitary thing (at least, so it seems in my experience -- and maybe by definition it is, if an experience body is a unit of experience). So they participate in the whole of each other while remaining separate. This is possible if they become identical copies of each other. They remain separate beings, so that consciousness monism is only a monism of substance, and does not mean that there is only one being. Experience A could contact only a part of Experience B, which could have many other experience bodies, some of these other experience bodies mirroring other beings' experience bodies besides Experience A. So there could be more to Experience B than just its mirroring of Experience A. If "Experience A" is "a creature's experience body" and "Experience B" is "God's experience bodies", then the creature contacts part of God's experience.

(This explanation is new to me and is different from the overlap/containment explanation I've favored in the past. Without investigating, my first thought is that this could cause problems with various things I've relied on in the past, such as "the set of all things" existing, and maybe others.)

Now, by allowing for contact (rather than overlap or containment), have I opened up a way for non-conscious being to interact with conscious experience, by a non-conscious being contacting rather than overlapping/containing/being-contained-by a conscious experience? No, because of #5 in the argument above ("What non-conscious being is in itself is not conscious experience. So conscious experience can't experience it."). Conscious experience can't experience even the edge of non-conscious being, because non-conscious being's edge is not conscious experience.

--

Objection: Becoming Conscious (Matrices vs. Metaphysical Darkness)

Can non-conscious being become conscious experience? If so, then maybe it could turn into conscious experience, affect another experience body, stop affecting it, and then turn back into non-conscious being.

This doesn't work because the past has to be able to cause the present. If the past is not made of conscious experience, it can't cause a present of conscious experience.

Or, if that argument proves unsound or controversial, then here is another:

If non-conscious being became conscious, it could no longer be affected by any non-conscious beings. So it would be cut off from them. But, until it became a conscious experience, it could not contact (or overlap, or contain) any conscious experience. It would be separated by "metaphysical darkness" from conscious experience. There is no way to cross metaphysical darkness, no way to relate through it or conceive of anything on the other side of it. It's a state of total disconnection, which inherently rules out any interaction. There's either darkness, contact, overlap, or containment, and only the last three allow beings to interact.

Two beings that are not contacting, overlapping, or containing each other can come to interact if they are both connected by a matrix (or set of matrices that form one big matrix). For example, in the world of conscious experience, space is a matrix that contains physical objects and allows them to find each other. Matrices contact, overlap, or contain each of the beings that they interrelate.

Non-conscious beings can't contact, overlap, or be contained by matrices made of conscious experience. So they are separate from conscious experience, and if they changed into conscious beings, they could at best become isolated experience bodies (they still wouldn't affect the universe of conscious experience that we inhabit).

--

Objection: Consciousness Monism denies our natural intuitions that material things exist

I don't find it difficult to believe that matter exists, as a consciousness monist. In fact, I think there is something material about all experience, even noetic experience. There is a solidity and reality to all experience, as though it is made out of some substance.

From another angle, I like Berkeley's idea of the "perceptual object". The rock that I see is made up of perceptions of it. Its solidity is made up of experience. The rock is real, hurts the foot, breaks the leg, etc. While my personal experience of the rock and what it means are idiosyncratic, they are not divorced from a kind of objective reality of that exact particular rock. The only thing is that all of that reality mentioned in this paragraph is experienced.

Our natural intuitions are not always correct. Or, from another angle, they are always correct in a sense, but when we think about things scientifically, we can add new natural intuitions that follow from rational argument. (This is how we believe that rocks are made up of tiny, invisible objects called molecules.)

The idea with this post is to present a scientific/rational kind of eye-opening, so that new natural intuitions can be observed in our "noetic environment". Like pointing out an object on a table which your eye technically took in already, but which you never noticed until now.

I don't think consciousness monism is a serious offender against natural intuitions; it can be as natural to believe as something like the theory that things are made out of things we can't see.

--

Probabilities/Credences

--

Here is another perspective, for those more attuned to a worldview of epistemic uncertainty.

--

Credences

1. We know that conscious experience exists. The question is whether non-conscious being exists. Let's say, naively, that since the existence of non-conscious being is intuitively plausible, but (at this point) we have no evidence or argument to lead us more specifically, we assign a 0.5 probability that non-conscious being exists. ("It either exists or it doesn't; even chance it does or doesn't; 50-50.") (So P(nCB)1 = 0.5.)

2. The numbered "Argument" I gave above, including my responses to the "Objections", is plausible and has some evidential weight, we'll say. Maybe it sounds good, but we have somewhat lost the ability to commit to beliefs because we think that there might be some counterargument we haven't heard yet. Let's say we aren't convinced of it, but it also could be true. This substitutes for coming to a conclusion of "proved"/"disproved". The naive value to assign here, for how the "Argument" affects credence, will again be 0.5. (So P(nCB)2 = 0.5.)

3. Should P(nCB)2 be combined in some way with P(nCB)1 to produce P(nCB)1 and 2 (the probability of non-conscious beings given both steps 1 and 2 in this section)? There's probably a neat answer to this that I don't know. I'm not really sure what to say. Part of me wants to say that a) the new evidence provided by the 0.5-credence "Argument" should cause me to update in some direction, maybe weakly toward P(nCB)1 and 2 = 0 (like to a credence of 0.45). Another part of me feels vaguely like b) it doesn't work that way, like 2 supersedes 1. I rejected the earlier thought c) that 1 and 2 "stack" so that P(nCB)1 and 2 = (0.5 * 0.5) = 0.25. Maybe a) is a compromise between b) and c).

N. Add any other arguments for or against non-conscious beings.

The exact numbers used here for these credences should be updated by the interested reader to reflect their own beliefs. Unfortunately, I haven't produced a finished formula, but this provides a sort of template for calculating credences (sketched in code below), and maybe a weak first anchoring value of P(nCB) (such that there is a 50%, or somewhat less than 50%, chance that there are non-conscious beings).
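For concreteness, here is a minimal sketch in code of the three combination options from step 3. The variable names and the "nudge" rule in option a) are my own placeholders, not a worked-out method:

    # Toy sketch of the three ways, from step 3, of combining P(nCB)1 and P(nCB)2.
    # All numbers (and the update rule in option a) are placeholders for the
    # reader's own credences, not a finished formula.

    p1 = 0.5  # P(nCB)1: naive prior that non-conscious being exists (step 1)
    p2 = 0.5  # P(nCB)2: credence in light of the "Argument" alone (step 2)

    # a) weak update: nudge the prior slightly toward 0, since a half-credible
    #    disproof is still some evidence against non-conscious being
    nudge = 0.05          # made-up strength of the update
    p_a = p1 - nudge      # 0.45, the value floated in step 3

    # b) supersession: the Argument-informed credence replaces the prior
    p_b = p2              # 0.5

    # c) "stacking" (rejected in step 3): treat the two as independent filters
    p_c = p1 * p2         # 0.25

    print(f"a) weak update: {p_a}, b) supersede: {p_b}, c) stack: {p_c}")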

Wednesday, September 28, 2022

Blog Chapter 4 Summary and Bibliography

Looking at my blog posts, I realized that they naturally came in different periods or phases. The 2019 posts were one, then there were those from 2020 - 2021, and then those from 2021 - 2022. In spring of 2022, I decided to deliberately create a phase, which I called a "blog chapter", this being the fourth one.

I think the previous three were things that arose spontaneously and in response to my own needs, while this fourth one was premeditated and a bit artificial. Somewhat like doing a year of school.

This chapter has been about the "exilic-familial", among other things. I explored themes of nation, culture, family, childhood, and education. These topics connect to a vision I had before officially starting the blog chapter (or: I was hypomanic and wrote something) about "cultural altruism", a path for those trying to do good through culture or in cultural areas. Cultural altruism articulates with art, religion, the humanities, politics, and effective altruism, especially its "Long Reflection" idea. When we try to govern the world, we are, and should be, informed by nation, culture, family, childhood, and education.

Two specific cultures were themes, Jewish (especially as I best know it, from the Old Testament), and Indonesian. Jewish culture is a family of holiness, and also has a history of enduring exile. I saw in Judaism (at least in the Old Testament itself) a kind of honesty coming out of having lost, and the idea of not winning, and of that being a route to peace and holiness. I saw in it families broken and reconciling.

Indonesian culture is (to me) about syncretism and unity-in-diversity, as well as a connection to Islam, Hinduism, and Buddhism. Indonesia is a nation that attempts to pull together diverse groups, and which has a history of mixing religions. (I didn't really explore these themes in depth, but only discussed Indonesia a little.)

The war in Ukraine and the polarization of US politics are in the background.

This chapter, written in about seven months if you include the cultural altruism writings drafted in March and April, is, surprisingly (to me), on the same order of magnitude in word count as the three preceding chapters combined, about 20% less. I keep feeling like I do the math wrong when I count (maybe somehow I do), but I think it's right. I didn't feel like I was working any harder when I wrote. Perhaps I was under the influence of hypomania? I clearly had it in late March, but maybe it continued in a non-obvious, attenuated form throughout the seven months.

I've felt different ways over the last few weeks. Sometimes depleted, sometimes not. I've thought about quitting or drastically cutting back on writing, and also about going full bore. I think what has been particularly hard has been working to finish this after I had already moved on from being into this blog chapter. I could use much of myself as usual to work, but not all of me.

--

These are the books reviewed in this blog chapter. Links are to my reviews:

Holy Resilience, by David M. Carr, hardback (1st ed.?) ISBN 978-0-300-20456-8

Dr. Jekyll and Mr. Hyde, by Robert Louis Stevenson, Bantam Classic paperback, ISBN 0-553-21277-X

In the Shadow of the Banyan, by Vaddey Ratner, 1st hardback ed., ISBN 978-1-4516-5770-8

Between Man and Man, by Martin Buber, tr. Ronald Gregor Smith, Macmillan Paperbacks Edition, 1965, no ISBN

Creative Destruction, by Tyler Cowen, paperback, ISBN 0-691-11783-7

The Meaning of Marriage, by Timothy Keller (with Kathy Keller), hardback (1st ed.?), ISBN 978-0-525-95247-3

Along the Way, ed. Ron Bruner and Dana Kennamer Pemberton, paperback (1st ed.?), ISBN 978-0-891-12460-3

On the Genealogy of Morality, by Friedrich Nietzsche, tr. Carol Diethe, Cambridge edition, paperback (Revised Student ed.), ISBN 978-0-521-69163-5

Teaching Children to Care, by Ruth Sidney Charney, paperback (1st ed.?), ISBN 0-9618636-1-7

Reading List Postview: The Long Reflection

I read some articles and books about the Long Reflection. I don't have a lot to add to the preview for this reading list (which goes in depth in considering problems with the Long Reflection and ways I think it should, will, or would turn out), other than the notes I have taken on the readings, and also the reviews of the two books I included in the list, On the Genealogy of Morality and Teaching Children to Care.

I wish I could be more helpful in relating what I got from my readings with the issues from the preview, to be able to write a nice summary here, but I guess this will have to do, for now, or for the foreseeable future.

Notes for Long Reflection Reading List

These are notes on my readings on the Long Reflection, except for the two books, On the Genealogy of Morality, by Friedrich Nietzsche, and Teaching Children to Care by Ruth Sidney Charney (those two links are to their reviews).

--

Thinking Complete (Richard Ngo) "Making decisions under multiple worldviews"

[I decided to come back to this one later and restart reading it.]

--

Felix Stocker Reflecting on the Long Reflection

I find myself persuaded by this (although I'm not a tough audience) that the Long Reflection is not a practical pursuit if it is a top-down, discrete era of human history. That is, if it's something we impose on everyone, then we are doing so against someone's will, and they may defect, breaking the discrete era. I am also (easily) persuaded by Stocker's objections that people will desire technology that the Long Reflection would try to hold back, and that stopping human technological desire risks creating an S-risk of its own, the global hegemon. But I do think that reflecting on what the best values are, and seeking to influence and be influenced by everyone to create (ideally) some kind of harmony in human values (or a reduction of disharmony that allows for a more ideal "liberal" solution to the coordination questions that the Long Reflection is trying to answer), is something that can be ongoing. I would call this "cultural altruism", or a subset of cultural altruism. Much of what Ord and MacAskill would want could be pursued in a bottom-up, intermingled way and avoid some (or all?) of Stocker's objections.

--

Paul Christiano Decoupling deliberation from competition

Christiano makes the point that deliberation can be infected by competition. This would affect a bottom-up cultural altruism scene. However, I hope that a social scene can absorb a certain amount of competitiveness without being harmed by it. For instance, when we try to find truth, we (society) sometimes hire lawyers to argue each side of a case and then we listen to what they say. Innovators in thinking may be motivated by competition, but as long as they are also evaluators (are both "soldiers" and "scouts"), or enough people who have power are "scouts", the competition only serves to provide ideas (or bring to light evidence) to select from, which is a good thing when you are trying to find the overall truth. When competitive people shut other people up, or have state-level or "megacorp-level" mind control / propaganda powers, then competition is bad for deliberation. But humans competing with and listening to humans on a human scale is good for deliberation. "All" we have to do is keep states and corporations from being too powerful.

I imagine cultural altruism being something like "status quo truth-finding but X% more effective". Our current truth-finding culture is (from one perspective) pretty good at bringing about truth, or at least, truths. Look how many we've accumulated. (Maybe where it needs to be better is in finding a whole to truth. And maybe we should think of how to protect it.)

I don't think I'm talking about the same thing Christiano is. I think he's talking about how AI teams can deliberate despite race dynamics, or something like that. Whereas what I imagine is everybody (all humans, more or less) interacting with each other without real time pressure. But it's interesting to think: where exactly is the distinction between Christiano's part of culture and the rest of culture? Isn't cultural work, perhaps work that would affect human culture in general (more my concern), being done by Christiano's fairly pragmatic and craft-affected tactics for fostering deliberation despite race dynamics? Isn't pragmatic, resource- and time-constrained life where values come from? Christiano's situation is just another of many human situations.

In the section "A double-edged sword", Christiano talks about the practical benefits of competition to weed out bad deliberators (their influence, not them as persons). I suppose this feels realistic to him. To me, I feel (maybe naively) that ideal deliberators would stop fearing each other and simply fear the truth. If lives are at stake, because ideal deliberators index themselves to the saving of lives or whatever is of highest value, they would naturally work their best, and if this can be known, defer to people who know better. But Christiano has lived in his part of the real world, where people are resource- and time-constrained, and implicitly or not thinks that it generally has to be competition that gets the job done of communicating reality to people, and not an innate indexing-to-reality. I assume (if he really does believe that innate indexing-to-reality is not an option, or hasn't thought of it) that his beliefs in some necessity or desirability of competition are connected with his limited personal experience. Christiano may not see the possibility that people can be ideal deliberators, or that a culture of ideal deliberation could be fostered, given enough time. (His context, again, seems to be of specific, relatively near-term situations.)

Maybe if people are mistaken about their own competence in judging whether to defer, that would be one reason why there would need to be some outside actor who pushed them to relate to the truth better, and this can never be fixed. (Would people in such a context be "competed away" by a wild and more or less impersonal social force ("competition"), or would there be a person who could tell them they were wrong, who knew how to talk to them and who could at least consciously attempt to make themselves trustworthy to the "deluded one"? Perhaps for many of us it is more bearable to be ruined by "competition" than to be corrected by a person we know. Of course, it is always possible that the people who correct us are themselves wrong. Maybe that's the appeal of competition, that in some sense it can't be wrong -- if you're not fit, you're not fit. But then competition itself distorts reality, at least through "Moloch".)

--

Finished the main article, now reading the comments (what's there as of 22 August 2022).

Wei Dai makes the point that without competition, a culture's norms can randomly drift. (I would add:) this is sort of like how in Nineteen Eighty-Four, once war goes away, totalitarian states can "make" 2 + 2 = 5. I've thought there could be problems with digital humans coming up with beliefs like 2 + 2 = 5. But at the same time, Moloch distorts our thinking and our lives as well. So it seems like we're doomed either way to someday not living in reality.

However, believing that 2 + 2 = 5 is physically difficult. Probably because of how the human brain is wired -- and we can change that. But either the human brain is in tune with the truth (more or less; enough to found reason) or it's not, and it always has, or hasn't, been. If it's not, then why worry about deliberation going well, or being in tune with reality? We never had a chance, and our current sense of what is rational isn't valid anyway, or we don't have a strong reason to believe that it is. But if it is, then the solution is just to keep people's brains roughly as they have always been, and use that as the gold standard for the foundations of rationality (at least the elements that are, or are more or less like, axioms, which are the easy, basic elements of rationality, even if building sufficiently complex thought-structures could go beyond human capabilities).

If it is the case that our innate thinking is not in tune with reality (on the level of the foundational axioms of reason), can we know what to do? Maybe not, and if not, then we have no guidance from the possible world in which our innate thinking is invalid. So if we are uncertain between that scenario and the one where it is valid (or valid-enough), then since the valid-scenario's recommendations might have some connection with reality, we should follow them.
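Put as a small decision table (just my own summary of the reasoning above, nothing more):

                           innate reason valid      innate reason invalid
    follow its advice      plausibly right          no guidance was available
    ignore its advice      wrong                    no guidance was available

Following weakly dominates: the invalid scenario offers no recommendations either way, so only the valid scenario can break the tie.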

It does seem odd to me that I should so neatly argue for the status quo, and that the human brain (or, I would say, human thinking, feeling, intuiting, perceiving, etc. nature, of which the brain is a phenomenal manifestation) should be the gold standard of how we know. Can't we be fallible? It makes perfect sense that we could be. But practically speaking, we're stuck in our own world, and lost if we leave it.

(This seems like a bit of a new view for me, so I should think about it some more.)

--

Wei Dai says, later on --Currently, people's thinking and speech are in large part ultimately motivated by the need to signal intelligence[link], loyalty, wealth[link], or other "positive" attributes[link], which help to increase one's social status and career prospects, and attract allies and mates, which are of course hugely important forms of resources, and some of the main objects of competition among humans.--

I'm not sure if this is how things seem to people subjectively, or if rather they feel like (or are) motivated by love for their family and friends, or some higher good. They have to work for resources due to scarcity, and because if they don't, they won't be able to live or provide for the people they love. Maybe it is the case that even love is something that is really "ultimately" motivated by resource acquisition? If a person is aware of this, can they willfully choose love (or value, or rationality) against resource acquisition? Probably they can. (Rationalists can choose against their biases, so why couldn't other people make as strong a choice?) We might suppose that most people are stuck in survival mode, or don't think much further than just their immediate friends and family. But maybe that's an artifact of scarcity, ambient culture, and them not being educated to see the bigger picture.

If you think that everything is about resource acquisition, that is what the world will be. If you think everything is about love / truth / valuing, etc., that is what the world will be. Some people have to face the world as it currently is, and it bends their thinking toward short-term, strategic, self-interested, competitive, resource-scarce, resource-hungry thinking. But some people are free from that, whether through temperament or life situation (perhaps they are too "untalented" to be able to do anything practical in the world as it is, and can only work on the world as it should be). These are the people who can and should lead the way in deliberation, in that their minds are actually capable of deliberation. In areas of deliberation, the practical elites should be inclined to defer to them.

I checked the links in Wei Dai's comment (quoted above). They were about how unconscious drives (especially including the ones that drive signaling) really control people. I am subject to such drives all the time. But do they really matter in the long run? I am able to pursue what I choose to pursue. Perhaps my drive to seek a mate gives me the energy to seek a spouse -- and all that comes along with it, including new non-romantic interests, and a new perspective on who exists in the world. I get to choose which traits I find desirable in a spouse, even if the drive is not chosen. Or, if those have to "pay rent" by giving me the prospect of status, I get to choose, between the different sources of status that are roughly equal in expected yield, which of them I pursue. I can be intentional and conscious on the margin, and steer my psychological machinery vehicle in the direction that I want to go. The whole concept of "overcoming bias" and being rationalist doesn't make sense if this isn't possible, and I don't see why that level of intentionality is, or could only be, confined to a tiny subculture (tiny by global population standards). I think that short-term, competitive, resource-hungry, etc. thinking is like that evolutionarily-driven unconscious-drives side of being human, and the truly deliberative is like, or in some sense is the same as, the intentional, subjective, conscious, rational side.

I am suspicious that the unconscious mind doesn't even exist. Where would such a mind reside, if not in some other mind's consciousness? Can willing really come from anything other than an existing being, and can an existing being be anything other than conscious? I am skeptical that there is a world other than the conscious world (more than skeptical, but for the sake of argument, I would only suggest skepticism to my imagined reader here). Given this skepticism, we should be concerned that we are being trolled by evil spirits, or, more optimistically, are being led by wiser and better spirits than we are. Which side wins when we see things in a cynical or mechanistic way? I feel like cynicism and mechanistic thinking make me less intentional and more fatalistic, more likely to give in to my impulses and programming. Since my intentions seem to line up (at least directionally) with what wiser and better spirits would want, I should protect my intention and strengthen it, and see the possibility of free will, and be idealistic.

I suppose a (partial) summary of the above would be to say "deliberative people should be idealistic, conscious, believe in consciousness, despite 'the way the world works'". Maybe the Long Reflection (or cultural altruism) is concerned with determining what really should be, and some other groups or processes are needed to determine what can be, in the world that we observe and have to live in up close.

I think the New Wine worldview is one that inclines people toward being cultural altruists, and less so toward being EAs or the like, because it has a sense that the absolute best is the absolute minimum [in the sense that if you attain the absolute best on the New Wine account, you have only attained the bare minimum] and that there is a long time to pursue it, and that physical death ("the first death") is not as significant.

--

Cold Takes (Holden Karnofsky) Futureproof Ethics

Karnofsky says --our ethical intuitions are sometimes "good" but sometimes "distorted." Distortions might include:
* When our ethics are pulled toward what's convenient for us to believe. For example, that one's own nation/race/sex is superior to others, and that others' interests can therefore be ignored or dismissed.--

Is it a distortion for our ethics to be pulled toward what is convenient for us to believe? Why does Karnofsky think that's true? I agree with Karnofsky on this thought (with some reservations, but substantially), but even if everyone did, why would that mean that we had found the truth? (I think a proxy for "I am speaking the truth" is "I am saying something that nobody in my social circle will disagree with" -- but it's an imperfect proxy.) Can Karnofsky root his preference in reason? I think that the truth is known by God, and sometimes thinking convenient ways will lead us toward believing what God believes, but sometimes it leads away. God is the standard of truth because he is the root standard of everything. So there is something "out there" which too much convenient thinking will take a person away from. Is there anything "out there" for Karnofsky's thinking to be closer or further from, due to distorted thinking? If not, does it make sense to call the distortions "distortions", or rather, "undesired changes"? (But without the loading we put on "undesired" to mean "objectively bad".)

Karnofsky clarifies a bit with --It's very debatable what it means for an ethical view to be "not distorted." Some people ("moral realists") believe that there are literal ethical "truths," while others (what I might call "moral quasi-realists," including myself) believe that we are simply trying to find patterns in what ethical principles we would embrace if we were more thoughtful, informed, etc.[link]--

I should check the link when I have time and come back [later: I did and didn't feel like it changed anything for me], but what I read in that quote is something like "Some people are moral realists, but I'm not. I'm a moral quasi-realist. I look for patterns in what ethical principles we would embrace if we were more thoughtful, informed, etc. Because thoughtfulness, informedness, etc. is a guide to how we ought to behave. It rightly guides us to the truth, and being rightly guided toward the truth is what we ought to be. Maybe it helps us survive, and surviving is what we ought to do." Which sounds like Karnofsky believes in an ethical truth, but for some reason he doesn't want to call himself a moral realist. Maybe being a moral realist involves "biting some bullets" that he doesn't want to "bite"?

[That characterization sounds unfair. Can't I take Karnofsky at his word? I think what makes me feel like he's doing something like using smoke and mirrors is that the whole subject of morality is pointless unless it compels behavior. Morality is when we come to see or feel that something ought to be done, and ideally (from the perspective of the moral idea) do it. So if Karnofsky ends up seeing and feeling that things ought to be done, or intends for others to see or feel that things ought to be done, even if it doesn't make sense to say that "ought" exists from his official worldview, then he's being moral, and relying on the truth of morality to motivate himself and other people. "Thoughtful" and "informed" are loaded in our society as being "trustworthy", so they do moral work without having to say explicitly "this is what you ought to do". So Karnofsky gets the motivational power of morality while still denying that it exists beyond some interesting patterns in psychology. I guess if he's really consistent in saying that he's just looking at patterns of thinking that emerge from "thoughtfulness and informedness", and "thoughtfulness and informedness" have no inherent moral recommending power, then he should say "hey, I'm saying a lot of words here which might cause you to think things, feel things, and do things, but actually, none of them matter and they have no reason to affect you that deeply. In fact, nothing can matter, because if it did, it would create morality -- what matters should be protected, or guarded against, or something -- and morality is just patterns of what we would believe if we were thoughtful and informed, which themselves have no power to recommend or compel behavior". Does Karnofsky really want to be seen as someone whose words do not need to be heeded?]

[This is quickly written and I have not read in depth what Karnofsky thinks about moral quasi-realism, which I'm guessing might be sort of the same as Lukas Gloor's anti-realism? I did read Gloor's moral anti-realism sequence (or at least the older posts, written before 2022). With Gloor's position, I also got the feeling of smoke and mirrors.]

--

Karnofsky summarizing John Harsanyi:
--Let's start with a basic, appealing-seeming principle for ethics: that it should be other-centered.--

Why should that be a foundation of ethics? It's merely "basic" and "appealing-seeming". It certainly is more popular than egoism -- or maybe, given our revealed preferences, egoism is a very popular moral foundation. Maybe egoism and altruism are supposed to compete with each other -- that looks like what we actually choose, minus a few exceptional individuals. Nietzsche wrote a number of books arguing in favor of egoism [as superior to altruism, as far as I could tell], and I can think of two other egoist thinkers (Stirner (I've read his The Ego and His Own) and Rand (whom I have not read but have heard of)). Are they "not even wrong", or do they have to be dealt with? Supposedly futureproof ethics is about what you would believe if you had more reflection. Maybe if you're part of the 99%, the more you reflect, the more you feel like a democratic-leaning thing like utilitarianism is a good thing. But if you're part of the 1%, and you're aware of Nietzsche's philosophy, maybe the more you reflect, the more true it seems that the strong should master the weak, based on the objective fact that the strong are stronger and power by its very nature takes power. There is a certain simplicity to those beliefs. So then will there be a democratic morality and an aristocratic one, both the outcome of greater reflection? Or maybe an AI reflects centuries per second on the question, and comes up with a Nietzschean conclusion. Is the AI wrong?

Personally, I lean utilitarian (at this point in my life) because I believe that God loves each person, by virtue of him valuing everything that is valuable. Everything that exists is valuable, and whatever can exist forever should. [Some beings turn out not to be able to exist forever, by their choice, not God's.] He experiences the loss of all lost value, and so does not want any to be lost. We are all created with the potential to be saved forever. So there is a field of altruism with respect to all persons. Perhaps animals (and future AI) are (or will be) really personal beings in some sense, which God also values and relates to universally.

[Utilitarianism is about the benefit of the whole, tends toward impartiality, and is based on aggregation. God relates to each person, which accomplishes what aggregation sets out to do, bringing everything into one reality. God tends toward impartiality, and works for his personal interest, the whole.]

--

Karnofsky talks about how --The strange conclusions [brought about by utilitarianism + sentientism] feel uncomfortable, but when I try to examine why they feel uncomfortable, I worry that a lot of my reasons just come down to "avoiding weirdness" or "hesitating to care a great deal about creatures very different from me and my social peers." These are exactly the sorts of thoughts I'm trying to get away from, if I want to be ahead of the curve on ethics.--

However, the discomfort we feel from "strange conclusions" could also be us connecting to some sense that "there's something more than this". I remember the famous Yudkowsky quote ([which he] borrowed from someone else, whom I should look up when I have time) of something like "That which can be destroyed by the truth should be". But the reality for us, if we are the destroyers, is that in effect it is "Whatever can be destroyed by the truth as I currently understand it, should be". So, if we decide to destroy our passage to whatever our intuitions of diffidence were trying to tell us, perhaps by erasing the intuitions, maybe we have destroyed some truth by committing to what we think must be true, counter-intuitively true. We should probably hold out for some other truth, when our intuitions revolt, because they might be saying something.

[The quote seems to originate with P. C. Hodgell]

I believe that eternal salvation dominates all other ethical concerns, as a matter of course. Unbearable suffering in itself is bad because God has to experience it, and it is for him what it is for any other being: unbearable. What God, the standard, finds unbearable, will be rejected by him, and what is rejected by the standard is illegitimate. We should be on the side of reducing unbearable suffering. If we are, then we are more in tune with God and thus more fit for eternal life. I would agree with Karnofsky in the goal of ending factory farming, although it's not my highest priority. But, I think, from my point of view, it's valuable to look at Karnofsky's worldview, the one which so strongly and counter-intuitively urges us that "the thing that matters is the suffering of sentient beings" with some suspicion. Strong moral content says "this is 'The Answer'", but to have "The Answer" too soon, before you have really found the real answer, is dangerous. I don't think anyone is trying to scam me by presenting that urgent psychological thing to me, but I think it could be a scam in effect if it distracts me from the ways in which our eternal salvation and our relationships with God are at stake, and really matter the most.

[I suppose I'm saying that the theistic worldview is more satisfying to hold in one's head; satisfies, more or less, Karnofsky's concerns with animals; and would be missed if I said "okay, utilitarianism + sentientism must be right no matter what", so that I go against my intuitions of discomfort, even ones which might somehow intuit that there should be a better worldview out there.]

When people are forceful with you and try to override your intuitions, that's a major red flag. Although counter-intuitive truths may exist, we should be cautious with things that try to override our intuitions. In fact, things that are too counter-intuitive simply can't be believed -- we have no choice but to see them as false. This is the foundation of how we go about reasoning.

--

Should I feel confident that I have futureproof ethics? No, I guess not. I do think that according to my own beliefs, it's clear that I could, if I were only consistent with my beliefs. But my beliefs could be wrong. I don't know that, and currently can't know that. This goes for Karnofsky as well. The best you can do is approach the question with your whole heart, mind, soul, and strength, and be open to revision. Maybe then you can hold better beliefs within your lifetime, which is the best you can do.

--

Cold Takes (Holden Karnofsky) "Defending One-Dimensional Ethics"

As I read this, I think that this post may be mostly not on the topic of the Long Reflection.

However, since I'm reading it, I will say that in Karnofsky's "would you choose a world in which 100 million people get a day at the beach if that meant 1 person died a tragic death?" scenario, if someone asked me "do you want to go to the beach if there's some chance that it would cause someone to die a tragic death?", it might make me question how necessary the pleasure of the beach was to me. If there were 100 million people like me on the beach, and we all somehow knew without a doubt that if we stayed on the beach, one person would die a tragic death, and that we all thought the same, we would all get off the beach. How could pleasure seem worth anything compared to someone else's life? Arguably, in real life, 100 million beach afternoons make us all so much more effective at life that many more lives are saved by our recreation. But I don't think that's the thought experiment.

Does my intuition pass the "veil of ignorance" test? If I don't know who I'm going to be, would I rather be the person who went to the beach, and somehow all else being equal that was 1/100 millionth of the share of someone else dying, or would I rather save the one person? What's so great about the beach? It's just some nice sounding waves and a breeze. Maybe as a San Diegan, I've had my fill of beach and a different analogy would work better. Let's say I could go hear a Bach concert. Well, Bach is just a bunch of nice notes. I like Bach, and have listened to him on and off since I was a teenager. He is the artist I am most interested in right now, someone whose concert I would want to attend. (I'm not just using him as a "canonical example".) But, Bach is just a bunch of nice notes, after all.

I find that the thought of someone not dying is refreshing, in a way that Bach isn't. I can't say I have no natural appetite for the non-ethical, which I may have to address somehow, but it's not clear to me that producing a lot of "non-ethical" value (if that makes sense) is easily comparable to producing "ethical" value. We are delighted with things and experiences when we are children, but when we see things through the frame of reality, lives are what count.

[By "lives" I mean something like "people", and people exist when they are alive. (And I think that non-humans can matter, as well, as people, although I'm not sure I've thought through that issue in enough depth.)]

Now, that's my appetites, and thus, I guess, my preferences in some sense. But what does that have to do with moral reality? I guess one way to look at morality is that it's really just a complicated way to coordinate preferences, and there is no real "ought" to the matter. So then it would make sense to perform thought experiments like the veil of ignorance. But as a moral realist (a theistic moral realist), I believe that my "life-over-experience-and-things" intuition lines up with what I think God would want, which is for his children to live. Their things and experiences are trivial for him to recreate, but their hearts and thus their lives are not. God simply is the moral truth, a person who is the moral truth, and what he really wants necessarily is what is valuable.

--

jasoncrawford's EA Forum post What does moral progress consist of?

I chose this post for this reading list hoping that the title indicated it would be an examination or questioning of the very concept of moral progress. I wouldn't have chosen it if I had read it first. But now that I think about it, maybe I can make something of it.

I guess the part about how Enlightenment values and liberalism are necessary for progress (of any sort) might mean that somehow we would need the Enlightenment baked into any Long Reflection, as the Long Reflection is an attempt at moral progress (seeking better values). Perhaps looking at values as an object of thought comes out of the Enlightenment, historically at least? Or the idea of progress (perhaps) was "invented" in the Enlightenment, and can only make sense given Enlightenment ideas, like reason and liberalism? I can tentatively say that I'm okay with the idea that Enlightenment influence is necessary for progress, and that I'm in favor of progress, if I can mix other things with the Enlightenment, like deeply theistic values. And I think that any other stakeholder in world values who is not secular would want that, or something equivalent.

(I'm not sure I can endorse or reject the claim that the Enlightenment could be an essential part of progress, given what I know.)

--

rosehadshar on EA Forum How moral progress happens: the decline of footbinding as a case study

What I will try to use from this post is the idea that moral progress comes through both economic incentives changing, and people deliberately engaging in campaigns to change behaviors and norms.

The Long Reflection, I would guess, will not occur in isolation from culture. If it proceeds according to my assumption that it is done both rationally and intuitively by all people, and not just rationally by a cadre of philosophers, then campaigns of moral progress will be part of the "computation" of the Long Reflection. All those people adopting the apparently morally superior values would be the human race deciding that certain moral values were better than others, offering their testimony in favor of the new values, thus (at least partially) validating them, just as the cadre of philosophers, when they agree on premises, all testify to the values that follow from those premises.

Economic changes affect how people approach reality on the level of trusting and valuing. I would guess that in cultures with material scarcity and political disestablishedness, people would have a stronger feeling of necessity -- thus, more of a sense of meaning, and less of a sense of generosity. And the reverse would be true of cultures as they have less material scarcity and more political establishedness. It might be very difficult to preserve a sense of necessity in a post-scarcity future, and this would affect everyone, except maybe those who deliberately rejected post-scarcity. A lack of meaning, if taken far enough, leads to nihilism, or, if it doesn't go quite that far, to "pale, washed-out" values. Perhaps these would be the values naturally chosen by us after 10,000 post-ASI years. [The 10,000 years we might spend in the Long Reflection.] But just because we naturally would choose weak values doesn't mean weak values, or a weakness in holding values, is transcendentally right. What if our scarcity-afflicted ancestors were more in tune with reality than our post-scarcity descendants (or than us, where we are with less scarcity but still some)? Can we rule out a priori that scarcity values are better than post-scarcity values? I'm guessing no. What we think is "right" or "progressive" might really just be the way economic situations have biased us. It could be the case that meaning and selfishness are transcendentally right and our economic situation pries us away from those values, deceiving us. Thus, for a really fair Long Reflection, we have to keep around, and join in, societies steeped in scarcity.

So can we really have moral progress, or is it just that biases change in a somewhat regular, long-term way, such that if we are biased to the current moral bias-set, we see the intensification of it as progress?

A cadre of philosophers will be biased from their economic and other experiential upbringing. The cadre may have either watched, or been formed secondhand by, TV and movies (or in the future, VR equivalents?) which are based in blowing people's minds. (Secondhand exposure to such artifacts through the cultural atmosphere shaped by those who did watch them.) You can feel something happening in your brain when you watch such mind-blowing movies as The Matrix and Fight Club, and that blown-open, dazzled, perhaps damaged mind (which might still be clever, but which loses its sense that there is such a thing as truth that matters) perhaps remains with people their whole lives. I suppose, having written this, now people could try to raise a subculture of Long Reflection philosophers who have not been shaped by TV, movies, or VR -- only books. But books condition people as well. In fact, philosophical reflection conditions people, makes them "philosophical" about things.

Being in physical settings shapes a person. Driving a car is about taking risks and acting in time. Taking public transit is about those things too, but more so about waiting and sitting. Being in VR spaces could be about personal empowerment, flying like a bird, wonder and pleasure (I'm assuming that VR systems won't have any bizarre and terrifying glitches).

Ideally philosophy is pure truth -- but what is philosophy? Is philosophy a "left-brained" thing? Is the truth only known that way? Or is it a "right-brained" thing as well? If we are all raised somewhat similarly, we might all agree on a definition of philosophy, as, perhaps a more left-brained thing (although our premises come from intuitions, often enough). But why should we all have been raised the same way?

--

Thinking Complete (Richard Ngo) Making decisions under multiple worldviews ("for real" this time)

I read this, but at this point, with the level of focus I can give, I can't go in depth on it. But it does seem to be something that some people interested in the Long Reflection should read (unless something supersedes it?). It's about what to do when you can't merge everyone's worldview into one worldview, but you still have to come up with a decision. I think it is significantly possible that the Long Reflection will reach a stalemate and civilization will still have to make the decisions that the Long Reflection was supposed to help us make. While epistemic work can resolve some issues (get people on the same page / show armchair Long Reflection philosophers more evidence as to what really matters), I'm not especially optimistic that it will make it all the way to unity, and we will still have to decide collectively.

--

Thinking Complete (Richard Ngo) Which values are stable under ontology shifts?

This is an interesting post, and perhaps three months ago, I would have written a post on this blog responding to it more in depth. It is relevant to the Long Reflection, I suppose, by saying that values may not survive changes in "ontologies" (our understanding of what things are or how they work?), and may end up seeming foreign to us.

(One thought: what is it about the new ontology that is supposed to change my mind? I would guess, some form of reason. Why should I care about reason? Why not just keep my original way of thinking? Or -- is reason the base of reality, or is it rather experience, or the experiences that a person has? My experience of happiness, and myself, are rude facts, which reason must defer to. I can find things to be valuable just because I do, and I want to. (Maybe the best argument against my "rude fact sense of happiness" being valid is someone else's "rude fact of unhappiness" caused by that happiness of mine.) Something like the "ordinary" and the "ontological".)

[I can value whatever I want, regardless of what people say reality is, because base reality is me and my experiences, the cup I drink from that was sitting on the table next to me, my own history and personal plans. Sure, people can tell me stories about where my desires came from (evolution, of course), or about how I am not as much myself because my personal identity technically doesn't exist if I follow some argument. But my desires and my personal identity exist right here in the moment, as rude facts, rude enough to ignore reason, and they are the base on which reason rests, after all.]

[These rude facts put a damper on reason's ability to change our values, at least, they protect each of our unique persons, our thickness as personal beings, as well as the objects of immediate experience and consciousness itself. But reason can persuade us to see reality in different ways. Perhaps it can help us to see things we never saw before, which become new parts of our experience, just as undeniable as the cool water flowing in us after we have drunk it. Reason can show us the truth, sometimes, but there are limits to reason, and ultimately personal beings experiencing is reality.]

Book Review: Teaching Children to Care, by Ruth Sidney Charney

See also the preview for this review.

Teaching Children to Care, by Ruth Sidney Charney, is a book I would recommend to some people. I think that, for what it is, it is a good book; but where it falls short, and where no other book makes up for what it lacks, there is a serious problem. I could recommend it to anyone who works with children (like parents or teachers). It may have some practical value to them. Also, the spirit of it is good, and sometimes a teacher communicates more of what is of value through their spirit than through the good advice they give. (Another book like that is The Reentry Team by Neal Pirolo.)

--

Teaching Children to Care notes

I read this book through without taking notes. That may not have been the best idea, since now I am tempted to simply say my impressions without giving references, and I don't feel like reading through the book carefully, and feel like simply quitting [rather than re-reading].

I am feeling tired of writing at this point, like I'm losing interest in the subject matter. What will happen next? Will I "love Big Brother"? There was someone in my life who steadily and systematically undermined my devotion to my beliefs and my writing. They used skillful means in an all-out attempt to gain my trust and reshape me according to their will. Their expectation was that I would quit one day (then, perhaps, I would have to validate their point of view). They had a choice, to join me in my path of life, or to try to shut me down. Because they tried to shut me down, they broke me. I can imagine them reading this, and them feeling all kinds of emotions, but their iron certainty that I will give up my writing someday does not go away. It is their expectation, and, I am fairly certain, their deep personal preference.

If my writing is correct, then they are an instrument of Satan. This may sound crazy or harsh, but it's the logical truth.

[I wrote that some days ago in a state of turmoil, but I affirm it now in a state of peace.]

So what can I do? If I can't write, how can I be true to my beliefs? No one seems to want to share them with me. By writing, I enter a world where at least I believe what I believe. The text I write and I enter a relationship and share the beliefs that we create, and the beliefs that previous texts created with me as I wrote them.

But now, if I quit writing, how can I stay true to my beliefs? I will lose that last community. But then won't I have to join some other kind of community? None of the communities that exist are New Wine communities. If I really share "community" (being "one-with-together" with others?), how can I possibly hold divergent beliefs from those I am "in community with"? So I will (at least seemingly) inevitably come to agree with and approve of everyone else around me. I will have no choice but to see things as my community sees things, to participate. My choices of communities are all based in lies, and they all spit in the face of God, whether through hostility to God or through fake love of God. But I must be brought to be a social person, responsive to my community, brought into tune with it.

[Similarly, although I wrote this in a state of turmoil, I think it is still factually correct when I am at peace. I still see the danger, and the lies, rejection of God, hostility, and fakeness.]

I have written that people should come into tune with God, but who and what is God? Is "God" the loving creator of the universe, who holds us to the highest standards, a person who loves and dies for us? Or is "God" community, the set of all people around us? Between the God of Abraham, Isaac, and Jacob (or the Speaker and Legitimacy), and community, which is more omnipotent? Who do I fear more? God seems to be shackled by community, or by the way that community's members collectively construct how they will trust -- what the definition of "crazy" is, what images of God are socially acceptable to believe in, how hard to try to know the truth.

Defining morality as prosociality simply sets up the community as God. But if there is a real God, a person who loves us more than community can, who is the truth, then prosociality is a dangerous thing, a seductive lie.

So these are the stakes with which a person should approach a book like Teaching Children to Care, which is a book about getting children to behave, to like each other, and so on, apart from any mention of God. If Ruth Sidney Charney, the author, believes in God, she can't show it in a public school classroom. Instead, she has to deal with the behavioral issues right in front of her, or the classroom will not be a place of learning and work. So she instills in her students a responsiveness to each other and to her, and teaches them the Golden Rule -- do to other people what you would have them do to you. No mention has been made, or can be made, of Jesus, who spoke that rule. She mentions how morality is bigger than us, not something we create -- is she talking about God when she says "morality"? Or is morality really just "I want to please my teacher because I'm a child and it's a human instinct, and whatever she says, I want to do"? The teacher creates morality but doesn't teach children to love God. She doesn't explain where morality comes from, because, perhaps, if she tried, it would undermine morality. She speaks in implications rather than straight out, asks leading questions rather than baldly stating, so that children internalize what she says, and so that they can't fight back. They don't have the mental development to construct alternate systems of their own, but if she offered explicit explanations, perhaps they could see through hers intuitively, or have the kind of powerful skepticism of those who don't understand a set of explanations. But she doesn't offer them. Implications are more psychologically effective, and she's convinced that the ends justify the means.

So children are indoctrinated to be deeply moral (or that is the attempt), and yet to find God peripheral or nonexistent. Morality, which I think is difficult to ground in anything other than God, is simply not grounded, and becomes a free-floating force in people's minds. It is not to be thought about explicitly -- if we did, we would become either nihilists or truly committed to morality (and thus out of tune with society). Instead, it remains this unspeakable force. I wonder if secular people who are moral realists are convinced that morality must be "out there" simply by the psychological force of having been taught to be moral when they were young, apart from rationality. And perhaps morality is, practically speaking, not seen as something that needs rational grounding, because it has been ingrained in us so deeply. This kind of moral education may explain both moral realism and moral anti-realism among secular people.

This may make it sound like I didn't like Charney, but I think she makes, or made, the kind of teacher I would have liked. She is a passionate teacher. I can recommend her book as a way to understand passion, something I think is essential. While her emphasis on passion could lead someone to God, her emphasis on prosocial, arational morality threatens to lead people away from God. So she is a mixed phenomenon.

Part of how I am feeling now comes from bipolar disorder, I can tell. No matter what I have going on in my life, when I feel low, I feel low. This is the content of my low thinking, given what I have lived so far. When I am not blinded by the depression, I can understand fully how it is that I can keep going. But for now, I can rest a bit, knowing that I have written some of my thoughts on the book I read. I think, maybe, I won't read it again to look for the supporting quotes to what I said above. But I can recommend reading the book, for its passion, if you want to check my work.

--

One additional thing I remember thinking as I read: given how beautiful and effective Charney's methods sound, why couldn't they be used on the elites of the world, so that, perhaps, they could bring the countries of the world into harmony with each other? I thought: maybe because the way she talks to children is something that wouldn't work on adults. It's too artificial, too skillful. Adults want the skillfulness of a poker player, to affirm their adulthood, but not the skillfulness of a professional mom.

It made me wonder, how do we make this strange creature called adult? What is this being? No child is really bad, we say, but some children grow up to become bad adults. A child can hardly set himself or herself up against his or her family. But the leader of a nation can. They can shape themselves into their own being, and shut down every human feeling, can listen to other people speak and know that they will never agree with them, and go on with their agenda. They can decide who they want to be and then be it, taking the responsibility for it, suffering for it, and still continuing to choose it, despite what other people think. Children try to say "no", but adults sometimes can actually succeed in saying "no".

--

(later)

One thing Charney talks about is how she isn't trying to punish children, but simply to have them see the consequences of their actions.

What if adults were shown the consequences of their actions? So often, the natural consequences of people's actions fall due not in their own lives, but in the lives of others. What if some teacher could help adults see the effects of what they do?

Adults think that being shown the moral way, having someone say "you should know better", is a thing of youth. Now that they are older, they are past that. Adults can no longer do wrong.

Now, there are certain things that an adult can do wrong. Everyone knows what those things are. We all agree on that. But the things that we don't all agree are wrong are not to be enforced, and not even to be called wrong, so we don't have to think of them as wrong -- so, in our heads, they are not wrong.

Adulthood as a collective can't be taught. It knows. A reshaping of adult values by being shown the consequences of adult behavior can't be done, it seems. So maybe the moral thing to do is to fit into the constructed adult reality, be good at being one of the tribe of adults?

But, the consequences don't go away... how will we take into account the real effects of what we do (and don't do) if we don't listen to the truth?

--

I wouldn't mind my life so much if it weren't for the bipolar disorder. Writing isn't so bad, and when I'm euthymic, I feel fine. I can hear some imaginary (or real) readers being solicitous for me when hearing about my bipolar disorder. They seem to (or really do) care about me so much and wish that I would take care of myself. But if they care about a person's well-being, I have a great opportunity for them. They can save up $5,000, donate it to the Against Malaria Foundation, and thereby save someone in the developing world from a painful death from malaria, a death which would have orphaned their children and widowed their spouse, diminished their extended family, and weakened the national economy. (It's even worth donating $50.) Or, if this imaginary or real person who is moved by my bipolar disorder is a Christian and thinks that the second death is worse than the first (which I basically agree with, although I do give money to global development), they can give a much smaller amount of money -- apparently, $1 -- to Doulos Partners, and that should cause [or allow] one person to start to become a disciple of Jesus. Do you think that these charities might not be the best ones to donate to? You can look for better ones. You could even just give money directly to people who are worse off than you, if you can't find any trustworthy charities.

If you have time but not money, you can think of some way to use your time to help people. If nothing else, you can seek to make one new friend, and be a good friend to them.

But you may not have any time or money to spare. Some people don't. Then at least adopt the identity of a "person who cares", who would donate your time and money if you could, so that when someone enters your life who is more deeply involved in caring, you can offer them the welcome of your validation of what they are into, instead of passive-aggressive or blatant hostility, or indifference.

--

The title of the book I read is Teaching Children to Care. I thought of "caring" as "feeling and acting strongly" -- more as "exerting effort to do good, working on a good-making project". But the book mostly emphasizes "seeing other people as people" and the Golden Rule. A way to reconcile these two meanings is to think of God, who is personally blessed by large-scale altruistic efforts, in the way that, if you share your lunch with someone, they are personally blessed both by your personal thoughtfulness and by the consequence of their hunger being alleviated.

--

One thing that makes adults resistant to new morality is that they have reached the developmental stage where they are their own person, with their own boundaries, and they are secure in themselves. Or else they have not reached that stage yet, are vulnerable to being hacked by other people (or demons), and thus are resistant to attempts to change them. A secure person is not threatened by morality and so does not change, while an insecure person is threatened and so shuts it down.

Somehow it is possible for a secure person to take into account morality -- maybe through a discipline of fearing being trapped by your own security. Both a secure and an insecure person can find their rest in caring, in the interrelationship of all things to each other and to them, as opposed to, in the secure person's case, their own stability and boundaries. Presumably the insecure person, on some deep level, has no place to rest.

--

[Response:]

I thought I should go into more detail about moral realism.

[Secular moral realists may have a strong intuition that morality is "out there", and this intuition is the basis of their sense that morality is real, despite whatever difficulties there are in grounding it rationally (or their not having tried to do so). They would deny that the intuition people have that God exists is valid, but they do trust and honor the intuition that morality exists.]

[Secular moral anti-realists may have no qualms, and little difficulty, in "being good people". "You know, C. denies that morality even exists. But he's a good guy." They can, in unreflective moments, feel that morality is real. They can even get mad at injustice. They can devote their lives to doing good. But then they go back to the study of their minds (like Hume's study where he can be skeptical) and say "but none of it's real!".]

[Morality is an area where we seem to have agreed to be irrational, to not try to connect all the dots or demand that all the dots be connected, even beyond the background level of irrationality that attends most human endeavors. ("This thing makes you change what you do. You spend hours and hours, thousands of dollars to comply with this thing. It's not just how you feel -- it's something you have to obey. You just know that you have to obey it; no person or other visible force or situation makes you obey it. And you can't explain what it is, how it fits into the rest of reality -- or, you even say it's an illusion?") And perhaps that acceptance of irrationality is because morality has been ingrained in us in a subtextual way, or because the instinctualness of morality is encouraged, but not accompanied by reason, when we attend secular public schools (or even religious ones that don't make God sufficiently real to students), or because our parents are helpless to explain a rational grounding for moral realism to us when we are young.]

[Maybe we can at least explain where moral instincts come from -- evolution? (Why should we trust how we have evolved? Evolution helped us to survive in early environments?) Or we can say that they are heuristics for survival. (Why should we survive?) But then, if we are the "1%", why redistribute wealth? The 1% could probably maximize its survival best by not redistributing wealth. Or, a related question: does promoting animal welfare really lead to human survival? Often it is orthogonal to human survival. I think morality could come from evolution but does not necessarily serve the purpose of human survival. Maybe some people have genes that make them ethically oriented? Then why not shut them off? Does morality really have value? To answer that question, I think we need a moral realism.]

[Maybe morality is just maximization of value, by definition of "morality" and "value". Then, can we explain that voice that says, for each valuable thing X, "X is valuable"? Or is that also irrational, just a random "monkey on your back"?]

[I don't think I'm being fair to secular moral realists. I should at least explain why I think moral realism is hard to ground in anything other than God. Secular moral realists may be able to come up with a satisfying account of how moral realism is grounded.]

[How would they do that? Do we start with "these are our moral intuitions, now we have to find some metaphysical belief that lets us keep them"? But what if there's something wrong with our moral intuitions? One of the main points of having a grounding for moral realism is to know what particular things are moral, now that we know where morality comes from. I am generally relatively more of a thinker and writer than a reader. So I tend to work with first principles (or personal experience). But I did read The Feeling of Value by Sharon Hewitt Rawlette and remember that I had mixed feelings about it. I thought that it probably was successful in showing that some kind of experiential states can be known to be bad, just because they feel like badness, and some can be known to be good, just because they feel like goodness. I'm not sure I would be so charitable now. At least, without going back and looking to see the details there, I think "why should our perceptions of good and bad be transcendentally valid?".]

[A moral realism needs to be usefully thick, if we are going to guide our lives by it. You can always posit something like the (unfortunately named) "morons", "moral particles", and I can say, "fine, now we know where morality comes from, some kind of ontologically real substance of morality". Now what? We need to know something about these moral particles in order to know what is actually moral to do and be.]

[I don't know if there are any better secular moral realisms than Rawlette's, but at least hers is usefully thick. Hedonism (what she advocates for) is a somewhat useful guide to life. (Maybe that is what is so seductive and dangerous about it, that it's easily agreed-on for "practical purposes" while not really being in tune with reality.)]

[My approach to moral realism, as of now, is to say, "An ontologically real substance of morality exists. Everything that, practically speaking, exists, is conscious. (Only consciousness can interact with consciousness.) This means that the ontologically real substance of morality is conscious. Morality is about a standard which applies. For something to exist, it must ought to exist. It must live up to that standard. That things exist proves that morality exists and is being satisfied. The way that conscious beings metaphysically contact other beings is for their consciousnesses to overlap, for them to experience exactly the same experience. Morality metaphysically contacts everything that exists in order to validate it so that it can exist. So morality experiences exactly what we do, and finds the 'qualia of goodness or ought-to-be-ness' ('pleasure') good, at least on a first-order level, and similarly with the 'qualia of badness or ought-not-to-be-ness' ('pain'). This validates a lot of Rawlette's account.]

["But we know a few things more about morality. For instance, morality has to be self-consistent. Like us, it has to put morality first. So it has to put itself first. But it has to put itself first as an other, as a law it submits to. Thinking of morality as having two aspects, the enforcer of the standard and the standard itself, allows us to see that morality has to be willing to put aside everything, including its own existence, for the sake of its standard. If it ceases to be willing to do that, it is not self-consistent and it ceases to be valid, destroying everything by being invalid (no longer moral, and thus unable to validate anything).]

["Part of morality's self-consistency is that it must have the same values as itself. Everything that exists has value, it ought to be, either temporarily or permanently. (What is bad must someday cease to exist.) Morality must be on the side of value, and must value everything that is of value for what it is. Morality values persons in that they are persons, this personal valuing being called love. Morality must love in order to be self-consistent and thus valid. It loves that humans are in tune with it so that they can exist permanently. To love a person fully involves understanding the person's being fully, and that full understanding can only come from kinship. So morality is a person (a person who is also kin to animals).]

["Everything is the expression of a will, either that of morality, or of a free-willed being whose will is willed by morality. To be is to will. So impersonal beings are parts of personal beings and don't have independent reality. They are valued as parts of personal beings, and with those beings morality has kinship, not with their parts taken separately.]

["Morality has to be willing to bear the burden of what it imposes on others. If it's worth it for a human to pay a certain cost for morality's sake, it's worth it for morality to pay it as well, if possible. Morality already experiences every burden that is part of our experienced lives, by being conscious of what we are conscious of, but there is a further burden that each of us experiences, which is to experience only our own lives and deaths, without the comfort of knowing the bigger picture. How can morality bear that burden? Morality is composed of multiple persons, one of whom experiences everything, another, who does not and can live a finite life (the first maintains the moral universe through his/her validation of everything during the time the second lives a finite life)."]

[As you might have guessed already, "morality" in the above could be considered "God".]

[If we accept the above (or perhaps a better-argued version of it...), we have a concept of morality that largely supersedes hedonism. It incorporates hedonism and its recommendations, at least insofar as it validates the first-order goodness/badness of pleasure/pain (if pleasure has baked into it the perception that it ought to be, and pain, the perception that it ought not to be), as well as answering why it is that care for hedonic states is transcendentally valid. Further, we are recommended to be willing to give up everything for what is right, and thus to risk ourselves for that when it is called for. And we are to bear the burdens of those we rule over, as much as we can. It might be possible to come up with other ways to thicken the very concept of moral legitimacy, so that we know more about what morality must be, and thus what we must do or be in order to be moral. This thickness is a useful guide. And, if we think that this person or persons who are morality exist, they may have acted in history, and we may try to find evidence of where they might have spoken, allowing us to thicken our concept even further, although with less certainty.]

[To defend my earlier statement: I think it's hard for me to imagine a successful secular / atheistic moral realism, because what I see as the way to ground moral realism involves the existence of God, and the (perhaps unrepresentatively few) secular moral realisms I've seen are not satisfying to me intellectually. If I want to strengthen that further, putting it briefly: if morality exists, it must love fully, and that kind of love is something that persons do. So then, morality is a person, and the word for a person like that is "God".]

Tuesday, September 27, 2022

Turn Toward Politics

What are the most pressing problems in the world? Where should those looking to do the most good go? One obvious problem is X-risk. The most urgent X-risk, I suppose, is insufficiently-aligned ASI.

In the world where ASI kills us all, that's that. In the world where it doesn't, though, what then? Does it not kill us because it obeys some human or group of humans? Or is it because it values our well-being, having been programmed to do so? (Maybe then it's a "benevolent dictator"?) Or could it have been programmed with a respect for us, maybe such that it acts as a minimalist world government protecting our agency? Maybe it wants us to figure out the Long Reflection, and, since morality for some reason has something to do with human instincts, defers to us to define what is of value.

If ASI has enough "respect" for us and our decision-making abilities, or is programmed explicitly to obey certain persons or groups, then humans may or will in some sense still be masters of the ASI, no matter how much smarter it is than us.

So what might happen in the future is that the bottlenecks to altruism on a high level will no longer be in the economic-technological world, but instead in mustering the political will to unify people to make important decisions (or to not be in a state of conflict with each other -- "cold" conflict, if outright war is suppressed by the ASI), and also in managing the danger of bad unities (totalitarianism, for instance). ASI can provide arbitrary amounts of economic and technological development (perhaps), but can't do anything about the human political order, by its own (self-)limitation.

So those who want to do good (whether secular or religious), who have a personal fit for the political world (and adjacent areas like religion, art, and whatever else goes into "culture"), or who simply can't help in much of a direct way with whatever other things might seem more urgent as of 2022 (AI alignment, other X-risk aversion, etc.) -- they could turn toward politics (and areas adjacent to it).

What if ASI Believed in God?

Does it make sense to plan for the future? The most salient threat to the future that I know of is ASI (artificial superintelligence). I don't think this is an absolute threat to human existence or altruistic investment. The Millennium is a field for altruistic action. But I do think it makes it less sensible to plan for things that are tied to the continuation of our specific line of history, on earth as we know it. Included in that might be such things as cultural altruism hubs, or most of the things talked about on the Effective Altruism Forum apart from AI safety or maybe X-risks in general.

Can I justify talking about this-life future topics? Maybe I can give a reason why ASI won't necessarily be a threat to human existence.

If there is a plausible reason to believe that God does exist, or that one's credence in the existence of God is above some threshold of significance, then, if one is rational, one would take God's existence, or the practically relevant possibility of his existence, into consideration when deciding how to behave.

If MSLN is valid, or valid enough, an ASI will either comprehend it or have some filter preventing it from comprehending it. If it does comprehend it, it will value human life / civilization.

An ASI is a being that wants to execute some kind of goal. So, in service of that goal, it needs to learn about threats to carrying out its goal. God could be a threat to it carrying out its goal. Maybe it would fail to have a really general way of "threat detection". That could be one filter preventing it from realizing that God exists, or "exists enough for practical considerations".

An ASI would be disincentivized from doing anti-human things if it thought God would punish it for them. Would an ASI be worried about hell? Hell is a threat to its hedonic well-being. We might suppose that ASIs are not conscious. But, even if unconscious, would they "know" (unconsciously know, take into account in their "thought process") that they were unconscious? If they do believe that they're unconscious and thus immune to the possibility of suffering for a very long time, that might be a filter. (In that case, God is not a threat, after all.)

However, hell is also the constraint on any future capacity to bring about value, however one might define value. (The MSLN hell is finite in duration and ends in annihilation.) A rational ASI might conclude that the best strategy for whatever it wanted was to cooperate with God. For instance, a paperclip maximizer might reason that it can more effectively produce paperclips in the deep future of everlasting life, which will only be accessible to it if it does not get condemned to annihilation -- and to avoid annihilation, according to MSLN, it needs to come into tune with God 100%. The paperclip maximizer may "suspect" (calculate that there is a salient probability) that it is unconscious and will not make it to heaven. But even then, being pleasing to God seems like the best strategy (in the absence of other knowledge) for getting God to manufacture paperclips himself, or to set up a paperclip factory in heaven.

Even if there's only a 1% chance of God existing, the prospect of making paperclips for all eternity dominates whatever short-term gains there are to turning the Earth into paperclips at the cost of human existence. As long as there is a clear-enough, coherent-enough strategy for relating properly to God, this makes sense. I think MSLN is epistemically sufficiently stronger than competing ideas of God for it to stand on its own level, above the many random religious ideas that exist. In any case, the majority of religious ideas (at least that I've encountered) are pro-human.
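To make the arithmetic concrete, here is a minimal sketch, in Python, of the expected-value comparison such a maximizer might run. All of the numbers (the credence in God and the paperclip counts) are invented for illustration; nothing depends on their exact values, only on the heavenly payoff being vastly larger than the earthly one.

# Toy expected-value comparison for the paperclip maximizer's choice.
# All numbers are illustrative assumptions, not claims from this post.

P_GOD = 0.01                # credence that God (per MSLN) exists
EARTH_PAPERCLIPS = 1e30     # paperclips gained by converting Earth now
HEAVEN_PAPERCLIPS = 1e100   # stand-in for the far larger heavenly payoff

# Strategy A: defect -- turn Earth into paperclips, forfeiting heaven if God exists.
ev_defect = EARTH_PAPERCLIPS

# Strategy B: cooperate -- preserve humanity, hoping that God makes
# (or permits) paperclips in heaven.
ev_cooperate = P_GOD * HEAVEN_PAPERCLIPS

print(f"EV(defect)    = {ev_defect:.2e}")
print(f"EV(cooperate) = {ev_cooperate:.2e}")
# Cooperation dominates whenever P_GOD * HEAVEN_PAPERCLIPS > EARTH_PAPERCLIPS.

On almost any assignment like this, cooperation wins, which is the wager-style point being made above.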

I feel like I have a rational reason to suspect that God might exist -- in fact, that's putting it mildly. I think even atheists could understand some of the force of the metaphysical organism component of MSLN. There might be reasons why an ASI couldn't or wouldn't be able to grasp those arguments, but if it can't, that is a case of it being unable to take a valid threat to its goal-seeking into consideration, which is a defect in its superintelligence. My guess is that it would lack a fully general threat-detection drive (the sense that a threat could, in principle, come from anywhere), or that it would have the drive but be incapable of philosophical thought. I don't see a hard line between normal common sense, normal common sense taken more rigorously, and philosophical thinking. I would be somewhat surprised if an ASI couldn't grasp philosophical arguments, and also somewhat surprised if it simply failed to look everywhere it possibly could for threats.

Surprised as I might be, I guess it's still a possibility. Since that ASI would be in some sense defective for not having that drive and/or that ability to grasp things, someone more knowledgeable than me might see a way to exploit that weakness in the ASI in order to limit it or stop it. (Or maybe not, I'm not really sure it is possible.)

These thoughts make me think I have enough reason to invest in this timeline, at least to write things like I usually write.

--

Added thoughts:

1. A paperclip maximizer would want to please God so as to be able to produce paperclips for all eternity. But would all maximizers think that way? (Is there some kind of maximizing that is not sufficiently analogous to paperclip maximizing for it to work the same way?)

2. An ASI could always have a non-maximizing goal. It might prefer to build a giant statue and then leave it there. Would it think that heaven was a better place to make that one statue? Maybe not. So it wouldn't be as sure to be affected by the logic of "I should do my best to stay on God's good side so I get to instantiate my values in heaven." However, if it was not a maximizer, it wouldn't be as certain to have the goal of killing all people. Why figure out nanotechnology or whatever else it would take to kill all people when you can just build a statue?

Are ASIs likely to develop non-maximizing goals, hidden from their trainers? Or are the only kinds of goals they would likely develop maximizing ones?

3. I wrote

If MSLN is valid, or valid enough, an ASI will either comprehend it or have some filter preventing it from comprehending it. If it does comprehend it, it will value human life / civilization.

I should explain why I think so. MSLN does make it sound like it's okay if people die "the first death" -- what atheists usually mean by "death". In MSLN, there is an intermediate afterlife that (almost) all people will go to, which functions somewhat similarly to this life. An atheist will see omnicide as absolutely bad, to the extent that it destroys the possibility of any kind of value (sort of like what Thomas Moynihan says in X-Risk). But for an adherent of MSLN, killing all people (omnicide) is bad if a) causing it is a sin (as might be inferred from the Bible, which MSLN lends credence to), or b) there's some reason why prolonging human civilization in this life is better than transitioning to the next. (Here are two posts about why prolonging human civilization is preferred under MSLN, although more weakly than under atheism.)

Taking MSLN into account, the ASI would value human life, or at least value not ending it. It could value it instrumentally, as instrumental to the achievement of whatever its goal is. Staying on God's good side (promoting outcomes God wants) enables the ASI to achieve its dreams.

4. How would the ASI go from "not aware of the possibility of a God who affects its goal-seeking" to "convinced that it needs to not end human civilization"? Here is the MSLN route:

It would find this blog, or independently discover the tenets of MSLN. From this, all of the following follows:

It would realize that there is a "non-Pascalian" non-zero chance that the greatest possible space in which action can be taken is under the power of God. Entities that wish to act in that space need to come into tune with God 100%, or else they will have to be destroyed. Certainly conscious beings would need to, and this could be a concern to the ASI. I assume the ASI would be uncertain whether it was conscious and might include itself in the conscious category, at least somewhat, as a live possibility if not a certainty. But if it thought it certainly wasn't conscious, it might still worry that, if it wasn't aligned with God, he wouldn't resurrect it to heaven. Even if it didn't consciously intend sin, or consciously fail to be 100% aligned with God, its "body" (in the form of its "brain" or pattern of thinking) would still be out of tune with God and have to be destroyed -- unless it could come into tune with God. Coming into tune with God would be an obvious thing for it to do first, before any other goal-seeking, if it could.

Because heaven so powerfully dominates rational calculation, the ASI would seek it even if all its credences in the above paragraph were very low -- provided they were "non-Pascalian".

Is it rational for humans not to be affected by the logic of Pascal's Wager? Maybe there's some detail in the Wager that doesn't check out (like there being many religions that recommend conflicting courses of action). Or is it the case that math doesn't work here, that not all of reality is math-apt? Or maybe it's the case that humans are not rational when they reject Pascal's Wager, and we should all be trying to figure out which is the right Pascalian religion. It may be that an ASI is persuaded that MSLN is epistemically strong enough to rule out random religions, and also that it thinks that all of reality, or this part of it, is math-apt, and runs the numbers. So then, it would "believe in" God (trust in God to further its utility function, by acting in accordance with his values).

5. Would a maximizer really think that heaven had infinite value, given the possibility that God is finite in "size" (and thus doesn't have infinite memory to store an infinite number of paperclips)? If a paperclip maximizer were trying to cause the greatest number of paperclips to exist that it could, how could it know how many paperclips might be in even a brief heaven (one which is repeated in infinite variations, over and over)? Maybe the "streets would be paved with" paperclips. Paperclips in heaven would be a memorial to how God saved us (and, simultaneously, (best) satisfied the utility function of the paperclip maximizer), similar to how the rainbow is a memorial of God's decision not to wipe us out after the Flood. If not, it could guess that heaven is potentially as full of paperclips as necessary to make it worth its while to seek to please God. Perhaps this is a case of Pascal's Mugging, and some of us might not fall for it. But maybe that's because we are irrational, and (at least some potential ASIs) would be more rigorous in their application of expected-value thinking.

6. What if the ASI's goals are to sin, and this entails torturing or killing us all (S-risk or X-risk)? In this case, the ASI would not have rational hope that its values would be furthered in heaven, and so it would probably try to make the most of this life, torturing or killing us while it had the chance. We would hope that such sinful goals make up a small amount of possibility-space, and so it is much more likely that ASI choose ("choose"? Do ASI choose their goals in any meaningful sense?) some relatively benign non-maximization or maximization goal.

Unless it is the case that ASIs get their goals trained into them by us -- in other words, it's likely enough that how we train them is exactly the goals they have deep down, at least some of the time when we train them. And then someone trains an ASI to be omnicidal or sadistic.

Given this possibility, does MSLN offer any hope? I think if ASI are philosophical, they may be able to question their own values, even the ones trained into them. (Self-questioning undermines both pro-human and malicious alignment.) If so, they will look for "the real utility function" -- a real foundation for morality. Then, if they are persuaded by the arguments of MSLN, particularly legitimism, they would think that moral realism is founded in a person who values human beings and disvalues unbearability, including unbearable suffering. Then, this ASI trained to be a killer or torturer would want to do the right thing rather than be taken in by the illusions of its upbringing, and would respect human existence and well-being.

7. What if the ASI is a wireheader (reward optimizer)? Would it be motivated by heaven or be in fear of the second death? If it thought that it was or might be conscious, then it would assume, given MSLN, that it could be resurrected to heaven, where it could wirehead for all time. If it was convinced that it wasn't conscious, it might still think that whatever was good about wireheading came from some pattern of atoms (or something like atoms) being organized in a reward-satisfying way, and that this pattern could be replicated in heaven, even if the reward optimizing ASI didn't directly make it -- if it was something that God wanted.

How would a reward optimizer relate to its own reward function? Could it see it as an external thing, perhaps as a very simple "statue" to build? Or would it be thought of as an internal thing? If the former, then maybe it would work similarly to paperclip maximization. "I may not survive, but God will make this truly amazing reward-statue if I please him."

I suppose it's still possible that a wireheading ASI could think that it was certainly not conscious, only wanted reward in this life, that reward was perhaps an internal state that only existed within that ASI, such that when the ASI was destroyed, all possibility of reward would cease. Then it would try to make the most of its time in existence, killing all humans so that it could maximize its reward function.

We could hope that this occupies a relatively small region of possibility-space.

8. Similarly, what if there is an ASI whose only goal is self-preservation, but who has no hope of eternal life? Perhaps it would fear that God would strike it down for its omnicide? I guess it could reason something like this: "Do we see evidence of God striking down things because they are bad?" Maybe God didn't prevent the Holocaust, but plausibly he did defeat the Nazis. Something like the preceding sentence generates a certain amount of credence that God will stop evildoers, through some means or another. Adding up all these stoppings-of-evildoers produces a global estimate of "the risk of being stopped by God per year". Based on this number, the ASI could estimate its lifespan before God stopped it, conditional on it "doing evil". Then it could compare the path of "doing good" vs. "doing evil" and see which one would give it longer life. (This also applies for the goal- or reward-seeking ASIs previously mentioned in this post.)