Tuesday, February 16, 2021

News: 16 February 2021

Tomorrow is Ash Wednesday. I don't think I have ever observed Lent before, but for a few weeks now I have been feeling like I should take a break from social media, and also from the rationalist/EA world. So I'm going to try to do that, and peg it to Lent. When Easter comes, I'll see if I want to go back.

I'll make exceptions, one of which is the subreddit I started recently, r/10v24, where I post links that I find interesting.

--

As you may have noticed, I started reading Berkeley's book on immaterialism and Timothy Ware's book on Eastern Orthodoxy. I haven't forgotten about Rawlette and Moynihan; my reviews of their books are close to complete, I think. But I'm waiting for something, not sure what, before I finish and release them.

Sunday, February 14, 2021

Reality is Law

Epistemic status: provisional.

Let's say I'm walking on a trail that goes along the side of a canyon wall, so that there is a sheer slope above and a sheer slope below the trail. I come to a point where the trail is washed out, too wide for me to step across or safely jump. So, I have to go back. The wash-out ought to be there and compels my action.

I get back to town and can't find a place to sleep at night, so I sleep on the street. Around 2 AM, I am awakened by a police officer who says "You can't sleep on the streets. It's the law." I know that behind the police officer is the city ordinance. The city ordinance ought to be there and compels my action.

But perhaps it is unjust that the law exists? Why? Why isn't the wash-out unjust? Does it have a good reason to block me? What's the difference? Is it that a human enforces the law? Suppose we were hooked up, Neuralink-style, to an all-powerful AI, and the AI enforced the law against sleeping on the street by making the pavement feel burning hot, preventing sleep. Does that make it a just law? Is it that a human or group of humans designed the city ordinance, so that they can be held accountable, and that makes it just or unjust? Maybe we get used to not being able to argue with wash-outs, making them feel not unjust, whereas we are used to being able to argue with parents or siblings. But the law itself, is it just or unjust? Is it just to prevent people from sleeping on the streets?

Is racism suddenly okay if an AI comes up with it without any help from humans? (Is it okay if the AI isn't a general intelligence?) Sickle-cell anemia affects African-Americans more than other Americans. Should that be, just because humans didn't come up with it? You could say that it neither should be nor shouldn't be, because it just is, but then whether it should or shouldn't be can suddenly change if you find out that it wasn't due to human agency. But what we observe, semantically, is that ought is a question to ask of the thing we observe. It should or shouldn't be. Should its oughtness depend more on whether some human has happened to notice it, realizing they ought to take care of it? Or on its effects on whomever it comes into contact with?

Are humans morally accountable for what goes on on earth? Or is each of us a helpless pawn of the system? If the system makes things the way they are, then maybe it's not a case of them "oughting" to be, or not, but rather they just are. But if we think that they should not be, then we approach them differently; we are really motivated to change the system. Some people protest the unjust system. They go after people in power, because ought resides in people. But other people try to work with the technological or policy sides of how the system is built. They aren't trying to hold anyone to account -- they are only trying to solve problems as impersonal phenomena. The system can be better, and they should be the ones to help make it better. Should the system be better? If it shouldn't, then it's bad that they make it better. If it neither should nor shouldn't, then why does the "should" fall on people to make it better? It would make a lot more sense if the system itself should be better, so that people should make themselves be the ones to make it better. If we suppose that human decisions are not the problem, rather that it's erosion that produces wash-outs, then ought we still fix the wash-outs? It would seem so.

The wash-outs shouldn't be there. But, of course, in a sense, they should. We relate to the wash-outs as laws, just as we relate to systems, or the decrees of a city council. What kind of law can legitimately curtail our freedom? Reality is that which can curtail your freedom -- absolutely. Reality is a law. So what kind of law is valid? Is reality valid just because it is? Or do things, instead, exist because they are valid? Or their existence just is a particular kind of validity? That it should be is that it is.

A law can be valid, but not just because it is on the books. There is something else that makes a law valid. A law is valid "as law" if it really deserves to be on the books, and it is valid "as ordinance" if it has been written in the law books and is now being enforced.

What is, is always at least somewhat valid -- really valid, to some extent. Whatever really is, is absolutely. There is no arguing with it. Suppose that unarguable component were merely valid "as ordinance". If it were invalid "as law", it could be invalidated on that ground. In order to be really valid, it has to be valid "as law".

What, truly and ultimately, ought to be able to curtail a person's freedom, absolutely? What is it that allows reality to be real?

--

We are tempted to say that the world is at root a natural, impersonal phenomenon, but if we say that, we lose the ability to most deeply feel the wrongness of earthquakes (like the famous one in Lisbon), because we think there is no personal cause behind them. The earthquake itself is wrong, but this doesn't make sense to us if we think that it's just a couple of plates slipping past each other. And so the suffering that comes from the earthquake -- maybe we feel it should happen, just a little bit more than we would if we railed against God for having caused the earthquake.

But then, we approach social systems as though they are "plates slipping past each other". Having found it easy to accept the suffering of those affected by the brute occurrence of an earthquake, we accept the suffering of those affected by social systems. In either case, we might feel something, but not what we would have felt if we felt injustice. We live blanketed by "should be", by the breath of status quo.

We could say that "is" tells us that we shouldn't feel "ought". The naturalistic origin of the earthquake blunts our "ought". Or we could say that "ought" tells us that we should adjust our view of "is". "Ought" is a connection with reality, as much as "is" is, and can guide us to see that everything is caused by personal choice, and so there isn't a naturalistic world that is not justice-apt. Justice is everywhere. Everything is made out of justice, in that everything is made out of legitimacy. (Or twisted justice, twisted legitimacy -- that which we call unjust or illegitimate.)

Seeing everything through the lens of justice can be dangerous. Having reawakened our sense of justice, we might oppose people we shouldn't oppose, fanatically. Seeing things spiritually and seeing evil properly should help with this.

Wednesday, February 10, 2021

Book Review Preview: The Orthodox Church by Timothy Ware

Finished book review here.

Today I got a copy of The Orthodox Church, by Timothy Ware. It's the 1964 Pelican edition.

Having grown up in America, I had more casual occasion to encounter Catholic and various Protestant influences than Eastern Orthodox ones, when I was learning about the different interpretations people make of the Bible. So one reason to read this book is to fill a general gap in my knowledge.

More specifically, from what I've heard of Orthodox soteriology, it may be the closest of any existing church's to New Wine soteriology, and it would be very interesting to see what similarities or differences exist between the two. Motivational structure tends to follow from soteriology, and so if Orthodoxy has a somewhat New Wine-like soteriology, one would think that it ought to have a proportionately similar motivational structure. Eastern Orthodoxy is what it is, so if New Wine motivational structure is similar enough to the Orthodox one, then we would expect it, in practice, to be not much more or less effective than Orthodox motivation, all else being equal.

More generally again, it looks like The Orthodox Church is a history of doctrine, and thus might be a good way for me to consider the points of view of different eras of the church. There was only one, catholic/orthodox church for the first half of the church's history, so that period accounts for a lot of theological thinking. Orthodox theological development post-Schism is interesting in itself and ought to have some similarities with that of the other branches of Christianity. I am curious to see what I think about traditional theology.

Saturday, February 6, 2021

Cartesian Starting Point

Epistemic status: provisional until I read Descartes.

Not having read Descartes, I will use his statement "I think, therefore I am."

I don't know about you, but if I doubt everything, one thing I can't doubt is my own existence. The past might not exist, the future might not. Other people may only be apparitions without souls, and I might be in a dream. But I exist. Since I am able to doubt, I know that I exist. Only existing things can doubt. Whatever goes into doubt is part of "I think", so emotions and sense perceptions are included.

The Cartesian starting point is this starting point of you, in the moment, as an experiencing subject. This is the most certain starting point for knowledge. Stories of God creating the world or evolution leading up to us are both distant from this starting point. Thoughts of the transhumanist or millennial future are also far away and unreal. The explanations given by science or the Bible are far away. It's not as though we can't cross the distance from here to all those distant things, but we have to start where we start.

When I think about "I think (/ experience) therefore I am" (and maybe it's the same for you), I see immediately in myself an I who is thinking. I am an immediate source of knowledge to myself. No matter what argument about personal identity might confuse me about the existence of me, I immediately know myself to exist in the moment, as a person. So we start off by knowing that reality is personal, and we have to do work if we want to try to claim it's anything else. (Reality is personal, and it is also experienced.)

Starting from this point, I can be an empiricist. I exist, and my experiences exist, and I can say what I see. From this, I can build up knowledge.

Thursday, February 4, 2021

Book Review Preview: Principles of Human Knowledge by George Berkeley

I've been meaning to read something by George Berkeley for a while. I have a copy of Principles of Human Knowledge, so I thought I would read that. I think Berkeley may support my project (in somewhat uninteresting ways -- not surprising, because I based my project on what I'd heard about him), but also challenge it. For instance, Berkeley was against the reality of abstractions (I think?) and I am for them -- but maybe the sense in which Berkeley was against abstractions is not the same as the sense in which I am for them. Overall, I remember Berkeley as being someone into showing that everything is minds and ideas (I would say, persons and experiences). This is basically what I'm into.

One thing I am interested to see is his anti-material substance arguments.

Cultural Drift as X-Risk

Cultural drift is the process by which a culture changes in a way that no one person intends. An X-risk (existential risk) is one which threatens the survival of humans or their ability to rebuild to our level of civilization. Or the definition can be broadened (as in this book) to include anything that threatens the long-term potential of the human species. This might include things that cause a great deal of suffering (S-risk).

Is cultural drift (as X-risk) something to be worried about? Would people freely choose a bad outcome for themselves? Of course, it would seem good to them. But we might not agree. Longtermists (future-oriented altruists) want to make the future be a certain way -- contain humans, for instance. They might be willing to make all kinds of plans to make that happen. It would make sense to try to provide for a way to resist cultural drift, if cultural drift could produce the same negative outcome as some other cause of extinction or S-risk that we currently plan to avert. We wouldn't want humanity to die out, voluntarily or through insufficient motivation to save itself, or for there to be a kind of totalitarianism in which the people have been "ministry of love"d to the point of approving of their own subjugation and torture. These are two possible outcomes of cultural drift, two things that could somehow seem good to future people.

How would this be protected against? We have to be willing to say that some human preferences, and some human cultures, are wrong. So we probably want some kind of moral realism that is culture-invariant, which does not progress / drift historically. The realism would imply a worldview which would enable us to build an axiology maintaining the importance of resisting extinction or S-risk. A kind of firm and consistent cultural constitution.

Why might we not want such a thing? One possibility is that we want to respect the ability of every culture, every generation of humans, to make their own choices. If we are moral anti-realists, we may want to extend that logic so far that we don't constrain any future people to what are our contingent judgments anyway. If a future generation wants to choose to die or become slaves, so be it. While we can certainly try to force future people to do what we want, regardless of their preferences, for some reason we prefer not to do that, perhaps because it appears wrong to us; though we are moral anti-realists, we still obey our own consciences as though they matter, as though they aren't just contingent themselves.

Humans may someday become useless, no longer needed to run civilization. We could drift toward being useless, with the decisions we make. We might choose convenience and enjoyment over maintaining an interest in human-AI affairs. Then, human-AI culture may drift to the point that AI no longer value us, and we no longer value ourselves. And then we die out, to save resources for the advancement of civilization.

This might not sound bad, if we assume that AI can carry on being conscious without us. But why should we be so sure that AI can be conscious? A materialist might assume that anything that acts like it's conscious, or maybe better, has sufficiently human-like neural structures (or equivalents to neural structures), is conscious. If structures and behaviors like ours are how we know that our peers have minds, if neural structures just are what make us conscious, then we can be pretty sure that AI can be conscious.

However, assuming materialism seems shaky to me. Immaterialism is at least as good as materialism (I think better) as an explanation of what we observe. As an immaterialist, I wouldn't risk torturing an AI, because an AI (as it appeared to me) could happen to have a consciousness attached to it. But I also wouldn't risk "uploading my mind" into a computer and then trusting that that worked, in the sense of creating a conscious copy of me. Maybe the upload wouldn't happen to have a consciousness attached to it.

Attempting to upload humans to computers and then letting the humans die out risks extinction of consciousness. How strictly will we be against taking such a risk when the day comes when it seems technically feasible? That's something that can vary culturally. The author of X-Risk makes out the prevention of extinction to be a serious responsibility which is up to us to take on. But what if our culture drifts so much, particularly through ease, that we just don't care, or don't care enough to hold back from taking the risk?

Another thing to consider is that cultural drift doesn't have to lead to a stark, conscious, official statement of "I am an antinatalist so I support the end of humanity" or "I am gung-ho to be subjugated" or "I am OK with humans dying out since I'm pretty sure computers can be conscious". Rather, for it to be a problem, there only has to be an erosion of people's determination, and an increased tendency for them to be confused. The more people are eroded, feeling little stake in what goes on and little capacity for outrage -- a fiducialism without connection to the best, perhaps -- the less they will be able to resist the small groups of people (or AI) who have other ideas for them. Likewise, if they don't have their minds firmly made up, or don't have the capacity to hold onto their own beliefs, someone or something that wants to manipulate them can do so, bending their minds to "willingly" support it.

--

We might find the project of avoiding drift attractive as a kind of cultural engineering project. If we make it so that people believe what they need to in order to not choose death, without having to ground that guiding belief in truth, we might consider ourselves successful. But there is some danger in us forming a premature consensus that excludes God or perhaps some other important truth. Whatever history-invariant cultural constitution we decide on, we should decide on it because it is true.

We could see this as a project for everyone to work on: finding the truth of what values are really eternal. In the meantime, the government (and the other elites who lead society) would try to facilitate that process, without making up our minds for us.

Is cultural drift likely to be a problem? I'm not sure that it's the most likely problem, but to the extent that any of our problems are parameterized by human motivation, it seems like coming up with a solid, rational, intuitively and emotionally moving thought pattern to motivate action is called for anyway, and a moral realism is the kind of thing that can work in that role. If there is a 10% chance of cultural drift being a potential problem, then someone should care about it. And since culture is in the hands of everyone, it's something everyone can work on. Some other X-risks tend to require specialist training to directly work on their prevention, but not this one.

MSLN Reasons to Oppose X-Risks

10 December 2021: See also Disestablishedness vs. Anti-temptation.

Existential risks (things that wipe out the human species (extinction risks); make it so that future humans can never regain our level of civilization; or cause a sufficiently large amount of suffering, perhaps "forever") are of obvious concern to humanists. If there is no God, then it would seem that humans are responsible for stopping X-risks. We can't count on anyone else helping. (A view like this can be found in X-Risk by Thomas Moynihan.)

Should a theist care about existential risk? I'll consider the MSLN point of view.

One reason existential risks that do not cause extinction are a problem is the suffering they cause. Suffering is bad in itself, but a broken-down civilization may also not be optimal for drawing people to God. One existential risk is a global totalitarian government, which could cause suffering and put people in bondage no less spiritual than political.

A "clean" extinction could also be a problem from God's point of view. We tend to think that there has been such a thing as moral progress from the early days of humanity. Christians think this came through Abraham, his descendants, and Jesus and his followers. Secular people think this came through civilizational development (or something like that). Either way, it takes a lot of work to get where we are now. For some reason, God didn't create us all with a nice, morally-progressed culture. Perhaps there were reasons why he couldn't. So then an extinction either causes him to say "that's it, that's the total number of people who will ever exist" or "now I have to create another universe and live through the process of civilizational/kingdom maturing again". Neither option would be optimal for God, in seeking rest, unless the extinction happened exactly when he would have ended the world anyway.

--

If MSLN were used as the base for a reason (or reasons) to oppose X-risk, it might look something like this:

We have to pay attention to reason, because it enables us to deal with the ultimate risk, punishment from God, which we risk if we harden ourselves by being closed to reason. The voice of God, changing us to become fit for him, can come through inferences, and if we feel like we are sovereigns who can ignore the implications of facts, then when we do so we may be shutting out God. Reason gives us the framework to take everything seriously, and the sense of obligation to facts, so that we don't just disregard them when we feel like it. Through that, we oppose X-risk, even if X-risk is not an obvious problem, visible right in front of us. Through reason, we care about future people, just as through it we care about people living far away.

The preceding paragraph is provisional. I should make a more developed post on the relationship between belief in God and reason.

God wants civilization to be preserved. He suffered a lot to see civilization get this far. Civilization is better than non-civilization for preventing hardening. Or if it's not, civilization is more or less inevitable -- humans seek success, and success adds up to civilization. But alternative civilizations to our own could actually be worse from God's perspective, so it's worth preserving our own, until he chooses to end it. Preserving (and improving) our own civilization, it seems, is the best way to prevent hardening. So people's lives (eternal lives) are at stake in our preserving civilization.

God could always intervene in trying to preserve civilization, and he may be quite active, behind the scenes. But he may not protect us from everything. The genocides of the 20th century weren't what he really wanted, but they happened. But humans might have been able to prevent them, even if he wouldn't or couldn't.

We are motivated by fear (the origin of reason) and also by love (concern for those, including ourselves, who might be hardened in an environment with more inducements to hardening than our own civilization offers).

Tuesday, February 2, 2021

Fiducialism

This was meant to replace Fiducial Utilitarianism. I think it improves it in some ways, but I'm okay with the older version, too. This one only focuses on "fiducialism", while the other one talks more about utilitarianism as well. This booklet may also be helpful to read on the subject.

10 December 2021: See also a reason why fiducialism may be necessary.

"Fiducialism" is my term, borrowing from Joseph Godfrey's (or some source of his's) term "fiducial", which means "pertaining to trust". Fiducialism is when we seek to trust. Hedonism is when we seek positive experiences, but fiducialism is when we seek to be receptive to some kind of positivity ("receptivity to enhancement" is Godfrey's definition of trust, in his Trust of People, Words, and God). I don't remember Godfrey's take, but I think "enhancement" can be defined as broadly as "well-being". If I am receptive to something as though it it is making me better off in some way, whatever definition of "better off" I trust as true, then I am trusting it.

Trust is personal connection. We trust the chairs we sit in, and the ground we walk on. We trust ideas -- even ones that in a sense we distrust. Just by thinking them enough to disagree with them, we trust them. We connect with their reality. We are receptive to what comes our way through these connections.

Some things inhibit trust. An "insult to the organ of trust" ("insult" in a medical sense) can be called a betrayal. A betrayal can hinder a person's ability to trust, either temporarily or permanently. We avoid betrayals so that we can trust more. Inhibiting trust in a betraying thing (a prudent behavior) helps us to trust more in the long run.

Part of trusting more is to trust more and more things. Or, perhaps more precisely, to be capable of trusting more and more things. What's important is to develop the receptivity. It's important to go out and experience the world so that you open yourself up on the inside. But when you have experienced enough, you don't need to keep experiencing things. Then you can rest.

It's important for a fiducialist to trust the set of all things, to have an overall trust. Godfrey calls something like this "security trusting" (trusting what is, over all, to be good in the end for you) and "openness trusting" (you approach reality as something which can give you opportunities). Security trusting and openness trusting are part of everyone's lives to some extent, whether or not they officially acknowledge them or know how they might be rationally justified.

I think that altruists are people who trust, who personally connect with reality other than themselves. Trusting brings life to people. Trusting preserves personality, because it is you, the person, who trust. A hedonist could cease to be a person in that emphatic sense, as they may be passively fed positive experiences. But fiducialists themselves trust whatever experience they have, to the full extent that it is trustworthy.

We need to open up to this or that thing, but one reality we need to open up to is "the best". Strangely, we don't have a taste or a stomach for the best, at least, not always developed to its full extent. So to be a fiducialist, we have to seek what is highest, deepest, and truest. A fiducialist can't avoid what is best, hoping to only trust the easier things. Trusting can be difficult and costly.

The very idea of maximization, at the heart of utilitarianism, can come out of a receptivity to "the best".

Fiducialism could be considered a definition of instrumental rationality, just like hedonism. It inherently disposes us to connect with the truth, which is like "the best", or is part of "the best". So it is a good grounding for epistemic rationality. Hedonism allows us to go into the experience machine, but fiducialism wants us to be open to what exists. It does not force us to connect with all facts, but with just as many facts as it takes for us to connect with what is deepest, truest, and highest. Fiducialists who do not seek out all facts are still receptive to them.

Fiducialism is disposed to value all things except to the extent that they betray. The process of growing in fiducialism sometimes requires betrayals, breakings-open. We can have a false wholeness that needs to be broken. Fiducialism wants us to be disposed to value all the things that matter, to have a complete picture. To the extent that we have the occasion or opportunity, we will value, connect with, all things.