These are notes on my readings on the Long Reflection, except for the two books, On the Genealogy of Morality by Friedrich Nietzsche and Teaching Children to Care by Ruth Sidney Charney (those two links are to their reviews).
--
Thinking Complete (Richard Ngo) "Making
decisions under multiple worldviews"
[I decided to come back to this one later and restart reading it.]
--
Felix Stocker Reflecting on the Long
Reflection
I find myself persuaded by this (although I'm not a tough audience) that the Long Reflection is not a practical pursuit if it is a top-down, discrete era of human history. That is, if it's something we impose on everyone, then we are doing so against someone's will, and they may defect, breaking the discrete era. I am also (easily) persuaded by Stocker's objections that people will desire technology that the Long Reflection would try to hold back, and that stopping human technological desire risks creating an S-risk of its own, the global hegemon. But I do think that reflecting on what the best values are, and seeking to influence and be influenced by everyone in order to create (ideally) some kind of harmony in human values (or a reduction of disharmony that allows for a more ideal "liberal" solution to the coordination questions the Long Reflection is trying to answer), is something that can be ongoing. I would call this "cultural altruism", or a subset of cultural altruism. Much of what Ord and MacAskill would want could be pursued in a bottom-up, intermingled way and avoid some (or all?) of Stocker's objections.
--
Paul Christiano Decoupling
deliberation from competition
Christiano makes the point that deliberation can be infected by
competition. This would affect a bottom-up cultural altruism scene.
However, I hope that a social scene can absorb a certain amount of competitiveness without being harmed by it. For instance, when we try
to find truth, we (society) sometimes hire lawyers to argue each side of a
case and then we listen to what they say. Innovators in thinking
may be motivated by competition, but as long as they are also
evaluators (are both "soldiers" and "scouts"), or enough people who
have power are "scouts", the competition only serves to provide
ideas (or bring to light evidence) to select from, which is a good
thing when you are trying to find the overall truth. When competitive
people shut other people up, or have state-level or "megacorp-level"
mind control / propaganda powers, then competition is bad for deliberation.
But humans competing with and listening to humans on a human scale is
good for deliberation. "All" we have to do is keep states and corporations
from being too powerful.
I imagine cultural altruism being something like "status quo
truth-finding but X% more effective". Our current truth-finding culture
is (from one perspective) pretty good at bringing about truth, or at
least, truths. Look how many we've accumulated. (Maybe where it needs to be better is in uniting those truths into a whole. And maybe we should think about how to protect it.)
I don't think I'm talking about the same thing Christiano is. I think
he's talking about how AI teams can deliberate despite race dynamics,
or something like that. Whereas what I imagine is everybody (all humans,
more or less) interacting with each other without real time pressure.
But it's interesting to ask: where exactly is the distinction between Christiano's part of culture and the rest of culture? Isn't cultural work being done by Christiano's fairly pragmatic, craft-shaped tactics for fostering deliberation despite race dynamics, work that would perhaps affect human culture in general (more my concern)? Isn't pragmatic, resource- and time-constrained life where values come from? Christiano's situation is just another of many human situations.
In the section "A double-edged sword", Christiano talks about the
practical benefits of competition to weed out bad deliberators (their
influence, not them as persons). I suppose this feels realistic to him. I feel (maybe naively) that ideal deliberators would stop fearing each other and simply fear the truth. If lives are at stake, then because ideal deliberators index themselves to the saving of lives (or whatever is of highest value), they would naturally do their best work, and, where this can be known, defer to people who know better. But
Christiano has lived in his part of the real world, where people are
resource- and time-constrained, and he thinks, implicitly or not, that it generally has to be competition, not an innate indexing-to-reality, that gets the job done of communicating reality to people. I assume (if
he really does believe that innate indexing-to-reality is not an
option, or hasn't thought of it) that his beliefs in some necessity
or desirability of competition are connected with his limited personal
experience. Christiano may not see the possibility that people can
be ideal deliberators, or that a culture of ideal deliberation could
be fostered, given enough time. (His context, again, seems to be of
specific, relatively near-term situations.)
Maybe if people are mistaken about their own competence in judging whether to defer, that would be one reason why there would need to be some outside actor who pushed them to relate to the truth better, a problem that can never be fixed from the inside. (Would people in such a context be "competed away"
by a wild and more or less impersonal social force ("competition"),
or would there be a person who could tell them they were wrong,
who knew how to talk to them and who could at least consciously
attempt to make themselves trustworthy to the "deluded one"? Perhaps
for many of us it is more bearable to be ruined by "competition"
than to be corrected by a person we know. Of course, it is always
possible that the people who correct us are themselves wrong. Maybe
that's the appeal of competition, that in some sense it can't be
wrong -- if you're not fit, you're not fit. But then competition
itself distorts reality, at least through "Moloch".)
--
Finished the main article, now reading the comments (what's there
as of 22 August 2022).
Wei Dai makes the point that without competition, a culture's norms can randomly drift. (I would add:) this is sort of like how in Nineteen Eighty-Four, once war goes away, totalitarian states can "make" 2 + 2 = 5. I've thought there could be problems with digital humans coming up with beliefs like 2 + 2 = 5.
But at the same time, Moloch distorts our thinking and our lives
as well. So it seems like we're doomed either way to someday not
living in reality.
However, believing that 2 + 2 = 5 is physically difficult.
Probably because of how the human brain is wired -- and we can
change that. But either the human brain is in tune with the
truth (more or less; enough to found reason) or it's not, and
it always has, or hasn't, been. If it's not, then why worry
about deliberation going well, or being in tune with reality?
We never had a chance, and our current sense of what is rational
isn't valid anyway, or we don't have a strong reason to believe
that it is. But if it is, then the solution is just to keep people's brains roughly as they have always been, and use that as the gold standard for the foundations of rationality (at least for the elements that are, or are more or less like, axioms, which are the easy, basic elements of rationality, even if building sufficiently complex thought-structures could go beyond human capabilities).
If it is the case that our innate thinking is not in tune
with reality (on the level of the foundational axioms of reason),
can we know what to do? Maybe not, and if not, then we have no
guidance from the possible world in which our innate thinking
is invalid. So if we are uncertain between that scenario and
the one where it is valid (or valid-enough), then since the
valid-scenario's recommendations might have some connection
with reality, we should follow them.
It does seem odd to me that I should so neatly argue for
the status quo, and that the human brain (or, I would say,
human thinking, feeling, intuiting, perceiving, etc. nature,
of which the brain is a phenomenal manifestation) should be
the gold standard of how we know. Can't we be fallible? It
makes perfect sense that we could be. But practically speaking,
we're stuck in our own world, and lost if we leave it.
(This seems like a bit of a new view for me, so I should
think about it some more.)
--
Wei Dai says, later on
--Currently, people's thinking and speech are in large part
ultimately motivated by the need to signal intelligence[link],
loyalty, wealth[link], or other "positive" attributes[link], which help
to increase one's social status and career prospects, and attract
allies and mates, which are of course hugely important forms of
resources, and some of the main objects of competition among
humans.--
I'm not sure if this is how things seem to people subjectively, or whether they instead feel (or are) motivated by love for their family and friends, or some higher good. They have to work for
resources due to scarcity, and because if they don't, they won't
be able to live or provide for the people they love. Maybe it
is the case that even love is something that is really "ultimately"
motivated by resource acquisition? If a person is aware of this,
can they willfully choose love (or value, or rationality) against
resource acquisition? Probably they can. (Rationalists can choose
against their biases, so why couldn't other people make as strong
a choice?) We might suppose that most people are stuck in survival
mode, or don't think much further than just their immediate friends
and family. But maybe that's an artifact of scarcity, ambient
culture, and them not being educated to see the bigger picture.
If you think that everything is about resource acquisition,
that is what the world will be. If you think everything is about
love / truth / valuing, etc., that is what the world will be.
Some people have to face the world as it currently is, and it
bends their thinking toward short-term, strategic, self-interested,
competitive, resource-scarce, resource-hungry thinking. But some
people are free from that, whether through temperament or life
situation (perhaps they are too "untalented" to be able to do anything
practical in the world as it is, and can only work on the world
as it should be). These are the people who can and should lead the way in deliberation, in that their minds are actually capable of deliberation. In areas of deliberation, the practical elites
should be inclined to defer to them.
I checked the links in Wei Dai's comment (quoted above). They
were about how unconscious drives (especially including the ones
that drive signaling) really control people. I am subject to such
drives all the time. But do they really matter in the long run?
I am able to pursue what I choose to pursue. Perhaps my drive to
seek a mate gives me the energy to seek a spouse -- and all that
comes along with it, including new non-romantic interests, and
a new perspective on who exists in the world. I get to choose
which traits I find desirable in a spouse, even if the drive is
not chosen. Or, if those have to "pay rent" by giving me the
prospect of status, I get to choose, between the different sources
of status that are roughly equal in expected yield, which of them
I pursue. I can be intentional and conscious on the margin, and steer the vehicle of my psychological machinery in the direction that I want to go. The whole concept of "overcoming bias" and being
rationalist doesn't make sense if this isn't possible, and I don't
see why that level of intentionality is, or could only be,
confined to a tiny subculture (tiny by global population standards).
I think that short-term, competitive, resource-hungry, etc. thinking
is like that evolutionarily-driven unconscious-drives side of being
human, and the truly deliberative is like, or in some sense is the
same as, the intentional, subjective, conscious, rational side.
I am suspicious that the unconscious mind doesn't even exist.
Where would such a mind reside, if not in some other mind's consciousness?
Can willing really come from anything other than an existing being,
and can an existing being be anything other than conscious? I am
skeptical that there is a world other than the conscious world (more
than skeptical, but for the sake of argument, I would only suggest
skepticism to my imagined reader here). Given this skepticism, we should
be concerned that we are being trolled by evil spirits, or, more
optimistically, are being led by wiser and better spirits than we are.
Which side wins when we see things in a cynical or mechanistic way?
I feel like cynicism and mechanistic thinking make me less intentional
and more fatalistic, more likely to give in to my impulses and
programming. Since my intentions seem to line up (at least
directionally) with what wiser and better spirits would want, I should
protect my intention and strengthen it, and see the possibility of
free will, and be idealistic.
I suppose a (partial) summary of the above would be to say "deliberative
people should be idealistic, conscious, believe in consciousness,
despite 'the way the world works'". Maybe the Long Reflection (or
cultural altruism) is concerned with determining what really should be,
and some other groups or processes are needed to determine what can be,
in the world that we observe and have to live in up close.
I think the New Wine worldview is one that inclines people toward
being cultural altruists, and less so toward being EAs or the like,
because it has a sense that the absolute best is the absolute minimum
[in the sense that if you attain the absolute best on the New Wine
account, you have only attained the bare minimum] and that there is
a long time to pursue it, and that physical death ("the first death")
is not as significant.
--
Cold Takes (Holden Karnofsky) Futureproof
Ethics:
Karnofsky says --our ethical intuitions are sometimes "good" but sometimes
"distorted." Distortions might include:
* When our ethics are pulled toward what's convenient for us to believe.
For example, that one's own nation/race/sex is superior to others, and that
others' interests can therefore be ignored or dismissed.--
Is it a distortion for our ethics to be pulled toward what is convenient
for us to believe? Why does Karnofsky think that's true? I agree with
Karnofsky on this thought (with some reservations, but substantially), but
even if everyone did, why would that mean that we had found the truth?
(I think a proxy for "I am speaking the truth" is "I am saying something
that nobody in my social circle will disagree with" -- but it's an imperfect
proxy.) Can Karnofsky root his preference in reason? I think that the
truth is known by God, and sometimes thinking convenient ways will lead us
toward believing what God believes, but sometimes it leads away. God is the
standard of truth because he is the root standard of everything. So
there is something "out there" which too much convenient thinking will
take a person away from. Is there anything "out there" for Karnofsky's
thinking to be closer or further from, due to distorted thinking? If
not, does it make sense to call the distortions "distortions", or rather,
"undesired changes"? (But without the loading we put on "undesired" to
mean "objectively bad".)
Karnofsky clarifies a bit with --It's very debatable what it means
for an ethical view to be "not distorted." Some people ("moral realists")
believe that there are literal ethical "truths," while others (what I
might call "moral quasi-realists," including myself) believe that we are
simply trying to find patterns in what ethical principles we would
embrace if we were more thoughtful, informed, etc.[link]--
I should check the link when I have time and come back [later: I did
and didn't feel like it changed anything for me], but what I read in that
quote is something like "Some people are moral realists, but I'm not.
I'm a moral quasi-realist. I look for patterns in what ethical principles
we would embrace if we were more thoughtful, informed, etc. Because
thoughtfulness, informedness, etc. is a guide to how we ought to behave.
It rightly guides us to the truth, and being rightly guided toward the
truth is what we ought to be. Maybe it helps us survive, and surviving
is what we ought to do." Which sounds like Karnofsky believes in an
ethical truth, but for some reason he doesn't want to call himself a
moral realist. Maybe being a moral realist involves "biting some
bullets" that he doesn't want to "bite"?
[That characterization sounds unfair. Can't I take Karnofsky at his
word? I think what makes me feel like he's doing something like using
smoke and mirrors is that the whole subject of morality is pointless
unless it compels behavior. Morality is when we come to see or feel
that something ought to be done, and ideally (from the perspective
of the moral idea) do it. So if Karnofsky ends up seeing and feeling
that things ought to be done, or intends for others to see or feel
that things ought to be done, even if it doesn't make sense to say that
"ought" exists from his official worldview, then he's being moral,
and relying on the truth of morality to motivate himself and other
people. "Thoughtful" and "informed" are loaded in our society as
being "trustworthy", so they do moral work without having to say
explicitly "this is what you ought to do". So Karnofsky gets the
motivational power of morality while still denying that it exists
beyond some interesting patterns in psychology. I guess if he's
really consistent in saying that he's just looking at patterns of
thinking that emerge from "thoughtfulness and informedness", and
"thoughtfulness and informedness" have no inherent moral recommending
power, then he should say "hey, I'm saying a lot of words here which
might cause you to think things, feel things, and do things, but
actually, none of them matter and they have no reason to affect you
that deeply. In fact, nothing can matter, because if it did, it would
create morality -- what matters should be protected, or guarded against,
or something -- and morality is just patterns of what we would believe
if we were thoughtful and informed, which themselves have no power to
recommend or compel behavior". Does Karnofsky really want to be seen
as someone whose words do not need to be heeded?]
[This is quickly written and I have not read in depth what Karnofsky
thinks about moral quasi-realism, which I'm guessing might be sort of
the same as Lukas Gloor's anti-realism? I did read Gloor's
moral anti-realism sequence
(or at least the older posts, written before 2022). With Gloor's
position, I also got the feeling of smoke and mirrors.]
--
Karnofsky summarizing John Harsanyi:
--Let's start with a basic, appealing-seeming principle for ethics: that
it should be other-centered.--
Why should that be a foundation of ethics? It's merely "basic" and
"appealing-seeming". It certainly is more popular than egoism -- or
maybe, given our revealed preferences, egoism is a very popular moral
foundation. Maybe egoism and altruism are supposed to compete with
each other -- that looks like what we actually choose, minus a few
exceptional individuals. Nietzsche wrote a number of books arguing
in favor of egoism [as superior to altruism, as far as I could tell],
and I can think of two other egoist thinkers (Stirner (I've read his
The Ego and His Own) and Rand (whom I have not read but have
heard of)). Are they "not even wrong", or do they have to be dealt
with? Supposedly futureproof ethics is about what you would believe
if you had more reflection. Maybe if you're part of the 99%, the more
you reflect, the more you feel like a democratic-leaning thing like
utilitarianism is a good thing. But if you're part of the 1%, and
you're aware of Nietzsche's philosophy, maybe the more you reflect,
the more true it seems that the strong should master the weak,
based on the objective fact that the strong are stronger and
power by its very nature takes power. There is a certain
simplicity to those beliefs. So then will there be a democratic
morality and an aristocratic one, both the outcome of greater
reflection? Or maybe an AI reflects centuries per second on the
question, and comes up with a Nietzschean conclusion. Is the AI
wrong?
Personally, I lean utilitarian (at this point in my life) because
I believe that God loves each person, by virtue of him valuing
everything that is valuable. Everything that exists is valuable,
and whatever can exist forever should. [Some beings turn out not to
be able to exist forever, by their choice, not God's.] He experiences
the loss of all lost value, and so does not want any to be lost. We
are all created with the potential to be saved forever. So there is
a field of altruism with respect to all persons. Perhaps animals (and future AI) really are (or will be) personal beings in some sense, whom God also values and relates to universally.
[Utilitarianism is about the benefit of the whole, tends toward
impartiality, and is based on aggregation. God relates to each person,
which accomplishes what aggregation sets out to do, bringing everything
into one reality. God tends toward impartiality, and works for his
personal interest, the whole.]
--
Karnofsky talks about how
--The strange conclusions [brought about by utilitarianism + sentientism]
feel uncomfortable, but when I try to examine why they feel
uncomfortable, I worry that a lot of my reasons just come down to
"avoiding weirdness" or "hesitating to care a great deal about creatures
very different from me and my social peers." These are exactly the
sorts of thoughts I'm trying to get away from, if I want to be ahead of
the curve on ethics.--
However, the discomfort we feel from "strange conclusions" could
also be us connecting to some sense that "there's something more than
this". I remember the famous Yudkowsky quote ([which he] borrowed from someone
else, whom I should look up when I have time) of something like
"That which can be destroyed by the truth should be". But the
reality for us, if we are the destroyers, is that in effect it is "Whatever can be destroyed by the truth as I currently understand it, should be". So if we destroy our passage to whatever our intuitions of diffidence were trying to tell us, perhaps by erasing those intuitions, we may have destroyed some truth by committing to what we think must be true, counter-intuitively true. When our intuitions revolt, we should probably hold out for some other truth, because they might be saying something.
[The quote seems to originate with
P. C. Hodgell]
I believe that eternal salvation dominates all other ethical
concerns, as a matter of course. Unbearable suffering in itself
is bad because God has to experience it, and it is for him what
it is for any other being: unbearable. What God, the standard,
finds unbearable, will be rejected by him, and what is rejected
by the standard is illegitimate. We should be on the side of
reducing unbearable suffering. If we are, then we are more
in tune with God and thus more fit for eternal life. I would
agree with Karnofsky in the goal of ending factory farming,
although it's not my highest priority. But, I think, from my point of view, it's valuable to look with some suspicion at Karnofsky's worldview, the one which so strongly and counter-intuitively urges us that "the thing that matters is the suffering of sentient beings". Strong moral content says "this is
'The Answer'", but to have "The Answer" too soon, before you have
really found the real answer, is dangerous. I don't think anyone is
trying to scam me by presenting that urgent psychological thing
to me, but I think it could be a scam in effect if it distracts me
from the ways in which our eternal salvation and our relationships
with God are at stake, and really matter the most.
[I suppose I'm saying that the theistic worldview is more satisfying
to hold in one's head; satisfies, more or less, Karnofsky's concerns
with animals; and would be missed if I said "okay, utilitarianism +
sentientism must be right no matter what", so that I go against my
intuitions of discomfort, even ones which might somehow intuit that
there should be a better worldview out there.]
When people are forceful with you and try to override your
intuitions, that's a major red flag. Although counter-intuitive
truths may exist, we should be cautious with things that
try to override our intuitions. In fact, things that are too
counter-intuitive simply can't be believed -- we have no choice
but to see them as false. This is the foundation of how we go
about reasoning.
--
Should I feel confident that I have futureproof ethics? No,
I guess not. I do think that according to my own beliefs,
it's clear that I could, if I were only consistent with my
beliefs. But my beliefs could be wrong. I don't know whether they are, and currently can't know. This goes for Karnofsky as well. The best you can do is approach the question with your whole heart, mind, soul, and strength, and be open to revision. Maybe then you can come to hold better beliefs within your lifetime, which is the best you can do.
--
Cold Takes (Holden Karnofsky) "Defending
One-Dimensional Ethics"
As I read this, I think that this post may be mostly not
on the topic of the Long Reflection.
However, since I'm reading it, I will say that in Karnofsky's "would you choose a world in which 100 million people get a day at the beach if that meant 1 person died a tragic death?" scenario, if someone asked me "do you want to go to the beach if there's some chance that doing so caused someone to die a tragic death?", it would make me question how necessary the pleasure of the beach was to me. If there were 100 million people like me on the beach, and we all somehow knew without a doubt that if we stayed on the beach one person would die a tragic death, and we all thought the same way, we would all get off the beach. How could pleasure seem worth anything compared to someone else's life?
Arguably, in real life, 100 million beach afternoons make us
all so much more effective at life that many more lives are
saved by our recreation. But I don't think that's the thought
experiment.
Does my intuition pass the "veil of ignorance" test? If I don't know who I'm going to be, would I rather be the person who went to the beach and, all else being equal, somehow bore a one-hundred-millionth share of someone else's death, or would I rather save the one person? What's so great about the beach? It's just some nice-sounding
waves and a breeze. Maybe as a San Diegan, I've had my
fill of beach and a different analogy would work better.
Let's say I could go hear a Bach concert. Well, Bach is
just a bunch of nice notes. I like Bach, and have listened
to him on and off since I was a teenager. He is the artist
I am most interested in right now, someone whose concert I
would want to attend. (I'm not just using him as a
"canonical example".) But, Bach is just a bunch of nice
notes, after all.
I find that the thought
of someone not dying is refreshing, in a way that Bach isn't.
I can't say I have no natural appetite for the non-ethical,
which I may have to address somehow, but it's not clear to
me that producing a lot of "non-ethical" value (if that makes
sense) is easily comparable to producing "ethical" value.
We are delighted with things and experiences when we are
children, but when we see things through the frame of reality,
lives are what count.
[By "lives" I mean something like "people", and people
exist when they are alive. (And I think that non-humans can
matter, as well, as people, although I'm not sure I've thought
through that issue in enough depth.)]
Now, those are my appetites, and thus, I guess, my preferences in some sense. But what does that have to do with moral reality?
I guess one way to look at morality is that it's really just
a complicated way to coordinate preferences, and there is no
real "ought" to the matter. So then it would make sense to
perform thought experiments like the veil of ignorance. But
as a moral realist (a theistic moral realist), I believe that
my "life-over-experience-and-things" intuition lines up with
what I think God would want, which is for his children to live.
Their things and experiences are trivial for him to recreate,
but their hearts and thus their lives are not. God simply
is the moral truth, a person who is the moral truth, and what
he really wants necessarily is what is valuable.
--
jasoncrawford's EA Forum post What does
moral progress consist of?
I chose this post for this reading list hoping that the title indicated it would be an examination or questioning of the very concept of moral progress. I wouldn't have chosen it if I had read it first, but now that I think about it, maybe I can make something of it.
I guess the part about how Enlightenment values and liberalism
are necessary for progress (of any sort), might mean that somehow we
would need the Enlightenment baked into any Long Reflection, as the Long
Reflection is an attempt at moral progress (seeking better values).
Perhaps looking at values as an object of thought comes out of the
Enlightenment, historically at least? Or the idea of progress (perhaps)
was "invented" in the Enlightenment, and can only make sense given
Enlightenment ideas, like reason and liberalism? I can tentatively
say that I'm okay with the idea that Enlightenment influence is necessary
for progress, and that I'm in favor of progress, if I can mix other things
with the Enlightenment, like deeply theistic values. And, I think that
any other stakeholder in world values who is not secular, would want
that or something equivalent.
(I'm not sure I can endorse or reject the claim that the Enlightenment
could be an essential part of progress, given what I know.)
--
rosehadshar on EA Forum How moral progress happens:
the decline of footbinding as a case study
What I will try to use from this post is the idea that moral progress
comes through both economic incentives changing, and people deliberately
engaging in campaigns to change behaviors and norms.
The Long Reflection, I would guess, will not occur in isolation from
culture. If it proceeds according to my assumption that it is done both
rationally and intuitively by all people, and not just rationally by a
cadre of philosophers, then campaigns of moral progress will be part of the
"computation" of the Long Reflection. All those people adopting the
apparently morally superior values would be the human race deciding that
certain moral values were better than others, offering their testimony in
favor of the new values, thus (at least partially) validating them, just
as the cadre of philosophers, when they agree on premises, all testify
to the values that follow from those premises.
Economic changes affect how people approach reality on the level of
trusting and valuing. I would guess that in cultures with material
scarcity and political disestablishedness, people would have a stronger
feeling of necessity -- thus, more of a sense of meaning, and less of
a sense of generosity. And the reverse would be true of cultures as they come to have less material scarcity and more political establishedness.
It might be very difficult to preserve a sense of necessity in a
post-scarcity future, and this would affect everyone, except maybe those
who deliberately rejected post-scarcity. A lack of meaning, if taken far enough, leads to nihilism, or, if it doesn't go quite that far, to "pale, washed-out" values. Perhaps these would be the values
naturally chosen by us after 10,000 post-ASI years. [The 10,000 years
we might spend in the Long Reflection.] But just because
we naturally would choose weak values, doesn't mean weak values, or a
weakness in holding values, is transcendentally right. What if our
scarcity-afflicted ancestors were more in tune with reality than our
post-scarcity descendants (or than us, where we are with less scarcity
but still some)? Can we rule out a priori that scarcity values are
better than post-scarcity values? I'm guessing no. What we think
is "right" or "progressive" might really just be the way economic
situations have biased us. It could be the case that meaning and
selfishness are transcendentally right and our economic situation
pries us away from those values, deceiving us. Thus, for a really
fair Long Reflection, we have to keep around, and join in, societies
steeped in scarcity.
So can we really have moral progress, or is it just that biases
change in a somewhat regular, long-term way, such that if we are
biased to the current moral bias-set, we see the intensification
of it as progress?
A cadre of philosophers will be biased by their economic and other experiential upbringing. The cadre may have either watched, or been formed secondhand by, TV and movies (or in the future, VR equivalents?) which are based in blowing people's minds. (By secondhand, I mean exposure to such artifacts through the cultural atmosphere shaped by those who did watch them.)
brain when you watch such mind-blowing movies as The Matrix and
Fight Club, and that blown-open, dazzled, perhaps damaged
mind (which might still be clever, but which loses its sense that
there is such a thing as truth that matters) perhaps remains with
people their whole lives. I suppose having written this, now people
could try to raise a subculture of Long Reflection philosophers,
who have not been shaped by TV, movies, or VR -- only books. But
books condition people as well. In fact, philosophical reflection
conditions people, makes them "philosophical" about things.
Being in physical settings shapes a person. Driving a car is about
taking risks and acting in time. Taking public transit is about those
things too, but more so about waiting and sitting. Being in VR spaces
could be about personal empowerment, flying like a bird, wonder and
pleasure (I'm assuming that VR systems won't have any bizarre and
terrifying glitches).
Ideally philosophy is pure truth -- but what is philosophy?
Is philosophy a "left-brained" thing? Is the truth only known that way?
Or is it a "right-brained" thing as well? If we are all raised somewhat
similarly, we might all agree on a definition of philosophy, as, perhaps
a more left-brained thing (although our premises come from intuitions,
often enough). But why should we all have been raised the same way?
--
Thinking Complete (Richard Ngo) Making
decisions under multiple worldviews ("for real" this time)
I read this, but at this point, with the level of focus I can give,
I can't go in depth on it. But it does seem to be something that some people
interested in the Long Reflection should read (unless something
supersedes it?). It's about what to do when you can't merge everyone's
worldview into one worldview, but you still have to come up with
a decision. I think it significantly possible that the Long Reflection will reach a stalemate and civilization will still have to make the decisions that the Long Reflection was supposed to help us make. While epistemic work can resolve some issues (get people on the same page / show armchair Long Reflection philosophers more evidence as to what really matters), I'm not very optimistic that it will make it all the way to unity, and we will still have to decide collectively.
--
Thinking Complete (Richard Ngo) Which
values are stable under ontology shifts?
This is an interesting post, and perhaps three months ago, I would
have written a post on this blog responding to it more in depth. It is
relevant to the Long Reflection, I suppose, by saying that values may not
survive changes in "ontologies" (our understanding of what things are or
how they work?), and may end up seeming foreign to us.
(One thought: what is it about the new ontology that is supposed to
change my mind? I would guess, some form of reason. Why should I care
about reason? Why not just keep my original way of thinking? Or -- is
reason the base of reality, or is it rather experience, or the experiences
that a person has? My experience of happiness, and myself, are rude facts,
which reason must defer to. I can find things to be valuable just because
I do, and I want to. (Maybe the best argument against my "rude fact sense
of happiness" being valid is someone else's "rude fact of unhappiness"
caused by that happiness of mine.) This is something like the distinction between the "ordinary" and the "ontological".)
[I can value whatever I want, regardless of what people say reality is,
because base reality is me and my experiences, the cup I drink from that
was sitting on the table next to me, my own history and personal plans.
Sure, people can tell me stories about where my desires came from (evolution,
of course), or about how I am not as much myself because my personal
identity technically doesn't exist if I follow some argument. But my desires
and my personal identity exist right here in the moment, as rude facts,
rude enough to ignore reason, and they are the base on which reason rests,
after all.]
[These rude facts put a damper on reason's ability to change our values,
at least, they protect each of our unique persons, our thickness as
personal beings, as well as the objects of immediate experience and
consciousness itself. But reason can persuade us to see reality in
different ways. Perhaps it can help us to see things we never saw before,
which become new parts of our experience, just as undeniable as the
cool water flowing in us after we have drunk it. Reason can show us
the truth, sometimes, but there are limits to reason, and ultimately reality is personal beings experiencing.]