See also the postview of this reading list.
I want to think more about the Long Reflection. As I understand it,
it is a search for the truth about moral values, a long process of
reflective reason that seeks some kind of coordinating value set for
our species as a whole.
So I want to do some readings related to this. These readings have
been chosen somewhat unsystematically. Given my resources at this time,
this project will have to be provisional.
--
Here are the things I want to read or re-read:
The parts of The Precipice (by Toby Ord) that refer to
the Long Reflection. This gives a basic definition of the Long
Reflection. [My copy is the hardbound edition / ISBN 978-1-5266-0021-9]
I decided that the discussion in The Precipice of the Long
Reflection, which is brief, belongs in this preview, because it helps
validate whether my concept of what the Long Reflection is (in the
sense of a basic definition of the phrase) is in line with Ord's,
which I'll take as either "the established view" on the Long Reflection
or "likely enough basically the established view". Michael Aird, in a
reading list on the EA Forum doesn't list a lot of
other sources. Looking at the "long reflection" tag,
I see that I have
read all the other articles (few in number) that are in the category
(as of 22 August 2022) except Gloor's and Vinding's,
and I just read the short one about Robin Hanson's views on the Long Reflection.
I have listened to a majority of the podcast episodes
on Aird's list, and my sieve-like memory can't support a claim that the
guests never said anything to the contrary, but I feel like if they had
radically disagreed, I might remember that. (Similarly with the articles
I read.) [I see in Hanson's post a link to another critical article on
the Long Reflection by Felix Stocker. I
may include Hanson and Stocker in this reading list, as well as Vinding, and
maybe Gloor if I have time. Also through the
EA Forum tag page, I see this post by Paul Christiano.
There may be other articles not directly accessible from the EA Forum, but
for a provisional post like this, I think I have done enough research.]
// The Precipice notes are in blockquotes.
A list of blog posts and EA Forum posts about normative uncertainty,
the future of ethics, and moral progress:
Cold Takes (Holden Karnofsky): Defending One-Dimensional
Ethics, Futureproof Ethics.
How moral progress happens: the decline of footbinding
as a case study - EA Forum post by rosehadshar.
What does moral progress consist of? - EA Forum post
by jasoncrawford.
Thinking Complete (Richard Ngo): Making decisions using
multiple worldviews, Which values are stable under
ontology shifts?
Teaching Children to Care (Ruth Sidney Charney): Where
does morality come from? Maybe from teaching children to care
as part of managing classroom behavior.
On the Genealogy of Morality (Friedrich Nietzsche): Where
does morality come from? (Whatever Nietzsche says.)
--
Here are some preliminary thoughts that I have about the Long
Reflection.
Some problems I see with the Long Reflection:
I have supposed that it is a search for something that can coordinate
action as a society. That means that it must be some way for all / most
of us to come to the same views. I see two ways to do this. One is
to find the "epistemic answer" (good reasoning) which will convince
everyone / most people of the truth. The other is for everyone to agree
on a pragmatic society-wide behavior, despite the fact that people
disagree.
Is it really the search for something that can coordinate action as
a society? Ord says (p. 191 / ch. 7 "Grand Strategy for Humanity" section):
--If we achieve existential security, we will have room to breathe. With
humanity's longterm potential secured, we will be past the Precipice, free
to contemplate the range of futures that lie open before us. And we will
be able to take our time to reflect on what we truly desire; upon which
of these visions for humanity would be the best realisation of our potential.
We shall call this the Long Reflection--
Who is "we"? "We" is a plural word which means that there is some kind
of collective thinking to produce collective action. I guess from this
Ord might not think that we all have to be part of the same collective,
but he might.
Later, same page, continuing to p. 192:
--The ultimate aim of the Long Reflection would be to achieve a final
answer to the question of which is the best kind of future for humanity.
This may be the true answer (if truth is applicable to moral questions)
or failing that, the answer we would converge to under an ideal process
of reflection. It may be that even convergence is impossible, with some
disputes or uncertainties that are beyond the power of reason to resolve.
If so, our aim would be to find the future that gave the best possible
conciliation between the remaining perspectives.[10]--
In the note at the end of that passage ("[10]"), Ord discusses deal-making
between the different worldviews, dividing up the galaxies between
them. The "we" in question is some kind of species-wide (plus AI?)
elite, maybe? And the dividing up of galaxies is coordinated action.
My verdict, for now, is that the Long Reflection is conceived of as
a way to coordinate action on a very broad ("species-wide+") scale.
I think the question of "how much would everyone agree with the Long
Reflection" -- both as a process of discovery and as the implementation
of the fruits of that discovery -- is open. It could easily be answered
by: "unless measures are taken against it, the Long Reflection will
leave a lot of people out, while the pro-LR people think they are doing
the best thing for all sentience-kind". For instance, if the pro-LR
people (the kind of people who would show up to the discussion) come,
through their lengthy reflection, to see that people who disagree can
only do so through lack of reflection, they may not have sympathy for
the views of those unreflective people, and thus not have full sympathy
for the people themselves.
In other words, taking reflectiveness as the main criterion for
finding values of worth is a major assumption, one which can exclude
the values of less-reflective people.
Ord brings up the parallel to the Renaissance in note 12 of Ch. 7.
Most people weren't involved in the Renaissance, but that's how we
remember that time period. So, did humanity decide, collectively, to
go through the Renaissance? Or was it a (semi-intentional) band of
artists, philosophers, etc. who made it happen? If you asked a random
sample of people at the time "Do you want to see X, Y, Z result of the
Renaissance?" what percentage would say "yes" vs. "no"? Was the
Renaissance humanity's decision, or did we sort of stumble into it?
The Long Reflection is trying to be more deliberate, but is it humanity
(or humanity+, to include AI or even animals in the process)
making the decision, or is it just a subset of humanity(+)?
--
Semi-related: Is the Long Reflection a case of moral progress? We
come to know the better views after "reflection"? Moral progress as
a pursuit only makes sense if we know what better morals are supposed
to look like, which means we have to have some minimalist a priori
moral knowledge. Otherwise we have moral motion, but we have
no idea if we have moral progress. I wonder what assumptions
are baked into "reflection", which might give people a sense that
they are value-neutrally evaluating moral change in such a way that
they can somehow know that there has been or will be moral progress.
The problem with finding the epistemic answer is that we can't
even know how to go about finding it. For instance, I personally
believe that reason is the interrelationship of every piece of
evidence, however gathered. To me, intuitions and things seen noetically
or imaginally of all sorts are evidence. For another group of people,
while intuitions are supposed to have something to do with reality,
sense perceptions are weighted much more. Also, there are people
who seemingly reject the idea that evidence has to come together in
one whole. To them, it's fine for there to be a wider body of evidence, and
then some other idea or experience that overrides everything else.
On that view, we can discount evidence and reason as a whole because
we have some overriding intuitive knowledge.
Who is right about the nature of reason? How do we know what
the right foundation of reason is? I don't know that there is
a way out of that dispute. But there might be. I feel like,
despite all the skepticism that one could muster, we do have
direct, unmediated access to the truth through personal experience.
Each of us can be a witness to what we observe.
I think that if it is possible to come to know the truth, it
partly comes from acquiring the right intuitions about how to
approach reality, including how to be rational. Where do these
intuitions come from? One source is through interpersonal
relating. Each of the witnesses relates to each other,
establishing their trustworthiness with each other both explicitly /
intentionally and subtextually / by osmosis. So the Long Reflection
can't be just an armchair exercise, but also one of relating to
other people. How can you know that a Stone Age tribe has the
wrong worldview (the wrong implicit attitudes toward reason),
without at least talking to them and really seeing things the
way they do? That worldview shapes their moral values.
Maybe this means that the Long Reflection will have to try
to resurrect extinct Stone Age cultures, foster new ones being
made, and foster entirely new cultures. If subcultures of digital
humans, no longer fearing death, start to believe that 2 + 2 = 5,
the Long Reflection has to consider their point of view, as well
as the faction that believes that 2 + 2 = 6.
We don't really know what rationality is supposed to be, so how
can we use rationality to find the right moral values? And yet,
naively, I feel like I know exactly what rationality is. So I
can find the right moral values. Can't we all just be rational?
One could think of the Long Reflection as being a data-processing
process. Cultures can generate an infinite number of "X is good"
statements. But maybe it's a bounded infinity. Once you know how
to answer the "is it 2 + 2 = 4, or 2 + 2 = 5?" question, maybe
you know how to answer "is it 2 + 2 = 6?". Maybe, then, once you
have talked to enough Stone Age people, you basically know the
full range of their intuitions.
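One way to make "bounded infinity" concrete (my own formalization, not
from any source): the family of claims {"2 + 2 = n" : n = 0, 1, 2, ...}
is infinite, but a single procedure -- compute 2 + 2 and compare with
n -- decides every member of it. In that sense, an infinite space of
"X is good" statements could still be settled by finitely many
underlying intuitions, if moral claims pattern the same way (a big "if").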
Normative and epistemic uncertainty can help us sort through
many different worldviews. Our method of dealing with uncertainty
may have a large, or even majority, determining effect on our
agreed-upon beliefs, the output of the Long Reflection. But can we
know how we ought to be normatively and epistemically uncertain?
Could different cultures view that differently? Even if naturally,
people tend to gravitate toward certain reasonable views about
normative and epistemic uncertainty, couldn't there be unreasonable
views that some digital human could hold, someone who isn't tethered
to physical reality as essentially as we are, perhaps? (Is less
affected by "physical reality bias".) How do you know that such a
hypothetical person wouldn't be right, no matter how ridiculous
their approach to normative and epistemic uncertainty may sound
to you?
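A toy illustration of how much the method of handling uncertainty
matters (my own example; the numbers are made up): suppose I give
credence 0.6 to theory T1, which scores action A at 10 and action B at
0, and credence 0.4 to theory T2, which scores A at -100 and B at 0.
Maximizing expected choiceworthiness gives A a value of
0.6 * 10 + 0.4 * (-100) = -34, versus 0 for B, so B wins; but the rival
method "just act on your most-credenced theory" picks A. Same credences,
same theories, opposite verdicts -- the aggregation rule did the deciding.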
I see problems with the idea that the Long Reflection could
be a sort of armchair philosophy that discovers the epistemic
truth about what we should all do. The epistemic truth is something
that is observed and witnessed to in the living of life. Being
an armchair philosopher is living real life, but only the armchair
philosophy part of it. Because people have to be involved with
each other in order to produce "that which people as a whole
agree on to coordinate their actions", and that involvement isn't
just a technocratic "a few people observe everyone without being
personally involved with them and come up with an answer for
everyone", the Long Reflection has to be something that everyone
works on, and has to be conceived of in the most open-minded
way possible, to be open to everyone's witness to reality.
So the Long Reflection seems to have to be a partly political
process. The difference between one person communicating their
intuitions to another, and politics, may not be very big. We
want to come to real harmony in our beliefs and intuitions, and
that is part of the Long Reflection, but we also have to make
decisions as a body of people. So, people trying to solve the
problem that the Long Reflection needs to solve need to understand
how to set up processes whereby people make pragmatic decisions
together despite disagreement. Politics is the overriding of
some people's witness for the sake of producing a pragmatic group
decision. That sounds horrible, so it is best to produce real
harmony instead.
(The overriding of the witness occurs in the implementation
phase of politics, and it also chills people's witness when they
internalize the thought that "someone else will decide rather than
me, whether I want to or not, so I don't need to see things for myself".
Politics (in general / the politics that would compete with the Long
Reflection) is ongoing / iterative. Perhaps the Long Reflection people
could try to encourage people to see for themselves -- "your view / witness
may someday affect what decisions we make". Particularly in cases
where people are, or suspect they are even if they aren't, in a
minority that can't hope to compete with the volume of mainstream
opinion.)
In a lot of cases, I would say "but we don't have time to
really get everyone on board, thus the need for pragmatic decisions
rather than group harmony". But the Long Reflection (The Precipice,
Ch. 7, note 7) has an open-ended timeframe: --It is unclear exactly how
long such a period of reflection would need to be. My guess is that it would
be worth spending centuries (or more) before embarking on major
irreversible changes to our future -- committing ourselves to one
vision or another--
Isn't it possible that everyone's witness could be wrong?
I don't know how to correct for that kind of systematic bias.
A correction for that systematic bias could be itself compromised
by however everyone is wrong. But we could remember a kind of
holy fear as we go forward in our consensus, and remain open to
revision. The Long Reflection seems to require that at least
in some practical sense, we come to The Answer once and for
all, so we can go ahead and execute on moral value. But that
is dangerous.
Maybe, though, we can show to ourselves that the set of
moral values is a bounded infinity, and satisfy ourselves
that there is no more that we can suppose about morality
that we haven't accounted for? In principle, this process
could work, but how could we ever know that it had really
reached its completion?
--
Do I honestly believe that there's no way to do armchair
philosophy to find the epistemic answer? No. Being honest, I
admit that I feel like it can be done.
Everyone is an armchair philosopher, already having figured
out their own personal Long Reflection sufficiently to forge
ahead and execute on their views of moral value.
We are (whether we realize it or not) running away from
skepticism. Our method of running away from skepticism shapes
what we say about reality. Here is how I run away from skepticism:
I exist. Something which is not-me exists. I personally relate
to what is not-me. Everything that touches me is experience. The
way experience touches things is by experiencing them, and experience,
in what it is in itself (experience), can only be touched by experience.
So everything that can directly or indirectly interact with me is experience.
When I relate to "all things" (everything that exists), that
relationship connects me to "all things". That which connects me
to all things, facilitating that relationship, is conscious of all
things.
That which should not be, at all, does not exist. To be is to ought
to be (at least temporarily). To ought to be is to measure up to a
standard which is enforced. The standard and the enforcer are made of
experience. They must connect to all things, so that all things are
validated. The enforcer of a standard must be legitimate. To be
legitimate requires being willing to undergo the law, if possible.
The law is the standard and its makers and enforcers. If possible, the
law must include, be made by, a person who lives and dies like a person,
not knowing all things (this is the law we are under). So the law must
be (at least) two conscious beings: one who validates and experiences all
things all the time, and one who can live a life as a limited personal
being. To be legitimate requires that one keep the law, and part of any
ultimate moral law is that we must put it higher than anything else, and
thus be willing to die for it. So the standardmakers and enforcers of
the law must be willing to die for the law.
So that's the foundation of moral value: what a specific set of
personal beings decided was right and wrong. This set includes one
person who experiences firsthand what everyone experiences, so we know
at least that much about what is right and wrong, that bad experiences
are disfavored by the law which must experience what we do. And we
also know that one of the fundamental values that they have and must
submit to is the willingness to risk everything for the law.
But I feel a kind of fear when I write this, because couldn't someone
come from out of nowhere and prove it wrong? Aren't I making some assumptions
with what I write? Wasn't I taught in Western schools, shaping how
I think about things? Maybe if I had been raised as an Aborigine, or
in India, they wouldn't make sense. What about the as yet unimagined
culture of the digital humans of the 24th century, who live on a specific
part of some specific server? The outside view is terrifying and rules
out any belief. But the inside view of what I have described makes sense
to me. So it is my witness, and perhaps by asserting it, I force someone
else into the terror of the outside view.
Perhaps I can make my ideology that of normative and epistemic
uncertainty. But what is implied by that view? It's not perfectly
agnostic. Here is my attempt to make it my creed:
I exist. (Uncertainty doesn't matter to me unless it's relevant to
me, which requires that I exist.) (And I exist in full thickness, because
the word "I" when I say it implies all of me: hopes and profound fears,
all that I have done and all that I intend.) I should think that some
things are knowable, and that others are not. There are many worldviews
held by many people. I don't know (I shouldn't say that I know) in advance
which are correct. I should weigh all of them, considering the
consequences of them supposing they are true. Somehow I should come up
with some sort of guide for how I believe, act, and trust, given what
I know.
I think (maybe tentatively) that the "should"s in the above paragraph
are necessary either for normative and epistemic uncertainty to fit
their (normative uncertainty's and epistemic uncertainty's) own definitions,
or for them to be used as an input into finding moral guidelines. But all
those "should"s make me ask, do we know that we have enough of a foundation
for normative and epistemic uncertainty (the epistemic practice), in order
to even begin with normative and epistemic uncertainty?
We could say "Well, we're just uncertain about moral and epistemic
uncertainty. It's uncertainty all the way down." One thought that
occurs to me is that the methods by which we evaluate uncertainty, are
things about which we might have uncertainty. If uncertainty says "there's
an X% chance, given uncertainty, that uncertainty is itself valid", maybe
we spawn a meta-uncertainty about uncertainty. But that meta-uncertainty
has a Y% chance of being valid. Now, X% and Y%, taken as probabilities, are each between 0 and 1.
So every layer of meta-uncertainty is multiplying increasingly many
numbers between 0 and 1, which, mathematically, should eventually produce
an infinitesimal / Pascalian number -- effectively zero, even if
technically non-zero. (The implication being that stacking uncertainties
and meta-uncertainties high enough undermines all of our credences, leaving
us no way to rationally prefer an action over another.)
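To sketch the arithmetic (a formalization of my own, not drawn from any
source; the symbols p_i and C_n are mine): suppose layer i of
meta-uncertainty judges the layer below it valid with probability p_i.
The credence that survives n layers is then

\[ C_n = \prod_{i=1}^{n} p_i, \qquad 0 < p_i < 1. \]

If the p_i stay bounded away from 1 -- say each p_i is at most 1 - ε for
some fixed ε > 0 -- then C_n is at most (1 - ε)^n, which shrinks toward
zero as n grows: the collapse described above. But if the p_i approach 1
quickly enough (for example p_i = 1 - 2^{-i}), the infinite product
converges to a positive limit (roughly 0.289), so stacking
meta-uncertainties does not by itself force credence to zero; whether it
does depends on how confident each meta-layer is.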
But maybe math is the wrong way to evaluate
these things? I feel like not everything is math-apt, something which I
see most clearly with Pascal's Mugging and the difficulty of defining
exactly what is a Pascalian or non-Pascalian non-zero number. Maybe we
can escape from the math? (Or maybe I've messed up some detail in the
previous paragraph.)
Assuming that we can't escape from the math, is there any way to
come up with the certainty needed to get a worldview of normative
and epistemic uncertainty off the ground, such that we can use it to live
our lives? Let's say that you are an uncertainty-using person who reads
the previous paragraph and says, "Your explanations sound like they make
sense, but I can just intuitively tell that normative and epistemic
uncertainty makes sense to practice. I have some kind of certainty
that can ground that method. I don't know where it comes from, but
I have it." If you don't know where it came from, should you trust
it? I'm not sure you can be certain either way. But I think from
my own experience, there are things I can be certain about. For
instance, I can see the computer screen of my laptop (and at this
moment, you are likely looking at one as well, or that of some other
device, or printed words on a page). We see images that are words,
the words that I am typing right now. I am 100% sure that I am
seeing the words that I'm typing. Similarly, if I see a cup on a
table, I see a cup. No amount of philosophy makes the cup anything
other than a cup. I could name it something else and think of it
as having a different function. But the object, whatever its meaning
to me, would remain, and I could see it, touch it, and so on, and
interact with it as part of my ordinary human life. So maybe we can
be certain about the validity of a worldview of normative and epistemic
uncertainty, (be certain on the level of meta-certainty) in the same way.
We just see it. If other people don't see it, maybe they don't have
the equivalent of a cup on the table in front of them. I wouldn't
be concerned if a blind person said they couldn't see something I
saw. Or if a young person didn't understand my perspective, having not
lived through the unique experiences I've lived through. I would go on
having my perspective, understanding that they don't understand what
I do. This sounds like a case of "some people witness some truths,
others others".
Now it looks, given the above, as though the very worldview
of normative and epistemic uncertainty is something that is witnessed,
and is not an absolute frame that can be relied on to form top-down
judgments about reality.
--
However, let's look at the problem of meta-uncertainty again. What's going
on when we evaluate the uncertainty and meta-uncertainty of something?
Well... we exist, we evaluate. These are elemental actions
that go into any reasoning about uncertainty and validity. Just as
I can in complete reality reach for the cup on the table, I can in complete
reality reach out, in my mind, to an uncertainty, which is a thing that
is not me. What does it mean for "we" to exist, and for us to "evaluate"?
We are applying a standard, and a standard about standard-application.
These things (or at least enough of them) we know like we know that in
this sentence there is the word "that". We have the standard in even
our uncertainty, and through the standard we can be uncertain. So, a
standard exists, absolutely, regardless of meta-uncertainty.
Meta-uncertainty, and thus would-be self-destructive meta-uncertainty,
has no validity without the standard.
So we exist, each of us in our own full-fledged personhood, and a
standard exists. A standard validates each standard, so it's standards
all the way up. We can see that the chain is valid, because
we know that arguments can be invalid, 100% invalid. These standards
cannot be argued with. There is no infinite regress of standards that
validate standards, because there is a meta-standard that immediately
validates all other standards. So this is moral value. And what
can we know about it? (At this point, maybe just the things from my
personal belief system, like that it must live up to itself, put itself
first in such a way that it risks itself, and subject itself to its own
laws, and even that it be a conscious being.)
--
I'm currently planning to read Nietzsche, who I think will provide
a cynical, weird, and (perhaps) implausible account of how morality came to be,
as well as Charney, from whose book I can make a laminar, reasonable, and
plausible (possibly "just-so") account of how morality comes to be.
Despite Nietzsche's weirdness and (likely) wrongness, I think he will
problematize "genealogy of morality" more vividly than Charney, who
after all (I assume) isn't trying to talk about where morality comes
from.
If morality comes from somewhere -- somewhere political, or from
evolution -- does that make it invalid? Is the concept of morality
even worth defending, if it has nothing to do with what is real, and
is simply an artifact of how humans choose to program humans, or how
human biology gives us certain instincts? Why even care about maximizing
value? Most or perhaps even nearly all of us have natural instincts
to seek value. But why should we care? Could we just get rid of them,
or let them wither away? I have reasons for caring, because I believe
that standards / law / morality / legitimacy have their own existence,
and in themselves inherently call to be put first. "Should"
refers to something real which I directly experience, and which must
in some form come before any events in a genealogy of the morality
we currently have.
The Long Reflection is what (more or less) apolitical people would
favor. The "rough and tumble" (brute force) political process is what
often runs the world. People who care about the Long Reflection probably
would or ought to care about how "values are created" through the application
of socially mediated psychological power. Nietzsche connected with brute
force things, and maybe his book will have discussions of them. If
morality is something that is witnessed, then teachers have a special
witness to children as they "teach them to care". The "physics" of
teaching people may be a limit on what things they can be taught -- a
limit favoring whatever is itself conducive to the teaching process. It
could be that the Long Reflection would favor beliefs that are practical
to spread and believe, and that are conducive to the education process.
It could do so by being agnostic(-enough) about value and not doing anything,
and then lo and behold, education-aligned values would outcompete ones
that go against education. (Or, "education as practiced by the 21st, 22nd,
etc. century education system".) If the Long Reflection people say "X is
best", but everyone else says "Y is best", probably Y will be done. If
the Long Reflection is careful and diffident, political and educational
actors will not be, and it may be difficult for Reflective
people to effectively object, if they feel like they ought to.
(So should the Long Reflection have some political muscle of its own?
Probably it would have to find a way to be culturally competitive, or
better, connect with everyone in a non-competitive but thoroughgoing
way, without compromising its own process.)
--
Here's an earlier attempt to address the same topic as this, with
a lot of overlap [now the blockquote indicates something other than
content responding more or less directly to The Precipice]:
(Troubling?) Questions for the Long Reflection
This is a draft of a post I would have posted to the EA Forum. My
(preliminary) answers to the issues raised in it are in []s.
--
The Long Reflection is a time in which we will (may) try to find the
best possible values, so that we can propagate them through space and
time far into the future. We don't want to mess up this process, because
if we do, we will forego a lot of value (or cause a lot of disvalue).
I like the thought of trying to search for the best values, and I also
like using reason and reflection to do so. I'm a religious person and
the kind of approach to religion I like is (aspires to be) rational and
reflective. I like thinking of the Long Reflection as a time where
we finally get to the bottom of what is good in all of human culture.
But I feel like there are some potentially important questions to answer
about whether the Long Reflection is even feasible.
--
What is the best way to go about finding the best values? Does
reflection actually help? Should we use reason?
[My working definition of "reason" is the "interrelationship of all
evidence / premises, however we gather them". Reflection is that
processing that enables us to interrelate. If we want to talk about
the truth as a whole, then it looks like reason is how we know the
truth. In principle, reality is a whole and thus is known through
reason. It's hard to imagine reality (that which may become relevant
to us) not being a whole of some sort. I am not sure this 100%
proves that reason and reflection is the best way to go about
finding the best values. But maybe it lends enough weight to
reason to favor it. Also, a big part of "reason" involves saying
"this is a valid premise, but that is not" and so "reason" can
be a loaded term, loaded with "what the habitus of the people
speaking consider acceptable or unacceptable as premises". It's
not as clear that our culture's loading of "reason" produces a
"reason" which is the way to go about finding the best values.
Similarly with "reflection". We think about what are reflective
people? What is the vibe of a reflective person? Isn't there
a bias there?]
Can we prove that non-reflective thinking is less reliable than
reflective thinking?
Can we prove that irrational thinking is less reliable than rational
thinking?
Assuming we are rational, I suppose we have premises and logic, which
lead to conclusions. Where do we get our premises from? In areas of
value, many of our premises seem to come from interpersonal relationships.
Will we try to interpersonally relate with all kinds of people in order
to inform our premises? (In principle, I don't see why not, although in
practice it might or might not be feasible.)
Whose minds are we trying to make up?
If we don't try to make up everyone's minds substantially in accord with
our own, then are we going to force the consequences of our ideas on them
against their will / preferences when we start to implement policies based
on the conclusions of our Long Reflection?
If we do try to make everyone's minds up, can we deal with obstacles
like "different, supposedly rational versions of reason"? For instance,
there is the idea that you can have the entire world of truth, and then
some other basic belief that is not part of that interconnecting world
of truth takes precedence over it, and that's just fine -- a thought
inspired by Alvin Plantinga in Warranted Christian Belief, which I think
says something similar.
Our intuitions about what is true, the bedrocks of reason, come from
culture and biology. What happens when we can change biology freely?
Can we have a coherent definition of what is rational, if future biological
brains, or AI, or digital humans, have radically different intuitions about
what is rational? Should we defer to them as being more rational than
us, or insist that we are more rational than them for some reason? Or
take a kind of normative / epistemic uncertainty approach, where we can
find a way to consider all the different possibilities and "collapse it
down" into concrete action? How many different rationalities can be
evolved? Is it possible to take them all into account to form a practical
rationality (one which we can act on)?
(One version of rationality is based on something like Yudkowsky's
illustration of the shepherds, where one character ignores physical
reality, dies, and is thus not represented in the gene pool. But what
if a) we don't think that this life is the only one, or b) just don't
care about non-existence? What if the actual best values are to not
exist? Or if values themselves don't matter? Can anything constrain
our definition of what is rational then?)
[The Yudkowsky story can be found
here.]
If we come to figure out what the best values are, will we freeze
cultural evolution so that new values are not evolved? How would we
do that? Maybe we want to ensure that we have considered all possible
values before doing that. Is that possible?
Will thinking about values be dominated by "epistemic utility monsters"?
(Hell being maybe the ultimate one.) Will all our values be based on
"avoiding the worst possible hell as proposed by religion X"?
If you
look at human beliefs currently, you see the unfortunate scenario that
there are two huge religions that preach hell (some form of Christianity
and some form of Islam), and these two religions seemingly teach very
explicitly, respectively, "You have to accept a God who is somehow more
than one person" and "You cannot accept a God who is more than one person."
Apparently there is no right answer. Naively it looks like you just flip
a coin between the two of them, and hope you guessed right (although
their adherents are convinced that there are reasons to pick one side
over the other). Is there a rational way to "gate off" Islam and
Christianity, to where their competing infinite disutilities do not
affect how we think rationally at all? Or to determine that one of
them is more rational than the other, and is the one we should actually
follow? But then, what if someone evolves a new religion, consisting of
1,000 adherents, and they also have a belief in hell? Are they less
valid for only having 1,000 adherents? What if the religion only has
1 adherent? Could someone make up an ad hoc religion to push whatever
value they care about? Would there be any way to rule that they
shouldn't be taken seriously?
Mentioning the tactic of making up ad hoc religions leads to the
question: is what this is all about more a competition of preferences?
One waged not with reason, but with rhetoric or stronger psychological force.
Will the Long Reflection become, rather than a search for the truth that's
"out there", more of a political process whereby people promote their
own values, whatever they may be, which is to be "won" by whoever
can? Maybe this warrants a different name than "Long Reflection", but
would go on at the same time and contest some of the same issues, and
perhaps the two processes would fight each other. How, and why, would
reason be safeguarded against personal preference?
--
I'm not sure that these questions can't be answered in a way that
allows the Long Reflection to go forward, but they seem like things that
might be a problem.
Some guesses as to where to find answers: I didn't finish reading the
Sequences, so maybe they address these issues? As mentioned, wrestling with
"alternative rationalities" like Reformed epistemology or maybe something
in Buddhism.
--
I think that's all I have to say on the topic for now, so I will see
what I see in the readings.