22 April 2021: see this addendum.
16 June 2022: another post contains a section about this post, with an update
partially in favor of some criticized aspects of transhumanism.
See also this review's preview and
postview.
I would characterize X-Risk, by Thomas Moynihan, as a kind of
sermon exhorting people to take X-risk more seriously, perhaps
becoming "longtermist" effective
altruists. He "preaches" by talking about the history of ideas that
led up to our realizing that (as it appears), we are completely responsible
for maintaining all that is good, for even the possibility of having the
perception of anything as good, having any kind of concept of goodness.
Thus, we must do whatever is in our power to avoid X-risks. X-risks are
"existential risks", which threaten the existence of human civilization or
even the human species itself.
I am not competent to judge the history writing (in the sense of being able
to critically evaluate his use of evidence). I did find the book to be fairly
well-written. Moynihan obviously gets some enjoyment out of writing (for
instance, his humorous section titles and the panache with which he says
things like "not the sense of an ending, but the ending of
sense").
As to the history, I did appreciate learning about concepts like
plenitude, seeing the non-biblical but non-modern worldview that
was replaced by modern science. It was also interesting to see how
far back people were noticing trends that we are still in the
grip of (for instance, seeing humans as parasites on their machines,
and machines as parasites on us). These are things a few people saw
generations ago, which generally were not discussed and not
decisively addressed, as though people just forgot about them in the
course of living their lives.
I do agree with Moynihan that the future is something worth investing
in, and I generally agree that avoiding various kinds of existential risks
is a good thing. To explain why I give this book three stars
instead of four, I should mention that I disagree with Moynihan's worldview. I
don't think that expansionary values are obviously better than those of
rest, nor that humans (individually or collectively) are in any (implicit
or explicit) sense obligated to try to live forever, unless there is
something outside human judgment that compels us to keep living even if we
don't feel like doing the work to make it happen. I do think that
Moynihan's point of view does well as an atheistic rationale
for motivating work -- at least for many atheists. But not everyone
is an atheist.
Moynihan's basic project seems to be to motivate people to pursue a vision
of superlative human well-being in the far future, which requires that they
oppose X-risks. Against this, I think that human well-being is better
described by (a certain kind of) theism, which values the quality of people
more than the quality of people's lives. I also think that a theistic
metaethics / motivational structure stands a chance of motivating the
average person (and even many of the not-so-average people who are probably
Moynihan's intended audience, or adjacent to it) to care about
superlativeness, and so to act, while an atheistic metaethics /
motivational structure does not. Moynihan may
wish to arouse a sense of value by describing some sort of superlative
future we could someday bring about, and a sense of "iniquity" if we
don't, but I find that I don't really care about what he says, and that
I don't have to. But this is not as true if I am concerned with what an
active and worthy God thinks, or with one who will respond negatively to me
if I don't care.
I think Moynihan may hope for a future atheistic moral realism to
present itself, but absent that hoped-for realism, what I see left is
moral anti-realism that's vulnerable to us hacking our own biology to
rewrite our moral intuitions to something cheap and convenient, so that
we can maximize some variable we choose (likely for its maximizability),
on the one hand; or, on the other, (a certain kind of) theistic realism
which holds a higher view of human nature (higher than being wireheads,
certainly) as being necessary for human survival on some level, and thus
something that we must include in what we try to maximize. (I'm
also aware of one attempted atheistic moral realism, that of Sharon
Hewitt Rawlette, which (I think) if adopted implies that we should
wirehead to produce the positive normative qualia she favors.)
Perhaps Moynihan hopes that the hoped-for atheistic moral realism
never needs to be brought to light -- even if one doesn't exist, we
can hope that there will be one, and the hope will get us to do whatever
Moynihan thinks is good. But people might just say "Well, I want to
do what I want to do, so let me know if there's ever a moral realism
that actually constrains that". Certainly some people may say that
theistic moral realism is tangible and present (if someone can offer
one), and prefer that; and to the extent that it conflicts with
Moynihan's axiology, it might come to fruition in contradiction
to him.
It occurs to me that Moynihan may not really care what the majority
of people think, but only those who might form the elite that can do
anything about X-risk. Perhaps most people will only be consumers in
the future, patients but not agents. Will they still be voters? If
they are disenfranchised, then there is some risk that the elite will
lord it over the non-elite, a risk factor for the abuse or annihilation
of the non-elite. The average person should have some power, and
thus needs to understand why X-risk matters. Therefore it is best if
we have an ideology that motivates the greatest number of people to
avoid X-risk.
Thinking pragmatically, perhaps it is best to offer atheistic
anti-X-risk ideologies to those atheists who can (or will only) be
motivated by them, and theistic ideologies to those who can (or will
only) be motivated by those. Perhaps there need to be ideologies
based in each of the major religions.
For more detail on what I think about this book, see these notes
below, which I took while reading. (They also get into my own intellectual
project.)
--
X-Risk notes
[Quotes from Moynihan set off by --.]
p. 11
--And the more we accept responsibility for this truth, the more we are
compelled to do what is prudent and righteous within the world of the
present.--
A way I might put this is "good futurism makes for a good present". I
agree broadly with Moynihan that if we are to have a future, it has to
come from a better sense of history, of having a project as
civilization.
[I would suggest the project of seeking kinship with God, being set
apart to God (holiness). My idea of God being this.]
p. 22
--There is untold scope for self-improvement, self-exploration, and
self-expression across such time frames, in ethics as much as politics,
in the arts as much as in the sciences. Our current capacities to do
good, to pursue justice, to comprehend nature, and to produce beauty may
be just a vanishing fraction of our full potential.--
Moynihan is painting a picture of high expected value, if we should
manage to not kill ourselves off. In what way could we really have
"untold scope for self-improvement, self-exploration, and
self-expression, across such time frames, in ethics as much as politics,
in the arts as much as in the sciences"? I can see that we could pile
up works of art and scientific discoveries, but why would we care? And
ethics and politics both seem like things you would just want to get
right, after which they could only get worse -- once you reach the
peak of a mountain, you cannot climb higher; you can only go down if
you want to change your location.
Could we care about piles and piles of art and science? Maybe if we
modified ourselves to care. At some point, there's something silly
about that. We are trying to create meaning (one meaning of "meaning",
importance to us) apart from meaning (the other meaning of "meaning",
communication from the ultimate). We avoid reality (that communication)
in favor of wealth. Moynihan talks about our great responsibility. But
the future he looks forward to is one where most or all people would
lose touch with reality, and instead live in dream worlds engineered to
be superlative by the standards humans naturally fall for. If there is
a God, and that is a real possibility, we should be concerned about
hearing from God. He is the being in which ultimate reality has its
concrete ontological instantiation. Therefore a vision of human-engineered
superlativeness is risky, if we do not also listen for God.
[Re: silly self-modification. Progress Quest is a game where you leave
your computer running and your character keeps on leveling up without
you doing anything. Before Progress Quest, it used to be that playing,
putting in attention, got you a level up. And a level up felt like an
accomplishment. So surely, leveling up is what makes the game good?
Similarly, getting cool items that make you more effective makes a game
good. So Progress Quest just gives you the cool items and levels, more
and more. So on a certain level, we are playing Progress Quest with
ourselves when we engineer ourselves to like more and more art and
science and whatever else civilization will provide
in abundance. We choose to play an easy game, setting ourselves up
to be rewarded without having to go through the inner motions that
accompany accomplishment, which are to recognize reality and rise up
to meet it. The engineers and the engineered alike play an easy game
in their respective pursuits. The inner motions are a personal reality.
The engineers who make people such that they fit into a world of easy
rewards knowingly avoid the personal reality, even if the engineered
don't know what they're missing. One way around this, if we absolutely
have to engineer reality, is to engineer a reality of frightfulness and
the desert experience for people to go through, without their knowing it's
only a simulation -- but we can make it a different kind of frightful and a
different kind of dry, so that it does not have the evil consequences
that life can have on people as personal beings, in our world where we
can have a kind of raw, chaotic and even evil frightfulness and
dryness.]
10 July 2021: Why would giving people frightful and dry
experiences connect them with meaning, or reality? I think the MSLN
answer is that God is present in all conscious experience, and can
speak to us through all conscious experience, but it is possible that his
register of communication is limited by the kind of experiences we
actually have. From another angle, our spiritual development is
incomplete if we do not learn to trust. We can't come to trust fully
if we do not learn to trust (to the extent that is possible)
frightening or dry experiences.
While God can always cause us to experience these things after this life,
if we grow to love pleasure or other psychological wealth too much in
this one, we may close ourselves off to the experience of dryness or fear
or pain or whatever, and may find it easier to settle on idols
which get between us and really loving God. Some of us may choose
those idols rather than God and reject God permanently. Ease can be
like plaque that eventually becomes calculus on teeth. Without
the calculus, we can hear God. As above, not all frightfulness or
dryness (or pain, etc.) leads to trusting on a deep level. So there
may be some room to improve on the brutal, wild status quo.
p. 43
--This brings us neatly to the core claim of this book: that the
discovery of human extinction may well prove to have been the very
centrepiece of that unfolding and unfinished drama that we call
modernity. Because, in discovering our own extinction, we realised that
we must think ever better because, should we not, then we may never
think again.--
Moynihan may get into this later (a page later, he mentions a future
chapter on "extinction is the inevitable culmination of technological
modernity"), but it occurs to me to point out that the insecurity that
makes humans think harder makes them think hard enough to come up with
anthropogenic X-risks. [When we think harder, for instance coming up
with basic research, we feed uncontrolled or insufficiently-controlled
competitive dynamics which are why we have anthropogenic X-risks. You
have to have all of technology or none of it, more or less.]
And anthropogenic X-risks are the most urgent. We could have had a
history where we solved our social problems, and thus didn't have Molochian dynamics pushing us.
(Apparently Moloch was not as strong in the pre-Enlightenment days,
since we didn't have to think as hard, lulled by theistic and
perennialistic assumptions.) At some point in the future, we could
have devised the technology to ward off asteroids or survive
supervolcanoes.
Which would be better: to spend 10,000 years
overcoming Moloch through cultural change, and then turn our
attention toward asteroids and the like, or our current course, where
we figure out technology without adjusting human nature first? A lot
depends on how much we really could accomplish through culture alone
in adjusting human nature, and how strong culture can be in the face
of warfare. [Warfare being able to conquer "nicer" civilizations,
though they are superior in the sense of being more humane to
themselves. So conquering cultures supplant humane ones, at least
in principle.] Culture is enough of what makes a person who they are
that it seems like maybe culture could create a society
immune enough to Moloch.
We could say that the Enlightenment was the moment when humanity really
started going crazy with fear, and it seemed like we might drive
ourselves to death in our unstable state. Whether this craziness was a
necessary part of our development (the view that culture couldn't have
tamed Moloch) or unnecessary, is an empirical question that we may not
be able to answer. But if in some unexpected way, we find a way to make
culture stronger than technology, we could prove that Moloch can be
tamed by culture. And if we find a way to believe in God, we see that
the Enlightenment was the moment where we ran away from our father,
taking our inheritance with us, out on our own in the world, trying to
make it without him. [Is it really a mark of maturity to reject your
parents? Maybe we should see Enlightenment maturity as a somewhat
sad thing. Even if atheism is correct, we have to live in the
world we live in. And while the Enlightenment sense of maturity has
its good points, and, on atheism, may simply be the truth
(something that's always valuable), there's something sad about us,
striving so hard to live.]
[Maybe a more defensible way to try to argue this point would be to say that
theistic and perennialistic assumptions, or ones which might tend to go
with them -- the overall sense that things work out in the end, the sense
that things do not change, the sense that there is justice inherent in the
universe and so we couldn't (as much) imagine abusing people and being
abused, the sense that there really is a way things should be which we
can expect of humans -- were things that we increasingly lost in the
modern era. As we lost our old beliefs (our relationship with God), we
entered a precarious world, where we used our freedom to try to maximize our
wealth and power, and also increased our anxiety, a process (the maximizing
and increasing) that is out of our control. The world always had some
element of the "there is no God -- rejoice in your freedom (to do good and
bad), and be afraid (of bad people or blind circumstance)", but it had
the theistic and perennialistic (and other similar, I suppose) things as
somewhat of a counterweight.]
pp. 94 - 95
--A key driver behind the philosophy of the Enlightenment was a growing
realisation that moral values are questions of self-legislation.
That is, we do not inherit them unquestioningly from divine fiat or the
state of nature. We render them through our own -- fallible and
revisable -- search for what is right. What, after all, is any 'decree'
that is not the result of our own deliberation, except for an imposition
of arbitrary -- and thus immoral -- force? The key idea of the
Enlightenment was this: only that which has been given a justifying
reason is to be considered as legitimate -- in belief as in action, in
epistemology as in ethics, in science as in politics.
This master idea of the Age of Reason reached its culmination in Kant's
mature philosophy of the 1780s and 1790s, where Kant realized that
values are maxims by which we elect to bind ourselves and are,
accordingly, always dependent upon this ongoing election. Such values
are not part of the furniture of the natural world -- they do not exist
apart from our active upholding and championing of them. This entails
that our values are entirely our own responsibility, and that we
are accountable for everything we care about and cherish.--
But why should anyone care about the above paragraph enough to
inconvenience themselves? Our culture has a kind of nihilist strain to
it. Is that less valid than Moynihan's urgings that we be responsible?
It's the product of his reason, the way he deliberates -- he feels a
certain way and therefore acts on it. Maybe he will make a case for
moral realism in this book (such as Rawlette's). But if he doesn't, and
to the extent that that's a controversial topic, people will take the
self-legislating option to be anti-realists, and then, if they do not
happen to feel like saving the world, they won't.
Moynihan strikes me as being like a preacher. He uses turns of phrase
(like his "instead of the sense of an ending, the ending of
sense" device) like a preacher would. Moral anti-realism breeds
preaching. The preacher has to use the force of rhetoric to move his or
her audience, to create the inner urges that cause them to have the same
moral values as the preacher. If not, something terrible will happen --
from the preacher's perspective.
[Part of the notes starting here seemed about equally relevant
to Rawlette's project, so I split
them off for easier linking. See
here.]
--
It's possible to want humans to care about getting whatever it is they
care about. But, do the things we care about really matter? Is it that
when we care, we are of the opinion that something really matters? I
think so. But then, does it really matter, in fact? Is there any way a
thing can inherently matter, in an absolute way, apart from human
judgment?
--
"Teaching children to care" [I have a book by that title, by Ruth Sidney
Charney, which I intend to read someday] -- instilling reason in people
who might not naturally have it -- are these educative acts justifiable?
If we know the value of reason, then they may be. If by reason we see
that reason is valid and implies certain things, we can teach it and its
deliverances to those who are irrational or nihilistic, using means
other than reason (since irrational and nihilistic people aren't
interested in reason). Unfortunately, this makes it sound justified,
or even obligatory, to use relatively impolite means of inculcating
reason and caring, in contrast with the relative politeness of reason
itself. So additional nuance is needed, so that we do not traumatize
people as we teach them apart from reason.
--
Plenitude is a potestas clavium [term taken from Lev Shestov's Potestas
Clavium], [in the limited sense of] a reason which is taken to save. If
it is a comforting illusion, then it does so based on the prestige of
reason. People who trust in reason, at least to some extent, are the ones
who need comforting reasons, as opposed to just not caring about, for
instance, the cold sterility of space and the utter contingency of human
evolution and culture.
--
[A gap of a month or two between writing the above and what follows.]
p. 349
[Whether he intends to or not,] Moynihan seems to offer a constitution
for culture [to prevent cultural drift as
X-risk], such that the thing that we most surely know to be morally true
is that we should preserve moral agents; if we follow this, we know what to do for
the rest of our existence. (my addition:) This is the thing that we can
all agree on as right, and so it may not be necessary to figure out
something like moral realism vs. anti-realism. As long as there's the
possibility of there being some kind of value, which we suppose can only
be within the minds of humans or other sentient beings, then to preserve
those minds would seem to be worth perhaps full commitment, something we
could know firmly, producing an outpouring of effort.
I think maybe this meta-morality axiology ("moral agent survival necessary,
all else serving this") is a "One True Axiology", a moral realism,
although one that is somewhat open-ended. It would seem to be the
default choice for atheists in selecting a constitution for human
society.
["One True Axiology" comes from Lukas Gloor's sequence on
moral anti-realism.]
But if value is only in our minds, is it real? Does value itself need
to be treated with great reverence? Or could value be like a pile of
old one dollar bills? -- trash, basically. What grounds value itself?
That is, the value of the perception of value in human minds. Maybe, as
anti-realists, we don't care about such things, and just want our desires
fulfilled, with respect to value itself.
Is there a good reason for an atheist to not be a nihilist? Atheists
might personally prefer not to be, but is that supported by reason? If
not, then the meta-morality moral realism has an Achilles' heel.
[Meta-morality moral realism has its greatest force given an atheistic
background. Theism can play the role of perennialism in blunting
the meta-moral responsibility: keeping alive some kind of moral agents so
that those agents can figure out what morality is someday. Assuming that
atheism becomes an intellectual monoculture, it could produce both
responsibility, like Moynihan's, among those who care, and nihilism, among
those who don't care. Some people feel like caring no matter what; some
people don't feel like caring, no matter what you try to get them to
believe; but there are a lot of people in the middle, for whom a good
moral realism might shift their perspective and make them care more. And
given that (perhaps through genetic engineering) we will have the ability
to choose how much people care, we will have to ask "should we care?" If
there isn't really a solid reason why we should care, why, for instance,
even the idea of value should matter, or the vocation of metamorality
(keeping burning the torch of someday figuring out what morality really
is) should matter, then what will we decide? And we may have the question
come up for our culture as a whole over and over, once every few
generations.]
[I suppose a caring atheist could engineer everyone to care meta-morally,
because it fits the atheist's preferences. Note that this closes off one of
the possible answers that a moral agent could come up with to define what morality
is -- that morality doesn't matter, or that it is moral to leave everyone to
their own judgments about morality, even if that allows them to be
nihilistic with respect to survival. By engineering desirable outcomes,
you run the risk of closing yourself off to reality. Maybe the moral
intuition to just let things happen and not control things, or to trust the
universe, or even the lack of moral intuition, are all signals from some
deeper reality. Is there a good reason why the urgent responsibility to
preserve meta-moral ability is better than those? It seems like we need
the fruits of meta-morality to really justify it. And who knows what
morality is other than people more or less as they are now, with their
various more or less natural moral intuitions? Similarly, a caring atheist
with the ability to engineer people to care may feel like lopping off
inconvenient aspects of human cognition while they're at it, like the
capacity to believe in God. But maybe we are like antennae for hearing
from God -- at least, many of us are.]
[Maybe more realistic (or graspable)
for someone like me or Moynihan is the more immediate situation where
persuasion is not yet dominated by bioengineering, and people need to be
roused not from stark, pure nihilism, but from a kind of low-key mix of
nihilism and caring about what is present to them (rather than distant
or future people), which seems like a plausible description of most
well-adjusted people in our culture, both atheists and theists. How do
you rouse those people to go beyond what is required into what would
seem to them to be supererogatory before they grasp its necessity? That
seemingly supererogatory thing being to care about X-risk.]
Another Achilles' heel would be, why do we have to care about reason? If
we can arbitrarily disregard reason, then we can have whatever culture we
want, for good or ill [this freedom allowing for cultural drift as X-risk].
I suppose any ideology is vulnerable to this. Is there a reason within
atheistic meta-morality moral realism to compel people to be rational?
[Maybe the struggle to survive does. But there are utopian reasons for
us to forget that struggle.]
In MSLN, rationality is enforced by the fact that if you're not looking
for the truth, you might miss God, and if you throw away truth because
you want to, you might be closing yourself off to God's voice, in danger
of hardening and losing your personal salvation. It
appears from the Bible that there is a punishment you will experience
if, by closing yourself to reality, you fail to fully grow up in God. So
it is instrumentally rational for individual humans to be epistemically
rational and thus more-expansively-instrumentally rational / altruistic.
It's still possible to ignore this rationality-enforcement, but I think
the warning reaches further into the pool of people who might need to
hear it.
[MSLN is a set of natural theological arguments
that I begin to propose. It has implications for motivation and meaning.
Another, perhaps better, way to explain MSLN's position on reason is
to say "If you ignore reason, you might not find God. If you have
found God, you may need reason to be in tune with him." But I would
add that there are practical limits on a person's ability to pursue
reason, and that it is possible that in the search for more knowledge,
the further application of reason, you neglect seeking to love and
trust God, or fall into or fail to address some kind of already-known
sin. Nevertheless, having a disposition that disregards reason
(whatever your ability to exercise reason is) is dangerous, both for
the non-believer and the believer.]
[People are rational in the sense of common sense, without help, but
to say "I'm going to apply common sense consistently and rigorously"
(which might be just what reason is) is something that is hard to do,
generally not done, and not something all aspire to.]
--
Also, I could see a civilization deciding that value was very important,
but that the finitude of civilization was itself of inestimable
value, of greater value than its continuance. (Why not locate
value outside human psychology and thus outside its continuance?) This might
be the [or an] anti-natalist way to counter the meta-morality moral realism.
--
Having a moral philosophy that is all about the survival of moral agents
through time could lend itself to a Temporal Repugnant Conclusion: for
every good civilization there is a civilization barely worth
instantiating which lasts longer. [Maybe a better point to make would
be that there's at least theoretically a tradeoff between the longevity
of a civilization and its quality. So if we think "avoid X-risk at
all costs" we might have to sacrifice something -- like conflict or
struggle (to protect us from Moloch), or perhaps in some resource-saving
way (austerity to save resources; or simplifying humans so that they
are happy with less, at the expense of their capabilities).]
--
pp. 380 - 382
This section mentions that love and altruism come from nature, but they
just blindly evolved. Why should we think that love and altruism are
all that great? Why are they more meaningful than the tendency of water
to drip from trees, or for species to go extinct? Why not say that
extinction and fluid dynamics are of inestimable worth? Or say that
nothing is of inestimable worth except what we feel like saying it is?
[And so working hard to protect civilization isn't worth it if we don't
feel like it's worth it.]
The MSLN answer is to say that worth itself (legitimacy) is what
everything is made out of, and that it is a person, about whom we can know
some things. The things that truly would be of worth to that person are
of worth, and necessarily so -- worth itself declares them worthy.
pp. 414 - 416
This section talks about (I think) turning humans into art- and
game-making and -consuming machines. In my personal life, I have been
fortunate (or not) enough to have a lot of free time, and I have had my
fill of art and games. I have been fortunate (or not) enough
to be able to make art and games, and I have also had my fill of doing
that. I need to pass time, and so I keep doing things. But I don't have
a deep need to do things, I think from having had my fill. When I was
younger, I could get into these things, but now I'm older. I think this
might be considered maturity (although I don't think of it as true
maturity). Why isn't this [having had my fill] maturity just as good as
the maturity of "tiling the universe" with art and games, making super
sure that we can do this for the maximum amount of time?
[On further reflection, I would say "I try to find, for instance,
music with which I might form some kind of connection, and I sometimes
begin to succeed, so maybe I haven't had my fill. But compared to
my younger self, and on a deeper level, I have."]
Maybe in some objective sense, more is better, like in Total
Utilitarianism. There's a total utilitarianism of artworks. But who
really cares?
We might be made to care, for objective reasons (X artworks are good;
the creative processes of making X artworks are good; so 10^100 artworks
must be so much better), but why? And if we do make people into the
kind of people who care about that thing our society was engineered to
bring about, that's like making a video game where if you
click "OK" on the dialogue box on the opening screen you win and get
148,297,237,923 points. We would have engineered ourselves to be easily
pleased. We could have gotten that result cheaply, with some simple
wireheading, without bothering with art. And, Repugnant Conclusion-fashion, we could generate
even more "win states" if we go with the cheap version, even more
(apparent) moral value.
[Once we start to engineer things (including humans), if we get good
enough, we might as well wirehead, unless there's some reason why we
wouldn't. We can engineer the definition of "win state", make anything
seem like a win to humans that we want. Moynihan might say that the
point of continuing civilization is so that someday we can figure out
how values really work, and thus come up with that reason. As though
there's something outside human engineering that ought to constrain
human engineering. Is there an atheistic reason that's solid enough
to prevent wireheading? Rawlette's moral realism isn't quite.
(Maximizing qualia of "ought-to-be-ness" / minimizing qualia of
"ought-not-to-be-ness" seems like something wireheading could
accomplish, and would accomplish more cheaply even than an experience
machine.)]
[Maybe another way to put all this is: 1. Is value human-judgment-dependent?
If yes, then we can engineer value itself, and if we are trying to maximize
some variable, we should define value so that it can be instantiated as
cheaply and numerously as possible. 2. Is value human-judgment-independent?
If yes, we need some kind of moral realism. a. Does Rawlette's work? No; it's
as vulnerable to wireheading (generating positive normative qualia is
potentially very cheap, and she recommends simple aggregation of value).
b. Is there another atheistic moral realism that can avoid wireheading?
c. Should we bite the bullet and wirehead? d. If neither b. nor c. works,
are we left with theistic moral realism?]
["Human-judgment-dependent" comes from Rawlette's "judgment-dependent".
The impression I got from her book is that there is a clear distinction
between moral realism and moral anti-realism. For her, moral realism
is not judgment dependent, while moral anti-realism is. But then I read
Gloor's sequence
on moral anti-realism (as much as has been written as of end of 2020),
and now I am not so sure there is a clear distinction, at least that
Gloor's version isn't clearly distinct from realism. Maybe it's
anti-realist to not be clearly anti-realist. Nevertheless, the question
remains, in terms of the risk of wireheading, is Gloor's anti-realism
effectively human-judgment-dependent or not? Can a person hack the outputs
of "what is moral?" (as processed by his kind of anti-realism) by changing
human biology? And not just in the sense that if all thinkers recognize X
as valuable, X is effectively valuable, but in the sense that "what is moral?"
in some sense ought to be tied to human judgments, and those judgments can
then be bioengineered?
[Biocultural change takes time. By leaving the meta-moral process
running longer, it's more vulnerable to drift or engineering. So it might
be good to bring it to a close sooner rather than later, choose a
motivational structure / find the true moral realism.]
We might want some analogue to MSLN's population ethics
(the rest view), for the
value of artworks.
One thing that we lose out on with an easier and nicer, even if creative
[make lots of games and art] rather than narcotic [wireheading], future,
is the capabilities of human beings, the moral capabilities. Humans are
currently capable of undergoing childbirth to bring new humans into the
world, of going mad alone for the sake of truth or art, of living lives of
stress and fear to resist corrupted social orders, or of willingly suffering,
experiencing psychological defeat, and dying for the sake of other people.
The altruism of the cleaned-up future, whether the other for which we live
altruistically is human, intellectual, or aesthetic, pales in comparison
to these things.
At the end of a movie, you can ask "was it a happy ending
because the heroine got what she wanted, or because she was good,
herself?" That's the question I asked myself after watching the great
X-risk movie Testament, from 1983. I thought the movie had a happy
ending [in a sense] in that the protagonist acquitted herself well, although
by conventional standards she was suffering through a hopeless situation
(humanity dying out from fallout after a nuclear war). [This "acquitting ourselves
well" requires that we really be put to the test. Utopia tends to
prevent this.] We may be unable to resist lives of ease and pleasantness,
but at least we could recognize that we have lost what was truly greatest
in humanity, in our idyllic future.
It's true that the cruelty and brutality of the cross is something that
we shy away from, and to an extent, rightly so. But I think we could
have a society that is in some sense "millennial", if not heavenly,
where people themselves genuinely learn to value what is good, learning
lessons that are bitter, but only bitter if necessary for learning, and
for which they get adequate support. (This world may be what is
depicted in the second part of
"The Future of Beauty".)
[This millennial world is one in which people themselves are brought up
to some kind of standard, where value resides in people themselves.]
["Millennial" comes from the
Millennium in Christian eschatology, a time where people are taught
morally / spiritually so that they can be fit for heaven.]
[I think Moynihan might reply to this whole section on art- and
game-making / consuming with something like: "If you find yourself teleported in
time to the far future where everything is art and games, you certainly
don't have to participate", although what he said on p.
416 could be taken as the opposite of that:
--Instead of creating 'dead semblances of
what has passed away' or 'simulations' of the currently flawed universe,
art could be the genuine restitution of all the wasted opportunities that
past extinctions and deaths represent. In this, intelligence will
finally have justified its existence -- and thus fulfilled its vocation
-- by reversing and recompensing all the countless silent sufferings and
unjust extinctions that provided its past and prologue. To achieve this
future is to justify its past. The implication here is that the only
way to fully escape the iniquity of extinction, in this irrational
universe, is for reason to rectify all the past perishings that made us
possible.--
I would have to make art whether I wanted to or not, to
help rectify past perishings. Since this is taken to be a moral
imperative, how strictly would it be enforced? And if it is
the
moral imperative (if it is the deliverance of the conclusion of
the meta-morality process), then how can we allow people to rebel
against it, by not being wholly adapted to bringing it about? How can
we be allowed to see things any other way? Moral realisms always
threaten a certain amount of human freedom, but I personally would
not want my freedom taken away for the sake of art and games.]
[I used to hang out with Nietzsche fans, and have read a few of his
books. The Übermensch teaching I found to be kind of odd.
Supposedly, once God has died, we create our own values. I've tried to
think what that would mean, and would guess that values are simply opinions
that X is good, maybe ones that we share with other people. So, to create
our own values, we... plug in different things for X? That doesn't sound
markedly better than herd morality to me. Or at least, not as
revolutionary, or universe-expanding. Maybe not even new at all -- each of
us already selects different things to consider good, selections which we
don't share with people around us -- perhaps ones which no one else has
ever valued before. So we are already übermenschen? (Maybe I
don't understand Nietzsche on this point.) I think of this when I read
transhumanist or posthumanist superlatives about the great things we
can engineer for ourselves in the future. Art is basically just art,
games are basically just games, sex is basically just sex, the feeling
of well-being is basically just the feeling of well-being. You can
make it more refined, but not make something really new. Experience
with experience brings diminishing returns; eventually, no experience
seems that amazing. (And if my jaded view means I need to be re-engineered,
then we might as well re-engineer people to appreciate the cheapest
good-feeling lives possible, and we would have closed off some important
meta-moral possibilities by no longer paying attention to whatever in
me seemed to need something better than super-art, super-games, and
super-sex in order to not be jaded.) I guess my feeling is, why should
I invest in Moynihan's cause, running so hard and putting my shoulder to
the plow, if the outcome is not that much better than what we have now?
It seems nice for people to keep living into the far future, but I'm not
that inspired, and lack of political will is what happens when everyone
thinks something (that's really important) is nice, but no one finds
themselves inspired to do anything for its sake.]
[One Nietzsche teaching I took to a lot more was the one about
(paraphrase) one moment of true reality and joy making the rest of
life, no matter how miserable, worth living. I think for some
people, when stressed by terrible experiences, this is tested, and
the sources of joy and reality come out even stronger. Life is
affirmed even more. But for other people, when the terrible times
come, they come to hate stress so much that they only want things
to be happy, safe, and fun. And this comes at the risk of them
perhaps hating life, or turning away from reality. But I'm not
sure I can blame someone for being traumatized by life and thus
turning away from it, instead of being challenged by it and thus drawn to
trust it. I'm not sure what the proper
treatment is for that condition (something favoring both health
and strength, both mental and spiritual). But maybe someday I
will know better what to prescribe for it.]
--
p. 424
--Keeping history going means acknowledging our ability, and thus our
duty, to learn from our mistakes. It means acknowledging our obligation
to continue and to survive, to avoid the precipice of X-risk, in order
to find out where we might be taking ourselves. This remains our duty
even when the world of tomorrow appears a progressively worse place; it
remains regardless of immediate disillusionment, weariness, or
resignation, because, as long as ethical beings are still around,
there is at least potential for the world to become astronomically
better.--
If we keep going in our world-birthing process despite how bad things
may seem in the short term ["short term" relatively speaking -- a short
term that could consume a whole human lifetime], we may have to suffer
and die for what we believe in. Altruism can be bitter. There's the
story of the mugger who offers a nonzero chance of giving some
arbitrarily astronomical reward to someone in exchange for a measly
$100. The mugger could have offered an even higher reward to someone
in exchange for them experiencing a lifetime of loneliness, suffering,
and madness. A rational altruist would have to take the offer in both
cases, because if they won (and the expected value would
justify this, rationally), they would distribute the reward to benefit
all living beings. There's a kind of self-giving, to reason and to
morality, as we must reach for what's best, a kind of excellence here,
which is both horrifying and transcendent of all that is cheap. But it
seems like altruists (or at least, the ones that would follow Moynihan's
basic path) are trying to make a world that can't elicit that anymore.
They seem to be trying to end their own kind, by fixing the world.
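[To make the expected-value arithmetic concrete, with invented numbers:
suppose the mugger credibly offers a 1-in-10^12 chance of a reward worth
10^20 units of good, in exchange for $100 (call it 100 units). The
expected value of paying is 10^20 x 10^-12 = 10^8 units, vastly more
than the 100 units kept by refusing, so expected-value reasoning says to
pay. Scale the promised reward up enough and the same arithmetic endorses
accepting a lifetime of loneliness, suffering, and madness.]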