Cultural drift is the process by which a culture changes in a way that no one person intends. An X-risk (existential risk) is one that threatens the survival of humans or their ability to rebuild to our level of civilization. The definition can also be broadened (as in this book) to include anything that threatens the long-term potential of the human species, which might include things that cause a great deal of suffering (S-risk).
Is cultural drift (as an X-risk) something to be worried about? Would people freely choose a bad outcome for themselves? Of course, to them it would seem good; but we might not agree. Longtermists (future-oriented altruists) want the future to be a certain way -- to contain humans, for instance. They might be willing to make all kinds of plans to bring that about. It would make sense to provide some way of resisting cultural drift, if cultural drift could produce the same negative outcome as some other cause of extinction or S-risk that we currently plan to avert. We wouldn't want humanity to die out, whether voluntarily or through insufficient motivation to save itself, nor would we want a kind of totalitarianism in which the people have been "Ministry of Love"-ed to the point of approving of their own subjugation and torture. These are two possible outcomes of cultural drift, two things that could somehow seem good to future people.
How could this be guarded against? We have to be willing to say that some human preferences, and some human cultures, are wrong. So we probably want some kind of moral realism that is culture-invariant, one that does not progress or drift historically. That realism would imply a worldview from which we could build an axiology that maintains the importance of resisting extinction or S-risk: a kind of firm and consistent cultural constitution.
Why might we not want such a thing? One possibility is that we want to respect the ability of every culture, every generation of humans, to make its own choices. If we are moral anti-realists, we may want to extend that logic so far that we don't constrain any future people to what are, after all, our own contingent judgments. If a future generation wants to choose to die or become slaves, so be it. While we could certainly try to force future people to do what we want, regardless of their preferences, for some reason we prefer not to, perhaps because it appears wrong to us; and though we are moral anti-realists, we still obey our own consciences as though they matter and aren't just contingent themselves.
Humans may someday become useless, no longer needed to run civilization. We could drift toward being useless through the decisions we make. We might choose convenience and enjoyment over maintaining an interest in human-AI affairs. Then human-AI culture may drift to the point that AIs no longer value us, and we no longer value ourselves. And then we die out, to save resources for the advancement of civilization.
This might not sound bad, if we assume that AI can carry on being conscious without us. But why should we be so sure that AI can be conscious? A materialist might assume that anything that acts like it's conscious, or, maybe better, anything that has sufficiently human-like neural structures (or equivalents to neural structures), is conscious. If structures and behaviors like ours are how we know that our peers have minds, and if neural structures just are what make us conscious, then we can be pretty sure that AI can be conscious.
However, assuming materialism seems shaky to me. Immaterialism is at least as good as materialism (I think better) as an explanation of what we observe. As an immaterialist, I wouldn't risk torturing an AI, because an AI (as it appeared to me) could happen to have a consciousness attached to it. But neither would I risk "uploading my mind" into a computer and then trusting that it worked, in the sense of creating a conscious copy of me. Maybe the upload wouldn't happen to have a consciousness attached to it.
Attempting to upload humans to computers and then letting the humans die out risks the extinction of consciousness. How firmly will we oppose taking such a risk when the day comes that it seems technically feasible? That's something that can vary culturally. The author of X-Risk makes out the prevention of extinction to be a serious responsibility that is up to us to take on. But what if our culture drifts so much, particularly through ease, that we just don't care, or don't care enough to hold back from taking the risk?
Another thing to consider is that cultural drift doesn't have to lead to a stark, conscious, official statement of "I am an antinatalist, so I support the end of humanity" or "I am gung-ho to be subjugated" or "I am OK with humans dying out, since I'm pretty sure computers can be conscious." Rather, for it to be a problem, there only has to be an erosion of people's determination and an increased tendency for them to be confused. The more eroded people are, feeling little stake in what goes on and little capacity for outrage -- a fiducialism without connection to the best, perhaps -- the less they will be able to resist the small groups of people (or AIs) who have other ideas for them. Likewise, if they don't have their minds firmly made up, or don't have the capacity to hold onto their own beliefs, someone or something that wants to manipulate them can do so, bending their minds to "willingly" support it.
--
We might find the project of avoiding drift attractive as a kind of cultural engineering project. If we make it so that people believe whatever they need to in order not to choose death, without having to ground that guiding belief in truth, we might consider ourselves successful. But there is some danger in our forming a premature consensus that excludes God or perhaps some other important truth. Whatever history-invariant cultural constitution we decide on, we should decide on it because it is true.
We could see this as a project for everyone to work on: finding the truth of what values are really eternal. In the meantime, the government (and the other elites who lead society) would try to facilitate that process, without making up our minds for us.
Is cultural drift likely to be a problem? I'm not sure it's the most likely problem, but to the extent that any of our problems are parameterized by human motivation, it seems like coming up with a solid, rational, intuitively and emotionally moving thought pattern to motivate action is called for anyway, and a moral realism is the kind of thing that can work in that role. If there is even a 10% chance that cultural drift becomes a problem, then someone should care about it. And since culture is in the hands of everyone, it's something everyone can work on. Some other X-risks tend to require specialist training to work directly on their prevention, but not this one.