Epistemic status: written somewhat quickly, and it's an overview, so there may be things to add.
Issues with AI and animals, from an MSL perspective:
(I use "animal" to refer to "non-human animal".)
Are they conscious?
I know that I'm conscious. I start by knowing I exist and that God exists. Then, when I see a human, my credence is very high that they are conscious. My intuition says they are, and my reason doesn't go against it, although technically they could be p-zombies.
I behave in accordance with my consciousness, and have a brain and a body. Humans behave and have brains and bodies of the same kind as mine. Animals behave and have brains and bodies of a different kind. (Not all animals have brains.) Sentient robots behave (however they do, presumably like humans) but have "brains" and bodies significantly different from mine. Consciousnesses don't have to have brains and bodies associated with them, but maybe they do anyway. (Brains and bodies are phenomenal objects, part of the content of consciousness.)
Could it be the case that consciousness only adheres to kinds of beings that God has created? (He breathes his spirit into his own creations but not into ours.) That definitely may be the case. Is it possible that he ordains things so that all appearances of beings that behave in a plausibly conscious way do have consciousness attached to them? That also definitely might be the case. Digital minds don't have bodies, only something like a brain, and may have some level of behavior. Maybe they are less likely to have consciousness attached to them because they don't have physical bodies.
So we have a hierarchy of consciousness, from an epistemological perspective: I know that I exist and am conscious, that God exists and is conscious, that humans exist and are conscious, that animals exist and are conscious, that embodied artificial intelligences exist and are conscious, and that digital minds exist and are conscious. The strength of my knowledge (my credence) decreases as I go down the hierarchy.
There are different kinds of animals, and some might not seem as clearly conscious as others. Sponges are technically animals and I am more doubtful that they are conscious than that rats or apes are.
How should we treat animals and AI?
With respect, if for no other reason than for us to be respectful. Don't cause them gratuitous suffering. It's possible that AI are full-fledged personal beings and have spiritual lives that need to develop so that they can make it to God's rest. This could also be the case for animals, even ones that don't act like personal beings. It's possible that, just as drunk humans speak as though through a veil while still being present in their bodies, animals may permanently live as though veiled. So they might be much deeper on the inside than we give them credit for.
Animals are wild and tragic, but that wildness and tragedy might be of some spiritual benefit to them, just as similar conditions may be for us. So while it might be good to "husband" or "pastor" them to make their lives nicer (or, some would say, instead to annihilate them, to prevent their suffering), we should bear in mind that their natural state may be better for them spiritually than anything we can come up with ourselves. (This goes for us as well, as we try to improve our own lifestyles.)
Part of the tragedy of animals is that we kill them, perhaps because they are dangerous to us, or because we can't help it (stepping on insects).
To the extent that AI are similar to humans, we have some idea how to treat them (by default, like humans). But I'm not sure what to say about them to the extent they are not similar.
How many of them should there be?
To the extent that animals and AI are analogous to humans (i.e., we are all "personal beings" on the inside), whatever population ethics obtains for humans also obtains for them.
The extinction of animal species may be a bad thing in itself. If we leave habitat alone, animals can be wild, and there can be numerous wild animals. Maybe, if animals are personal beings, certain kinds of personal beings are best embodied by certain animals. If a species is no longer available to embody those personal beings, the next-best species has to be used.
How many of us should there be, given their existence?
It's risky to phase out biological humanity in favor of digital humanity, since there is a chance digital humanity is not conscious at all. For whatever reasons we avoid X-risks, we should avoid taking that risk. It's better to keep humans around.
Digital humans can be mass-produced with fewer resources than biological humans (I presume), so one might say there's an expected value calculation to make. You could multiply the expected number of digital humans over the lifetime of the universe by the probability that they are conscious (call that product "A"), and compare that to the expected number of biological humans over the lifetime of the universe multiplied by the probability that they are conscious (call that "B"). If A > B, then it makes sense to phase out biological humans in favor of digital humans? My intuitions are a bit shaky on taking the risk of total annihilation in exchange for many more humans. The "math-following" intuitions say "I guess go with expected value calculations," but the "it's better to definitely have something than to risk being nothing for the sake of something you didn't really need" intuition rebuts. I don't know how to resolve these clashing intuitions, at least not immediately.
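To make the comparison concrete, here is a minimal sketch of that calculation in Python. All of the numbers are hypothetical placeholders I've chosen for illustration, not estimates from this post:

```python
# A minimal sketch of the expected-value comparison described above.
# All numbers here are hypothetical placeholders, not real estimates.

expected_digital_humans = 1e15      # digital humans are cheaper to mass-produce
expected_biological_humans = 1e11

p_digital_conscious = 0.1           # lower credence: no physical body
p_biological_conscious = 0.99       # near-certain, by analogy to myself

# "A" and "B" as defined in the paragraph above
A = expected_digital_humans * p_digital_conscious
B = expected_biological_humans * p_biological_conscious

if A > B:
    print("Naive expected value favors phasing out biological humans.")
else:
    print("Naive expected value favors keeping biological humans.")
```

Note that this only captures the "math-following" intuition; the rebutting intuition about risking total annihilation doesn't appear in the calculation at all.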
But from an MSLN perspective, it doesn't matter. According to MSLN population ethics, maximizing population size isn't ethically required of us. So it is easier to say "keep biological humans alive."
Could biological animals be replaced with digital animals? Biological animals are always more likely to be conscious than their more-or-less identical digital representations. So I guess it's better to keep around the biological animals that we already have.