Introduction to my Writing, for Effective Altruists

I first heard about EA in 2013, at a talk given at a university where I used to hang out. What really struck me about the talk was the Drowning Child Illustration. I didn't join the movement, but that idea got into me. In 2017 I started attending an SSC meetup group, and a few people there were into EA. I didn't think much about EA, except the Drowning Child Illustration. In 2020 I started looking at the EA Forum and thought a lot about EA and its current concerns. My interest decreased in 2021, but I still followed the Forum some in 2022.

I think in some ways I am the kind of person who would be in EA, but I don't feel like I fit in the mainstream EA movement.

I would describe my blog as usually being "notes" and not "finished product". This is not a complete list of every post an EA might find interesting.

The posts -- in roughly descending order of how close they are to mainstream EA concerns:

Long Reflection Reading List -- thoughts about the difficulty of the Long Reflection, what it would take to pursue it, and some related topics like moral progress.

Cultural Altruism (Hubs) -- "cultural altruism" as an alternative or adjunct to effective altruism and the Long Reflection, and discussion of where to site a cultural altruism hub.

Reading list on population ethics -- presents the "rest" view of population ethics, in contrast to the "total" view.

Agency and AI -- a perhaps too short post about a value to program AI to promote.

What if ASI Believed in God? -- If a logical case for the existence of God can be made, or even for the "non-Pascalian" likelihood that God exists, could this affect AI behavior? If such a case wouldn't affect AI behavior, what would that mean about AI?

Economic vs. Personal Lens on Morality -- I often feel like EA's conception of morality is "maximize a variable" or "pile up a specific kind of wealth". But an alternative is "relate to other beings properly". This post discusses the two.

The Drift of Financial and Electoral Systems and AI -- if ASI are under human control, they might be under existing financial or electoral incentive structures (or the descendants of those structures), which could end up turning people into buying- or voting-machines.

Survival Moloch vs. Hedonic Moloch -- "Moloch" can be stretched to cover any optimization that gets us to throw away "delicate" values. Over time, hedonism as a psychological force can get us to throw away delicate values, like the drive to survive. Holiness and strength are suggested as alternate optimization goals.

Philosophical AI; AI and Trust -- "Will AGI be able to do philosophy? Would that be a natural side effect of its ability to reason?" Plus some discussion of AI and trust.

Turn Toward Politics -- Maybe a lot of EAs or EA-adjacent people should get into politics and what is adjacent to politics (religion, art, culture), because if we can figure out AI alignment, politics might be the remaining bottleneck to good being done.

ASI Centralizing Technological Development? -- Maybe ASI will be so good at research that it will have a monopoly on research, or something in that direction.

Space Colonization? Or End to Moloch? -- Maybe there is a conflict between ending Moloch and space colonization.

Ending Moloch -- Maybe we can end Moloch by bringing about international cooperation. Not by creating a one world government, but by making all nations friendly with each other.

X-Risk review -- a review of X-Risk by Thomas Moynihan. His book argues for the need to oppose X-risk for certain reasons (it's also a history of X-risk as a concept, but that proved less interesting to me). My review claims that a really effective anti-X-risk ideology needs a stronger moral realism than what Moynihan suggests; it also opposes Moynihan's worldview, and generally any worldview that puts pleasure above being a real person.

Feeling of Value review -- a review of The Feeling of Value by Sharon Hewitt Rawlette. Her book argues for a moral realism founded in normative qualia. My review claims that her moral realism does work to some extent but is incomplete. (The review is lengthy, and I don't remember all of what I said; there may be more to it than that.)

Cultural Moloch -- a post about how cultural drift could lead us to sacrificing old values (without even thinking of it as sacrifice).

"There's Nothing Special About Where You Came From" -- a heuristic that leads to a more objective, cosmopolitan worldview, or maybe to nihilism as a logical conclusion. Maybe consciousness is just a thing we are emotionally attached to because of where we came from.

MSLN -- a set of arguments, proceeding from weaker, more defensible / more mainstream claims to stronger, less defensible / less mainstream claims, which argue for: (1) a vague possibility that a certain kind of God exists; (2) a stronger sense that that God does exist, plus more information about him/her/them; (3) a thicker version of the same God, who grounds moral realism (this argument also implies a kind of proto- or minimalist Christianity, without necessarily getting the Bible involved); and finally (4) an articulation with work someone else has done which finds a compatible God / worldview in the Bible. What is found in all of the arguments is the view that God (or the possible God) is one who needs us to come to be 100% in tune with him so that we can live with him, and we have to do this ourselves on the most important level, that of aligning our own values with his. This process is necessarily finite in duration, but is too much for us to complete in this life; therefore, there will be another one in which we can complete it. This could be relevant to many EA concerns, but it is this far down the list because it's probably the furthest of what I've mentioned so far from mainstream EA. However, at least in the area of population ethics (and perhaps in others I'm not remembering), I did try to directly relate the logical consequences of "my version of God" to questions that EAs ask about what to do. The MSLN readings are long (if you look up all of them). (This is a work in progress.)

My blog in general -- In the latter part, from 2021 to now, there is more discussion of politics. I think if humans control AI, politics will become the biggest secular concern. I don't want to say I have above-average insights into politics, nor below-average, but I do think I have my own point of view which may be unique in some ways.

Each of the links to my writing so far in this post is found on my blog, dating from 2019 to 2022. I have also written some weirder (/ less mainstream-EA-like), more personal, more religious writing, both non-fiction and fiction.

I released a fiction book, entitled Waiting for Margot, that deals with rationality, religion, romance, friendship, altruism, cultural decay, and meaning. It's in the format of a sitcom, heavy on dialogue and themes.

I have a sequence of books, two of which are non-fiction and the rest fiction, which could be seen as the story of having the Drowning Child Illustration (plus some other driving ideas) stuck in your head while you try to deal with the isolation that comes from that, and with the dangers posed by the people around you who do not accept it.
