Random “Einstein quote”: “If you can’t explain something well, you don’t really understand it.” Take the contrapositive: “If you really understand something, you can explain it well.” Had we ever asked Einstein how to create a great course, this might have been his sentiment.

But we need not go beyond the comfort of our tribe to find this meme. Our very own Peter Singer expressed it just as well when he mentioned his suspicion that “whatever cannot be said clearly is probably not being thought clearly either.”

If we want high-quality explanations, we need high-quality understanding. Not just quickly scrolling through a few papers the night before a meeting, but reading them closely and summarizing them with attentive review.

Altruism is weirdly motivating. There are many people in the LessWrong tribe that are very ready to help out, very ready to jump into AI safety to become researchers. Everyone wants to be the hero, right? “I might actually save the world!”
“But God forbid I fail. I’m probably not good enough to actually become a researcher, so why even try?”
So let’s study AI safety together, for the sake of everyone. Let’s do it really well. Peruse the papers, summarize them, read each other’s summaries, review, and discuss.
We are always welcoming new study group volunteers to contribute directly to creating the course. Currently, our sessions are on Friday afternoons (14:00 UTC), but you are welcome to work in your own time.
RAISE study group signup
What do you do? How did you find us? How do you relate to the LW community? Do you have any related interests?
Because we conduct our meetings over Facebook Messenger