Random “Einstein quote”: “If you can’t explain something well, you don’t really understand it.” Taking the contrapositive: “If you really understand something, you can explain it well.” Had we asked Einstein how to create a great course, this might have been his sentiment.
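
As a quick formalization (with $E$ standing in for “can explain it well” and $U$ for “really understands it”, labels we are introducing just for this line), the flip from the quote to its converse-looking form is simply contraposition:

$$(\lnot E \rightarrow \lnot U) \;\equiv\; (U \rightarrow E)$$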

Altruism is weirdly motivating. Many people in the LessWrong tribe are eager to help out, eager to jump into AI safety and become researchers. Everyone wants to be the hero, right? “I might actually save the world!”

“But God forbid I fail. I’m probably not good enough to actually become a researcher, so why even try?”

So let’s study AI safety together, for the sake of everyone. We have a curriculum and a study group up and running, and the group is doing two things:

  1. Creating an online course that will allow people to quickly learn everything needed for AI safety research. Right now this means creating exercises for the curriculum.
  2. Having members of the study group learn this material. Some of us started out knowing very little of the curriculum.

Currently, the curriculum includes set theory, logic and computability theory. You will have peers to discuss things with and get motivation from.

This study group started on 2018-04-20. Released content is available in our online course. You will probably need to do some catching up, but you can join us anyway. We hold optional video chat sessions every Wednesday, 11:00–14:00 UTC, during which we study or work on the online course.

To join, fill in the RAISE study group signup form. It asks: What do you do? How did you find us? How do you relate to the LW community? Do you have any related interests?

Note that we conduct our meetings over Facebook Messenger.

Related reading on why clear explanations matter: https://distill.pub/2017/research-debt/