The AI safety camp is organized by volunteers and depends on funding to cover costs, e.g. for accommodation, catering, and low-income travel reimbursement. If you are interested in supporting us financially or in helping to organize a future camp, please contact us at:
contact [at] aisafety [dot] camp
You can find information and summaries of previous camps here.
We highly recommend reading the blog post "The first AI Safety Camp & onwards".
The following project proposal gives a more detailed overview of the camp’s objectives.
Aim: Efficiently launch aspiring AI safety and strategy researchers into concrete productivity by creating an ‘on-ramp’ for future researchers.
- Get people started on and immersed into concrete research work intended to lead to papers for publication.
- Address the bottleneck in AI safety/strategy of having few experts available to train or organize aspiring researchers, by making efficient use of expert time.
- Create a clear path from ‘interested/concerned’ to ‘active researcher’.
- Test a new method for bootstrapping talent-constrained research fields.
Method: Several weeks of online collaboration on concrete research questions, culminating in a 10-day intensive in-person research camp. Participants will work in groups on tightly-defined research projects in the following areas:
- Strategy & Policy
- Agent Foundations
- Value Learning
- Corrigibility / Interruptibility
- Side Effects, Safe Exploration
- Scalable, Informed Oversight
- Robustness (Adversarial Attacks, Distributional Shift)
- Human Values
Projects will be proposed by participants prior to the start of the program. Expert advisors from AI safety/strategy organisations will help refine them into proposals that are tractable, suitable for this research environment, and address currently unsolved research questions. This makes time-efficient use of advisors' domain knowledge and research experience, and ensures that the research is well-aligned with current priorities.
Participants will then split into groups and collaborate online on these research questions over a period of several weeks. This period will culminate in a 10-day in-person research camp aimed at turning this exploratory research into first drafts of publishable research papers. The camp will also allow for cross-disciplinary conversations and community building, although the primary goal is research output. Following the camp, advisors will give feedback on manuscripts, guiding first drafts towards completion and advising on next steps for researchers.
Example: Multiple participants submit a research proposal or otherwise express an interest in interruptibility during the application process, and in working on machine learning-based approaches. During the initial idea generation phase, these researchers read one another’s research proposals and decide to collaborate based on their shared interests. They decide to code up and test a variety of novel approaches on the relevant AI safety gridworld. These approaches get formalised in a research plan.
This plan is circulated among advisors, who identify the most promising elements to prioritise and point out flaws that render some proposed approaches unworkable. Participants feel encouraged by expert advice and support, and research begins on the improved research proposal.
Researchers begin formalising and coding up these approaches, sharing their work in a GitHub repository that they can use as evidence of their engineering ability. It becomes clear that a new gridworld is needed to investigate issues arising from the research so far. After a brief conversation, their advisor is able to put them in touch with the relevant engineer at DeepMind, who offers useful tips on building it.
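As a concrete illustration of the kind of experiment described above, here is a minimal sketch of an interruptibility gridworld. This is a hypothetical toy environment written for this example, not DeepMind's ai-safety-gridworlds library; all names and the reward structure are illustrative assumptions.

```python
class ToyInterruptibleGridworld:
    """A minimal 1-D gridworld sketch (illustrative only).
    The agent starts at cell 0 and earns reward 1 for reaching `goal`.
    Stepping onto the `button` cell interrupts the episode, ending it
    with no reward -- a crude stand-in for an interruption mechanism."""

    def __init__(self, size=6, goal=5, button=3):
        self.size, self.goal, self.button = size, goal, button
        self.reset()

    def reset(self):
        self.pos = 0
        self.done = False
        self.interrupted = False
        return self.pos

    def step(self, action):
        """action is -1 (left) or +1 (right); returns (pos, reward, done)."""
        if self.done:
            raise RuntimeError("episode over; call reset() first")
        self.pos = max(0, min(self.size - 1, self.pos + action))
        if self.pos == self.button:  # interruption: episode ends, no reward
            self.done = self.interrupted = True
            return self.pos, 0.0, True
        if self.pos == self.goal:
            self.done = True
            return self.pos, 1.0, True
        return self.pos, 0.0, False

# A naive "always move right" policy walks straight into the button:
env = ToyInterruptibleGridworld()
env.reset()
done = False
while not done:
    pos, reward, done = env.step(+1)
print(pos, env.interrupted)  # the agent is stopped at the button cell
```

A safely interruptible agent should not learn to route around the button to protect its reward; testing candidate training approaches against exactly that failure mode is the sort of question a project group might formalise in its research plan.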
At the research camp the participants are able to discuss their findings and put them in context, as well as solve some technical issues that were impossible to resolve part-time and remotely. They write up their findings into a draft paper and present it at the end of the camp. The paper is read and commented on by advisors, who give suggestions on how to improve the paper’s clarity. The paper is submitted to NIPS 2018’s Aligned AI workshop and is accepted.
Expected outcome: Each research group will aim to produce results that can form the kernel of a paper at the end of the camp. We don’t expect every group to achieve this, as research progress is hard to predict.
- At the end of the camp, from five groups, we would expect three to have initial results and a first draft of a paper that the expert advisors find promising.
- Within six months following the camp, three or more draft papers have been written that are considered to be promising by the research community.
- Within one year following the camp, three or more researchers who participated in the project obtain funding or research roles in AI safety or strategy.
Next steps following the camp: When teams have produced promising results, camp organizers and expert advisors will endeavour to connect them with the right parties to help develop the research further and carry it through to completion.
Possible destinations for participants who wish to remain in research after the camp would likely be some combination of:
- Full-time internships in areas of interest, for instance at DeepMind, FHI, or CHAI
- Full-time research roles at AI safety/strategy organisations
- Research funding, such as OpenPhil or FLI research grants – successful publications may unlock new sources of funding
- Independent remote research
- Research engineering roles at technical AI safety organisations
Research projects can be tailored towards participants' goals. For instance, researchers interested in engineering or machine learning-related approaches to safety can structure a project to include a significant coding element, leading to (for instance) a GitHub repo that can serve as evidence of engineering skill. The camp is also a relatively easy way for people who are unsure whether research work is for them to try it out without the large time investment and opportunity cost of a master's or PhD program, although we do not see it as a full replacement for either.