About & FAQ

What is AI Safety Camp?

We help people who want to work on AI Safety team up on concrete projects.


There are two ways for you to join AI Safety Camp:


As a research lead, it’s your job to suggest and lead a project. You apply by sending us a project proposal. We’ll give you feedback to improve your proposal, and if you’re accepted we’ll help you recruit a team.


As a team member you’ll join one of the projects suggested by the research leads. What you’ll be doing depends entirely on the project and your role in it.


We ask all participants (including research leads) to commit at least 10h per week for 3 months (mid January to mid April). Several teams continue to work together after AISC is over, but you’re only committing to the initial 3 months.


AI Safety Camp is entirely online, and open to participants in all time-zones.

What do you mean by “AI Safety”? 

I.e. what kinds of projects are you looking for?


We as organisers do not have an entirely unified perspective on the exact nature of AI risks, and we do not require AISC participants to share any of our concrete views, either.

We believe that AI progress could potentially lead to human extinction, or at the very least poses severe large-scale risks, and that it is imperative to work towards ensuring that future AIs are both developed and deployed in robustly safe ways. Put inversely, we want to ensure that uncontrollably unsafe AI is not developed at all. We welcome a diversity of approaches and perspectives in service of this goal.


Each of us organisers (Remmelt, Linda, and Robert) is unilaterally allowed to accept projects (although we do listen to each other's advice). This means that, if you want to lead a project, it’s enough to convince one of us that your project is worthwhile. When submitting a project proposal, your application will be handled by whichever of us has assumptions most aligned with your project proposal, subject to workload constraints.

Remmelt’s perspective – let’s stop harmful AI 

By safety, I mean constraining a system’s potential for harm.


To prevent harms, we must ensure that future AI systems are safe:


Today, companies recklessly scale the designs and uses of AI models, with disregard for human safety. To hide this, companies use researchers to give users the impression of safety rather than actual safety. Safety researchers chase after the companies – trying new methods to somehow safely contain the growing combinatorial complexity (and outside connectivity) of models already in use. Billionaires sympathetic to the cause even fund the companies to start up ‘safely’. This is a losing game.

Sincere researchers strove to solve lethal risks. Instead, they discovered deeper problems that they could at best solve partially, relying on fragile assumptions that other researchers then questioned. No one has found a method to control machinery once it starts scaling itself (to keep it from converging on deadly changes to our environment that serve its self-preservation, etc.). Researchers are in fact discovering sharp limits to controlling AI.


There is dignity in informing the public: ‘We did our best to solve safety for autonomous open-ended AI systems. Sadly, we discovered that this problem is intractable.’


Therefore, it is not on us to solve all the risks that accelerationist CEOs and their engineers introduce by releasing unscoped designs. It is on us to hold firm: ‘You shall not pass. No longer shall we allow your reckless behaviour to put our world in true peril.’


We are not alone. Many communities want to prevent companies from harmfully scaling AI. Creatives and privacy advocates aim to stop AI freely feeding on personal data. Workers and whistleblowers aim to stop cheap but shoddy automation. Consumer organisations and auditors aim to stop unsafe but profitable uses. Environmentalists and local country folk aim to stop the polluting energy-slurping data centres. 


Let’s ally to end this careless pursuit of ‘powerful AI’, which comes at the cost of everything we hold dear in life.


Some reasons to start a project:


Email me if you are taking initiative and looking for collaborations. I’m limited on time, but would gladly share my connections and offer thoughts on your questions.


As an AISC organiser, I take on projects that are well-scoped around an aim to robustly help pause/stop AI, and that are considerate of other communities’ concerns about AI. For examples of projects, see last round.


I’m excited about:


Linda’s perspective – mech-interp and miscellaneous 

I’m worried that out-of-control AI may kill everyone on earth. I would very much like this not to happen. I have some ideas and opinions on what are good research directions for preventing this outcome, but I’m also open to all sorts of project ideas, as long as you have a coherent theory of change.


Theory of change is your story for why your project would help. If you succeed, why is the world now a safer place?


I think it’s important to try a diversity of approaches to AI safety, since we don’t yet know what will work. And I further think the way we get real diversity is for as many people as possible to think for themselves and pursue whatever they themselves find promising, even if their intuitions don’t line up with mine.


That said, I personally think that there are two broad categories of ways to avoid AI-driven extinction:


Or more likely, some mix of the above, such as:


Mech-interp is short for mechanistic interpretability, which means research that tries to understand what neural nets (NNs) are doing by analysing what is going on inside them. I don’t expect us to be able to align something we don’t understand, which is why understanding NNs is the first step. With mech-interp solved, we can see what various alignment techniques are actually doing to the NNs. But more importantly, we’ll be less fundamentally confused about what it is we’re dealing with. Not being confused about a problem is typically a prerequisite to finding solutions.
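To make “analysing what is going on inside” a bit more concrete, here is a minimal illustrative sketch (not taken from any AISC project) of the most basic ingredient of this kind of work: recording a model’s internal activations so they can be inspected. The toy two-layer network and layer names are made up for the example; real projects study actual trained models.

```python
# Minimal sketch: capture intermediate activations of a small PyTorch model.
import torch
import torch.nn as nn

# A toy stand-in for whatever model a mech-interp project actually studies.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

activations = {}  # layer name -> captured activation tensor

def save_activation(name):
    # Returns a forward hook that stores the layer's output under `name`.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register the hook on the hidden ReLU layer we want to look inside.
model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(4, 10)  # a small batch of dummy inputs
_ = model(x)            # running the model triggers the hook

print(activations["hidden_relu"].shape)  # torch.Size([4, 32])
```

Actual interpretability work then tries to explain what structure in these activations corresponds to, which is where the hard research questions start.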


I know that there are also smaller-scale harms and risks from AI (i.e. smaller than total human extinction). I am happy that there are people paying attention to these problems, both because they are real issues in their own right, and because there are possible synergies between focusing on smaller and more current harms and solving the problem I most care about. But if I’m honest, if I were not concerned with extinction-scale risks from AI, I would probably be doing something other than co-running AI Safety Camp.


I’m excited about:


Robert’s perspective – approaches that seem conceptually sound 

I believe that advanced AI systems are unlike any previous technology in their potential to have vast and diverse impacts across many domains and scales, and in their potential to exhibit dynamics that make them hard to control. In addition, leading AI companies explicitly aim to advance their systems' problem-solving capabilities beyond human abilities, a goal that seems increasingly within reach.


I think this is incredibly reckless, because it doesn’t seem likely that current alignment or control techniques will scale with AI systems becoming ever more capable and distributed.


Once we cross the threshold of creating AI systems that are competent enough to disempower us or drive us extinct, we probably only get one critical try. We don’t know where that threshold is, but I think we should take seriously the possibility that it is not far off.


As an AISC organiser, I am interested in your project if it is aimed at addressing this overall risk scenario or a part of it.
Since I don’t think that we have a sufficient conceptual handle on the problems yet, I welcome diverse and speculative projects that cover more ground in terms of exploring frameworks and angles of analysis. Basically, as long as you can explain to me why the project might be useful for AI Safety, I will lean towards accepting it.


In particular, I’m excited about:


Because people rarely receive formal academic training in conceptual research, the bar on project leadership and flexibility will be a bit higher than for projects with more tractable and concrete milestones. I’ll want to check your thinking about how to ensure that your team members spend their time efficiently, rather than on unproductive confusion.


However, don’t worry about your project getting rejected if you are still figuring that out. As with other aspects of the project proposal, I’ll be happy to discuss this and give you time to refine your approach. I just want this to be developed by the time your project is opened for the team member applications.

The structure of AISC

This section describes the current format of AISC, which we’ve been doing since 2023. We expect to keep this structure for the foreseeable future, because we found that this format works very well, and is efficient in terms of organiser time.


The goal of this structure is to help collaborators find each other. More specifically, we set up teams to collaborate on concrete projects, part time, for 3 months. AISC is about learning by doing.

Applications / Team formation

The first step in doing this is opening up the research lead (RL) applications. Anyone with an idea for a project is invited to send us a project proposal. Next, we’ll give you feedback on your project, and some time to improve it. We aim to have at least one call with every RL applicant, where we discuss your project and, if needed, lay out what you would need to fix to get your project accepted for AISC.


The second step is us publishing all the accepted projects on our website, and opening up applications to join each project. We encourage everyone who has some spare time, and is motivated to reduce AI risk, to have a look at all the projects, and apply to the one that interests you. 


Next, each RL will evaluate the applications for their project. It’s the job of the RL to interview and choose their team. RLs are given guidance from the organisers on how to do this, but it’s up to each RL to decide who they want on their team.


It’s the job of the organisers to onboard and support the RLs. It’s the job of the RLs to onboard and support their team members. 


The program

We start together with a joint online opening session, and end together with every team presenting their results. In between, each team mostly works independently on their project.


Each team will have their project proposal, written by the research lead when they applied and approved by the rest of the team when they chose this project to apply to. This means that you know what to do, at least to start with. The RL will guide the research project and keep track of relevant milestones. When things inevitably don’t go as planned (this is research, after all), the RL is in charge of setting the new course.


We require that every team have weekly team meetings. Other than that, each team is free to organise itself in whatever way works best for the team and the project.


There will not be much else happening aside from your projects. We have found that when we try to organise other activities, most participants prefer to spend their often limited time on their team’s projects. We’ll probably do something to encourage inter-team interactions, but we’re still figuring out how to best facilitate this.


Ending / After

At the end of the program, it is up to each participant to decide whether you want to continue working together, either continuing the project or possibly starting something new, or whether it’s time to go your separate ways. Some teams stay together, and multiple orgs have come out of AISC.

Timeline (for the 10th AISC)

We don’t know what the future will hold, but as long as the world has not gone too crazy, we expect to follow this approximate timeline for future AISCs too.


2024

Late August - September 22: 

Research lead (RL) applications open.


Late August - October 20: 

We help the research lead applicants improve their project proposals.


October 25 - November 17: 

Team member applications are open.


November 18 - December 22: 

RLs interview and select their team members.


2025

January 11-12:

AISC opening weekend.


Mid-January to Mid-April: 

The camp itself, i.e. each team works on their project.


April 26-27 (preliminary dates):

Final presentations 


After April: 

AISC is officially over, but many teams keep working together.


FAQ

Questions regarding Research Leads 

What are the requirements to be an AISC research lead?

Your research lead application will mainly be evaluated based on your project proposal (see next question). We think that if you can produce a good plan, you can probably also lead a good project. However, we still have some minimum requirements for research leads.


The most important requirement is that you have enough time to allocate to your AISC project. You will need to read applications and conduct interviews before the start of the program, and you need to spend at least 10h per week on your project throughout the program.


If you’re going to lead a research project you need to have some research experience, preferably in AI safety but any research background is ok. For example, if you are at least 1 year into a PhD or if you have completed an AI Safety research program (such as a previous AI Safety Camp, MATS, PIBBSS, etc), or if you have done a research internship, then you are qualified. Other research experience counts too. If you are unsure, feel free to contact us.


We also accept non-research projects. In this case you’ll need some relevant experience for your particular project, which we’ll evaluate on a case by case basis. 


Regardless of project, you also need some familiarity with the topic or research area of your project. You don’t need to have every skill required for your project yourself, since you will not be doing your project alone. But you need to understand your project area well enough to know what your knowledge gaps are, so you know what skills you are recruiting for.

How are project proposals evaluated?

As part of the RL application process we will help you improve your project plan, mainly through comments on your document, but we also aim to have at least one 1-on-1 call with every RL applicant. Your application will not be judged based on your initial proposal, but on the refined proposal, after you have had the opportunity to respond to our feedback.


Every project is different, and we’ll tell each RL applicant which (if any) aspects you need to improve to be accepted. But in broad strokes, your project proposal will be judged based on:


What is the theory of impact of your project? Here we are asking about the relevance of your project work for reducing large-scale risks of AI development and deployment. If your project succeeds, can you tell us how this makes the world safer?


Do you have a well-thought-out plan for your project? Does this plan have a decent chance to reach the goal you set out for yourself? How well does your plan fit the format of AISC? Is the project something that can be done by a remote team over 3 months? If your project is too ambitious, maybe you want to pick out a smaller sub-goal as the aim of AISC?

What are the downside risks of your project? What is your plan to mitigate any such risks? The most common risk for AI safety projects is that your project may accelerate AI capabilities. If we think your project will enhance capabilities more than safety, we will not accept it.

What should the project proposal look like?

Here’s our template for project proposals. Please follow this template to make it easier for team member applicants to navigate all the projects. 


See here for projects that were accepted for AISC9. You have to follow the links to see the full project proposal for each project. 

What can I get out of being a research lead?

Questions regarding other participants (aka team members) 

What are the requirements to be an AISC team member?

You need to be able to spend on average at least 10h per week on whichever project you’re joining, from mid-January to mid-April.


Most projects (but not all) will also have some skill requirements, but that’s different for each project. For some projects, specific skills are less important than attitude, or having plenty of time to contribute.

What can I get out of participating in AISC?

What is the application process like? 

The steps are as follows:

Can I join more than one team? 

No.


When we have allowed people to join more than one team in the past, they have always ended up dropping out of at least one of the projects. However, we are not stopping you if you want to informally collaborate across team boundaries.

General questions

What are ways to help stop harmful corporate-AI scaling?

There are many ways to help stop AI. So many that it gets overwhelming.

Here’s an overview that might come in handy.


You can restrict:

A corporation extracts resources to scale AI:


This extraction harms us.


Communities are stepping up to restrict harmful AI, and you can support them!  


For example, you can support legal actions by creatives and privacy advocates to protect their data rights. Or encourage unions to negotiate contracts so workers aren’t forced to use AI. Or advocate for auditors having the power to block unsafe AI products.


You can learn more by reading this short guide.

What are some other ways to get involved in AI Safety research? 

Some other AI safety research programs


Or look for upcoming events or programs here: Events & Training – AISafety.com


SPAR is probably the program that is most similar to the current version of AISC, since SPAR is also online and part-time. 


We’d also like to highlight Apart Sprints, which are weekend-long AI Safety hackathons that you can join online from anywhere in the world (as long as you have internet).


If you don’t know where to start, talk to AI Safety Quest for some guidance.


If you want to take some time to learn more, either by yourself or with a group of friends, here are some curriculums you can use:


How is AISC different from other similar programs?

When AISC started, there were no similar programs. Now there are lots, which we are happy about. But we still think there are ways AISC stands out. 


Other programs have mentors; AISC has research leads (RLs). The main difference between an RL and a mentor is that we require our RLs to be actively involved in the project. The RL is not just an advisor but also a team member themselves. Another difference is that we don’t require RLs to be experienced AI safety researchers. RLs should have some research experience, but mostly RL applicants will be evaluated based on their project proposal and not on their CV.


However, we also don’t think that it is necessary for AISC to be different to be valuable. It’s more important for us to do a good job than to be unique. The interest in AI safety is growing quickly, and there is clearly enough interest for all the programs we are aware of.

Will there be any future in-person AISCs?

AISC used to be an in-person event. We shifted to online during the pandemic. After that, AISC alternated between in-person and online formats for a time, since we found both valuable in different ways. We went back to online-only in 2023, since the funding situation for AI safety had gotten worse, and online events are much cheaper to run.


We currently don’t have the funding or the staff necessary for an in-person AISC. If you want to help us change this, please reach out.


However, even though AISC is currently online, there is nothing stopping you and your team from meeting up in person during the program. For example, EA Hotel offers free housing, food and co-working space for people working on important altruistic projects (e.g. AI safety). If you apply to stay there and tell them you’re an AISC participant, you’re likely to be accepted.

Do AISC team members get stipends? 

Maybe, if we can afford it. 


We’ve been able to pay out stipends to AISC participants for the last few years, although the amounts have varied. 


Stipends are by far the most expensive part of the current version of AISC.

I live in the Asia-Pacific – will my time-zone be a problem? 

Camp-wide activities, such as the opening and closing weekends, will end up at inconvenient hours for you. If you want to skip these, you’re excused.


Team meetings, however, are much more important. You absolutely need to be able to attend the majority of your team’s weekly meetings. Whether this is a problem depends on who else is on the team and where they live.


It’s possible to have calls with people from two out of the three continental regions: Americas, Europe/Africa, or Asia-Pacific. But usually not from all three regions at the same time.


Meetings are much more important when collaborating remotely.


In a general work or research environment where everyone works in the same office, it’s often a good idea to eliminate meetings as much as possible. But in those environments you can easily communicate with each other all the time, and you get a lot more casual interactions with people. For remote teams, the weekly meeting may sometimes be the only involvement you have with the project that week.


We’ve noticed that when teamwork breaks down, it often starts with failing to have regular team meetings.

Any advice for remote team work, for AISC teams or otherwise? 

Yes, see our tips for starting as a new team here.

I find AI extinction risks stressful. How do I deal with this? 

This is not easy.


If you don’t know anyone else who is thinking seriously about this problem, a good first step could be to simply find others who understand your concern. Not being alone helps. Look for communities and events where you can find like-minded people.


We also recommend this blogpost, Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023), which contains: 


Group photo from the first AISC


Can I join AISC several times?

Yes! 


We welcome returning participants. You’ll bring both your research experience and your experience with this type of teamwork, which will benefit your new team.

Why do you insist that all teams have weekly meetings?