What do you mean by safety?

By safety, we mean constraining a system’s potential for harm.

  • Safety is context-dependent. Safety is not intrinsic to an AI model: harms result from the artificial components’ interactions with a more complex surrounding world.
  • Safety must be comprehensive. Safety engineering is about protecting local users and – from there – our society and ecosystem at large. If you cannot even design an AI product to not harm current users, there is no sound basis to believe that scaling your design will not harm future generations.
  • The precautionary principle weighs against introducing new scalable technology:
    There are many more ways to break the complex (local-context-dependent) functioning of our society and greater ecosystem – which we humans depend on to live and live well – than there are ways to foster that life-supporting functioning.

❝ The majority of algorithmic assessments aim to audit general properties of a system without considering its operational envelope. The lack of a defined operational envelope for the deployment of general multi-modal models has rendered the evaluation of their risk and safety intractable, due to the sheer number of applications and, therefore, risks posed.
—Dr. Heidy Khlaaf
  Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems

❝ At first such changes might not result in inevitable human extinction. However, many vectors of change would, if compounded over time, end up violating one of the many physical, chemical, and biological prerequisites needed for human survival. Over the past century it has become clear that human-driven changes to the Earth have the potential to destroy the human species as a side effect. The variety of possible AI-driven changes expands and accelerates this potential. 
— Dr. Andrew Critch and Dr. David Krueger
    AI Research Considerations for Human Existential Safety


Why the two areas for projects?

We hold different views as organisers, so we facilitate different projects:

  • Remmelt facilitates projects to restrict corporations from recklessly scaling the training and uses of ML models. 
    Remmelt got a rude awakening from an ex-Pentagon engineer’s reasoning:  Any self-sufficient learning machine propagates effects too uncontrollable to stay safe. Their population of components, of learned code and produced hardware, would come to change the environment in any directions that happen to maintain and increase their artificial existence – destroying our organic ecosystem.
  • Linda facilitates diverse other projects, including technical control of AGI in line with human values.
    Linda is uncertain about which approaches would genuinely help ensure that future AIs are safe.  With her years of technical experience in AI Safety, she wants to empower thoughtful people to work on what they think makes sense.

We end up having lots of interesting discussions as organisers!  We hope you will too.


How do I know I am prepared for AISC?

Each project has different skill requirements. Please check the skill requirements for the project(s) you are interested in.


What can I specifically get out of the camp?
  • Meet insightful and competent individuals dedicated to ensuring future AI is safe.
  • Learn about research concepts, practices and mindsets in the field.
  • Deliberate how to move a project forward with a research lead.
  • Find collaborators to work with over the months and years ahead.
  • Be more systematic and productive in your project work.
  • Learn what it’s like to take on a specific role in your team.
  • Test your personal fit to inform your next career steps.




What is the application procedure?

By 1 Dec:  Fill in the questions doc and submit it through the form. This includes indicating which projects you are interested in.

Dec 1-22:  You may receive an email for an interview from one or more of the research leads whose projects you applied for. Each research lead is responsible for selecting their team, including deciding whom they choose to interview.

By 28 Dec:  You will definitely know whether you are admitted. Hopefully we can tell you sooner, but this is the date by which we pinky-swear you will know for sure.


How can I prepare for the interview?

While the research lead’s evaluation mostly hinges on your written answers, the interview is your opportunity to clarify and collaboratively discuss your ideas in real time.

Here’s how you can prepare:

  • Look through the answers you gave in the application document (especially for your selected project) and make sure you can answer follow-up questions the research lead might ask to clarify your reasoning.
  • Five minutes before the interview, point a light source at your face, put the camera at eye level, and check that your microphone is working – so the research lead can follow what you’re saying.


Might I be admitted for my 2nd or 3rd pick project?

If you are declined for your 1st pick project, there is still a chance that the research lead of your 2nd or 3rd pick project reads your application and invites you for an interview. You may even receive multiple invites.

We know from last camp that many projects will get too many applications to review them all. When this happens, the research lead prioritizes applicants who ranked their project higher.

Last year, the research lead for the most popular project only had time to review applications from people who chose that project as their first choice.

We’re sorry that we can’t give all of you a fair chance for all the projects you are excited about, but we just don’t have the staff to do this.

Might I be admitted for a project that is not one of my top 3?

Probably not.

We let you indicate interest in more than 3 projects to cover the eventuality that some project attracts enough interested people to happen, but is almost no-one’s top-3 choice because other projects are more attention-grabbing.

In this case, the research lead will look beyond the people who chose their project as one of their top 3 choices. But this will not be the case for most projects, and might not be the case for any project.


The money question: how much does AISC cost? / will I get a stipend?

Attendance is free. 

  • If you have extra money to share, we’d appreciate a donation, since AISC is currently low on funding.
  • If you yourself are low on money, we still have a small stipend pot. You can request a stipend in the application.
    We expect to be able to pay around 1000 USD per participant.




How is AISC different from other AI safety programs or research internships?

There are currently many ways to get involved in AI safety. You can have a look here for a diverse list of events and programs.

One way our virtual edition stands out is that it combines learning-by-doing with accessibility: you get to contribute to a real research project, and you can join from any part of the world. See also Alignment Jam, who organize open AI safety hackathons.



What is the difference between research leads and mentors?

Research leads have experience doing research (e.g. they are a PhD student or have completed a research program). They are personally working on and leading the project you are applying for.

Mentors are often senior researchers paid full-time to work on AI Safety, who mentor projects on the side for a program. The downside is that mentors are tight on time and usually unable to offer much personal guidance.

In summary, a typical mentor is more senior but less involved than a typical research lead, but there is also a lot of overlap.


How much of a time commitment is AISC?

From January to April 2024, we expect you to:

  • join weekly team calls.
  • be able to block out at least 5 hours of work each week
    (some research leads may ask for more hours – check “Skill requirements” in their doc).

If you are too busy to work during a few weeks, that’s okay. Just let the research lead know in advance, and make up for it during other weeks.
But please join every team call you can, because that’s your moment to check in on where the project stands and resolve any confusion.

Narrow down the scope of your research, and try not to take on further commitments beyond those you already have. It’s common for graduating students to doubly underestimate the hours they’ll end up spending – to complete both their university thesis and their research at the camp. This gets sucky for their AISC teammates when they become slow to respond or to finish obvious tasks.

Avoid being just another person who plans optimistically: account for the usual ways that projects like this get side-tracked for a while. To reliably estimate the time you’ll be able to spend on another commitment, recall a similar project from your past and tally up roughly how many hours it took to complete.


Can I join more than one team?

No, you won’t be able to officially join two teams at the same time.

AISC participants have to choose one team as their main team. That said, nothing stops anyone from informally helping out other teams, and we encourage cross-team collaborations whenever this seems useful.

Why this rule?
It’s more likely than not that things won’t go entirely as planned. Either some step of the project takes more work than expected, or other things in life happen that absorb time and energy. When this happens, we want each participant to have a primary commitment to only one team.

Any help a team member offers to other teams should be treated as loose intentions, not strong promises.


Any tips on how I, as team coordinator, could help my team?

Yes, a bunch!

When the research lead asks you to be team coordinator, plan a separate call.
Dedicate an hour to talk things through. Clarify the project’s milestones and essential outputs. Check where you can help with logistics. Note down the tasks you will handle each week to coordinate meetings, coworking, and task tracking.

Notice where you can help the team run more smoothly, and act to remove any bottlenecks.
Here are things to consider:


  • Make sure you meet weekly as a team, at a regular time.
    • When2meet is a good tool for finding overlapping free time.
    • Don’t lean too heavily on the assumption that the time chosen in week 1 will work forever. Most team members have other commitments that will override AISC. 
    • At the end of each meeting, check whether everyone can again make the default meeting time next week.
    • If you are regularly 1 or 2 people short at the regular meeting time, switch to finding a new time every week.
      • Work out the meeting time with everyone 2 or 3 days before the meeting, rather than right after the last meeting, for a better chance that everyone can make it. This also acts as a prompt to team members that the next meeting is soon and they need to get stuff done.
    • JJ’s take: For these types of groups, I generally recommend erring on the side of meeting too often. In a work or research environment where everyone shares an office, I would prefer to eliminate meetings as much as possible – but there you can easily communicate with each other all the time and get many casual interactions. Here, the meeting may be your only involvement with the project that week.

Coworking and paired work

  • Co-working tips:
    • Agree on session times & break times beforehand
      • Establish what kind of work routine suits best for you and for the task at hand; e.g. pomodoros (coding) vs sustained effort (lit review)
    • Set the goals of the session at the start and brief each other at the end
      • Try to avoid having your break time coinciding with briefing time
        • Down-time for recuperating is valuable & invigorating
      • Sometimes we don’t manage to do exactly what we set out to do – that’s alright, tangents can lead us places. However, when briefing each other, try to be efficient & progress-relevant.
  • Pair programming tips  (kudos @Ben Greenberg):
    • The basic purposes of pair programming are for the participants to (1) stay engaged with the problem and (2) help each other get un-stuck.
      • The pair should agree on their goals/conceptual understanding before coding. If there are disagreements, or the pair gets stuck, they should try to resolve their confusion before trying random fixes.
    • The roles: one person “drives” (codes) while the other watches. Ideally the “watcher” is responsible for thinking a few lines ahead (and on a slightly higher level of abstraction), while the driver writes code line by line.
      • If the driver gets stuck, the watcher helps them debug.
      • If the watcher doesn’t understand what the driver is doing, they discuss until both participants are on the same page.
    • The pair should switch roles every 20-30 minutes (or however long makes sense) in order to keep both participants engaged.
      • If one member of the pair is faster than the other, the faster participant should be patient when driving (and watching). It might make sense for the faster participant to spend more time driving (while taking care to make sure their watcher isn’t getting lost). If the gap in skills is too large, pair programming might not be a good idea.

Accountability and task management 

  • Document your goals, tasks and individual responsibilities. It’s easy to “talk” about what everyone’s doing and leave with the impression that you share an understanding. Writing it down reveals any disconnects.
  • Take the lead on listing crucial tasks at the end of each meeting. Check back on the list at the start of the next meeting.
  • Try listing the tasks on a platform that teammates liked using before, like Asana or GitHub Issues. Assign each agreed-on task to someone by the end of each meeting. When a task is overdue, DM the assignee to ask how it’s going, then suggest updating the task based on that.
  • Take ownership of essential tasks, and move nice-to-have tasks to a backlog column. Your life is busy. Work takes longer than you expect. Do the essential work you committed to, or post a quick, clear explanation of why you didn’t get around to it!


  • Please be generous and kind. Treat others with respect.
  • Everyone has their own preferences, expectations, working styles, annoyances, etc. It is valuable to make your teammates aware of them. In turn, please respect your teammates’ needs, even if yours are different.
  • Share your ideas. Let others get a sense of your thoughts, uncertainties, and intuitions.
  • Ask others what they are thinking about, especially if they haven’t said much about a direction the team is considering or heading in.
  • If you are a person who talks a lot, be mindful of this. Ask quieter teammates for their thoughts on areas where they have experience or responsibility.


I find AI extinction risks stressful. How to deal with this?

We can relate to this.

As organisers, we think that controlling ‘AGI’ to stay safe is very unlikely to work, if not fundamentally unsolvable.
There is also ambiguity around what ‘AGI’ even is, how such a technology could be introduced into society, how long this would take at minimum, etc.
These are resolvable questions, but still uncomfortable to face. We must reason this problem through comprehensively and consistently.

The weight on your shoulders from this can be a lot. Some participants live in remote areas, where they do not even have anyone to talk with about this.
We will try to offer peer-to-peer calls – called Microsolidarity – where you can listen and relate.

We can relate to the overwhelm, the stress, and the personal insecurities.
None of us are mental health professionals though. Sometimes talking with a professional is needed.

First and foremost, try to be kind to yourself and take care of yourself!  Sleep and exercise.


I live in the Asia-Pacific. Would I be able to make team calls during the day?

This depends on who else is on the team and where they live. It’s possible to have calls with people from Asia-Pacific and Europe/Africa, or Asia-Pacific and the Americas, but usually not from all three time-zone regions.

If you get invited for an interview, definitely bring this question up!



Still have questions? Contact us at contact@aisafety.camp.