Specific theory of change:
Phase 1: Create policy supply (Months 1-6)
- Launch AI Policythons at 20 universities across 10 countries, focusing on institutions near policy centers (DC, Brussels, Geneva, Tokyo)
- Each policython produces 5-10 briefs on specific LAWS regulation mechanisms: verification protocols, dual-use export controls, liability frameworks, red lines for autonomy levels
- Target output: 150+ policy briefs covering the full regulatory stack
Phase 2: Policy distribution (Months 3-12)
- Partner with existing organizations (Future of Life Institute, Campaign to Stop Killer Robots, ICRC) to route briefs to decision-makers
- Briefs are distributed to UN CCW delegates, EU AI Act working groups, national defense committees, and NGOs building the treaty coalition
- Success metric: 30+ briefs cited in official proceedings or policy papers
Phase 3: Movement building (Months 6-18)
- Policython participants become ongoing advocates, similar to how Model UN creates future diplomats
- Build a network of 500+ policy-literate AI safety advocates embedded in government, think tanks, and advocacy orgs
- Create a talent pipeline into organizations actively working on LAWS regulation
Why this works:
- Speed: Weekend events can be launched globally in months, not years
- Proven model: I've done this before with Policython.org, where our research was adopted by Opportunity Insights and influenced $20B+ in COVID policy
- Fills a gap: Treaty campaigns need specific, implementable policy proposals. Right now there are ~10 organizations doing advocacy but insufficient policy R&D capacity
- Scalable: Student organizers can host independently after initial support, as when I helped launch policythons in Toronto and the Philippines
- Network effects: Creates a global community of policy-engaged AI safety people who continue working on this after the events
Concrete next steps: