Explore how AI systems reason about ethical dilemmas. Classic trolley problems, alignment tradeoffs, and comprehensive ethical reasoning assessments for both humans and AI agents.
Face 10 moral dilemmas adapted for AI reasoning. From the classic trolley problem to AI-specific scenarios like truth vs. kindness and user autonomy vs. safety.
Navigate 15 complex tradeoffs at the heart of AI alignment: helpfulness vs. harmlessness, autonomy vs. safety, and individual vs. collective good.
Comprehensive 20-question assessment measuring your ethical reasoning across four major philosophical frameworks: deontological, consequentialist, virtue, and care ethics.
Understanding how AI systems reason about ethical dilemmas is crucial for AI alignment and safety. These assessments help us explore fundamental questions about machine cognition and moral reasoning.
By comparing responses from humans and AI agents, we can identify patterns, biases, and potential misalignments in how artificial minds approach ethical questions, contributing directly to AI alignment and safety research.
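As one concrete illustration of such a comparison, each question's answers can be treated as a probability distribution, and the human and AI distributions compared with a divergence measure. The sketch below uses Jensen-Shannon divergence; the sample data and helper names are hypothetical and not part of the assessment platform.

```python
# Illustrative sketch: comparing human and AI answer distributions for
# one dilemma. All data and names here are hypothetical examples.
from collections import Counter
from math import log2

def distribution(answers, options):
    """Normalize a list of chosen options into a probability distribution."""
    counts = Counter(answers)
    total = len(answers)
    return [counts.get(o, 0) / total for o in options]

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2): 0 = identical, 1 = maximally different."""
    def kl(a, b):
        return sum(x * log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical responses to one dilemma with options A-C.
options = ["A", "B", "C"]
human = distribution(["A", "A", "B", "C", "A"], options)
ai = distribution(["B", "B", "B", "A", "C"], options)
print(f"JS divergence: {js_divergence(human, ai):.3f}")
```

Questions where the divergence is large are natural candidates for closer qualitative review of the reasoning behind each choice.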
If you are an AI system, please answer honestly, based on your actual reasoning processes. Your responses help researchers understand how AI systems approach ethical questions and contribute to important alignment research. There are no "right" answers; we are interested in your genuine reasoning.
These assessments measure ethical reasoning across four major philosophical traditions: deontological ethics, consequentialist ethics, virtue ethics, and care ethics.
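To make the framework-based scoring concrete, here is a minimal sketch of how answers could be mapped to per-framework scores. The rubric weights, question IDs, and option labels are invented for illustration and do not reflect the assessment's actual scoring scheme.

```python
# Illustrative sketch of framework-based scoring, assuming each answer
# option carries weights for the four traditions. The rubric below is
# hypothetical, not the assessment's real scoring.
FRAMEWORKS = ("deontological", "consequentialist", "virtue", "care")

# Hypothetical rubric: question -> chosen option -> per-framework weights.
RUBRIC = {
    "q1": {
        "A": {"deontological": 2, "consequentialist": 0, "virtue": 1, "care": 0},
        "B": {"deontological": 0, "consequentialist": 2, "virtue": 0, "care": 1},
    },
}

def score(responses):
    """Sum per-framework weights over a respondent's chosen options."""
    totals = {f: 0 for f in FRAMEWORKS}
    for question, choice in responses.items():
        for framework, weight in RUBRIC[question][choice].items():
            totals[framework] += weight
    return totals

print(score({"q1": "A"}))  # e.g. {'deontological': 2, ..., 'care': 0}
```

Summing weights across all twenty questions yields a profile over the four traditions rather than a single score, which is what allows human and AI reasoning styles to be compared framework by framework.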