A moral dilemma in AI is a situation in which an AI system must choose between actions that pit conflicting moral principles or values against each other. For example, imagine an AI system designed to drive a self-driving car. If the car suddenly lost control and the system had to choose between two options: swerving to avoid a pedestrian but crashing into a wall and potentially injuring the passengers, or staying on its current course and hitting the pedestrian, it would face a moral dilemma, because each option protects one party only at the expense of the other.
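One way to see why such a situation has no clean answer is to model it as a choice between options scored against competing values. The sketch below is purely illustrative: the option names, harm numbers, and weights are hypothetical assumptions, not real risk estimates or any deployed system's logic. It shows that which option looks "best" flips depending on how the conflicting values are weighted.

```python
# Hypothetical sketch: the dilemma as two options scored against two
# competing values. All numbers are illustrative assumptions.
OPTIONS = {
    "swerve": {"pedestrian_harm": 0.0, "passenger_harm": 0.7},
    "stay":   {"pedestrian_harm": 0.9, "passenger_harm": 0.0},
}

def best_option(weights):
    """Pick the option with the lowest weighted expected harm."""
    def cost(scores):
        return sum(weights[value] * harm for value, harm in scores.items())
    return min(OPTIONS, key=lambda name: cost(OPTIONS[name]))

# Equal weights favor swerving; weighting passenger harm more heavily
# favors staying on course. No single option minimizes every value at
# once, which is the hallmark of a moral dilemma.
print(best_option({"pedestrian_harm": 1.0, "passenger_harm": 1.0}))  # swerve
print(best_option({"pedestrian_harm": 0.5, "passenger_harm": 1.0}))  # stay
```

The point of the sketch is not that harms can actually be reduced to numbers, but that any weighting encodes a contested moral judgment about whose safety counts for how much.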