
How do you personally define “human values” when discussing AI alignment?

I keep seeing the phrase “human values” used in discussions about AI safety and alignment, but the term often feels vague, or it’s simply assumed to be universal.

When you talk about aligning AI with human values, what do you actually mean in concrete terms?
Are you thinking about moral principles (like fairness or harm reduction), cultural norms, legal frameworks, emotional well-being, or something else entirely?

I’m asking because different people seem to mean very different things by it, and I’m curious how others here define it in practice rather than in the abstract.
