Teaching AI Ethics: A Semester-Long Framework
This framework provides a rigorous, discussion-based approach to AI ethics suitable for computer science, philosophy, business, and interdisciplinary programs.
Unit 1: Foundations (Weeks 1-5)
Week 1: What Makes AI Different?
Core Question: How is AI ethics distinct from traditional technology ethics?
Seminar Discussion: "If an AI makes a decision that harms someone, who is morally responsible - the developer, the company, the user, or the AI itself?"
Logic Chain: Building Ethical Arguments
1. Identify the claim: What ethical claim is being made about this AI system?
2. Examine the evidence: What empirical data supports or contradicts the claim?
3. Apply a framework: How do the major ethical theories evaluate it?
4. Propose action: What should be done, and who should do it?
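For computer science sections, the chain above can also be captured as a simple worksheet data structure that students fill in for each case study. This is only an illustrative sketch; the class name, field names, and example values below are invented for the exercise and are not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class EthicalArgument:
    """One pass through the four-step logic chain."""
    claim: str      # 1. the ethical claim being made about the AI system
    evidence: str   # 2. empirical data that supports or contradicts it
    framework: str  # 3. the ethical theory applied in evaluation
    action: str     # 4. proposed remedy and who should carry it out

    def summary(self) -> str:
        return (f"Claim: {self.claim}\n"
                f"Evidence: {self.evidence}\n"
                f"Framework: {self.framework}\n"
                f"Action: {self.action}")

# Hypothetical seminar worksheet entry (values are illustrative)
arg = EthicalArgument(
    claim="The hiring model discriminates against older applicants",
    evidence="An audit found a lower callback rate for applicants over 50",
    framework="Deontology: screening people purely by proxy features fails the duty of respect",
    action="Vendor retrains on audited data; employer pauses automated screening",
)
print(arg.summary())
```

Having students complete one such record per case study makes it easy to compare how the same evidence fares under different ethical frameworks.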