Prohibited practices to eliminate immediately: social scoring systems, real-time biometric identification in public spaces (with limited exceptions), manipulation techniques that exploit vulnerabilities, and emotion recognition in workplaces or educational institutions. If any of your AI systems touch these areas, they must be shut down or fundamentally redesigned. There is no compliance pathway for prohibited uses.
High-risk system requirements: if your AI is used in hiring, credit scoring, insurance, education, law enforcement, or critical infrastructure, you need conformity assessments, quality management systems, comprehensive technical documentation, automatic logging, human oversight provisions, accuracy and robustness standards, and cybersecurity measures. Start with a gap analysis against the full requirements list, as most organizations have significant gaps to close.
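The gap analysis described above can be sketched as a simple checklist comparison. The requirement names below mirror the list in this section; the status dictionary and function name are illustrative, not part of any mandated assessment format:

```python
# Hypothetical gap-analysis sketch for the high-risk requirements above.
# Requirement names track this section's list; statuses are illustrative.
REQUIREMENTS = [
    "conformity assessment",
    "quality management system",
    "technical documentation",
    "automatic logging",
    "human oversight",
    "accuracy and robustness",
    "cybersecurity",
]

def gap_analysis(current_status: dict) -> list:
    """Return the requirements not yet met; missing entries count as gaps."""
    return [req for req in REQUIREMENTS if not current_status.get(req, False)]

# Example: an organization that has logging in place but little else.
gaps = gap_analysis({"automatic logging": True})
print(gaps)  # six of the seven requirements remain open
```

Even a rough inventory like this makes the scope of remediation visible before committing to a formal conformity assessment.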
Transparency obligations for generative AI: any AI-generated content must be machine-detectable as such. Users interacting with AI systems (chatbots, voice assistants) must be informed they're communicating with AI. Deepfakes and synthetic media must be labeled. Detailed training data summaries must be provided. These obligations apply regardless of risk classification.
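One way to make AI-generated content machine-detectable is to attach a structured provenance record to each output. The field names below are a hypothetical schema for illustration only; real deployments would follow an established provenance standard such as C2PA rather than an ad-hoc format:

```python
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> dict:
    """Wrap generated text in a machine-readable provenance record.

    Hypothetical schema for illustration; standards such as C2PA
    define actual formats for content provenance.
    """
    return {
        "content": text,
        "ai_generated": True,  # explicit machine-detectable disclosure flag
        "generator": model_name,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_ai_content("Sample model output.", "example-model-v1")
print(record["ai_generated"])  # True
```

A sidecar record like this covers the disclosure obligation for text; images and audio typically need embedded watermarks or metadata that survive re-encoding.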