Organized by ARENA2036 | Stuttgart
The European Union’s AI Act is poised to become a landmark regulatory framework, setting global standards for the responsible development and deployment of Artificial Intelligence. As AI becomes increasingly embedded across research, innovation, and industrial applications, understanding and preparing for its legal, ethical, and technical implications is crucial.
As the Act's obligations begin to apply, companies, research institutions, and developers face pressing questions: What defines "responsible AI"? How do we translate abstract legal requirements into concrete technical practices? And how can we prepare now for compliance tomorrow?
This hands-on workshop, developed by ARENA2036, offers practical orientation for engineers, researchers, and decision-makers. Through six structured modules, participants gain insights into responsible AI development under the upcoming regulatory framework.
Workshop Modules
Module 1: Basics of Responsible AI
Understand key terms like transparency, fairness, accountability, and human oversight. Explore the societal and ethical dimensions of AI—and what they mean for your work.
Module 2: Governance
Learn how AI governance works across legal, organizational, and technical domains. Get to know roles and responsibilities in AI risk management.
Module 3: Procedure and Policies
Dive into internal company and institutional procedures needed to align with the AI Act—risk classification, conformity assessments, and documentation.
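The risk classification covered in this module follows the AI Act's four-level risk model (unacceptable, high, limited, minimal), with obligations scaling by tier. As a rough, simplified sketch of that idea, the following Python fragment maps the tiers to illustrative obligations; the example use cases in `EXAMPLE_CLASSIFICATION` are hypothetical and for discussion purposes only, not legal advice.

```python
# Simplified illustration of the AI Act's four risk tiers.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical mapping of example use cases to tiers (discussion aid only).
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": "unacceptable",
    "medical diagnostics support": "high",
    "customer-service chatbot": "limited",
    "spam filtering": "minimal",
}

def required_obligations(tier: str) -> list[str]:
    """Very rough sketch: obligations increase with the risk tier."""
    if tier == "unacceptable":
        return ["prohibited - may not be placed on the market"]
    obligations = []
    if tier in ("high", "limited"):
        obligations.append("transparency duties")
    if tier == "high":
        obligations += [
            "risk management system",
            "technical documentation",
            "human oversight measures",
            "conformity assessment before deployment",
        ]
    return obligations
```

In practice, classification depends on the system's intended purpose and the Act's annexes; the workshop's guided templates walk through that assessment in detail.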
Module 4: Engineering Responsible AI
See how to translate legal and ethical requirements into system architecture and technical workflows. Includes concrete examples of AI system design and evaluation.
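One recurring engineering pattern of this kind is decision logging: high-risk systems under the AI Act must support traceability and human oversight, which in code often means recording each model output together with any operator review. A minimal sketch, assuming a simple JSON-lines audit log (the function name, fields, and file format here are illustrative, not prescribed by the Act):

```python
import json
import time

def log_decision(model_id, inputs, output,
                 operator_review=None, log_file="decisions.jsonl"):
    """Append an audit record for one model decision (traceability sketch).

    `operator_review` illustrates a human-oversight hook: a reviewer can
    record whether they confirmed or overrode the system's output.
    """
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "operator_review": operator_review,  # e.g. "confirmed" / "overridden"
    }
    # One JSON object per line keeps the log append-only and easy to audit.
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

A real deployment would add access controls, retention rules, and tamper-evident storage; the sketch only shows where the legal requirement touches the technical workflow.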
Modules 5 & 6: Case Studies
Analyze real-world AI applications (e.g. predictive maintenance, medical diagnostics, or automation) from both compliance and engineering perspectives. Explore risk mitigation and value-sensitive design through group discussion and guided templates.
Who Should Attend
- AI and software engineers
- Compliance officers & innovation managers
- Researchers and doctoral candidates in AI, ethics, or engineering
- Legal advisors & data protection leads
- Startups and SMEs using or developing AI systems
Why Attend
After this workshop, participants will be able to:
- Identify AI systems subject to regulation under the AI Act
- Understand how to embed responsible AI principles into engineering
- Start implementing early-stage compliance and documentation processes
- Use templates and governance tools for future-proof AI development