The European Union Artificial Intelligence Act (EU AIA – 2024/1689) marks a significant regulatory milestone in the global governance of AI. Focused on fostering safe, trustworthy, and human-centric AI systems, it sets out a harmonized legal framework applicable across sectors, with particular implications for healthcare and medical device industries.
This whitepaper provides a practical guide for deployers, manufacturers, and providers of AI-enabled medical devices (AIMDs) to understand, prepare for, and comply with the Act. It decodes the regulation’s intent, structure, and scope while outlining compliance obligations such as risk classification, documentation requirements, conformity assessments, and AI literacy.
With healthcare at the forefront of AI innovation, this document offers actionable insights to navigate the evolving regulatory landscape while promoting responsible AI use that aligns with ethical and safety standards.
Artificial intelligence (AI) presents immense opportunities across healthcare—from accelerating diagnostics to enhancing treatment accuracy. Yet, its complexity introduces risks to safety, privacy, and fundamental rights. The EU AI Act provides a structured framework to harness AI’s benefits while mitigating its risks through a risk-based regulatory approach; its key objectives are outlined in the sections that follow.

Key Definitions
AI System
A machine-based system that operates with varying levels of autonomy, may exhibit adaptiveness after deployment, and infers from its inputs how to generate outputs such as predictions, recommendations, content, or decisions.
Substantial Modification
A post-market change to an AI system that was not anticipated during the original conformity assessment. Such changes affect the system's compliance with high-risk AI requirements or alter its originally intended purpose.
AI Regulatory Sandbox
A supervised framework established by a competent authority that allows AI providers to develop, train, validate, and test innovative AI systems, often in real-world conditions, within a controlled, time-bound regulatory environment.
AI Literacy
The combination of skills, knowledge and understanding required by providers, deployers and impacted individuals to responsibly engage with AI systems. This includes awareness of AI's benefits, limitations, and associated risks.
Biometric Identification
The automated recognition of individuals’ physical, physiological, behavioral, or psychological traits by comparing biometric data against stored datasets to verify identity.
Emotion Recognition System
An AI system designed to detect or infer the emotional states or intentions of individuals based on their biometric inputs.
AI Office
A European Commission body tasked with implementing, monitoring, and supervising the governance of AI systems and general-purpose AI models. References to the "AI Office" in the regulation refer to this central authority.
Systemic Risk
Risks stemming from powerful general-purpose AI models with wide-reaching impact—such as threats to public health, safety, rights, or societal structures—that can cascade across sectors or markets.
Downstream Provider
Any entity that integrates an AI model into its system—whether self-developed or sourced through partnerships—thus contributing to the deployment and operation of the final AI solution.
The Act aims to regulate AI in a manner consistent with European Union values, with the following core objectives:

- Ensure AI systems placed on the EU market are safe and respect fundamental rights and Union values
- Provide legal certainty to facilitate investment and innovation in AI
- Strengthen governance and the effective enforcement of existing law on safety and fundamental rights
- Facilitate a single market for lawful, safe, and trustworthy AI while preventing market fragmentation
Scope: The Regulation Applies To | Out of Scope: The Regulation Does Not Apply To |
---|---|
Providers placing AI systems or general-purpose AI models on the EU market—regardless of where they are established | Areas outside the scope of Union law, including matters of national security |
Deployers of AI systems established or operating within the EU | AI systems used exclusively for military, defense, or national security purposes |
Non-EU providers and deployers where AI output is used within the EU | AI systems not placed on the market whose output is used in the EU for military or national security purposes |
Importers and distributors of AI systems | Public authorities in third countries or international organizations involved in law-enforcement cooperation, where adequate safeguards exist |
Manufacturers placing AI systems integrated within their products on the market under their own name | Provisions under Regulation (EU) 2022/2065 concerning intermediary service liability |
Authorized representatives of non-EU providers | AI systems or models developed and used solely for scientific research and development |
Affected persons located within the EU | Pre-market research, testing, or development of AI systems before placement on the market (unless conducted in real-world conditions) |
High-risk AI systems under Article 6(1) covered by Annex I Section B legislation, to which only Articles 102–109 and 112 apply (Article 57 applies only where the Act’s high-risk requirements are integrated into that harmonization legislation) | Natural persons using AI systems for purely personal, non-professional purposes |
| AI systems released under open-source licenses, unless they qualify as high-risk or fall under Article 5 or 50 |
| Laws or agreements offering greater protection for workers’ rights than the EU AI Act provisions |
| Data protection rules remain applicable: Regulations (EU) 2016/679 and 2018/1725, Directive 2002/58/EC, and Directive (EU) 2016/680 |
| Consumer protection and product safety rules under other EU legislation remain unaffected |
The EU AI Act explicitly bans certain AI applications deemed to pose unacceptable risks:
Prohibited AI Practices
Manipulative AI: Use of subliminal or deceptive techniques that distort user behavior and cause harm is prohibited.
Exploitation of Vulnerabilities: AI systems must not exploit individuals based on age, disability, or socioeconomic status.
Social Scoring: Systems that generate social scores leading to unfair or discriminatory treatment are banned.
Predictive Policing: AI cannot be used to predict criminal behavior based solely on profiling or personality traits.
Facial Recognition Databases: Creating facial recognition databases by scraping images from the internet without consent is forbidden.
Emotion Recognition in Work/Education: Emotion recognition systems are not allowed in workplaces or educational institutions, except in cases related to medical or safety reasons.
Biometric Categorization: AI systems must not use biometric data to categorize individuals by sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation.
"Real-Time" Remote Biometric Identification: Use in public spaces for law enforcement is permitted only under specific conditions, such as for serious threats (e.g., terrorism or abduction), and requires judicial authorization.
Safeguards and Conditions: The deployment of real-time biometric identification must be necessary, proportionate, and subject to strict safeguards. These include conducting impact assessments and maintaining formal registration processes.
National Laws: EU Member States may implement stricter national laws regulating biometric identification.
Reporting and Oversight: Member States must report on the usage of real-time biometric identification. The European Commission will publish annual reports, excluding sensitive data.

AI Literacy Requirements

Providers and deployers of AI systems are required to take measures to ensure a sufficient level of AI literacy among their staff and other persons operating or using AI systems on their behalf, taking into account their technical knowledge, experience, education, and training, and the context in which the AI systems are to be used.
The EU AI Act is organized into 13 Chapters and 13 Annexes, comprising a total of 113 Articles. The structure outlines the regulatory scope, obligations, governance mechanisms, and supporting documentation.
Articles | Key Subject Area |
---|---|
Articles 1-4 | General Provisions: Subject matter, scope, definitions, and AI literacy |
Article 5 | Prohibited AI Practices |
Articles 6-49 | High-Risk AI Systems: Classification, requirements, provider and deployer obligations, conformity assessments |
Article 50 | Transparency Obligations for Certain AI Systems: Disclosure duties for AI interacting with people, synthetic content, and emotion recognition |
Articles 51-56 | General-Purpose AI Models: Classification, provider obligations, codes of practice |
Articles 57-63 | Regulatory Sandboxes and Innovation Support: Real-world testing, SME support |
Articles 64-71 | Governance: AI Office, European AI Board, national authorities |
Articles 72-94 | Market Surveillance, Enforcement and Monitoring: Reporting and compliance controls |
Articles 95-113 | Final Provisions: Penalties, delegation, committee procedures, amendments, entry into force |
Annexes Overview
Annex | Key Subject Area |
---|---|
Annex I | Union harmonization legislation list |
Annex II | Criminal offences referred to in Article 5(1)(h)(iii) |
Annex III | High-risk AI systems as per Article 6(2) |
Annex IV | Technical documentation under Article 11(1) |
Annex V | EU declaration of conformity template |
Annex VI | Conformity assessment: internal control |
Annex VII | Conformity assessment: quality management system and technical documentation |
Annex VIII | Registration information for high-risk AI systems (Article 49) |
Annex IX | Real-world testing registration (Article 60) |
Annex X | Legislative acts on large-scale IT systems in freedom, security and justice sectors |
Annex XI | Technical documentation for general-purpose AI model providers (Article 53(1)(a)) |
Annex XII | Transparency information for downstream providers (Article 53(1)(b)) |
Annex XIII | Systemic risk designation criteria (Article 51) |
The technical documentation for an AI system must include, at a minimum, the following details, tailored to the system’s characteristics and intended purpose (a simple tracking sketch follows this list):
Section 1 - General description of AI system
Section 2 - System Development and Architecture: methods and steps followed during development
Section 3 - Monitoring, Functioning and Control
Section 4 - Performance Metrics
Section 5 - Risk Management System
Section 6 - Lifecycle Changes
Section 7 - Standards and Compliance Solutions
Section 8 - EU declaration of conformity
Section 9 - Post-Market Performance Monitoring
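Teams preparing for a conformity assessment often track coverage of these sections programmatically. Below is a minimal, illustrative Python sketch (the section titles are abridged from the list above; the `evidence` field and helper names are hypothetical, not prescribed by the Act) showing one way to record which documentation sections still lack supporting artifacts.

```python
from dataclasses import dataclass, field

@dataclass
class DocSection:
    """One technical documentation section and its supporting evidence."""
    number: int
    title: str
    evidence: list = field(default_factory=list)  # IDs/links of supporting artifacts

# The nine sections summarized above (titles abridged).
TECH_DOC = [
    DocSection(1, "General description of the AI system"),
    DocSection(2, "System development and architecture"),
    DocSection(3, "Monitoring, functioning and control"),
    DocSection(4, "Performance metrics"),
    DocSection(5, "Risk management system"),
    DocSection(6, "Lifecycle changes"),
    DocSection(7, "Standards and compliance solutions"),
    DocSection(8, "EU declaration of conformity"),
    DocSection(9, "Post-market performance monitoring"),
]

def missing_sections(sections):
    """Return the titles of sections that still lack supporting evidence."""
    return [s.title for s in sections if not s.evidence]

# Example: attach one artifact, then list what remains incomplete.
TECH_DOC[0].evidence.append("DOC-001: intended-purpose statement")
print("Incomplete sections:", missing_sections(TECH_DOC))
```

In practice, the evidence entries would point to controlled documents in the manufacturer’s quality management system rather than free-text strings.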
The EU AI Act classifies AI systems into four primary risk categories, with corresponding regulatory obligations:

- Unacceptable risk: prohibited practices (Article 5), banned outright
- High risk: systems under Article 6 and Annex III, subject to the full set of requirements (risk management, documentation, conformity assessment, human oversight)
- Limited risk: systems subject to transparency obligations (Article 50), such as informing users that they are interacting with AI
- Minimal risk: all other systems, with no mandatory obligations beyond voluntary codes of conduct
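As a rough illustration of how these tiers interact, the sketch below encodes a simplified, non-normative triage. Every input flag here is our own simplification of Articles 5, 6, and 50; an actual classification requires legal analysis of the full criteria.

```python
def risk_tier(prohibited_practice: bool,
              safety_component_of_regulated_product: bool,
              third_party_assessment_required: bool,
              annex_iii_use_case: bool,
              interacts_with_natural_persons: bool) -> str:
    """Simplified, non-normative triage into the Act's four risk tiers."""
    if prohibited_practice:
        return "Unacceptable risk: prohibited under Article 5"
    # Article 6(1): a safety component of (or itself) a regulated product that
    # requires third-party conformity assessment; Article 6(2): Annex III use case.
    if (safety_component_of_regulated_product and third_party_assessment_required) \
            or annex_iii_use_case:
        return "High risk: Articles 6-49 apply"
    if interacts_with_natural_persons:
        return "Limited risk: transparency obligations under Article 50"
    return "Minimal risk: voluntary codes of conduct"

# Example: a diagnostic AI in a medical device that needs notified-body review
# under EU MDR would typically land in the high-risk tier.
print(risk_tier(False, True, True, False, True))
```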
The EU AI Act references several international and emerging standards to ensure safety, transparency, and robustness in AI-enabled systems—especially in regulated sectors like healthcare. These standards support compliance efforts for manufacturers, deployers, and providers of AI-enabled medical devices (AIMDs).
Selected Released Standards
Standard | Description | Status |
---|---|---|
ISO/IEC 24029-2:2023 | Assessment of neural network robustness – Part 2: Methodology using formal methods | Released |
ISO/IEC 8183:2023 | AI – Data life cycle framework | Released |
ISO/IEC 25059:2023 | SQuaRE – Quality model for AI systems | Released |
ISO/IEC TR 24368:2022 | Overview of ethical and societal concerns | Released |
ISO/IEC TR 24029-1:2021 | Assessment of neural network robustness – Part 1: Overview | Released |
ISO/IEC TS 4213:2022 | Assessment of machine learning classification performance | Released |
ISO/IEC 23894:2023 | AI – Guidance on risk management | Released |
Standards Under Development
Standard | Description | Stage |
---|---|---|
ISO/IEC CD 42005 | AI system impact assessment | Committee |
ISO/IEC CD 5259-5 | Data quality governance for analytics and ML | Committee |
ISO/IEC CD TS 8200 | Controllability of automated AI systems | Committee |
ISO/IEC CD TR 17903 | Overview of machine learning computing devices | Committee |
ISO/IEC CD TS 6254 | Explainability of ML models and AI systems | Committee |
Additional Relevant Standards

Medical device-specific standards, such as those governing software lifecycle processes, risk management, usability, and quality management systems, play a key role in aligning AIMDs with both EU MDR/IVDR and the AI Act.
The implementation of the EU AI Act follows a phased timeline to ensure stakeholders have sufficient time to adapt and comply. Below is a general overview of the regulation’s staged rollout:

- August 1, 2024: The Act enters into force
- February 2, 2025: Prohibitions on unacceptable-risk AI practices and AI literacy obligations apply
- August 2, 2025: Governance rules and obligations for general-purpose AI models apply
- August 2, 2026: The majority of provisions apply, including requirements for Annex III high-risk AI systems
- August 2, 2027: Obligations apply to high-risk AI systems embedded in regulated products under Article 6(1), including AI-enabled medical devices
AI is rapidly transforming the healthcare industry by enhancing diagnostic accuracy, optimizing treatment plans, streamlining workflows, and improving patient outcomes. Major use cases span domains such as medical imaging and diagnostics, clinical decision support, patient monitoring, drug discovery, and hospital workflow optimization.
Integrating AI into medical devices presents complex compliance challenges, particularly due to the dual regulatory landscape governed by the EU AI Act (2024/1689) and the EU MDR (2017/745) or EU IVDR (2017/746). The intersection of AI functionality and medical safety introduces both technical and procedural hurdles.

An AIMD classified as high-risk under the EU AI Act must comply with stringent requirements such as human oversight, transparency, robustness, and post-market monitoring.
When an AI system is integrated into a medical device or constitutes a standalone AI-enabled medical device, the conformity assessment process is not handled separately under the EU AI Act. Instead, it is embedded within the existing regulatory pathway defined by the EU Medical Device Regulation (MDR 2017/745) or In Vitro Diagnostic Regulation (IVDR 2017/746).
The EU AI Act (2024/1689) marks a transformative step in establishing a robust and harmonized regulatory framework for artificial intelligence across the European Union. For the healthcare sector, particularly medical device manufacturers integrating AI, this regulation introduces not only new compliance obligations but also opportunities to drive innovation within a well-defined legal and ethical structure.
By adopting a risk-based approach, the Act ensures that AI systems, especially those used in critical sectors like healthcare, are subject to appropriate oversight and accountability. It mandates transparent, safe, and human-centric AI while promoting public trust and technological progress.
To meet these evolving expectations, manufacturers, developers, and deployers of AI systems must align their internal processes with both existing medical device regulations (e.g., EU MDR/IVDR) and AI-specific obligations under this new law.
Proactive compliance will involve:

- Classifying AI systems by risk level and identifying the applicable obligations
- Preparing and maintaining technical documentation in line with Annex IV
- Integrating AI Act requirements into existing MDR/IVDR conformity assessment pathways
- Establishing post-market performance monitoring and incident reporting processes
- Building AI literacy across regulatory, engineering, and clinical teams
Ultimately, the EU AI Act not only safeguards individuals but also lays the foundation for sustainable and responsible digital health innovation, supporting the ethical use of AI while enabling Europe to lead in the global AI landscape.
Sathish Kumar Thiagarajan is a seasoned Controls & Automation Engineer with over 18 years of global experience in managing large-scale industrial automation projects involving PLCs, SCADA, and Drives. He specializes in optimizing technical workflows, ensuring regulatory compliance, and leading cross-functional teams to deliver seamless IT/OT integration solutions. Known for enhancing operational efficiency and driving cost-effective innovations, his expertise helps shape transformative strategies in industrial automation.
Srinivasu Parupalli is an experienced Systems Engineer with expertise in program management and delivery across multiple domains, including Industry 4.0, Manufacturing, Embedded Systems, IoT, Software Applications Development, and Cloud Integrations. He has extensive experience in end-to-end product development and has been instrumental in building and training teams on emerging technologies such as Ignition, Solumina, Aveva, and SCADA systems for deployment in diverse customer projects. With a strong background in industrial automation, he has worked across various industries, including Manufacturing, Energy, Utilities, Healthcare, and Process Automation, developing MES, SCADA, and HMI solutions integrated with other applications. His expertise lies in customer engagement, requirements analysis, and risk management, ensuring the successful execution of complex automation projects.
Abhishek Kumar
Subject Matter Expert in Medical Device Regulatory and Quality Assurance
Abhishek Kumar is a Subject Matter Expert in Medical Device Regulatory and Quality Assurance with over 14 years of experience. He has led the EU MDR 2017/745 sustenance program, managed multiple global engagements for top medical device companies, and supported the gap assessment, remediation, and submission of 70+ technical documents across EU MDR, ASEAN MDD, NMPA (China), Taiwan, and 10+ 510(k) submissions. He has authored 40+ Clinical Evaluation Reports (CERs) for Class I–III devices in line with MEDDEV 2.7.1 Rev-4 and developed proposals for market access in the U.S., Europe, and APAC (including ASEAN, China, Taiwan, and Japan). He also prepared and implemented regulatory plans for new product development across 90+ countries through feasibility analysis and cross-functional coordination.
Cyient (Estd: 1991, NSE: CYIENT) delivers intelligent engineering solutions across products, plants, and networks for over 300 global customers, including 30% of the top 100 global innovators. As a company, Cyient is committed to designing a culturally inclusive, socially responsible, and environmentally sustainable tomorrow together with our stakeholders.
For more information, please visit www.cyient.com