Why the EU AI Act Matters for Digital Mental Health Services
Artificial intelligence is rapidly reshaping mental-health care. Therapy apps, conversational tools, screening systems and decision-support technologies now play a role in how people seek support and how professionals work. But when AI influences care, emotions, privacy and clinical decisions, the responsibility grows as well.
In 2024, the European Union adopted the AI Act, and for digital mental-health services this is not simply a policy discussion: aligning with the AI Act is a legal obligation, not an option. Systems that analyse emotions, help assess risk, guide triage, support therapists or process highly sensitive personal data may fall into categories that the law treats as high-risk. When that happens, organisations must show that their systems are designed carefully, tested, documented, transparent, supervised by humans and respectful of the people who use them.
What follows is not a detailed legal guide, but an overview of Articles 4–40 of the AI Act. To fully understand the obligations, readers should always consult the official text directly: https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-4
Article 4 — AI literacy
Organisations must train staff so they can use AI correctly and responsibly, in a way that fits their role and the people affected.
Article 5 — Prohibited AI practices
Certain AI uses are banned, including AI that:
manipulates or deceives people in harmful ways.
exploits children or vulnerable people.
performs social scoring.
predicts crime based only on profiling or traits.
builds facial-recognition databases from scraping.
uses emotion recognition in schools/workplaces (except safety/medical).
infers sensitive traits via biometrics.
uses real-time biometric identification in public, except in rare, authorised cases.
Strict safeguards and authorisations apply where narrow exceptions exist.
Article 6 — Classification rules for high-risk AI systems
AI becomes high-risk when:
It is a safety component of (or is itself) a product covered by EU product legislation that requires third-party conformity assessment.
It falls under one of the use cases listed in Annex III.
Some Annex III systems may be exempt if they pose no significant risk and only support narrow procedural tasks, but systems that profile natural persons always remain high-risk.
Article 7 — Amendments to Annex III
The Commission may add or remove high-risk use cases if risks evolve, considering scale, severity, autonomy, past harm, reversibility, user vulnerability, ability to opt out, and existing legal protections.
Article 8 — Compliance with the requirements
High-risk AI must meet all applicable requirements, follow the state of the art, and rely on the risk-management system described in Article 9. Processes may be aligned with other EU product laws.
Article 9 — Risk-management system
Providers must maintain a documented, ongoing process to:
identify and analyse risks
assess normal use and foreseeable misuse
update risks with real-world data
reduce risks through design first
ensure remaining risks are acceptable
Priority: eliminate → mitigate → inform/train
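To make this concrete, a provider of a mental-health triage tool might keep the risk-management process in a living, machine-readable register. The Python sketch below is only an illustration of one possible internal convention; the field names, 1-to-5 scoring scale and escalation rule are assumptions, not something Article 9 prescribes.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class Mitigation(Enum):
        # Priority order mirrors Article 9: eliminate by design first,
        # then mitigate, then inform and train users.
        ELIMINATE = 1
        MITIGATE = 2
        INFORM_AND_TRAIN = 3

    @dataclass
    class RiskEntry:
        # One entry in a living risk register for a high-risk AI system.
        description: str                  # e.g. "model misses acute crisis signals"
        affected_group: str               # e.g. "users disclosing self-harm"
        likelihood: int                   # 1 (rare) to 5 (frequent), internal scale
        severity: int                     # 1 (minor) to 5 (critical), internal scale
        mitigation: Mitigation
        residual_risk_acceptable: bool
        last_reviewed: date = field(default_factory=date.today)

        def needs_escalation(self) -> bool:
            # Illustrative internal rule: high residual risk triggers redesign.
            return self.severity * self.likelihood >= 15 and not self.residual_risk_acceptable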
Article 10 — Data and data governance
High-risk AI using data must ensure:
clear origin and lawful purpose of data
quality, completeness, and relevance
representativeness where needed
detection and mitigation of bias
documented preprocessing steps
strictly limited use of sensitive data — only when necessary and protected
deletion when no longer needed
Note: privacy laws such as the GDPR continue to apply alongside the AI Act.
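As an illustration of what detection and mitigation of bias can look like before training, the sketch below compares outcome rates across demographic groups in a labelled dataset. It is a minimal Python example; the field names (age_band, flagged_at_risk) and the idea of using simple rate comparisons are assumptions about one possible check, not requirements taken from the Act.

    from collections import defaultdict

    def positive_rate_by_group(records, group_key="age_band", label_key="flagged_at_risk"):
        # Compare how often each group is labelled "at risk" in the training data.
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for row in records:
            counts[row[group_key]][0] += int(row[label_key])
            counts[row[group_key]][1] += 1
        return {group: pos / total for group, (pos, total) in counts.items() if total}

    # A large gap between groups would be documented, investigated and mitigated
    # as part of the data-governance process, not silently shipped.
    rates = positive_rate_by_group([
        {"age_band": "18-25", "flagged_at_risk": 1},
        {"age_band": "18-25", "flagged_at_risk": 0},
        {"age_band": "65+", "flagged_at_risk": 1},
    ])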
Article 11 — Technical documentation
Providers must draw up detailed technical documentation (Annex IV) demonstrating compliance before a system is placed on the market, and keep it up to date. SMEs may use a simplified format.
Article 12 — Record-keeping
High-risk AI must support automatic logging to ensure traceability, incident investigation, change detection, and ongoing monitoring. Extra logging applies to biometric systems.
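For a mental-health screening or triage system, this requirement usually translates into an append-only audit trail with one record per automated decision. The Python sketch below shows one way to structure such a record; the schema and field names are assumptions, since the Act requires traceable logging capability rather than any particular format.

    import json
    import logging
    from datetime import datetime, timezone

    logger = logging.getLogger("triage_audit")
    logging.basicConfig(level=logging.INFO)

    def log_triage_event(session_id: str, model_version: str, risk_score: float,
                         recommended_action: str, overridden_by_human: bool) -> None:
        # Append one timestamped, traceable record per automated triage decision.
        logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "session_id": session_id,
            "model_version": model_version,
            "risk_score": risk_score,
            "recommended_action": recommended_action,
            "overridden_by_human": overridden_by_human,
        }))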
Article 13 — Transparency and information for deployers
Systems must include instructions covering:
identity of provider
purpose
accuracy and robustness
conditions that may reduce performance
risks and limitations
human oversight procedures
datasets and metrics (when relevant)
maintenance, updates and logging
Instructions must be clear and understandable.
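One practical way to keep these instructions consistent and reviewable is to maintain them as structured metadata that ships with each model release. The sketch below is a hypothetical example for a questionnaire-screening tool; the keys loosely mirror the Article 13 items, but the exact format, provider name and figures are invented for illustration.

    # Hypothetical machine-readable "instructions for use" metadata.
    INSTRUCTIONS_FOR_USE = {
        "provider": "Example Mental Health Tech Ltd (illustrative)",
        "intended_purpose": "Support clinicians in prioritising intake questionnaires",
        "accuracy": {"metric": "sensitivity", "declared_level": 0.92,
                     "test_population": "adult outpatients"},
        "known_performance_limits": ["non-native speakers", "very short free-text answers"],
        "risks_and_limitations": "Not a diagnostic device; outputs require clinician review.",
        "human_oversight": "Every escalation is confirmed or dismissed by a qualified clinician.",
        "logging": "Per-decision audit records retained for at least six months.",
        "maintenance": "Model reviewed and retrained on a documented schedule.",
    }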
Article 14 — Human oversight
Systems must be designed so humans can oversee and intervene. Oversight personnel must be able to:
understand system limits
detect problems
override or stop outputs
decide not to rely on AI
Special verification rules apply to biometric identification.
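In a triage or risk-assessment workflow, this often means the model's output is only a recommendation until a named clinician confirms, overrides or discards it. The Python sketch below shows that gating pattern; the class and field names are illustrative assumptions, not terminology from the Act.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TriageRecommendation:
        risk_score: float
        suggested_priority: str          # e.g. "routine" or "urgent"

    @dataclass
    class FinalDecision:
        priority: str
        decided_by: str                  # always a named human reviewer
        ai_recommendation_used: bool

    def review(recommendation: TriageRecommendation, reviewer: str,
               override_priority: Optional[str] = None) -> FinalDecision:
        # No recommendation takes effect until a clinician confirms or overrides it.
        if override_priority is not None:
            return FinalDecision(priority=override_priority, decided_by=reviewer,
                                 ai_recommendation_used=False)
        return FinalDecision(priority=recommendation.suggested_priority,
                             decided_by=reviewer, ai_recommendation_used=True)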
Article 15 — Accuracy, robustness and cybersecurity
High-risk AI must:
meet declared accuracy levels
perform consistently
avoid harmful feedback loops
resist tampering and cyberattacks
protect against model/data poisoning and adversarial inputs
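A common way to hold a system to its declared accuracy levels is a release gate that blocks deployment when measured performance falls below what the instructions for use state. The sketch below is one possible check; the metrics and thresholds are made up for illustration.

    def check_release_metrics(measured: dict, declared: dict) -> list:
        # Compare measured performance against the levels declared to deployers.
        failures = []
        for metric, declared_level in declared.items():
            if measured.get(metric, 0.0) < declared_level:
                failures.append(f"{metric}: measured {measured.get(metric)} below declared {declared_level}")
        return failures

    # Example: run before each model update; a failing metric blocks the release.
    problems = check_release_metrics(
        measured={"sensitivity": 0.90, "specificity": 0.81},
        declared={"sensitivity": 0.92, "specificity": 0.80},
    )
    if problems:
        raise SystemExit("Release blocked: " + "; ".join(problems))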
Article 16 — Obligations of providers
Providers must:
ensure legal compliance
implement a quality-management system
maintain documentation and logs
undergo conformity assessment
issue an EU declaration of conformity
apply CE marking
register high-risk systems
correct non-compliance
cooperate with authorities
meet accessibility obligations
Article 17 — Quality management system
Providers must have documented procedures covering:
design and verification
conformity processes
data management
validation and testing
risk management
post-market monitoring
incident reporting
communication and accountability
Scaled proportionately to organisation size.
Article 18 — Documentation keeping
Providers must keep documentation and certificates at the disposal of authorities for 10 years after the system is placed on the market and make them available when requested.
Article 19 — Automatically generated logs
Providers must store logs under their control for at least six months, or longer where other laws require, while respecting data-protection rules.
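A retention rule for these logs can be as simple as refusing to delete anything younger than the minimum period or anything under a legal hold. The sketch below assumes UTC-aware timestamps and treats 183 days as "at least six months"; both choices are illustrative, and a provider's own policy or other laws may require longer.

    from datetime import datetime, timedelta, timezone

    MIN_RETENTION = timedelta(days=183)  # illustrative reading of "at least six months"

    def eligible_for_deletion(log_timestamp: datetime, legal_hold: bool = False) -> bool:
        # Never delete before the minimum period, and never while another
        # obligation or investigation requires keeping the record.
        age = datetime.now(timezone.utc) - log_timestamp
        return age >= MIN_RETENTION and not legal_hold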
Article 20 — Corrective actions and duty of information
If a risk or non-compliance occurs, providers must immediately:
correct the system
disable it
withdraw or recall it
inform relevant parties and authorities, explaining findings and actions taken
Article 21 — Cooperation with competent authorities
Providers must supply requested information, documentation, and logs to regulators in appropriate languages, subject to confidentiality obligations.
Article 22 — Authorised representatives
Non-EU providers must appoint an EU-based representative to support compliance, hold documents, respond to authorities, and notify risks. The mandate must be in writing, and the representative must terminate it if the provider acts contrary to its obligations.
Article 23 — Obligations of importers
Importers must verify compliance before placing systems on the EU market, ensure documentation exists, mark appropriately, keep records for 10 years, and cooperate with authorities.
Article 24 — Obligations of distributors
Distributors must check visible compliance elements (such as the CE marking and required documentation), refrain from distributing systems they suspect are non-compliant, protect compliance during storage and transport, and support corrective actions.
Article 25 — Responsibilities along the AI value chain
A third party becomes a provider when it:
rebrands the system
substantially modifies it
or changes its intended purpose to a high-risk use
Contracts must clarify roles; cooperation must respect IP and trade secrets.
Article 26 — Obligations of deployers
Deployers must:
follow instructions
ensure human oversight
monitor performance
stop unsafe use
report incidents
use appropriate input data
inform affected persons when required
register certain public-sector systems
comply with biometric-use authorisations
keep logs for at least six months
Article 27 — Fundamental rights impact assessment
Certain deployers (mainly public authorities and providers of public services) must carry out a fundamental rights impact assessment (FRIA) before first use, notify the relevant authority, and update the assessment when risks or context change.
Article 28 — Notifying authorities
Member States designate a notifying authority responsible for assessing, designating, notifying and monitoring conformity-assessment bodies, and must inform the Commission.
Article 29 — Application for notification
Conformity-assessment bodies apply for notification and submit evidence of their competence, preferably an accreditation certificate.
Article 30 — Notification procedure
Notifications are shared across the EU. Bodies may begin operation only when no objections are raised within defined time limits.
Article 31 — Requirements for notified bodies
Notified bodies must meet strict requirements: independence, impartiality, competence, confidentiality, liability insurance, cybersecurity, and adequate expert staffing.
Article 32 — Presumption of conformity
Notified bodies that comply with relevant harmonised standards are presumed to meet the Article 31 requirements covered by those standards.
Article 33 — Subsidiaries and subcontracting
Notified bodies may subcontract tasks but remain fully responsible and must inform authorities and document qualifications.
Article 34 — Operational obligations
Notified bodies must perform proportionate, rigorous assessments and provide documentation to authorities upon request.
Article 35 — Identification numbers and lists
Each notified body receives a single EU identification number, and the Commission maintains a public list.
Article 36 — Changes to notifications
Covers suspension, restriction, and withdrawal of notified-body status, handling certificates, and ensuring continued protection during transitions.
Article 37 — Challenge to competence
The Commission may investigate doubts regarding a notified body’s competence and order corrective measures including withdrawal.
Article 38 — Coordination of notified bodies
Supports EU-wide coordination and consistency through cooperation and information sharing among authorities and notified bodies.
Article 39 — Conformity assessment bodies of third countries
Bodies outside the EU may act as notified bodies only under specific agreements and if they meet equivalent requirements and protection levels.
Article 40 — Harmonised standards and standardisation deliverables
Systems following EU harmonised standards benefit from a presumption of compliance for covered requirements.
The Commission issues standardisation requests, consults stakeholders, and ensures standards align with EU laws, values, safety, and fundamental rights.
Reference:
https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-4