EMA has issued „Guiding principles of good AI practice in drug development” (January 2026). Having read it, I am sharing my less-than-enthusiastic thoughts.
Tomasz Kosieradzki – KOSIERADZKI.com
1
„Human-centric by design. The development and use of AI technologies align with ethical and human-centric values.”
Ethics: „a set of moral principles: a theory or system of moral values” (https://www.merriam-webster.com/dictionary/ethic)
The problem I see is that moral values often surrender to business values and to pure greed. Without laws and control systems that defend ethics, merely appealing to ethics may not be enough to prevent the harm that AI might cause.
2
„Risk-based approach. The development and use of AI technologies follow a risk-based approach with proportionate validation, risk mitigation, and oversight based on the context of use and determined model risk.”
Risk is subjective. Take two groups, AI enthusiasts and AI sceptics, and their risk evaluations of AI in drug development will be completely different – so which one do we choose? This point adds no value as long as no universal risk assessment methodology or standards are available or implemented.
3
„Adherence to standards. AI technologies adhere to relevant legal, ethical, technical, scientific, cybersecurity, and regulatory standards, including Good Practices (GxP).”
The changes that AI drives and undergoes are far quicker than any legislative process. Will we put AI implementation on hold to wait for the relevant regulations and standards? Moreover, it is not a conspiracy – it is a fact that industry standards undergo lobby review so that they do not harm business.
4
„Clear context of use. AI technologies have a well-defined context of use (role and scope for why it is being used).”
Without legal requirements and controls, this will not be followed. No consequences means no motivation to admit what was done by AI, or to what extent AI was used.
5
„Multidisciplinary expertise.
Multidisciplinary expertise covering both the AI technology and its context of use is integrated throughout the technology’s life cycle.”
This rule sounds like „transparency should be maintained”. In my opinion it will be sacrificed to business security and to keeping competitors behind.
6
„Data governance and documentation.
Data source provenance, processing steps, and analytical decisions are documented in a detailed, traceable, and verifiable manner, in line with GxP requirements. Appropriate governance, including privacy and protection for sensitive data, is maintained throughout the technology’s life cycle.”
That is already a basic rule of any GxP requirements – so why the need to repeat it? What is behind it? What real threats are hidden behind this repetition?
7
„Model design and development practices.
The development of AI technologies follows best practices in model and system design and software engineering and leverages data that is fit-for-use, considering interpretability, explainability, and predictive performance. Good model and system development promotes transparency, reliability, generalizability, and robustness for AI technologies contributing to patient safety.”
The points that should be openly addressed are susceptibility to bias, the legality of data sources used for AI training, privacy (which seems to be fading into oblivion), algorithmic transparency, the replacement of human intelligence, and so on.
8
„Risk-based performance assessment.
Risk-based performance assessments evaluate the complete system, including human-AI interactions, using fit-for-use data and metrics appropriate for the intended context of use, supported by validation of predictive performance through appropriately designed testing and evaluation methods.”
„Validation of predictive performance” – seriously, it sounds to me like an oxymoron.
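For context, „validation of predictive performance” does have a concrete, if narrow, meaning in machine-learning practice: scoring a model on data held out from training. Below is a minimal sketch using scikit-learn; the synthetic dataset and the choice of AUC as the metric are illustrative assumptions, not anything the EMA text prescribes.

```python
# Minimal sketch of held-out validation of predictive performance.
# The dataset is synthetic; in drug development the data, the metric,
# and the acceptance criteria would come from the context of use.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical tabular data standing in for a real dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Hold out 25% of the data; the model is fitted only on the training split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Score on data the model never saw during training.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

Whether such a number means anything for patient safety is, of course, exactly the question.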
9
„Life cycle management.
Risk-based quality management systems are implemented throughout the AI technologies’ life cycles, including supporting the capture, assessment, and addressing of issues. The AI technologies undergo scheduled monitoring and periodic re-evaluation to ensure adequate performance (e.g., to address data drift).”
How can we evaluate „adequate performance” of AI? By another AI?
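For what it is worth, the „data drift” mentioned in the text is usually flagged with plain statistical tests rather than another AI, with humans deciding what follows. A minimal sketch; the feature values and the 0.01 threshold are hypothetical placeholders.

```python
# Minimal sketch of a data drift check: compare the distribution of an
# input feature in production against the same feature at training time.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical values of one feature at training time vs. in production;
# the production sample is deliberately shifted to simulate drift.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value signals that the
# two samples likely come from different distributions (drift).
statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # hypothetical escalation threshold
    print(f"drift detected (KS={statistic:.3f}, p={p_value:.2e}) – escalate for human review")
else:
    print("no significant drift detected")
```

In a GxP setting such an alert would presumably feed the quality management system and trigger human re-evaluation rather than an automatic retrain – which still does not answer who defines „adequate” in the first place.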
10
„Clear, essential information.
Plain language is used to present clear, accessible, and contextually relevant information to the intended audience, including users and patients, regarding the AI technology’s context of use, performance, limitations, underlying data, updates, and interpretability or explainability.”
So many complicated elements, debated among professionals with no consensus, are supposed to be put into „plain language”. Congratulations.
To stay aware and informed, to keep decisions in our own human hands, and to limit the potential harm that AI might cause to the broad public, we must start hard and quick work on regulations.
Reference:
EMA, „Guiding principles of good AI practice in drug development”, January 2026. https://www.ema.europa.eu/en/documents/other/guiding-principles-good-ai-practice-drug-development_en.pdf