With artificial intelligence (AI) becoming ever more sophisticated and prevalent across society, combined with advances in digital pathology and the fast pace of progress in deep learning, we may be on the cusp of the era of AI in pathology. However, with technical, logistical and ethical barriers still in place, we ask what measures are needed before widespread clinical adoption.

Ashley Ballard

BMS Advanced Practitioner & Clinical Digital Lead, University Hospitals Dorset NHS Foundation Trust
AI is already proving to be hugely beneficial – boosting productivity and improving diagnostic quality. However, as with any new technology, there are always concerns. In many ways AI is no different to other medical devices – we have to ensure that results are accurate, reliable and reproducible. Given how AI apps are trained, there is also perhaps a requirement to ensure training data are representative of the population the app will be used on. The NHS already has the Multi-Agency Advisory Service to help ensure AI developers meet regulatory requirements, but perhaps there is also a role for UK NEQAS in providing ongoing EQA of AI apps in pathology.
Dr Owen Driskell

Deputy Training Programme Director HSST, NHS England
Any revolutionary technology poses challenges for regulation, and we must meet that challenge to realise the benefits it offers safely and effectively. Where pathology applications of AI involve running “machine-taught” algorithms, validation and performance monitoring, along with algorithm updates, will be key. Vendors might train those applications on different populations, perhaps teaching them to look for different things. It will be important to have something akin to EQA schemes to make sure applications give comparable answers for diagnostic purposes across different populations. A bigger challenge might be regulating AI applications designed to carry on learning in the field.
Chloe Knowles

Senior Laboratory Workflow Optimisation Consultant, Leica Biosystems
Digital pathology and AI workflows need to be patient-centric and ensure that users are confident in the output. I would like to see safeguards and regulations that are in line with ISO accreditation and that are led by experts in the field, together with laboratories that are using AI or are currently on their implementation journey. First-hand experience of the challenges and pitfalls helps to define regulations that are relevant and specific enough to protect laboratory staff and patients. This also includes training and education for both the public and the workforce, so they have a good understanding of the benefits on offer.
The online response
We put the question to IBMS followers on social media.
Here’s what they said…
- Carl Nkemdilim Onwochei
I think, for me personally, it’s whether there will be enough data to train applications to have a desired output, how often it will be updated to reflect changes to diagnostic practice and whether there is some kind of quality indicator to provide assurance that the data generated is robust, auditable and accurate.
- Richard McNamara
I’m currently on the IBMS Certificate of Expert Practice (CEP) in Laboratory Information Technology and Clinical Informatics and we’re discussing AI. It struck me that at some point in the future, upon implementation of AI, we’re going to have to explain to a UKAS assessor why the result generated is what it is. With these self-taught systems, this could be a difficult task!
- Jo Torres
The CEP in Laboratory Information Technology and Clinical Informatics is cool – I completed it last year. Probably the most enjoyable course I have ever done. I think it’s a great offering from the IBMS.
If you have a Big Question topic you would like us to discuss in the magazine, email [email protected]