AI in Medical Imaging
Medical imaging plays a central role in modern medical care. While human expertise and care remain essential to a patient-centered service, AI and machine learning will be central to the effort to improve patient experiences and outcomes by saving significant time and money on repetitive, well-defined diagnostic tasks.

AI-driven diagnostics is also increasingly important given the sustained, significant growth in the volume of medical imaging data, driven by new and emerging technologies, an aging population with more to spend on preventative health care, a trend towards seven-day services, and a shortage of skilled diagnostic professionals.

Furthermore, a significant number of examinations take several days or even weeks to complete due to work overload. New care models and better outcomes therefore require innovative approaches that maximize the capabilities of the entire imaging team.

Use cases

  • Fracture diagnostics: peripheral limb fracture detection
  • Skin cancer detection
  • Lung cancer detection: malignancy prediction
  • Diabetic eye disease screening including diabetic retinopathy (DR) and diabetic macular edema (DME)
  • Histopathologic cancer detection: identifying metastatic tissue in histopathologic scans of lymph node sections

Algorithms

Deep neural networks dominate computer vision today: since 2012, state-of-the-art vision models have been almost exclusively based on deep learning. Our general approach is to build on existing state-of-the-art open-source deep learning vision models, apply transfer learning to fine-tune them for different problem contexts, and prune them to fit given hardware constraints. We apply rigorous evaluation pipelines along with data augmentation and model regularisation techniques to ensure model generalisability. We also train established architectures from scratch to set baseline models and track progress throughout our projects.

Data

Besides modern deep learning algorithms and the growth in GPU computing power, a key enabler of the recent success of AI models has been the availability of large training datasets. For AI-assisted medical image diagnostics, this means access to sufficiently large databases of labeled images. If you already possess such a dataset (the required size depends on the complexity of the use case and the performance required of the model), the innovation lies mainly in shaping and fine-tuning models to fit the given problem context and ensuring model generalisability through rigorous evaluation methods.

Workflow

A standard project workflow looks like this:

  • Arrange approvals and data-sharing agreements.
  • Once the data is shared, or we are granted access to the computing environment where it sits, perform exploratory data analysis to identify potential data quality issues and clean the data as required.
  • Set up baseline models and an evaluation pipeline, so that progress can be tracked throughout the project against a solid evaluation framework.
  • Review suitable neural network architectures; if the data can't leave your organization, assess the feasibility of implementing different algorithms on your computing platform.
  • Iterate through different model architectures and hyperparameter settings - a process called hyperparameter tuning. This can surface further data issues; for instance, certain data points may need to be reevaluated, or additional annotated data collected.
  • Wrap the final models into dockerized APIs that can be served as web applications and used through standard web browsers by medical staff.
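The baseline-and-evaluation step can be sketched in plain Python. This is a minimal illustration of a fixed evaluation harness, so that a trivial baseline and any candidate model are scored identically; the labels and predictions are made up for the example (1 = finding present, 0 = absent).

```python
def evaluate(y_true, y_pred):
    """Return sensitivity, specificity and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),       # recall on positive cases
        "specificity": tn / (tn + fp),       # recall on negative cases
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy held-out set: an "always negative" baseline vs. a candidate model.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
baseline = [0] * len(y_true)
candidate = [1, 0, 1, 0, 0, 1, 1, 0]

print(evaluate(y_true, baseline))   # sensitivity 0.0: useless clinically
print(evaluate(y_true, candidate))  # all three metrics 0.75
```

Reporting sensitivity and specificity separately matters here: a high-accuracy model that misses positive findings would look fine on accuracy alone.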

Risks

There are a number of risks to be aware of before embarking on an automation journey like this. One concerns the quantity and quality of the available dataset: in short, the more data you have the better, and high-quality annotation is imperative. Another risk arises from the transferability of pre-trained model parameters to your problem domain: fine-tuning pre-trained models may prove less successful than training models from scratch, which can increase the time and effort required. You might also need dedicated GPU hardware to train algorithms if the data can't leave your organization. A final technical risk is that models can overfit to images acquired with certain technologies, making the solution difficult to apply in settings where image acquisition differs significantly.
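One standard check for the acquisition-shift risk above is to score a model per scanner or site rather than only in aggregate. A minimal sketch, with made-up data and scanner labels:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, y_true, y_pred) tuples; returns per-group accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Aggregate accuracy (4/6) hides that scanner_b performs much worse.
records = [
    ("scanner_a", 1, 1), ("scanner_a", 0, 0), ("scanner_a", 1, 1),
    ("scanner_b", 1, 0), ("scanner_b", 0, 1), ("scanner_b", 0, 0),
]
print(accuracy_by_group(records))
```

A large per-group gap like this is a signal to collect training data from the under-performing acquisition setting, or to restrict the deployment scope.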

We are in early discussions with partners around the world to continue research on use cases, perform clinical validation studies and deploy solutions. If you are interested in future collaboration, please complete this form or send us an email to hello@neuralmachines.co.uk

Sound interesting?
If so, please contact us at hello@neuralmachines.co.uk!