Health Technology: The Digital Revolution - Part 1: AI & Imaging


By Akash Patel, Outreach Manager and Writer, Science Entrepreneur Club

Our era is witnessing a technological revolution. Healthcare is becoming increasingly digitised, empowering both patient and physician. We’re using computational power and data to better predict, diagnose and manage patients with complex health conditions. In part one of this series we explore AI and imaging and the effects these have on the diagnosis and treatment of cancer.

Dr Raj Jena uses a Microsoft system called InnerEye to mark up scans automatically for prostate cancer patients. Photograph: Linda Nylind/Microsoft Project InnerEye Study


Chronic diseases are on the rise, with a threefold increase in the number of those with cancer in the last 40 years (1). In the face of mounting pressure in healthcare, all available technologies are being leveraged to deliver innovations that will provide sustainable long-term solutions. 90% of all healthcare data is produced via medical imaging and, more staggeringly, 97% of this data remains unused and unanalysed. Hospitals are producing up to 50 petabytes (10^15 bytes) of data annually (2). With this in mind, AI-powered image-analysis tools have been developed that assist clinicians in the diagnosis of such conditions and provide increased precision when administering treatments such as radiotherapy.

Recent developments in imaging can show tissues in three dimensions, allowing specially trained AI algorithms to analyse them. AI and imaging is a particularly important innovation for cancer treatment as it enables the precise location, type and stage of a tumour to be identified (3). Currently, much of this is done manually, but it is time-consuming for radiologists and oncologists to continuously review imaging data. AI software can do this rapidly, allowing doctors to focus on the more challenging cases. With an ageing population and more cancer screenings taking place, such technology is proving indispensable, as demonstrated at Addenbrooke's Hospital in Cambridgeshire, UK.

At Addenbrooke's, oncologists and radiologists are using Microsoft's InnerEye system to comb through patient images of complex cancers located deep inside tissue, mapping specific areas to be targeted for radiotherapy. Traditionally, these doctors have had to spend hours reviewing up to 100 images from a single patient, each showing slices of complex regions such as brain tissue, and carefully map out the parts that the radiotherapy beam would target. This is incredibly time-consuming and labour-intensive, requiring painstaking accuracy. Microsoft's system is able to carry out the entire 3D segmentation of complex cancers, such as a glioblastoma (brain tumour), automatically in a short space of time using a machine-learning algorithm. This algorithm has been systematically trained on multiple images to recognise the appearance and structure of a glioblastoma, accurately isolating and mapping the area of the tumour for radiotherapy. The algorithm used is known as a 'decision forest', which classifies regions of the images as belonging to tumour or healthy tissue. Key to Microsoft's system is not that it completely removes the physician, but that the AI is assistive to their work; physicians are able to maintain full control over the process at all times (4) (5).
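To make the decision-forest idea concrete, here is a minimal sketch using scikit-learn's RandomForestClassifier. The two voxel features (intensity, local texture variance) and the synthetic data are purely illustrative assumptions for this example, not Microsoft's actual pipeline or features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic training data: each row is a voxel described by two
# illustrative features (intensity, local texture variance).
rng = np.random.default_rng(42)
healthy = rng.normal(loc=[0.3, 0.1], scale=0.05, size=(200, 2))
tumour = rng.normal(loc=[0.7, 0.4], scale=0.05, size=(200, 2))
X = np.vstack([healthy, tumour])
y = np.array([0] * 200 + [1] * 200)  # 0 = healthy, 1 = tumour

# A decision forest is an ensemble of decision trees that vote on
# each voxel's class; labelling every voxel segments the image.
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X, y)

# Classify two unseen voxels, one drawn from each cluster.
print(forest.predict([[0.32, 0.12], [0.68, 0.38]]))  # -> [0 1]
```

In a real system the features would be computed for every voxel of the 3D scan, so the forest's per-voxel votes collectively trace out the tumour boundary for the radiotherapy plan.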

Using AI, the time it takes for the patient to receive treatment is shortened significantly. Studies show that the average radiologist is required to interpret one image every 3-4 seconds in an 8-hour workday in order to meet the demands of their workload (6). The level of visual perception and attentiveness required by radiologists to maintain such a high level of decision-making makes the occurrence of errors almost inevitable. AI can help reduce this human error.

Artificial intelligence methods in medical imaging.

This schematic outlines two AI methods for classifying image data when diagnosing lesions as either benign or malignant. a) The first method uses features of the cancer extracted from regions of interest on the basis of expert knowledge. Such features include tumour volume, shape, texture, intensity and location. The most robust of these features are then fed into machine-learning classifiers. b) The second method relies on deep learning and does not require annotation of specific regions, instead learning the types of tumours from their localisation. The neural network has several layers that enable classification to be learnt during training - this is very much the model Microsoft's InnerEye system is based upon.
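The expert-knowledge route in method (a) can be sketched with a small, hypothetical feature extractor: the function name, the 3 features chosen and the toy region of interest below are assumptions for illustration, standing in for the richer radiomic features (volume, shape, texture, intensity, location) the schematic describes.

```python
import numpy as np

def extract_features(roi: np.ndarray) -> dict:
    """Hand-crafted, radiomic-style features from a 2D region of
    interest (ROI): size, intensity and texture summaries of the
    kind an expert might specify in method (a)."""
    mask = roi > 0               # non-zero pixels belong to the lesion
    pixels = roi[mask]
    return {
        "area": int(mask.sum()),                # proxy for tumour volume
        "mean_intensity": float(pixels.mean()),
        "texture_std": float(pixels.std()),     # crude texture measure
    }

# A toy 5x5 ROI: a bright 3x3 lesion on a dark background.
roi = np.zeros((5, 5))
roi[1:4, 1:4] = [[0.8, 0.9, 0.8],
                 [0.9, 1.0, 0.9],
                 [0.8, 0.9, 0.8]]
features = extract_features(roi)
print(features["area"])  # -> 9
```

Vectors like this one, computed across many annotated patients, are what the downstream machine-learning classifiers in method (a) are trained on.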

Recent years have seen remarkable progress in the development of AI algorithms that accurately perform image recognition and, more specifically, in deep learning and convolutional neural networks (CNNs). The deep-learning algorithms that have been developed are able to recognise complex patterns in image data and provide quantitative analysis of the radiographic characteristics of these images. Thus, image-based biomarkers for tumours can be identified, motivating more rapid treatment and increasing the likelihood of a positive prognosis. Algorithmic analysis of radiographic images can aid in segmenting the tumours, analysing the individual images that have been taken of slices of tissue and, most importantly, optimising the dose of radiation given to the patient. AI can monitor the prognosis of these patients by performing assessments that evaluate the success of the radiation therapy, complementing the physician's judgement and enabling rapid progress to further treatments if necessary (7).
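The core operation behind a CNN's pattern recognition is the 2D convolution: a small learned kernel slides over the image and responds where a local pattern appears. The NumPy sketch below is a hand-rolled, illustrative version with a fixed edge-detecting kernel; in a trained CNN the kernel values are learnt from data rather than specified.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution (no padding): the basic operation
    a CNN layer applies across an image to detect local patterns."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image with a vertical boundary between two "tissues".
image = np.zeros((4, 4))
image[:, 2:] = 1.0

# An edge-detecting kernel: early CNN layers learn filters like this.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

response = conv2d(image, kernel)
print(response)
```

The response map peaks along the boundary column and is zero in uniform regions, which is exactly the kind of low-level cue that deeper CNN layers combine into tumour-versus-healthy-tissue distinctions.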

Healthcare systems are inundated with copious amounts of patient data stemming from the increasing complexity of disease. Imaging is a powerful tool for diagnosing and monitoring disease. There is significant potential for AI, and machine learning in particular, to analyse this data and produce key insights that help physicians better understand and manage multifactorial complex diseases, particularly cancer.

Yet this presents another challenge: how can the data produced be integrated effectively into hospital IT systems? This is particularly problematic in the UK due to the lack of a centralised NHS IT system and ineffective IT systems in many hospitals.

Despite the challenges that remain for AI and imaging, it’s a promising future. The computational analysis of radiographic images enables a transformation in the role of the radiologist, as clinicians increasingly progress towards becoming ‘technologists’. Armed with AI, radiologists, alongside the wider array of physicians, are able to focus on the most critically ill patients and more efficiently diagnose and treat some of our most challenging diseases.

About Clustermarket

Clustermarket is helping scientists, engineers and other technology pioneers to rent lab equipment from nearby institutions and to find the best service providers. The equipment and services listed on Clustermarket are offered by universities, other research institutions and businesses, making research more sustainable.

About Science Entrepreneur Club

The Science Entrepreneur Club (SEC) is a non-profit organisation of curious minds that aims to explore and unite the life science ecosystem by educating, inspiring and connecting. We give scientific entrepreneurs a network and a platform to showcase their innovative technologies, find investors and accelerate their company.