Today we are launching our AI in Health video series – where you get to hear directly from the scientists and engineers explaining the technology and impact of individual real-life applications of ML/AI in Health.

The series complements the Australian e-Health Research Centre’s Exemplars of Artificial Intelligence and Machine Learning in Healthcare (PDF) report, which presents 34 case studies showing how our scientists and engineers use Artificial Intelligence and Machine Learning. These digital health technology platforms and projects are transforming healthcare.

Today, we have four videos:

  • Dr Bevan Koopman providing an overview of the AI and ML primer, which forms Section 2 of our report.
  • Case studies from Dr Kerstin Pannek, describing how we use Machine Learning to help with diagnosis and understanding progression of Cerebral Palsy, and Dr Michael Lawley, explaining how we use Description Logic to “reason” with health data.
  • And Associate Professor Denis Bauer with a special topic on cloud computing and machine learning in health.

We hope you enjoy today’s videos!

Transcript

Overview of AI and ML Primer

[Start of recorded material at 00:00] [CSIRO Senior Research Scientist Dr Bevan Koopman appears prominent on screen]

 

Machine learning and artificial intelligence have become real buzzwords. So have you ever wondered how to separate real-world advances from marketing spin? To help you do this in the health space, we’ve put together a report that details twenty-three real-world applications alongside easy-to-understand technical concepts that are crucial to know in this space.

 

[Animation of Australian e Health Research Centre logo appears on screen]

 

[Bevan Koopman appears prominent on screen. His name and title briefly appear on screen]

 

My name is Dr Bevan Koopman and I’m one of the lead authors of the report. Artificial Intelligence is a somewhat amorphous term. There are many different techniques and algorithms that make up the wide family that is AI, and the boundary can be somewhat subjective. In this primer we constrain our focus on AI to these four domains.

 

[A slide is presented. Headline is Our focus domains for AI in Health. Four dot points are displayed and the top one is highlighted. Text reads: Predictive Analytics and Data Driven Intelligence]

 

[Bevan Koopman appears prominent on screen]

 

First, predictive analytics and data-driven intelligence. This is concerned with extracting insights from existing data, often from large datasets, where it’s difficult for humans to derive such insights. In data-driven intelligence the intent is for the insights to be bottom-up, i.e., identifying trends and insights from often low-level data.

 

Second, knowledge representation and reasoning. This is how we represent or classify information about the world in a form that a computer system can utilize to solve complex tasks, enabling us to infer new knowledge. In health care this is typically about representing medical concepts, such as diseases, their properties and relationships, in a machine-readable and understandable form. In many instances, solving the knowledge representation problem is the key challenge: once the data is represented in the right form, the problem becomes tractable, that is, able to be processed using available compute power in a practical time scale.

 

[The slide reappears. This time the third dot point is highlighted. Text reads: Imaging and Vision]

 

[Bevan Koopman appears prominent on screen]

 

Third, imaging and vision. This involves analyzing images or videos to derive insights into the cause and impact of medical conditions. Computer vision and image processing are two areas that have been transformed by new AI methods, particularly deep learning based methods.

 

[The slide reappears. This time the fourth dot point is highlighted. Text reads: Human Language Understanding]

 

[Bevan Koopman appears prominent on screen]

 

Finally, human language understanding, which uses AI methods to better deal with natural language. While we do strive to standardise data and make it machine readable, humans still communicate in natural language and as such, there will always be data in this form. AI methods therefore aim to handle natural language by extracting meaning, by searching, summarising and classifying such language. These are our four focus areas. Now let’s talk about some of the techniques.

 

[A slide is presented. Headline is Symbolic or Statistical Artificial Intelligence? Three dot points are displayed. Text reads: Symbolic AI representing human knowledge into known facts or rules; Statistical AI learning from the underlying data; Historically segregated, health is one domain with successful hybrid approaches]

 

There are two major branches of AI: symbolic and statistical. Symbolic AI represents human knowledge as known facts or rules.

 

[Bevan Koopman appears prominent on screen]

 

These facts or rules can be combined with mathematical logic to undertake verifiable and explainable problem solving. This paradigm of AI often uses an ontology, that is, a collection of concepts with properties, including the relationships between concepts, to describe a particular domain. While less common in other domains, symbolic AI has a special place in AI in health. This is mainly because the health domain has invested considerable effort in capturing and explicitly representing health data in standards such as the SNOMED CT ontology.

 

[The slide reappears. Symbolic or Statistical Artificial Intelligence?]

 

Statistical AI takes the opposite approach. Rather than predefining the knowledge and rules, it learns these from the data itself.

 

[Bevan Koopman appears prominent on screen]

 

This approach uses existing data and evidence, along with computational techniques, to extract patterns and insights and thus reason about the world. This process involves training a model using available data. Machine learning and its family of algorithms are the key techniques used in statistical AI. Large increases in the availability of electronically captured data and a huge increase in computing power have driven the growth of statistical AI. In many sectors, statistical approaches have superseded symbolic ones, or the two have remained quite segregated.

 

[The slide reappears. Symbolic or Statistical Artificial Intelligence]

 

However, in health, hybrid approaches that utilize symbolic representations with statistical learning have been successful.

 

AI depends on data, and the quality of the data used either to train AI models or for AI-based analysis has a direct impact on the quality of outputs and downstream tasks. As many AI approaches are intrinsically linked to the data type, we outline the myriad forms that health data can take.

 

[A slide is presented with a table. Column headings are Data Type, Examples, Formats/Standards and Avg size/patient. Rows are Electronic Health Record, Genomics, Imaging, Administrative and Sensor Data]

 

This table provides a brief snapshot of the detailed breakdown found in the full report.

 

[Bevan Koopman appears prominent on screen]

 

Perhaps the most well known branch of AI is machine learning. This gives computers the ability to learn without being explicitly programmed.

 

[A slide is presented. Headline is Machine Learning. Three dot points, classification vs regression, supervised vs unsupervised and deep learning]

 

There are three main techniques for machine learning. Statistical machine learning aims to learn predictive functions from training data. Reinforcement learning approaches provide AI algorithms with rewards or penalties based on their problem-solving performance. And deep learning approaches make use of artificial neural networks.

 

[Bevan Koopman appears prominent on screen]

 

There are two main machine learning tasks: classification and regression. Classification involves using a machine learning model to classify some data according to a finite set of categories.

 

For example, classifying the type of cancer found in a pathology report as breast, lung, pancreatic, etc. Regression, in contrast, involves using a machine learning model to predict a continuous value rather than a category. For example, predicting the length of stay for a patient given their condition.
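The distinction can be sketched with a toy example (the data, labels and thresholds below are invented for illustration, not taken from the report): a one-nearest-neighbour classifier for the category task, and an ordinary least-squares line fit for the continuous task.

```python
# Toy sketch of classification vs regression, stdlib only.

def classify_1nn(train, label_of, query):
    """Classification: assign the query the category of its nearest example."""
    nearest = min(train, key=lambda x: abs(x - query))
    return label_of[nearest]

def fit_line(xs, ys):
    """Regression: fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

# Classification: tumour size (cm) -> a finite set of categories (toy labels)
sizes = {1.0: "benign", 1.4: "benign", 3.8: "malignant", 4.2: "malignant"}
print(classify_1nn(list(sizes), sizes, 3.5))  # → malignant

# Regression: severity score -> length of stay in days (toy pairs)
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(round(a * 5 + b, 1))  # predicted stay for severity 5, a continuous value
```

The classifier can only ever answer with one of the categories seen in training, while the fitted line can produce any value on the number line; that is the essence of the split described above.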

Transcript

Title: Diagnosis and Understanding Progression of Cerebral Palsy

[Start of recorded material at 00:00:00] [CSIRO Senior Research Scientist Dr Kerstin Pannek appears prominent on the screen]

 

Kerstin Pannek

Imagine being told that your child is at high risk of serious long term disability, but you have to wait months or even years before you receive a proper diagnosis and even longer for intervention. This is the heartache parents of very preterm babies face every day in Australia. At the Australian e-Health Research Centre we’re using Machine Learning to find ways to diagnose babies at high risk of cerebral palsy soon after birth.

 

[Animation of Australian e-Health Research Centre logo appears on screen]

 

[Kerstin Pannek appears prominent on the screen. Her name and title briefly appear on screen]

 

My name is Kerstin Pannek and I’m a senior research scientist in the Neurodevelopment and Plasticity Team and an expert in brain MRI in babies. Together with the University of Queensland and Monash Health, we are part of one of the world’s largest cohort studies of preterm babies, with the aim of using early MRI to predict outcomes such as Cerebral Palsy.

 

Cerebral Palsy, or CP, is a lifelong physical disability that is caused by abnormal brain development or brain injury during pregnancy or early in life. Every year, more than 400 babies and children in Australia are diagnosed with CP. The brain is most likely to respond to interventions to help reduce the severity of disability between the ages of birth and two years. But the average age of diagnosis is about 18 months. So we’re missing that really important window where intervention can make the biggest difference.

 

Our team has been responsible for analysing early brain MRIs of around 300 very preterm babies who are at high risk of CP so that in the future babies can be identified much earlier.  We do these MRIs without using sedation, while the babies are sleeping. So one of the problems we have immediately is that babies often wake up and move, which corrupts the images.

 

[Brain images generated from MRI machines are displayed. On the left a corrupted MRI and underneath it one without correction and one with correction. On the right an image of a brain’s tractography.]

 

To overcome this problem, we developed machine learning methods that can detect and then correct the corrupted images. This will allow us to use a lot of the data that would otherwise not be usable.

 

We then use these MRIs to calculate brain images that allow us to quantify the baby’s brain development. Here you can see one of the images we calculate.

 

[Kerstin Pannek appears prominent on the screen]

 

Using tractography, we are able to visualize the connections in the baby’s brain completely non-invasively without injecting any dyes or other substances.

 

Recently, we used deep learning to predict a baby’s motor outcomes from brain MRI acquired very early in life. Babies in this study were born up to 16 weeks early and we used brain MRI from when they were two days to 10 weeks old.

 

[Brain images generated from MRI machines are displayed with different regions of the brain highlighted as Dr Pannek explains.]

 

Using our approach, we were able to predict the baby’s motor outcomes at two years with high accuracy. Here you can see which regions of the brain were associated with adverse outcomes. These regions are also known to play important roles in motor function.

 

[Kerstin Pannek appears prominent on the screen]

 

[A PowerPoint slide is displayed with a screenshot of the MILXCloud website and a diagram indicating that data from an MRI machine is processed by cloud compute with a clinical report read by a clinician.]

 

This work and other work that we did using this data has been published in high profile journals, including Neuroimage and Neuroimage Clinical.

 

[A slide is displayed. The left side of the slide shows a diagram indicating that data from an MRI machine is processed by cloud compute with a clinical report read by a clinician. On the right side of the slide the screen shot of the MILXCloud website]

 

We are now implementing our tools in CSIRO’s MILXCloud. MILXCloud is a web interface that provides automated reports of biomedical imaging data. This will allow researchers worldwide to upload their MRIs of preterm-born babies and receive a PDF report detailing quantitative markers of the baby’s brain development. At the moment, these tools are for research use only.

 

[Kerstin Pannek appears prominent on screen]

 

They can be used to gain a better understanding of brain development in general, but they can also be used to recruit the right babies into clinical trials of new early interventions. Ultimately though, we plan to make these tools available for clinical use. This will help provide a diagnosis and prognosis much earlier than the current standard. This means parents can get answers sooner and children will be able to start interventions when their brains are most receptive. This could lead to a reduced severity of disability and better quality of life for the children and their families.

 

[Image of report cover appears on black background with male voiceover]

 

Download the report today for more insights into using artificial intelligence and machine learning for health applications. Read exciting case studies from Australia’s largest digital health initiative, the Australian e-Health Research Centre, and get in touch with us to discuss collaborations.

 

[End of recorded material at 00:04:48]

 

Transcript

Title: Description Logic to Reason with Health Data

[Start of recorded material at 00:00:00] [CSIRO Group Leader Dr Michael Lawley appears prominent on screen]

 

Michael Lawley

Automated reasoning with formal logic has long been a major component of artificial intelligence research. In this case study we’ll see how technology from the Australian e-Health Research Centre is making its application more practical.

 

[Animation of Australian e-Health Research Centre logo appears on screen]

 

[Michael Lawley appears prominent on screen. His name and title briefly appear on screen]

 

Hi, my name is Dr Michael Lawley and I lead the Health Informatics Group at the Australian e-Health Research Centre. We apply both statistical and symbolic AI techniques to improve health data quality, so that we can get the best health care outcomes from relying on this data. When clinicians record data about a patient (symptoms, a diagnosis, prescribed medications and procedures), they need to be precise and unambiguous so that computer algorithms can perform accurate analytics or provide trustworthy clinical decision support. To this end, a controlled vocabulary is used, where each idea or concept is given a unique code.

 

SNOMED CT is the most comprehensive international standard and currently consists of about a half million such codes. Furthermore, these codes are cross-linked to capture the relationships that exist between the different concepts so that computers can reason about them consistently.

 

[An image is presented. The image displays part of the SNOMED CT Hierarchy showing clinical concepts in boxes with lines connecting them representing that the concepts are related]

 

You can see a small part of this hierarchy, shown here horizontally, along with a representation of the relationships for ‘Fracture of lower leg’.

 

[Michael Lawley appears prominent on screen]

 

How does this all work? Let’s look at an example set of concepts representing parts of the anatomy, body structure, lower limb and tibia, as well as fracture, a type of disorder.

 

[An image displays of 5 concepts represented by circles with lines connecting the related concepts]

 

These concepts are connected by is-a edges representing specialisation.

 

[Slide updates. The previous image displays with additional lines, connecting further relationships between the concepts]

 

 

Now, if we consider a lower limb disorder, we can see it is a disorder and it is linked to the lower limb concept.

 

We can now define fracture of lower limb as being both a fracture and a lower limb disorder.

 

[Slide updates. Additional connecting lines appear representing further relationships between the concepts with dotted lines representing inferred relationships]

 

It is then possible to automatically infer that the site is the lower limb.

 

[Slide updates. Additional connecting lines appear representing further relationships between the concepts with a solid line between the Tibia and Fracture of Tibia]

 

Furthermore, we could define fracture of tibia as being a fracture located in the tibia and then automatically infer that fracture of tibia is a fracture of lower limb.
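The inference walked through above can be sketched as a small graph computation: take the transitive closure of the is-a edges, then apply the definition "a fracture whose site lies within the lower limb is a fracture of lower limb". This is only a toy illustration of the idea, not the Description Logic machinery that Snorocket implements.

```python
# Toy sketch of is-a subsumption over the concepts from the example.

# Asserted is-a edges (concept -> its direct parents)
is_a = {
    "tibia": {"lower limb"},
    "lower limb": {"body structure"},
    "lower limb disorder": {"disorder"},
    "fracture": {"disorder"},
    "fracture of lower limb": {"fracture", "lower limb disorder"},
    "fracture of tibia": {"fracture"},
}
# Asserted site relationships
site = {
    "lower limb disorder": "lower limb",
    "fracture of tibia": "tibia",
}

def ancestors(concept):
    """All concepts reachable via is-a edges (transitive closure)."""
    seen, stack = set(), [concept]
    while stack:
        for parent in is_a.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def subsumed_by_fracture_of_lower_limb(concept):
    """Definition check: a fracture with a site in the lower limb."""
    s = site.get(concept)
    return ("fracture" in ancestors(concept)
            and s is not None
            and (s == "lower limb" or "lower limb" in ancestors(s)))

# The inferred relationship: fracture of tibia is a fracture of lower limb
print(subsumed_by_fracture_of_lower_limb("fracture of tibia"))  # → True
```

Nothing in the asserted edges links "fracture of tibia" to "fracture of lower limb" directly; the relationship falls out of the definitions, which is precisely the kind of inference a reasoner performs across all half million SNOMED CT concepts.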

 

[Slide updates briefly. Additional connecting red pointer line representing inferred relationship]

 

[Michael Lawley appears prominent on screen]

 

These definitions, when expressed formally, use a branch of logic called Description Logic. For SNOMED CT there are definitions or axioms for every one of the half million concepts, and these can all interact, allowing new relationships to be inferred. Reasoning with this many axioms can take a long time without the right algorithm. At CSIRO, we developed the world’s fastest automated reasoner, Snorocket, for processing a subbranch of description logic suitable for defining clinical concepts. Snorocket reduced the processing time for SNOMED CT from more than 30 minutes down to under two minutes and for the first time introduced an incremental mode that processed changes in seconds.

 

It also allowed more detail to be expressed in a concept’s definition. This was sufficient to change the SNOMED CT authoring and maintenance process from a batch activity, where a large set of changes was made before their impact was checked, to an online activity where small sets of changes could be processed in semi real time.

 

By profoundly reducing the cost of the authoring and maintenance process, we’ve enabled a significant improvement in the quality of concept definitions.

 

By increasing the available expressive power of the description logic, we have enabled more precise definitions of concepts, especially in the area of medications where reasoning about numbers, such as strengths and quantities is particularly important.

 

Snorocket was the foundation and catalyst for our terminology server, Ontoserver, which is licensed internationally and is the foundation of Australia’s National Clinical Terminology Service. You can explore SNOMED CT in all its detail using our browser, Shrimp.

 

[Image of Shrimp browser displayed on white background]

 

[Michael Lawley appears prominent on screen]

 

[Image of report cover appears on black background with female voiceover]

 

Download the report today for more insights into using artificial intelligence and machine learning for health applications. Read exciting case studies from Australia’s largest digital health initiative, the Australian e-Health Research Centre, and get in touch with us to discuss collaborations.

 

[End of recorded material at 00:04:25]

 

 

 

Transcript

Topic: Cloud Computing and Machine Learning in Health

[Start of recorded material at 00:00:00] [CSIRO Group Leader and Associate Professor Dr Denis Bauer appears prominent on screen]

 

Denis Bauer

Artificial intelligence and machine learning have been around for decades, so why is it that there’s such an excitement around this technology today? Let’s find out with the artificial intelligence report from the Australian e-Health Research Centre.

 

[Animation of Australian e-Health Research Centre logo appears on screen]

 

[Denis Bauer appears prominent on screen. Her name and title briefly appear on screen]

 

My name is Dr Denis Bauer.

And with more than 15 years’ experience in machine learning, I’ve seen that most of today’s successful algorithms have been around for decades. The difference now is the datafication of everything: there is more data available than ever before. This leads to better accuracy and, with it, more trust in the technology. But there’s one other crucial development that has amplified the capability of machine learning like no other: cloud computing. The availability of vast compute resources from public cloud providers means more complex models can be trained in minutes rather than days.

 

This is enabled by new distributed computing approaches, which allow the commodity hardware of public cloud providers to be used for massively parallel yet robust processing of data. However, machine learning is an iterative process that requires information to be kept in memory between iteration steps. With early distributed computing systems like MapReduce unable to cater for this requirement, it took until the development of Apache Spark for machine learning to fully benefit from distributed computing and the capability of the cloud.

 

Since then, the resulting Spark-based machine learning library, MLlib, has seen many machine learning algorithms re-implemented in this high-performance framework. Yet there are still areas where the scalability of Spark’s MLlib is not sufficient, in particular in the health space. Here the number of features, that is, the information points per sample or patient, can quickly grow into the millions and exceed the capability of even MLlib. For example, in the human genomics space, the mutational profile of a patient can contain more than 80 million features.

 

To process this kind of data, algorithms that are purpose-built for the ultra-high dimensional health space are necessary.

 

At the Australian e-Health Research Centre, we have created VariantSpark, an adaptation of the random forest algorithm. VariantSpark is able to handle millions of samples, each with millions of features, to detect disease genes and lead to better diagnostics. Be sure to check out the VariantSpark case study in the report. As well as providing unprecedented processing power, cloud computing has also accelerated the uptake of machine learning.

 

New cloud-native architectures offer economical yet highly scalable infrastructure, which is perfect for serving out the predictions of Machine Learning models. These so-called serverless approaches allow individual elements like compute, storage and communication to be decomposed into modular services, each of which can scale to massive workloads but only incurs cost when actually in use. This enables the design of very powerful compute services without breaking the bank. At the Australian e-Health Research Centre, we have created several serverless web apps that use machine learning to predict biological results from user-provided data.

 

For example, our serverless Variant Effect Predictor can predict the functional outcome of a single misspelling in the DNA and is powerful enough to process all three billion letters in the human genome. Be sure to check out the serverless Variant Effect Prediction case study in the report. To summarize, cloud computing has amplified the capability of even decades-old machine learning algorithms by providing the computational power and economic delivery vector to bring machine learning to the front line and into clinical practice.
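The serverless pattern described above can be sketched as a single stateless handler function: the cloud provider invokes it per request, it runs the model prediction, returns a response, and no server runs (or costs anything) in between. This is a hypothetical Lambda-style sketch with a stubbed-in scoring rule; the function names, request shape and logic are illustrative only and are not the actual serverless Variant Effect Predictor interface.

```python
# Hypothetical serverless handler sketch (stdlib only).
import json

def predict_effect(variant):
    """Stub standing in for a trained ML model's prediction."""
    # Toy rule for illustration only, not a real variant-effect model.
    return "damaging" if variant.get("alt") in {"A", "T"} else "benign"

def handler(event, context=None):
    """Entry point the cloud platform invokes once per incoming request."""
    variant = json.loads(event["body"])
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": predict_effect(variant)}),
    }

# Local invocation with a sample request payload
resp = handler({"body": json.dumps(
    {"chrom": "1", "pos": 12345, "ref": "G", "alt": "A"})})
print(resp["statusCode"])  # → 200
```

Because the handler holds no state between invocations, the platform can run thousands of copies in parallel under load and none at all when idle, which is what makes the economics described above work.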

 

[Image of report cover appears on black background with voiceover]

 

Download the report today for more insights into using artificial intelligence and machine learning for health applications. Read exciting case studies from Australia’s largest digital health initiative, the Australian e-Health Research Centre, and get in touch with us to discuss collaborations.

 

[End of recorded material 00:05:00]