A gentle introduction to image analysis using Neural Networks.

This post has been republished via RSS; it originally appeared at: Healthcare and Life Sciences Blog articles.

The growing volume of medical imaging data makes it increasingly practical to use technology to assist human radiologists. We can use AI to analyze medical images and flag those that require additional review. As models become more accurate, radiologists will review fewer and fewer images and can therefore spend more time on edge and critical cases.


Classic ML has limitations when working with image data. The amount of work and code required to extract features limits its usefulness for processing images. Fortunately, Neural Networks, a subset of ML, have improved to the point where they are ideal for a wide range of problems. Image analysis is one area where Neural Networks, and especially Convolutional Neural Networks (CNNs), are very useful.


Neural networks are designed to mimic the neural networks of the human brain and are used to build models for a wide range of tasks such as object identification. They can also be used for classical ML problems such as regression and classification. The idea behind a neural network is to use layers of neurons to solve a problem. Each layer applies mathematical functions to its input and passes the output to subsequent layers. In object detection, for example, each layer learns to identify part of an image: the early layers identify rough, crude shapes, subsequent layers identify more specific shapes, and eventually the network is able to recognize complete images. Because these neural networks are made up of many layers, they are also referred to as Deep Neural Networks (DNNs), and the area of study is commonly referred to as Deep Learning (DL). Such models could be trained, for example, to separate healthy from unhealthy x-ray images. For a more in-depth introduction to Neural Networks and CNNs, check out this blog.
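To make the "layers of neurons applying math functions" idea concrete, here is a toy sketch in plain NumPy (not from the original post): each layer is just a weighted sum followed by a nonlinearity, and each layer's output becomes the next layer's input. The sizes and random weights are purely illustrative.

```python
import numpy as np

def layer(x, W, b):
    # One layer of neurons: weighted sum of inputs, then a ReLU nonlinearity.
    return np.maximum(0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                               # 4 input features
h = layer(x, rng.normal(size=(3, 4)), np.zeros(3))   # hidden layer: 3 neurons
y = layer(h, rng.normal(size=(2, 3)), np.zeros(2))   # next layer consumes h
```

In training, the weights `W` and biases `b` are what the network learns; stacking many such layers is what makes the network "deep".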


In order to learn more about this technology, we spent a two-week iteration learning the concepts and building a simple CNN to process handwritten digit images (MNIST). The concepts we learnt will be applied to chest x-ray data in a future iteration. We used MNIST because it's easy to understand and most deep learning libraries have built-in support for the dataset. There are several libraries available, and which datasets come built in varies between them. Because we had previous exposure to PyTorch, we chose PyTorch, which has built-in support for the MNIST handwritten digit dataset.


We broke up our learning into two distinct steps, which I would recommend. In the first week, we learnt the basics of neural networks by building an Artificial Neural Network (ANN), commonly referred to as a Multi-Layer Perceptron (MLP). To follow along and build your own, here's the tutorial that we used. This lets you see under the hood how the neurons and layers are built to solve a relatively simple problem. In the second week, we built on this knowledge by adding layers specialized for image processing, turning the network into a convolutional neural network. For an introduction to convolutions and CNNs, refer to this outstanding blog.
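A minimal MLP for 28x28 digit images might look like the following in PyTorch. The layer sizes (784 → 128 → 64 → 10) are illustrative choices, not necessarily those of the tutorial we followed.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),             # (N, 1, 28, 28) -> (N, 784): image becomes a flat vector
            nn.Linear(28 * 28, 128),  # fully connected hidden layer
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 10),        # one score (logit) per digit class 0-9
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
logits = model(torch.randn(4, 1, 28, 28))  # a batch of 4 fake images -> (4, 10) scores
```

Note the `Flatten` at the top: this is exactly the step that discards the 2D arrangement of pixels, a limitation discussed below.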


The MNIST data comes with 60,000 training images and 10,000 validation images, which makes it ideal for learning. Since the training data is labeled, we used it to teach our model to identify and classify handwritten digits. We then used the validation data to measure how accurate our model was. Given the simplicity of the data, we were able to get over 90% accuracy very quickly. ANNs/MLPs are good for simple cases and for illustration, but they have a couple of limitations. Because of their fully connected nature (all neurons in each layer are connected to all neurons in subsequent layers), they have a lot of parameters, which increases computation time and cost. In addition, because the first (input) layer is usually a one-dimensional array, you lose some important spatial information. To address these limitations, we added a couple of layers specialized for processing image data: convolution and pooling layers. If you'd like to follow along, use this tutorial. This type of neural network is called a Convolutional Neural Network and is very good at processing image and sound data.
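A sketch of those specialized layers in PyTorch is below: convolutions slide small filters over the 2D image (preserving spatial structure), and pooling layers shrink the feature maps. The channel counts (16 and 32) and kernel sizes are illustrative assumptions, not the tutorial's exact values.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # (N,1,28,28) -> (N,16,28,28)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> (N,16,14,14)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> (N,32,14,14)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> (N,32,7,7)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),               # flatten only AFTER spatial features are extracted
            nn.Linear(32 * 7 * 7, 10),  # one score per digit class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleCNN()
out = model(torch.randn(2, 1, 28, 28))  # batch of 2 fake images -> (2, 10) scores
```

Because each convolutional filter reuses the same small set of weights across the whole image, the convolutional layers have far fewer parameters than fully connected layers covering the same input, which addresses both limitations mentioned above.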


In a future iteration, we'll dive deeper into CNNs. We'll use them to process chest x-ray data and build a process to help radiologists analyze large numbers of x-rays quickly.
