Quick take: Amazon Lookout for Vision

“Speed is a choice” – Andy Jassy, AWS Partner Keynote, re:Invent, 12/3/2020.

Amazon Web Services (AWS) launched Amazon Lookout for Vision last week at re:Invent. The managed service uses Machine Learning to identify anomalies in manufactured parts.

I wanted to get an idea of how the service works, so I found an online dataset of 245 images of seven types of fabric. The images are labeled as normal or as containing anomalies (the illustration above shows one of each type). In the AWS console I opened Lookout for Vision, created a new project, and immediately started uploading images. Click-and-drag uploads are limited to 30 images at a time, but since my dataset was relatively small, this was not prohibitive. Within a few minutes I had uploaded the defect-free images, then labeled all of them as normal by doing a “select all” over a few pages of displayed images. I followed the same process for the flawed fabric images, labeling them as anomalous. Note: if your images had not already been labeled, you could take the time at this step to label each one individually as ground truth.
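For a larger dataset, the upload could be scripted instead of dragged into the console 30 images at a time. Below is a minimal sketch using boto3; the bucket name, key prefix, and `*.png` glob are hypothetical placeholders, and the batching helper simply mirrors the console's 30-image limit.

```python
# A minimal sketch of scripting the image upload to S3 instead of using the
# console's 30-image click-and-drag. Bucket, prefix, and glob are hypothetical.
from pathlib import Path


def batches(items, size=30):
    """Split a list into chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def upload_images(image_dir, bucket, prefix):
    """Upload every PNG under image_dir to s3://bucket/prefix/."""
    import boto3  # deferred so batches() is usable without the AWS SDK
    s3 = boto3.client("s3")
    for batch in batches(sorted(Path(image_dir).glob("*.png"))):
        for path in batch:
            s3.upload_file(str(path), bucket, f"{prefix}/{path.name}")
```

With the 245-image fabric dataset, that works out to eight full batches of 30 and one final batch of 5.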

Having given Lookout for Vision a labeled dataset, I trained the model, which took about 75 minutes. The training process held out 64 images as a test set (the model never saw these images during training). When the trained model was applied to these held-out images, it got the correct answer in 61 cases, for an overall accuracy of 95%. The confusion matrix for how well the model predicted whether these images were anomalous or normal is shown below.
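The dashboard's headline numbers can be recomputed from the confusion-matrix counts. The sketch below uses a hypothetical split of the 64 test images; the article only reports 61 correct overall, so the per-cell counts are illustrative, chosen just to be consistent with that total.

```python
# Recompute accuracy, precision, and recall from 2x2 confusion-matrix counts,
# treating "anomaly" as the positive class. The per-cell counts passed in
# below are hypothetical, chosen only to be consistent with 61 of 64 correct.
def metrics(tp, fp, fn, tn):
    """Return standard binary-classification metrics as a dict."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,  # all correct / all images
        "precision": tp / (tp + fp),    # flagged anomalies that were real
        "recall": tp / (tp + fn),       # real anomalies that were flagged
    }


m = metrics(tp=20, fp=2, fn=1, tn=41)   # 61 of 64 correct -> ~95% accuracy
```

Precision and recall matter here because in defect detection the two error types usually have very different costs: a missed anomaly (false negative) ships a flawed part, while a false positive just triggers a manual re-check.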

Amazon Lookout for Vision presents this information in a dashboard, which gives slightly more granular statistics.

Finally, we want to be able to use the model on new images. In the console, I uploaded some test images to an S3 bucket, then pointed Lookout for Vision’s “Trial detections” feature at that bucket and ran the trial. After a few minutes, it reported back its predictions. Even better, it let me verify its work manually: my verification of whether the model predicted normal/anomaly correctly is then automatically added to the dataset to make the model better.
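Beyond the console's trial detections, the same check can be done programmatically once a model version has been started (via the service's start_model operation). Below is a sketch using boto3's lookoutvision client; the project name and model version are hypothetical.

```python
# A sketch of scoring a single new image against a trained, running
# Lookout for Vision model via boto3's "lookoutvision" client. The project
# name and model version are hypothetical; the model must already be started.
def detect(image_path, project="fabric-defects", model_version="1", client=None):
    """Return (is_anomalous, confidence) for one PNG image."""
    if client is None:
        import boto3  # deferred so the function can be exercised with a stub
        client = boto3.client("lookoutvision")
    with open(image_path, "rb") as f:
        response = client.detect_anomalies(
            ProjectName=project,
            ModelVersion=model_version,
            ContentType="image/png",
            Body=f.read(),
        )
    result = response["DetectAnomalyResult"]
    return result["IsAnomalous"], result["Confidence"]
```

In production you would keep the model started only while scoring, since a running model is billed by the hour.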

My biggest takeaway from this brief excursion into industrial anomaly detection is that you can move fast if you choose to. Although I have worked extensively with computer vision, I really don’t know a thing about industrial anomaly detection, yet start to finish, the process described above took about two person-hours, from finding a dataset to beginning to iterate on the model. Getting a 95% accuracy figure in a few hours is a huge win: it gives us a decision point going forward as to whether this is sufficient for our business purposes or, if not, how far we will need to go to get there. As with other AWS managed Machine Learning services, I didn’t need to know what was under the hood, and I was able to make rapid, value-added progress.

I’m almost certain that I could have built a better model if I had started from scratch with a custom computer-vision implementation of this anomaly-detection process. But getting to an actionable result would likely have taken a week or two and would have depended on my prior experience in the field. For time- and resource-challenged teams, choosing to skip that opportunity cost by leveraging this managed service for a rapid prototype could mean faster and better decision making.

As Andy Jassy said, “speed is a choice.”