
Abstract

This white paper delves into the implementation of an AI-based Automatic Visual Inspection System for the manufacturing industry, focusing on how Microsoft Azure AI Custom Vision Service can be applied to specific use cases. It provides a detailed overview of the system's architecture, key Azure AI components, configuration, and operational workflow. Additionally, it demonstrates how this innovative approach enhances defect detection accuracy and reliability, ultimately boosting operational efficiency. By examining standards, benefits, limitations, models, and use cases, this paper offers valuable insights into the potential advantages and challenges of adopting AI-driven visual inspection systems in manufacturing.

Introduction

During the verification and validation process on the shop floor of any manufacturing industry, the QA team uses multiple techniques. One common technique is visual inspection, where technicians examine the test item, usually with the naked eye and sometimes with simple tools such as magnifying glasses. This non-invasive method does not require measuring equipment and is often used to assess the general condition of an item under test. Technicians check for missing components, misalignment of installed parts, surface wear, tears, cracks, deformation, corrosion, or other types of damage.

Although simple, visual inspection requires trained technicians to conduct a thorough examination, making it a time-consuming task. This method has been standardized under DIN EN 13018 (General Principles of Visual Inspection).

Since manual visual inspection is time-consuming and less accurate, Automatic Visual Inspection Systems (AVIS) have become increasingly popular. Such systems also align with Smart Factory requirements.


What is an Automatic Visual Inspection System?

An AVIS uses sensors to capture images of test objects from various angles. These images are then analyzed using image processing and analysis software to detect any defects. Based on the image analysis, the visual inspection software reports the test result as either “Pass” or “Reject”. Many advanced software packages can also highlight the identified defects and provide detailed information about them. The following figure presents a simple block diagram of an Automated Visual Inspection System.

Figure 1: Block Diagram of an Automated Visual Inspection System

AVIS primarily relies on image analysis techniques and algorithms. The entire operation involves several processes including image acquisition, image enhancement, image segmentation, image feature selection, feature classification, and feature matching. Typically, the following algorithms, transforms, methods, and techniques are used during image analysis and defect detection:

[Image: algorithms, transforms, methods, and techniques used in image analysis and defect detection]
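To make one of these classic techniques concrete, the following is a minimal, pure-Python sketch of Sobel edge detection, one of the standard operators used in the image enhancement and segmentation stages. It is illustrative only; a production AVIS would use an optimized library implementation.

```python
# Illustrative Sobel edge-detection sketch (pure Python, not production code).
# The 3x3 Sobel kernels approximate the horizontal and vertical image gradients;
# a large gradient magnitude indicates an edge such as a part boundary or crack.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_magnitude(img):
    """Return the gradient magnitude of a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for j in range(3):
                for i in range(3):
                    p = img[y + j - 1][x + i - 1]
                    gx += KX[j][i] * p
                    gy += KY[j][i] * p
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Tiny synthetic test image: dark left half, bright right half (a vertical edge).
img = [[0] * 4 + [1] * 4 for _ in range(8)]
edges = sobel_magnitude(img)
```

The response is strong along the column where intensity jumps and zero in the flat regions, which is exactly the property segmentation and feature-extraction stages build on.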

Based on when the image analysis is performed, AVIS can be classified into the following categories:

[Image: classification of AVIS based on when image analysis is performed]

With the rise of Industry 4.0, there is a growing demand for AVIS systems that leverage cutting-edge tools and technologies such as Artificial Intelligence (AI).

Let's now take a deeper dive to explore the overview and implementation of an Azure AI-based Automatic Visual Inspection System.

AI-based AVIS

We have already seen that image analysis is a critical component of any AVIS for determining defects. Image analysis involves multiple stages of image processing, making it a complex task, and implementing image analysis algorithms programmatically can be challenging. AI-based image analysis, however, offers numerous advantages, including faster processing and higher accuracy. The following AI techniques are typically employed in implementing an AVIS.

[Image: AI techniques employed in AVIS implementation]

For image classification models, a set of training and test images is required. The training set contains labeled images of the test object both with and without defects, while the test set is used to verify the model's predictions on images it has not seen. A CNN-based inspection model facilitates advanced learning from defective images, allowing an AVIS to detect and classify defects in different environments.

It is important to note that in AI-based AVIS implementations, the accuracy of defect detection and the overall system performance depend heavily on the AI techniques employed. The selection of an AI technique is determined by the specific use case and defect detection requirements. An AI technique that is effective for detecting one type of defect may not be equally effective for other use cases. Choosing the right AI technique depends on:

[Image: factors in selecting an AI technique]

Due to its flexibility and ease of implementation and deployment, AI-based AVIS is widely used in manufacturing and production for product quality assurance, inventory management, and automated defect detection. AI systems can be trained to identify a wide variety of defects specific to different applications and use cases. Some of the detectable defects include:

[Image: types of detectable defects]

Many cloud platforms, such as Azure, AWS, and GCP, offer AI services that can be used to implement a visual inspection system with minimal configuration and programming. Using cloud AI services to develop an AVIS offers several advantages, as many low-level functions, such as selecting the right AI techniques, building and training models, and designing the user interface, are handled by the cloud platform itself. Let us now delve deeper into implementing an AVIS using MS Azure's AI Custom Vision Service.

Azure AI enabled AVIS

AI Custom Vision is a part of MS Azure AI Services. The cloud-based Azure AI Vision service provides developers with access to advanced algorithms for processing images and extracting information. Custom Vision enables users to define their own labels and train custom models to detect them. It can be accessed through a client library SDK, REST API, or through the Custom Vision web portal. By uploading an image or specifying an image URL, Azure AI Vision algorithms can analyze visual content in diverse ways, based on inputs and user configurations.
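As a sketch of the REST access route, the snippet below constructs (but does not send) a request to the Custom Vision image-classification prediction endpoint. The endpoint, project ID, published iteration name, and key values are hypothetical placeholders; only the URL pattern and the Prediction-Key header reflect the documented Custom Vision v3.0 prediction API.

```python
# Sketch: building a Custom Vision "classify image" prediction request.
# All credential and resource values below are hypothetical placeholders.

def build_prediction_request(endpoint, project_id, published_name, prediction_key):
    """Return the URL and headers for posting raw image bytes for classification."""
    url = (f"{endpoint}/customvision/v3.0/Prediction/"
           f"{project_id}/classify/iterations/{published_name}/image")
    headers = {
        "Prediction-Key": prediction_key,
        "Content-Type": "application/octet-stream",  # raw image bytes in the body
    }
    return url, headers

url, headers = build_prediction_request(
    "https://example.cognitiveservices.azure.com",  # hypothetical resource endpoint
    "00000000-0000-0000-0000-000000000000",         # hypothetical project ID
    "avisIteration1",                                # hypothetical published iteration
    "<prediction-key>",
)
# The captured image bytes would then be POSTed to `url` with these headers,
# e.g. via urllib.request or the azure-cognitiveservices-vision-customvision SDK.
```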

Please refer to the following figure, which presents a high-level block diagram of an Azure AI-based AVIS. While we have considered an Azure cloud-deployed AVIS, containerization allows for the creation of portable models that can also be deployed on edge devices. Vision inference models can be trained in the cloud, containerized, and used to build custom modules for Azure IoT Edge runtime-enabled devices. Deploying vision AI solutions at the edge yields significant cost and performance benefits.

Figure 2: High Level Block Diagram of MS Azure AI-based AVIS

Before starting any AI-based AVIS implementation, it is necessary to identify the end purpose and the application areas where the system will be deployed. For this basic implementation, we have considered a simple problem statement: inspecting the outer enclosure (10 cm x 8 cm x 6 cm) of a finished product, which has 8 screws, to automatically detect whether all screws are properly fixed on the enclosure cover. Please refer to Figure 3.

The finished products will be placed on a moving conveyor, and a camera of suitable resolution will be needed to capture images of the products. Please note that detecting fine defects in complex assemblies requires high-resolution, high-quality images, and therefore a good camera. However, considering the current problem and the dimensions of the test object, a low-cost USB camera with a resolution of around 2 MP will be sufficient. The camera communication interface should be selected based on the IoT gateway and budget. Since USB cameras are low cost and widely supported by many IoT gateways, we recommend a USB interface camera for this application.

Figure 3: Test Object

To transfer the captured images of the test object to the cloud-based AVIS, a suitable gateway is required. The selection of the gateway will be based on the following key parameters.

[Image: key gateway selection parameters]

Considering the camera’s communication interface, the connection to the Azure Cloud instance, and the image storage and compression requirements, any low-cost enterprise gateway would be suitable for this purpose. We have selected the Cyient IoT Gateway 5400 for this implementation, as it offers a USB 2.0 communication interface, ample local storage, 2 GB of RAM, and a comprehensive communication stack (Ethernet, Wi-Fi, 5G, LTE, LoRa) for secure cloud connectivity. It also runs a Linux operating system, supporting advanced processing and edge analytics. Furthermore, the Cyient IoT Gateway 5400 is Azure Certified, making it an ideal choice for this application.

As shown in Figure 2, the USB camera is positioned on a pole perpendicular to the conveyor belt, providing a clear view of the test object. The camera is connected to a nearby IoT gateway via a USB cable. The camera captures live images of the test object, and the application running on the IoT gateway collects the image data, performs image compression, and sends the time-stamped, compressed images along with an image ID to the Azure Cloud Platform over an HTTPS connection.
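The gateway-side packaging step described above can be sketched as follows. The payload field names, the use of zlib compression, and base64 encoding are illustrative assumptions, not details of the actual gateway application.

```python
# Sketch: packaging a captured image on the gateway before upload.
# Field names, zlib compression, and base64 encoding are assumptions for
# illustration; the real gateway application may differ.
import base64
import json
import time
import uuid
import zlib

def build_image_payload(image_bytes: bytes) -> str:
    """Compress an image, attach a timestamp and image ID, serialize to JSON."""
    return json.dumps({
        "image_id": str(uuid.uuid4()),      # unique ID to correlate results
        "timestamp": time.time(),           # capture time (epoch seconds)
        "encoding": "zlib+base64",          # so the cloud side knows how to decode
        "data": base64.b64encode(zlib.compress(image_bytes)).decode("ascii"),
    })

raw = b"\x89PNG...raw image bytes..."       # stand-in for real camera output
payload = build_image_payload(raw)
# The gateway application would POST this payload to the cloud endpoint over HTTPS.
```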

Figure 4: Cyient IoT Gateway 5400 – Azure Certified Gateway

The Azure Cloud Platform hosts the Azure AI services, which must be configured before use. The Azure AI Custom Vision Service, part of the Azure AI Services suite, hosts the Custom Vision models.

Figure 5: Azure AI Custom Vision Data Process Model (Source: Microsoft Learn)

The data sets used by the Custom Vision AI model are as follows.

Training Images: These are the customer-supplied images that cover various categories of defects, as well as images without defects. The images are labeled and used for training base models. Once the project is created, these images are uploaded and stored in Custom Vision Service. For the present implementation, we have included images showing the enclosure with all 8 screws intact, as well as images with some screws missing. Refer to Figure 6.

Training Labels: These are metadata generated from the data labeling, which annotates the training images. Examples include “Perfect Board”, “Defective Board”, “A Screw Missing”, “B Screw Missing”, “C Screw Missing”, “D Screw Missing” etc. Refer to Figure 10.

Prediction Images: These are images of the test object captured by the camera that require defect identification. These images are received at the cloud platform and sent to the Custom Vision service for prediction. After processing, the images can be stored and associated with the project for further labeling and training. Refer to Figure 7.

Prediction Results: These are the results produced by Custom Vision after running inference on prediction images using the trained model. These results are then linked to the corresponding prediction image to identify the defect. Refer to Figure 11.

Custom Model: This is the model trained on the labeled training images and tested with the prediction images. The model can be hosted within Custom Vision and used with new prediction images. Additionally, it can be exported for embedding into an AI application.

Figure 6: Training Images

Note: Each enclosure image is a separate image.

Figure 7: Prediction Images

As shown in Figure 5, the Custom Vision Service requires training images and labels for training through transfer learning to generate a custom model tailored to the specific use case. Refer to Figure 8, which presents the model training interface. The custom model can be used for inference with prediction images, and these images can be stored for future training iterations to further improve the model.
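At inference time, the model's classification output must be mapped to the AVIS "Pass" or "Reject" verdict. Below is a minimal sketch of that mapping, assuming the standard Custom Vision classification response shape (a "predictions" list of tag names with probabilities); the pass tag name and confidence threshold are hypothetical choices for this use case.

```python
# Sketch: mapping a Custom Vision classification result to a Pass/Reject verdict.
# The "Perfect Board" tag name and 0.9 threshold are illustrative assumptions.

def inspect(result: dict, pass_tag: str = "Perfect Board",
            threshold: float = 0.9) -> str:
    """Return 'Pass' only if the top prediction is the pass tag with high confidence."""
    best = max(result["predictions"], key=lambda p: p["probability"])
    if best["tagName"] == pass_tag and best["probability"] >= threshold:
        return "Pass"
    return "Reject"

# Example response in the Custom Vision classification shape.
sample = {"predictions": [
    {"tagName": "Perfect Board", "probability": 0.97},
    {"tagName": "A Screw Missing", "probability": 0.02},
]}
verdict = inspect(sample)
```

Treating any low-confidence result as "Reject" is a deliberately conservative choice for quality assurance: a borderline item is flagged for manual review rather than passed.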

Figure 8: Model Training
Figure 9: Model Training Iterations Result
Figure 10: Model Training Iterations Result with Training Labels
Figure 11: Prediction Result with Prediction Image

Once the Custom Vision model is trained and evaluated, it is essential to further improve it by deploying the model and testing it regularly with various prediction images to verify its accuracy. Only when the model consistently meets the required standards should it be integrated into the full system.
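This ongoing evaluation is typically tracked with the per-iteration precision and recall figures that Custom Vision reports. For reference, these metrics are computed from raw counts as follows; the counts in the example are illustrative, not results from our model.

```python
# Sketch: the precision/recall metrics Custom Vision reports per iteration,
# computed here from raw true-positive / false-positive / false-negative counts.

def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision = of the items flagged as defective, how many really were.
    Recall = of the truly defective items, how many were flagged."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative counts: 47 defects correctly flagged, 1 false alarm, 2 missed.
p, r = precision_recall(tp=47, fp=1, fn=2)
```

For an AVIS, recall usually matters most: a missed defect (false negative) ships a faulty product, whereas a false alarm merely triggers a manual re-check.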

The AI Custom Vision model we developed to detect missing screws in the enclosure required a few hundred training images, including images captured under both ideal and harsh conditions, as well as images showing missing screws. We ensured that the images captured the enclosure from various angles to reflect practical conditions. The model was then refined further by running multiple iterations with additional prediction images. After thorough testing, we observed that the model's predictions, identifying missing screws and classifying perfect boards, were nearly 100% accurate. Additionally, the results were fast, comparable to the performance of a real-time AVIS. Refer to Figure 11.


Advantages of MS Azure AI Custom Vision Service

[Image: advantages of MS Azure AI Custom Vision Service]

Advantages of AI-based AVIS

[Image: advantages of AI-based AVIS]

Limitations of AI-based AVIS

While AI-based AVIS has no major limitations, the accuracy of defect identification depends heavily on the quality of the training images and the AI algorithm used to train the model. If the model is not professionally trained and thoroughly evaluated, it may produce incorrect predictions, leading to inaccurate results. Deploying such an AVIS in a real production environment could result in poor product quality and deficiencies in the final products.

Conclusion

AI is a rapidly advancing field with diverse applications and exciting prospects. AI-based AVIS are more intelligent and versatile, and can be easily integrated into various industries, leading to increased automation, efficiency, and productivity. These systems ensure that defects are detected early in the production process, reducing waste and rework costs and saving time and resources. Additionally, they provide valuable data analytics to further enhance operations.

The Azure AI-based AVIS implemented for detecting missing screws and identifying good and defective enclosures was highly effective in serving its intended use case. The application demonstrated near-perfect accuracy, and the results were produced quickly enough to make it suitable for quality assurance in a continuously operating plant. The system also proved flexible, allowing multiple types of defects to be detected by providing appropriate training images and labels, making it easy to train and evaluate the custom model.

AI-based AVIS aligns with the Smart Factory framework and will play a critical role in modern manufacturing industries, operating in accordance with Industry 5.0 practices and guidelines.

About the Author



Dr. Dipak Gade
Global Head, IoT and Manufacturing Operations Management Practice

Dr. Dipak Gade is the Global Head of the IoT and Manufacturing Operations Management Practice at Cyient. He has around 24 years of experience in software design, development, testing, review, delivery, systems engineering, and program management. Dr. Gade has led various projects in IT, embedded software, industrial automation, mining, M2M, digital manufacturing, IoT, and cloud computing. He assists global customers with the successful implementation of Smart Factories, digital transformation, and Industry 4.0 initiatives, leveraging his expertise in industry and process automation. He holds a Post Doctorate in Computer Science and Engineering and is an alumnus of IIM Calcutta.

About Cyient

Cyient (Estd: 1991, NSE: CYIENT) partners with over 300 customers, including 40% of the top 100 global innovators of 2023, to deliver intelligent engineering and technology solutions for creating a digital, autonomous, and sustainable future. As a company, Cyient is committed to designing a culturally inclusive, socially responsible, and environmentally sustainable Tomorrow Together with our stakeholders.

For more information, please visit www.cyient.com
