Fetal growth assessment is a critical component of prenatal care, providing valuable insights into fetal development and overall pregnancy health. Traditional methods rely on manual measurements of fetal biometry parameters such as Biparietal Diameter (BPD), Head Circumference (HC), Abdominal Circumference (AC), and Femur Length (FL); though effective, they are time-intensive and prone to variability among practitioners. We have developed a deep learning algorithm called CyNet that delivers accurate fetal biometry measurements, achieving a segmentation Dice score of 0.90. The integration of Artificial Intelligence (AI) in this field represents a significant advancement, automating these processes and enhancing both accuracy and efficiency. AI-driven algorithms streamline the extraction and analysis of fetal growth data from ultrasound images, ensuring consistent, reliable fetal biometry measurements. This white paper explores how AI-powered fetal growth assessment can enhance prenatal care, driving improved outcomes for both patients and healthcare providers.
Accurate assessment of fetal growth is essential for effective prenatal care, providing crucial information about the health and development of the fetus. Traditionally, this process involves sonographers manually measuring key fetal biometry parameters such as Biparietal Diameter (BPD), Head Circumference (HC), Abdominal Circumference (AC), and Femur Length (FL) from ultrasound images. These measurements help estimate gestational age, monitor growth patterns, and detect potential developmental anomalies. However, manual methods are not only time-consuming but also subject to observer variability, leading to potential inconsistencies in assessments.
The CyNet model provides accurate measurements of the fetal biometry parameters, helping clinicians estimate growth patterns and detect potential anomalies as early as possible.
AI offers promising advancements that can address these challenges by automating the measurement process, improving both accuracy and efficiency. By leveraging AI technologies such as convolutional neural networks (CNNs) and models like U-Net for specific parameters, clinicians can obtain more consistent and reliable fetal growth assessments. This white paper explores the potential of AI-driven solutions in prenatal care, focusing on their ability to enhance fetal growth measurements, provide visualization as a 3D model, and improve clinical outcomes.
U-Net Model: The U-Net architecture is a popular convolutional neural network (CNN) model specifically designed for image segmentation tasks such as medical imaging and satellite image processing. It was introduced by Olaf Ronneberger et al. in 2015, primarily for biomedical image segmentation. Because U-Net combines large-scale context with fine details from the image, it can make very precise predictions about what different parts of the image represent. Unlike models that require large amounts of training data, U-Net can perform well even with smaller datasets, which is often the case in fields like medical imaging.
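To make the encoder-decoder idea concrete, here is a minimal U-Net-style sketch in Keras; the depth, filter counts, and input size are illustrative placeholders, not the configuration used in this work:

```python
# Minimal U-Net-style network. Depth, filter counts, and input size are
# illustrative placeholders, not the exact configuration used here.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the original U-Net design.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inputs = layers.Input((256, 256, 1))          # grayscale ultrasound frame

# Encoder: capture context while halving spatial resolution.
c1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(c2)

b = conv_block(p2, 128)                        # bottleneck

# Decoder: restore resolution; skip connections re-inject fine detail.
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
c3 = conv_block(layers.concatenate([u2, c2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.concatenate([u1, c1]), 32)

outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # pixel-wise mask
model = Model(inputs, outputs)
```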
CyNet Model: The CyNet model performs classification followed by segmentation. The classification stage is handled by a convolutional neural network (CNN) designed for image classification. The segmentation stage is a network that follows an encoder-decoder structure similar to the U-Net model but introduces several enhancements for improved feature extraction and segmentation accuracy.
The CyNet network introduces larger convolutional filters, LeakyReLU activations, strided convolutions for downsampling, and dropout in the decoder for better generalization. These modifications make the model more suitable for complex image segmentation tasks, particularly where preserving fine details and spatial relationships is critical. By leveraging these improvements, the CyNet model can deliver high-quality segmentation results while remaining efficient and adaptable to a wide range of applications, from medical imaging to object detection in satellite data.
MFP dataset: The ultrasound clinical dataset collected at the Med Fanavaran Plus (MFP) company is used for AC and HC segmentation and to evaluate the proposed method.
FETAL_PLANES_DB: A dataset of common maternal-fetal ultrasound images.
Dataset Collection for Femur Images:
The dataset was compiled by gathering ultrasound images from routine maternal-fetal screening procedures at two separate hospitals, ensuring variability in clinical practices and equipment. The involvement of several operators using different ultrasound machines added another layer of diversity to the dataset. This variability is vital for developing a robust dataset capable of generalizing well across different settings and conditions.
Femur Image Segregation
From this extensive dataset, images specifically depicting the femur were isolated. The femur is a critical anatomical structure often measured during prenatal ultrasound screenings to assess fetal growth and development. Accurate identification and segmentation of the femur are essential for reliable measurements and assessments. We have developed an annotation tool for preparing ground truth for femur images to train the model.
The dataset used for training is a combination of head and abdomen images, with masks taken from the MFP dataset. Additionally, a few femur images from the FETAL_PLANES_DB dataset were included to train the classification model. The training data comprises 80% of the total 174 images, while the test data makes up the remaining 20%, as sketched below.
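A sketch of such an 80/20 split using scikit-learn follows; the file names and class counts are hypothetical placeholders, and the stratification choice is an assumption rather than the documented procedure.

```python
# Illustrative 80/20 train/test split of 174 images. File names and class
# counts below are hypothetical placeholders.
from sklearn.model_selection import train_test_split

image_paths = [f"img_{i:03d}.png" for i in range(174)]
labels = ["head"] * 60 + ["abdomen"] * 60 + ["femur"] * 54  # hypothetical counts

train_paths, test_paths, train_labels, test_labels = train_test_split(
    image_paths, labels,
    test_size=0.20,     # 20% held out for testing
    stratify=labels,    # keep class proportions similar in both splits
    random_state=42,    # reproducible split
)
```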
While a referenced paper estimated only abdomen and head circumference using a modified AI model, the CyNet model first categorizes ultrasound images as abdomen, head, or femur; a segmentation task then measures head or abdomen circumference, or femur length.
What is an Image Annotation Tool?
An image annotation tool is a software application that allows users to label specific regions or features within an image. These labels, or annotations, can take various forms (line, polygon, circle, ellipse, rectangle) and are used to highlight, measure, and categorize different elements. The annotations are typically saved in a structured format, such as JSON, making them easy to import, export, or use for analysis or machine learning; a hypothetical example appears below. Image annotation tools are indispensable in today's deep learning training landscape. They provide the means to label and analyze images accurately and efficiently, supporting a wide range of applications from AI development to medical diagnostics. As the importance of visual data increases, these tools will continue to drive innovation across multiple industries. Whether you are a researcher, engineer, or data scientist, mastering these tools is essential for harnessing the full potential of image-based data.
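For illustration, a generic annotation record of the kind such tools export might be written as follows; the field names and values are hypothetical, and the actual EchoLabl schema is not shown in this paper.

```python
# A generic, hypothetical annotation record of the kind such tools export.
# The actual EchoLabl schema is not shown here; field names are illustrative.
import json

annotation = {
    "image": "fetal_head_001.png",
    "pixel_size_mm": 0.12,              # average pixel size for real-world dimensions
    "shapes": [
        {
            "label": "head",
            "type": "ellipse",
            "center": [312, 240],       # pixel coordinates
            "axes": [180, 150],         # semi-axes in pixels
            "rotation_deg": 12.5,
        }
    ],
}

with open("fetal_head_001.json", "w") as f:
    json.dump(annotation, f, indent=2)
```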
The EchoLabl annotation tool can prepare ground truths in different shapes and capture real-time dimensions if the correct average pixel size is entered in the text box.
The image below illustrates our developed EchoLabl annotation tool.
CyNet Model Classification Layer:
The CyNet classification layer is a convolutional neural network (CNN) designed for image classification. It starts with an input layer that processes the image, followed by a series of convolutional layers that progressively extract features from the image while increasing the depth of the feature maps. After each convolution, max-pooling layers reduce the spatial dimensions, helping the model focus on the most important patterns. Once the feature extraction is complete, the data is flattened into a one-dimensional vector and passed through a fully connected (dense) layer to interpret the features. The final output layer, with a softmax activation, generates probabilities for each class, enabling the model to make predictions about the image's category.
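Assuming a Keras-style implementation with illustrative layer sizes, a classification network of the shape just described might look like this; the filter counts and input size are assumptions, not the published CyNet configuration.

```python
# Classification network of the kind described above: stacked Conv2D and
# max-pooling layers, a flatten step, a dense layer, and a softmax output
# over three classes (head, abdomen, femur). Sizes are illustrative.
from tensorflow.keras import layers, models

clf = models.Sequential([
    layers.Input((256, 256, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),                 # shrink spatial dims, keep salient patterns
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),                      # 2D feature maps -> 1D vector
    layers.Dense(128, activation="relu"),  # interpret extracted features
    layers.Dense(3, activation="softmax"), # class probabilities
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
```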
After the image is classified, segmentation is performed by the CyNet segmentation layer, described next.
CyNet Segmentation Layer Architecture for Fetal Biometry:
The CyNet segmentation layer is an advanced convolutional neural network (CNN) designed for image segmentation tasks, with several key modifications that distinguish it from standard architectures like U-Net. The model follows a symmetric design with an encoder-decoder structure, where the encoder (downsampling path) extracts important features from the input image, and the decoder (upsampling path) reconstructs a segmentation mask based on those features. The architecture incorporates several novel elements that enhance its performance in complex image segmentation tasks.
Encoder (Downsampling Path):
The encoder path is responsible for progressively reducing the spatial dimensions of the input image while capturing its most important features.
Each convolutional block consists of a Conv2D layer with larger kernel sizes, followed by LeakyReLU activation, and BatchNormalization to stabilize and accelerate training.
Strided convolutions are used instead of max-pooling for downsampling, which allows the model to learn more flexible feature representations.
The number of filters increases at each layer, allowing the model to capture both fine and abstract features across different layers, as sketched below.
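A minimal sketch of one such encoder block, assuming a Keras implementation; the kernel and filter sizes are illustrative assumptions.

```python
# One encoder block as described above: a larger-kernel convolution,
# LeakyReLU activation, BatchNormalization, and a strided convolution
# instead of max-pooling for downsampling. Sizes are illustrative.
from tensorflow.keras import layers

def encoder_block(x, filters):
    x = layers.Conv2D(filters, 5, padding="same")(x)   # larger 5x5 kernel (assumed size)
    x = layers.LeakyReLU()(x)
    x = layers.BatchNormalization()(x)                 # stabilize and accelerate training
    skip = x                                           # kept for the decoder's skip connection
    x = layers.Conv2D(filters, 3, strides=2, padding="same")(x)  # learned downsampling
    return x, skip
```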
Bottleneck Layer:
The bottleneck is a critical layer that captures the most abstract features in the image after the encoder has significantly reduced its size.
It consists of a Conv2D layer with a ReLU activation, ensuring that the most important spatial information is preserved before reconstruction.
Decoder (Upsampling Path):
The decoder path reverses the downsampling process, using Conv2DTranspose (deconvolutional) layers to gradually restore the original spatial resolution of the input image.
Each deconvolutional block includes a Conv2DTranspose layer, followed by Dropout, ReLU activation, and BatchNormalization. The use of dropout helps prevent overfitting by regularizing the model.
After each upsampling operation, the model concatenates the upsampled feature map with the corresponding feature map from the encoder (via skip connections). This ensures that detailed spatial information from earlier layers is preserved, which is crucial for accurate segmentation.
Skip Connections:
Like U-Net, the model uses skip connections to bridge corresponding layers between the encoder and decoder.
This ensures that high-resolution features from the encoder are directly used in the decoder, allowing the network to maintain precise localization and fine details during the upsampling process.
Output Layer:
The final layer is a Conv2DTranspose layer with a single filter and a sigmoid activation function, producing a segmentation mask with pixel-wise predictions. This mask matches the size of the input image and contains the desired segmentation output.
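Complementing the encoder sketch above, a decoder block and output layer of the kind described might look like the following; the dropout rate, kernel sizes, and strides are assumptions.

```python
# Decoder block and output layer as described above: Conv2DTranspose for
# upsampling, Dropout for regularization, ReLU, BatchNormalization, and a
# skip-connection concatenation. Rates and sizes are illustrative.
from tensorflow.keras import layers

def decoder_block(x, skip, filters):
    x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same")(x)
    x = layers.Dropout(0.3)(x)                 # regularize the decoder (rate assumed)
    x = layers.Activation("relu")(x)
    x = layers.BatchNormalization()(x)
    return layers.concatenate([x, skip])       # re-inject fine encoder detail

def output_layer(x):
    # Single-filter transposed convolution with sigmoid: pixel-wise mask
    # matching the input resolution.
    return layers.Conv2DTranspose(1, 3, strides=1, padding="same",
                                  activation="sigmoid")(x)
```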
Using a standard, GPU-powered desktop Linux computer with Visual Studio Code installed, we set up the environment needed to configure the CNN network, including the Weights & Biases (W&B) platform for logging, visualizing, and analyzing deep learning experiments. This initial setup was used to make all the required modifications to the existing model.
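As a brief illustration, a minimal W&B logging loop might look like the following; the project name, config values, and the stand-in training step are hypothetical placeholders.

```python
# Minimal Weights & Biases logging loop. The project name, config values,
# and the stand-in "training step" below are hypothetical placeholders.
import random
import wandb

run = wandb.init(project="fetal-biometry", config={"epochs": 5})
for epoch in range(run.config.epochs):
    train_loss = random.random()  # stand-in for a real training step
    wandb.log({"epoch": epoch, "train_loss": train_loss})
run.finish()
```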
We have developed an AI-based clinical application for fetal biometry, named EchoWrks, leveraging the CyNet model designed for precise identification of fetal biometry parameters. The application features comprehensive patient history management, fetal health status tracking, and automated report generation. Additionally, it includes 3D visualization of the fetus, scaled according to fetal dimensions. The AI model integrated within the application facilitates segmentation, with an option for the technician to manually adjust or redraw the AI-generated segments if necessary, ensuring flexibility in clinical decision-making.
How the EchoWrks Application Works
Image Upload: Users can upload ultrasound images of the fetal head, abdomen, and femur.
Segmentation and Measurement: The API processes each image using the CyNet model to segment the relevant structures and measure BPD, HC, AC, and FL (see the measurement sketch after this list).
Result Visualization: The application overlays the measurements on the original images, providing a clear visualization for the user.
Downloadable Reports: Users can download the annotated images and a JSON file summarizing the measurements, along with a zip file containing these results.
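As an illustration of the measurement step, a circumference such as HC or AC could be derived from a binary segmentation mask by fitting an ellipse to the largest contour and applying Ramanujan's perimeter approximation, scaled by the pixel size. This is a common approach, sketched below under those assumptions; it is not necessarily the exact method implemented in EchoWrks.

```python
# One common way to derive a circumference (HC or AC) from a binary
# segmentation mask: fit an ellipse to the largest contour and apply
# Ramanujan's perimeter approximation, scaled by the pixel size.
import math
import cv2
import numpy as np

def circumference_mm(mask: np.ndarray, pixel_size_mm: float) -> float:
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.fitEllipse(largest)          # full axis lengths in pixels
    a, b = w / 2 * pixel_size_mm, h / 2 * pixel_size_mm  # semi-axes in mm
    hq = ((a - b) / (a + b)) ** 2                        # Ramanujan's h term
    return math.pi * (a + b) * (1 + 3 * hq / (10 + math.sqrt(4 - 3 * hq)))
```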
3D Visualization of the Fetus: An exciting addition to this system is the ability to generate a 3D model of the fetus based on the measured parameters (BPD, HC, AC, and FL). This 3D visualization provides a comprehensive view of fetal growth and development, offering deeper insights for healthcare professionals and expectant parents.
The steps include:
Parameter Calculation: The segmented ultrasound images are used to calculate BPD, HC, AC, and FL.
Model Reconstruction: A 3D model of the fetus is built using these measurements, allowing for full 3D visualization.
Interactive Exploration: Users can interact with the 3D model, examining different perspectives and gaining a more tangible understanding of fetal development.
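As a toy illustration of measurement-driven reconstruction, the sketch below builds an ellipsoid "head" whose axes are scaled from hypothetical BPD and HC values. The production 3D model is far more detailed; the axis derivation here is a rough assumption, not the EchoWrks method.

```python
# Toy parameter-driven 3D reconstruction: an ellipsoid scaled from
# hypothetical BPD and HC values. Only illustrates the idea of
# measurement-scaled geometry, not the real EchoWrks model.
import numpy as np
import matplotlib.pyplot as plt

bpd_mm, hc_mm = 85.0, 310.0            # example measurements (hypothetical)
b = bpd_mm / 2                         # semi-axis from BPD
a = hc_mm / np.pi - b                  # rough companion semi-axis via P ~ pi*(a+b)
c = b                                  # assume a roughly circular cross-section

# Parametric ellipsoid surface.
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 40), np.linspace(0, np.pi, 20))
x = a * np.cos(u) * np.sin(v)
y = b * np.sin(u) * np.sin(v)
z = c * np.cos(v)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(x, y, z, color="tan")
ax.set_box_aspect((a, b, c))           # preserve real proportions
plt.show()
```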
The overall workflow of the EchoWrks is outlined below to provide a clear understanding of how input images are processed by the segmentation model, followed by calculating dimensions from the segmentation details and generating a 3D model for visualization.
Fetal Biometry Parameters Computation
Fetal Head details obtained using the CyNet model:
Abdominal Circumference details using CyNet model:
Femur Length details using CyNet model:
3D Visualization of the Fetus:
The results of different segmentation models on head and abdomen images from the MFP dataset are shown below:
The CyNet model is trained on head, abdomen, and femur images with ground truths to segment the fetal biometry parameters, covering the femur in addition to the head and abdomen.
The performance of the segmentation model (CyNet) for head, abdomen, and femur segmentation is evaluated using the Dice score on validation samples:
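The Dice score measures the overlap between a predicted mask A and its ground truth B, Dice = 2|A∩B| / (|A| + |B|); a minimal NumPy implementation is sketched below.

```python
# Dice score: overlap between a predicted and a ground-truth binary mask,
# ranging from 0 (no overlap) to 1 (perfect agreement).
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```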
The EchoWrks application, built on the CyNet model, uses classification followed by segmentation networks to categorize the ultrasound image loaded into the application and then determine the fetal biometry parameters. Together with its 3D visualization capabilities, it provides a powerful tool for the automated measurement and visualization of the fetus. By leveraging AI, this application enhances the accuracy and efficiency of prenatal care, supporting better health outcomes for both mothers and their babies. As AI technology continues to evolve, its applications in healthcare will expand, offering new possibilities for improving patient care and clinical workflows.
Future work will aim to enhance the performance of the CyNet model by training it on larger and more diverse datasets, introducing greater variability to improve its robustness and enable segmentation of a wider range of categories in ultrasound images. The EchoWrks application will be expanded with more comprehensive segmentation capabilities tailored to the specific needs of various clinical specialties. Future developments will focus on incorporating segmentation for critical areas such as obstetrics (OB), gynecology (GYN), cardiology, and prostate imaging, enabling automatic segmentation and analysis of ultrasound images specific to these anatomical regions and significantly broadening the clinical applicability of the system. The improved system will not only enhance diagnostic precision but also facilitate advanced medical diagnostics and treatment planning across a wider array of clinical specialties.
Srinivas Rao Kudavelly | Srinivasrao.Kudavelly@cyient.com
Consultant Senior Principal - Healthcare and Life Sciences
Srinivas has over 25 years of experience spanning Consumer Electronics, Biomedical Instrumentation, and Medical Imaging. He has led research and development teams focused on end-to-end 3D/4D quantification applications and released several concept-to-research-to-market solutions. He has also led a cross-functional team driving applied research, product development, human factors, clinical research, external collaboration, and innovation. He has garnered a diverse set of skills across varied problem domains, with over 25 patent filings and 12 patent grants, has mentored over 30 student projects, guided more than 10 master's thesis students, served as a peer reviewer for papers, and has been an IEEE Senior Member since 2007.
Venkat Sudheer Naraharisetty | Venkatsudheer.Naraharisetty@cyient.com
Lead Data Scientist
With a robust career spanning over 15 years, he has amassed extensive experience in diverse fields such as Automotive Research and Development, Computer-Aided Engineering, and Data Science, with a particular focus on Artificial Intelligence. His expertise extends across a wide array of domains, including crash analysis and the development of classification models using advanced machine learning and deep learning techniques. These skills have been applied across various sectors, notably automotive, consumer electronics, and medical imaging.
Amol Gharpure | Amol.Gharpure@cyient.com
Senior Solution Architect – Healthcare and Life Sciences
Amol brings over 20 years of expertise in embedded product development, primarily in the healthcare sector. His extensive experience spans the entire product development lifecycle of medical devices. Additionally, he has contributed to healthcare robotics by building custom robotic models and has worked in medical imaging, focusing on 2D and 3D image segmentation. His expertise in test fixture design and automation equips him with a strong proficiency in both the development and testing phases of medical technology solutions.
Cyient (Estd: 1991, NSE: CYIENT) partners with over 300 customers, including 40% of the top 100 global innovators of 2023, to deliver intelligent engineering and technology solutions for creating a digital, autonomous, and sustainable future. As a company, Cyient is committed to designing a culturally inclusive, socially responsible, and environmentally sustainable Tomorrow Together with our stakeholders.
For more information, please visit www.cyient.com
Cyient (Estd: 1991, NSE: CYIENT) delivers Intelligent Engineering solutions for a Digital, Autonomous, and Sustainable Future.
© Cyient 2024. All Rights Reserved.