AI tool analyzes placentas at birth for faster detection of neonatal, maternal problems, study finds

A newly developed tool that harnesses computer vision and artificial intelligence (AI) may help clinicians rapidly evaluate placentas at birth, potentially improving neonatal and maternal care, according to new research from scientists at Northwestern Medicine and Penn State.

The study, which was published Dec. 13 in the print edition of the journal Patterns and featured on the journal’s cover, describes a computer program named PlacentaVision that can analyze a simple photograph of the placenta to detect abnormalities associated with infection and neonatal sepsis, a life-threatening condition that affects millions of newborns globally.

“Placenta is one of the most common specimens that we see in the lab,” said study co-author Dr. Jeffery Goldstein, director of perinatal pathology and an associate professor of pathology at Northwestern University Feinberg School of Medicine. “When the neonatal intensive care unit is treating a sick kid, even a few minutes can make a difference in medical decision making. With a diagnosis from these photographs, we can have an answer days earlier than we would in our normal process.”

Northwestern provided the largest set of images for the study, and Goldstein led the development and troubleshooting of the algorithms.

Alison D. Gernand, contact principal investigator on the project, conceived the original idea for this tool through her global health work, particularly with pregnancies in which women deliver at home because of a lack of health care resources.

“Discarding the placenta without examination is a common but often overlooked problem,” said Gernand, associate professor in the Penn State College of Health and Human Development (HHD) Department of Nutritional Sciences. “It is a missed opportunity to identify concerns and provide early intervention that can reduce complications and improve outcomes for both the mother and the baby.”

Why early examination of the placenta matters

The placenta plays a vital role in the health of both the pregnant individual and baby during pregnancy, yet it is often not thoroughly examined at birth, especially in areas with limited medical resources.

“This research could save lives and improve health outcomes,” said Yimu Pan, a doctoral candidate in the informatics program from the College of Information Sciences and Technology (IST) and lead author on the study. “It could make placental examination more accessible, benefitting research and care for future pregnancies, especially for mothers and babies at higher risk of complications.”

Early identification of placental infection through tools like PlacentaVision might enable clinicians to take prompt actions, such as administering antibiotics to the mother or baby and closely monitoring the newborn for signs of infection, the scientists said.

PlacentaVision is intended for use across a wide range of clinical settings and patient populations, according to the researchers.

“In low-resource areas — places where hospitals don’t have pathology labs or specialists — this tool could help doctors quickly spot issues like infections from a placenta,” Pan said. “In well-equipped hospitals, the tool may eventually help doctors determine which placentas need further, detailed examination, making the process more efficient and ensuring the most important cases are prioritized.”

“Before such a tool can be deployed globally, core technical obstacles we faced were to make the model flexible enough to handle various diagnoses related to the placenta and to ensure that the tool can be robust enough to handle various delivery conditions, including variation in lighting conditions, imaging quality and clinical settings,” said James Z. Wang, distinguished professor in the College of IST at Penn State and one of the principal investigators on the study. “Our AI tool needs to maintain accuracy even when many training images come from a well-equipped urban hospital. Ensuring that PlacentaVision can handle a wide range of real-world conditions was essential.”

How the tool learned to analyze pictures of placentas

The researchers used cross-modal contrastive learning, an AI method for aligning and understanding the relationship between different types of data — in this case, visual (images) and textual (pathological reports) — to teach a computer program how to analyze pictures of placentas. They gathered a large, diverse dataset of placental images and pathological reports spanning a 12-year period, studied how these images relate to health outcomes and built a model that could make predictions based on new images. The team also developed various image alteration strategies to simulate different photo-taking conditions so that the model’s resilience could be evaluated properly.
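To make the training idea above concrete, the sketch below shows a minimal CLIP-style cross-modal contrastive objective in PyTorch, plus a plausible way to perturb photos when probing robustness. The encoder outputs, temperature, and perturbation settings are assumptions for illustration; the study’s actual PlacentaCLIP+ training setup is described in the Patterns paper.

```python
# Minimal sketch of CLIP-style cross-modal contrastive training, for illustration only.
# Feature sizes, the temperature value, and the perturbations below are assumptions,
# not the study's published implementation.
import torch
import torch.nn.functional as F
from torchvision import transforms


def contrastive_loss(image_features: torch.Tensor,
                     text_features: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched (photo, pathology report) pairs."""
    # Normalize embeddings so the dot product is a cosine similarity.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Entry (i, j) scores image i against report j; matched pairs lie on the diagonal.
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Pull matched image-report pairs together and push mismatched pairs apart,
    # in both directions (image-to-text and text-to-image).
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


# One plausible way to simulate harder photo-taking conditions (lighting, focus) when
# evaluating robustness; the specific image alterations used in the study may differ.
photo_perturbations = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # lighting variation
    transforms.GaussianBlur(kernel_size=5),                # reduced image quality
])
```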

The result was PlacentaCLIP+, a robust machine-learning model that can analyze photos of placentas to detect health risks with high accuracy. It was validated cross-nationally to confirm consistent performance across populations.
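As a purely hypothetical illustration of how an image-text-aligned model of this kind might be queried at the bedside, the sketch below scores a single placenta photo against text prompts describing candidate findings. The names `image_encoder`, `text_encoder`, `tokenize`, and the prompt list are invented placeholders, not components published with PlacentaCLIP+.

```python
# Hypothetical usage sketch only: the encoders, tokenizer, and prompts are illustrative
# placeholders, not the published PlacentaCLIP+ pipeline.
import torch
import torch.nn.functional as F


@torch.no_grad()
def score_photo(image_encoder, text_encoder, photo: torch.Tensor,
                prompts: list[str], tokenize) -> torch.Tensor:
    """Return cosine similarities between one placenta photo and candidate findings."""
    img = F.normalize(image_encoder(photo.unsqueeze(0)), dim=-1)   # shape (1, d)
    txt = F.normalize(text_encoder(tokenize(prompts)), dim=-1)     # shape (k, d)
    return (img @ txt.t()).squeeze(0)                              # shape (k,)
```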

According to the researchers, PlacentaVision is designed to be easy to use, potentially working through a smartphone app or integrated into medical record software so doctors can get quick answers after delivery.

Next step: A user-friendly app for medical staff

“Our next steps include developing a user-friendly mobile app that can be used by medical professionals — with minimal training — in clinics or hospitals with low resources,” Pan said. “The user-friendly app would allow doctors and nurses to photograph placentas and get immediate feedback, improving care.”

The researchers plan to make the tool even smarter by including more types of placental features and adding clinical data to improve predictions while also contributing to research on long-term health. They’ll also test the tool in different hospitals to ensure it works in a variety of settings.

“This tool has the potential to transform how placentas are examined after birth, especially in parts of the world where these exams are rarely done,” Gernand said. “This innovation promises greater accessibility in both low- and high-resource settings. With further refinement, it has the potential to transform neonatal and maternal care by enabling early, personalized interventions that prevent severe health outcomes and improve the lives of mothers and infants worldwide.”

This research was supported by the National Institutes of Health National Institute of Biomedical Imaging and Bioengineering (grant R01EB030130). The team used supercomputing resources from the National Science Foundation-funded Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program.
