Robot Radiology: AI for Improved Cervical Cancer Screens

Artificial intelligence—commonly known as AI—is already exceeding human abilities. Self-driving cars use AI to perform some tasks more safely than people. E-commerce companies use AI to tailor product ads to customers’ tastes more quickly and precisely than any breathing marketing analyst can.

And soon AI will be used to “read” biomedical images more accurately than medical personnel alone—providing better early cervical cancer detection at lower cost than current methods.

However, this does not necessarily mean radiologists will soon be out of business.

“Humans and computers are very complementary,” says Xiaolei Huang, associate professor of computer science and engineering. “That’s what AI is all about.”

Huang directs the Image Data Emulation and Analysis Laboratory, where she works on artificial intelligence related to vision and graphics, or, as she says, “creating techniques that enable computers to understand images the way humans do.” Among Huang’s primary interests is training computers to understand biomedical images.

Now, as a result of more than ten years of work, Huang and her team have created a cervical cancer screening technique that, based on an analysis of a very large dataset, has the potential to perform as well as, or better than, human interpretation of traditional screening methods, such as Pap tests and tests for human papillomavirus (HPV)—at a much lower cost. The technique could be used in less developed countries, where 80 percent of deaths from cervical cancer occur.

The researchers are seeking funding to conduct clinical trials using this data-driven detection method.

A More Accurate, Lower-Cost Tool for Cervical Cancer Screening

Huang’s screening system is built on image-based classifiers (algorithms that classify data) constructed from a large number of Cervigrams. Cervigrams are images taken by digital cervicography, a noninvasive visual examination method that photographs the cervix. When read, the images are used to detect cervical intraepithelial neoplasia (CIN), the potentially precancerous change and abnormal growth of squamous cells on the surface of the cervix.

“Cervigrams have great potential as a screening tool in resource-poor regions where clinical tests such as Pap and HPV are too expensive to be made widely available,” says Huang. “However, there is concern about Cervigrams’ overall effectiveness due to reports of poor correlation between visual lesion recognition and high-grade disease, as well as disagreement among experts when grading visual findings.”

Huang thought that computer algorithms could help improve accuracy in grading lesions by using visual information—a hunch that, so far, is proving correct.

Through an analysis of the very large dataset, her technique has been shown to be more sensitive (better able to detect abnormalities) and more specific (producing fewer false positives) than existing approaches. Because of this, it could also be used to improve cervical cancer screening in developed countries like the U.S.

“Our method would be an effective low-cost addition to a battery of tests helping to lower the false positive rate since it provides 10 percent better sensitivity and specificity than any other screening method, including Pap and HPV tests,” says Huang.  

Correlating Visual Features and Patient Data to Cancer

To identify the characteristics that are most helpful in screening for cancer, the team created hand-crafted pyramid features (basic components of recognition systems). They also investigated the performance of a common deep learning framework known as convolutional neural networks (CNN) for cervical disease classification.
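The core idea behind hand-crafted pyramid features is to describe an image with histograms computed over successively finer grids, then concatenate them into one feature vector. The sketch below is purely illustrative (a toy two-level pyramid of intensity histograms, not the team's PLBP/PLAB/PHOG descriptors):

```python
# Illustrative sketch (not the authors' code): a two-level spatial
# "pyramid" of intensity histograms, the general idea behind
# hand-crafted pyramid features such as PHOG.

def cell_histogram(pixels, bins=4, max_val=256):
    """Histogram of grayscale intensities for one grid cell."""
    hist = [0] * bins
    width = max_val // bins
    for p in pixels:
        hist[min(p // width, bins - 1)] += 1
    return hist

def pyramid_features(image, levels=2, bins=4):
    """Concatenate per-cell histograms over 1x1, 2x2, ... grids.

    `image` is a list of rows of grayscale values (0-255).
    """
    h, w = len(image), len(image[0])
    features = []
    for level in range(levels):
        cells = 2 ** level              # cells per side at this level
        ch, cw = h // cells, w // cells
        for gy in range(cells):
            for gx in range(cells):
                pixels = [image[y][x]
                          for y in range(gy * ch, (gy + 1) * ch)
                          for x in range(gx * cw, (gx + 1) * cw)]
                features.extend(cell_histogram(pixels, bins))
    return features

# A 4x4 toy "image": level 0 yields one 4-bin histogram and
# level 1 yields four more, so 5 * 4 = 20 feature values.
toy = [[10, 20, 200, 210],
       [15, 25, 220, 230],
       [120, 130, 60, 70],
       [140, 150, 80, 90]]
feats = pyramid_features(toy)
print(len(feats))  # 20
```

A vector like this, built from richer descriptors (texture, color, gradients), is what a classifier such as a random forest consumes; a CNN instead learns its features directly from the pixels.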

They describe their results in an article in the March 2017 issue of Pattern Recognition titled “Multi-feature based benchmark for cervical dysplasia classification.” The researchers have also released a multi-feature dataset and extensive evaluations using seven classic classifiers.

To build the screening tool, Huang and her team used data from 1,112 patient visits, during which 345 patients were found to have lesions that were positive for moderate or severe dysplasia (considered high-grade and likely to develop into cancer) and 767 had lesions that were negative (considered low-grade with mild dysplasia typically cleared by the immune system).

These data were selected from a large medical archive collected by the U.S. National Cancer Institute. The archive consists of information from 10,000 anonymous women who were screened using multiple methods, including Cervigrams, over a number of visits. The data also contains the diagnosis and outcome for each patient.

“The program we’ve created automatically segments tissue regions seen in photos of the cervix, correlating visual features from the images to the development of precancerous lesions,” says Huang. “In practice, this could mean that medical staff analyzing a new patient’s Cervigram could retrieve data about similar cases—not only in terms of optics, but also pathology since the dataset contains information about the outcomes of women at various stages of pathology.”

“With respect to accuracy and sensitivity,” Huang reported in Pattern Recognition, “our hand-crafted PLBP–PLAB–PHOG feature descriptor with random forest classifier (RF.PLBP–PLAB–PHOG) outperforms every single Pap test or HPV test, when achieving a specificity of 90 percent. When not constrained by the 90 percent specificity requirement, our image-based classifier can achieve even better overall accuracy.

“For example, our fine-tuned CNN features with Softmax classifier can achieve an accuracy of 78.41 percent with 80.87 percent sensitivity and 75.94 percent specificity at the default probability threshold 0.5. Consequently, on this dataset, our lower-cost image-based classifiers can perform comparably or better than human interpretation based on widely-used Pap and HPV tests.”
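The metrics Huang cites follow directly from a classifier's confusion matrix at a chosen probability threshold. The sketch below uses toy data (not the study's dataset) to show how sensitivity, specificity, and accuracy are computed, and how raising the threshold trades sensitivity for specificity—which is how an operating point like 90 percent specificity is selected:

```python
# Illustrative sketch with toy data: computing screening metrics
# from predicted probabilities at a decision threshold.

def screening_metrics(labels, probs, threshold=0.5):
    """labels: 1 = high-grade disease, 0 = low-grade/negative.
    probs: predicted probability of disease for each case."""
    tp = fp = tn = fn = 0
    for y, p in zip(labels, probs):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1          # true positive: disease, flagged
        elif pred == 1 and y == 0:
            fp += 1          # false positive: healthy, flagged
        elif pred == 0 and y == 0:
            tn += 1          # true negative: healthy, cleared
        else:
            fn += 1          # false negative: disease, missed
    sensitivity = tp / (tp + fn)        # true positive rate
    specificity = tn / (tn + fp)        # true negative rate
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

labels = [1, 1, 1, 0, 0, 0, 0, 1]
probs  = [0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.3, 0.7]

print(screening_metrics(labels, probs))                  # default 0.5 threshold
print(screening_metrics(labels, probs, threshold=0.75))  # stricter threshold
```

On this toy data the default threshold gives sensitivity, specificity, and accuracy of 0.75 each; raising the threshold to 0.75 lifts specificity to 1.0 while sensitivity falls to 0.5.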

According to the researchers, their classifiers achieve higher sensitivity in a particularly important area: detecting moderate and severe dysplasia—or cancer.

Exploring Classification with Improved Imaging Technique

Huang is also collaborating with Chao Zhou, assistant professor of electrical and computer engineering, on the use of an established medical imaging technique called optical coherence microscopy (OCM)—most commonly used in ophthalmology—to analyze breast tissue and produce computer-aided diagnoses. Their analysis is designed to help surgeons minimize the tissue removed while operating on cancer patients by providing highly accurate, real-time information about the health of the excised tissue.

They recently conducted a feasibility study with promising results that have been published in an article in Medical Image Analysis titled “Integrated local binary pattern texture features for classification of breast tissue imaged by optical coherence microscopy.”

Huang and Zhou used multi-scale and integrated image features to improve classification accuracy and were able to achieve high sensitivity (100 percent) and specificity (85.2 percent) for cancer detection using OCM images.
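A local binary pattern (LBP) encodes the texture around each pixel by comparing it with its eight neighbors; the histogram of these codes over an image serves as the texture feature vector. The sketch below shows the basic 3x3 LBP operator only—it is not the published multi-scale, integrated implementation:

```python
# Illustrative sketch (not the published implementation): the basic
# 3x3 local binary pattern (LBP) texture code underlying
# LBP-based texture features for OCM images.

def lbp_code(image, y, x):
    """8-bit LBP code for pixel (y, x): each neighbor, visited
    clockwise from the top-left, contributes a 1 bit if it is
    greater than or equal to the center pixel."""
    center = image[y][x]
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(neighbors):
        if image[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Histogram of LBP codes over interior pixels; in LBP-based
    texture classification this histogram is the feature vector."""
    hist = [0] * 256
    for y in range(1, len(image) - 1):
        for x in range(1, len(image[0]) - 1):
            hist[lbp_code(image, y, x)] += 1
    return hist

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
# Neighbors 10,20,30,60,90,80,70,40 of the center (50) give
# bits 0,0,0,1,1,1,1,0 -> code 8 + 16 + 32 + 64 = 120.
print(lbp_code(patch, 1, 1))  # 120
```

Computing these histograms at several neighborhood scales and concatenating them is one common way to build the kind of multi-scale texture descriptor the study describes.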

“Chao has done a lot of work in new instrumentation—improving the quality of biomedical images,” says Huang. “Since he works on the images—or data inputs—and I work on the results of the data analysis—or outputs—our collaboration is a natural fit.”

This story appears as "Robot Radiology" in the 2018 Lehigh Research Review.
