Page 1 of 43429 results

End-to-end Spatiotemporal Analysis of Color Doppler Echocardiograms: Application for Rheumatic Heart Disease Detection.

Roshanitabrizi P, Nath V, Brown K, Broudy TG, Jiang Z, Parida A, Rwebembera J, Okello E, Beaton A, Roth HR, Sable CA, Linguraru MG

PubMed · Sep 29 2025
Rheumatic heart disease (RHD) represents a significant global health challenge, disproportionately affecting over 40 million people in low- and middle-income countries. Early detection through color Doppler echocardiography is crucial for treating RHD, but it requires specialized physicians who are often scarce in resource-limited settings. To address this disparity, artificial intelligence (AI)-driven tools for RHD screening can provide scalable, autonomous solutions to improve access to critical healthcare services in underserved regions. This paper introduces RADAR (Rapid AI-Assisted Echocardiography Detection and Analysis of RHD), a novel and generalizable AI approach for end-to-end spatiotemporal analysis of color Doppler echocardiograms, aimed at detecting early RHD in resource-limited settings. RADAR identifies key imaging views and employs convolutional neural networks to analyze diagnostically relevant phases of the cardiac cycle. It also localizes essential anatomical regions and examines blood flow patterns. It then integrates all findings into a cohesive analytical framework. RADAR was trained and validated on 1,022 echocardiogram videos from 511 Ugandan children, acquired using standard portable ultrasound devices. An independent set of 318 cases, acquired using a handheld ultrasound device with diverse imaging characteristics, was also tested. On the validation set, RADAR outperformed existing methods, achieving an average accuracy of 0.92, sensitivity of 0.94, and specificity of 0.90. In independent testing, it maintained high, clinically acceptable performance, with an average accuracy of 0.79, sensitivity of 0.87, and specificity of 0.70. These results highlight RADAR's potential to improve RHD detection and promote health equity for vulnerable children by enhancing timely, accurate diagnoses in underserved regions.
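The RADAR pipeline described above integrates view identification, cardiac-phase analysis, anatomical localization, and flow analysis into one framework. A minimal late-fusion sketch of how per-stage probabilities might be combined into a single screening decision is shown below; the stage list, equal weights, and 0.5 threshold are illustrative assumptions, not the published RADAR fusion rule.

```python
# Hypothetical late fusion over per-stage RHD probabilities.
# Each pipeline stage (view ID, phase analysis, flow analysis) is
# assumed to emit one probability; RADAR's actual fusion is not
# specified in the abstract.

def fuse_stage_scores(stage_probs, weights=None, threshold=0.5):
    """Combine per-stage RHD probabilities into one screening decision."""
    if weights is None:
        weights = [1.0 / len(stage_probs)] * len(stage_probs)
    fused = sum(w * p for w, p in zip(weights, stage_probs))
    return fused, fused >= threshold

score, is_positive = fuse_stage_scores([0.9, 0.7, 0.8])
```

Weighted averaging keeps each stage's contribution interpretable, which matters for a screening tool deployed without specialist oversight.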

Prediction of neoadjuvant chemotherapy efficacy in patients with HER2-low breast cancer based on ultrasound radiomics.

Peng Q, Ji Z, Xu N, Dong Z, Zhang T, Ding M, Qu L, Liu Y, Xie J, Jin F, Chen B, Song J, Zheng A

PubMed · Sep 26 2025
Neoadjuvant chemotherapy (NAC) is a crucial therapeutic approach for treating breast cancer, yet accurately predicting treatment response remains a significant clinical challenge. Conventional ultrasound plays a vital role in assessing tumor morphology but lacks the ability to quantitatively capture intratumoral heterogeneity. Ultrasound radiomics, which extracts high-throughput quantitative imaging features, offers a novel approach to enhance NAC response prediction. This study aims to evaluate the predictive efficacy of ultrasound radiomics models based on pre-treatment, post-treatment, and combined imaging features for assessing the NAC response in patients with HER2-low breast cancer. This retrospective multicenter study included 359 patients with HER2-low breast cancer who underwent NAC between January 1, 2016, and December 31, 2020. A total of 488 radiomic features were extracted from pre- and post-treatment ultrasound images. Feature selection was conducted in two stages: first, Pearson correlation analysis (threshold: 0.65) was applied to remove highly correlated features and reduce redundancy; then, Recursive Feature Elimination with Cross-Validation (RFECV) was employed to identify the optimal feature subset for model construction. The dataset was divided into a training set (244 patients) and an external validation set (115 patients from independent centers). Model performance was assessed via the area under the receiver operating characteristic curve (AUC), accuracy, precision, recall, and F1 score. 
Three models were initially developed: (1) a pre-treatment model (AUC = 0.716), (2) a post-treatment model (AUC = 0.772), and (3) a combined pre- and post-treatment model (AUC = 0.762). After RFECV-based feature selection, the optimized models with reduced feature sets achieved: (1) pre-treatment, AUC = 0.746; (2) post-treatment, AUC = 0.712; and (3) combined, AUC = 0.759. Ultrasound radiomics is a non-invasive and promising approach for predicting response to neoadjuvant chemotherapy in HER2-low breast cancer. The pre-treatment model yielded reliable performance after feature selection. While the combined model did not substantially enhance predictive accuracy, its stable performance suggests that longitudinal ultrasound imaging may help capture treatment-induced phenotypic changes. These findings offer preliminary support for individualized therapeutic decision-making.
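The first stage of the two-stage feature selection, dropping one of every feature pair whose absolute Pearson correlation exceeds 0.65, can be sketched as below. The greedy keep-first ordering is an assumption; the paper does not state its tie-breaking rule.

```python
# Greedy Pearson-correlation filter at the paper's 0.65 threshold.
# features: dict of name -> list of values; keeps a feature only if it
# is not strongly correlated with any already-kept feature.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def filter_correlated(features, threshold=0.65):
    kept = []
    for name, col in features.items():
        if all(abs(pearson(col, features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept

feats = {
    "f1": [1, 2, 3, 4],
    "f2": [2, 4, 6, 8],  # perfectly correlated with f1, so dropped
    "f3": [4, 1, 3, 2],  # |r| = 0.4 with f1, so kept
}
kept = filter_correlated(feats)
```

The surviving subset would then go to RFECV for the second, wrapper-based selection stage.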

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

PubMed · Sep 26 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark the performance metrics of several state-of-the-art object detection algorithms to localize the site of injury and semantic segmentation models to label the anatomy for comparison and creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 segmentation model achieves the highest accuracy on unseen porcine anatomy, with a Mean Dice score of 0.587, while SAMed achieves the highest mean Dice score generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures to assess anatomical markers in the spinal cord for methodology development and clinical applications.
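The Mean Dice scores reported for the segmentation benchmarks come from the standard overlap metric, sketched here on flat binary masks for clarity (real masks would be 2-D arrays); the epsilon smoothing term is a common convention, not necessarily the authors' exact formulation.

```python
# Dice = 2|P ∩ T| / (|P| + |T|) for binary masks, with a small epsilon
# so empty-vs-empty comparisons score 1 instead of dividing by zero.

def dice_score(pred, target, eps=1e-8):
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

d = dice_score([1, 1, 0, 0], [1, 0, 0, 0])
```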

Hybrid Fusion Model for Effective Distinguishing Benign and Malignant Parotid Gland Tumors in Gray-Scale Ultrasonography.

Mao Y, Jiang LP, Wang JL, Chen FQ, Zhang WP, Peng XQ, Chen L, Liu ZX

PubMed · Sep 26 2025
To develop a hybrid fusion model, deep learning radiomics nomograms (DLRN), integrating radiomics and transfer learning to assist sonographers in differentiating benign from malignant parotid gland tumors. This study retrospectively analyzed 328 patients with pathologically confirmed parotid gland tumors from two centers. Radiomics features extracted from ultrasound images were input into eight machine learning classifiers to construct a radiomics (Rad) model. The images were also input into seven transfer learning networks to construct a deep transfer learning (DTL) model. The prediction probabilities from these two models were combined through decision fusion to construct a DLR model. Clinical features were further integrated with the prediction probabilities of the DLR model to develop the DLRN model. The performance of these models was evaluated using receiver operating characteristic curve analysis, calibration curves, decision curve analysis, and the Hosmer-Lemeshow test. In the internal and external validation cohorts, the DLRN model demonstrated the greatest discriminative ability (AUC = 0.931 and 0.934), compared with the Clinic (AUC = 0.891 and 0.734), Rad (AUC = 0.809 and 0.860), DTL (AUC = 0.905 and 0.782), and DLR (AUC = 0.932 and 0.828) models. With the assistance of DLR, the diagnostic accuracy of resident, attending, and chief physicians increased by 6.6%, 6.5%, and 1.2%, respectively. The hybrid fusion model DLRN significantly enhances diagnostic performance for benign and malignant parotid gland tumors and can effectively assist sonographers in making more accurate diagnoses.
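The decision-fusion step (Rad and DTL probabilities merged into a DLR probability, then combined with clinical features for the DLRN) can be sketched as below. Equal-weight averaging and a logistic clinical combination are assumptions for illustration; the abstract does not give the paper's fusion weights or nomogram coefficients.

```python
# Hypothetical two-stage fusion: average the Rad and DTL probabilities
# into a DLR score, then pass it with clinical covariates through a
# logistic model standing in for the nomogram.
import math

def fuse_dlr(p_rad, p_dtl):
    return 0.5 * (p_rad + p_dtl)

def dlrn_probability(p_dlr, clinical, coeffs, bias=0.0):
    """Logistic combination of the DLR probability with clinical features."""
    z = bias + coeffs[0] * p_dlr + sum(c * x for c, x in zip(coeffs[1:], clinical))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients only; a real nomogram would fit these.
p = dlrn_probability(fuse_dlr(0.8, 0.6), clinical=[1.0], coeffs=[2.0, -0.5])
```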

Artificial intelligence applications in thyroid cancer care.

Pozdeyev N, White SL, Bell CC, Haugen BR, Thomas J

PubMed · Sep 25 2025
Artificial intelligence (AI) has created tremendous opportunities to improve thyroid cancer care. We used the "artificial intelligence thyroid cancer" query to search the PubMed database until May 31, 2025. We highlight a set of high-impact publications selected based on technical innovation, large generalizable training datasets, and independent and/or prospective validation of AI. We review the key applications of AI for diagnosing and managing thyroid cancer. Our primary focus is on using computer vision to evaluate thyroid nodules on thyroid ultrasound, an area of thyroid AI that has gained the most attention from researchers and will likely have a significant clinical impact. We also highlight AI for detecting and predicting thyroid cancer neck lymph node metastases, digital cyto- and histopathology, large language models for unstructured data analysis, patient education, and other clinical applications. We discuss how thyroid AI technology has evolved and cite the most impactful research studies. Finally, we balance our excitement about the potential of AI to improve clinical care for thyroid cancer with current limitations, such as the lack of high-quality, independent prospective validation of AI in clinical trials, the uncertain added value of AI software, unknown performance on non-papillary thyroid cancer types, and the complexity of clinical implementation. AI promises to improve thyroid cancer diagnosis, reduce healthcare costs and enable personalized management. High-quality, independent prospective validation of AI in clinical trials is lacking and is necessary for the clinical community's broad adoption of this technology.

Deep learning powered breast ultrasound to improve characterization of breast masses: a prospective study.

Singla V, Garg D, Negi S, Mehta N, Pallavi T, Choudhary S, Dhiman A

PubMed · Sep 25 2025
Background: The diagnostic performance of ultrasound (US) is heavily reliant on the operator's expertise. Advances in artificial intelligence (AI) have introduced deep learning (DL) tools that detect morphology beyond human perception, providing automated interpretations. Purpose: To evaluate Smart-Detect (S-Detect), a DL tool, for its potential to enhance diagnostic precision and standardize US assessments among radiologists with varying levels of experience. Material and Methods: This prospective observational study was conducted between May and November 2024. US and S-Detect analyses were performed by a breast imaging fellow. Images were independently analyzed by five radiologists with varying experience in breast imaging (<1 to 15 years). Each radiologist assessed the images twice: without and with S-Detect. ROC analyses compared the diagnostic performance. True downgrades and upgrades were calculated to determine the biopsy reduction with AI assistance. Kappa statistics assessed radiologist agreement before and after incorporating S-Detect. Results: This study analyzed 230 breast masses from 216 patients. S-Detect demonstrated high specificity (92.7%), PPV (92.9%), NPV (87.9%), and accuracy (90.4%). It enhanced less experienced radiologists' performance, increasing the sensitivity (85% to 93.33%), specificity (54.5% to 73.64%), and accuracy (70.43% to 83.91%; P < 0.001). AUC significantly increased for the less experienced radiologists (0.698 to 0.835; P < 0.001), with no significant gains for the expert radiologist. It also reduced variability in assessment between radiologists, with an increase in kappa agreement (0.459 to 0.696), and enabled significant downgrades, reducing unnecessary biopsies. Conclusion: The DL tool improves diagnostic accuracy, bridges the expertise gap, reduces reliance on invasive procedures, and enhances consistency in clinical decisions among radiologists.
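The kappa agreement statistic used to quantify inter-radiologist consistency can be sketched for the two-rater, binary (benign/malignant) case as below; the study's exact multi-reader kappa variant is not specified in the abstract.

```python
# Cohen's kappa for two raters: observed agreement corrected for the
# agreement expected by chance from each rater's marginal label rates.

def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    labels = set(r1) | set(r2)
    pe = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Two readers disagree on one of six masses (1 = malignant, 0 = benign).
k = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
```

Values near the study's post-assistance 0.696 indicate substantial agreement on the common Landis-Koch scale.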

End-to-end CNN-based deep learning enhances breast lesion characterization using quantitative ultrasound (QUS) spectral parametric images.

Osapoetra LO, Moslemi A, Moore-Palhares D, Halstead S, Alberico D, Hwang A, Sannachi L, Curpen B, Czarnota GJ

PubMed · Sep 25 2025
QUS spectral parametric imaging offers a fast and accurate method for breast lesion characterization. This study explored using deep CNNs to classify breast lesions from QUS spectral parametric images, aiming to improve on radiomics and conventional machine learning. Predictive models were developed using transfer learning with pre-trained CNNs to distinguish malignant from benign lesions. The dataset included 276 participants: 184 malignant (median age, 51 years [IQR: 27-81 years]) and 92 benign cases (median age, 46 years [IQR: 18-75 years]). QUS spectral parametric imaging was applied to the US RF data, resulting in 1764 images of the QUS spectral parameters (MBF, SS, and SI) and the QUS scattering parameters (ASD and AAC). The data were randomly split into 60% training, 20% validation, and 20% test sets, stratified by lesion subtype, and repeated five times. The number of convolutional blocks was optimized, and the final convolutional layer was fine-tuned. Models tested included ResNet, Inception-v3, Xception, and EfficientNet. Xception-41 achieved a recall of 86 ± 3%, specificity of 87 ± 5%, balanced accuracy of 87 ± 3%, and an AUC of 0.93 ± 0.02 on test sets. EfficientNetV2-M showed similar performance with a recall of 91 ± 1%, specificity of 81 ± 7%, balanced accuracy of 86 ± 3%, and an AUC of 0.92 ± 0.02. CNN models outperformed radiomics and conventional machine learning (p-values < 0.05). This study demonstrated the capability of end-to-end CNN-based models for the accurate characterization of breast masses from QUS spectral parametric images.
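The metrics reported above (recall, specificity, balanced accuracy) all follow from confusion-matrix counts; a minimal sketch, with counts chosen only to mirror the Xception-41 figures:

```python
# Recall (sensitivity), specificity, and balanced accuracy from
# confusion-matrix counts; balanced accuracy is their mean, which is
# robust to the 2:1 malignant/benign class imbalance in this dataset.

def classification_metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)            # sensitivity on malignant cases
    specificity = tn / (tn + fp)       # correctness on benign cases
    balanced_acc = 0.5 * (recall + specificity)
    return recall, specificity, balanced_acc

# Illustrative counts only, not the study's actual confusion matrix.
rec, spec, bal = classification_metrics(tp=86, fp=13, tn=87, fn=14)
```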

CACTUS: Multiview classifier for Punctate White Matter Lesions detection & segmentation in cranial ultrasound volumes.

Estermann F, Kaftandjian V, Guy P, Quetin P, Delachartre P

PubMed · Sep 25 2025
Punctate white matter lesions (PWML) are the most common white matter injuries found in preterm neonates, with several studies indicating a connection between these lesions and negative long-term outcomes. Automated detection of PWML through ultrasound (US) imaging could assist doctors in making diagnoses more effectively and at a lower cost than MRI. However, this task is highly challenging because of the lesions' small size and low contrast, and the number of lesions can vary significantly between subjects. In this work, we propose a two-phase approach: (1) segmentation using a vision transformer to increase the detection rate of lesions, and (2) multi-view classification leveraging cross-attention to reduce false positives and enhance precision. We also investigate multiple postprocessing approaches to ensure prediction quality and compare our results with what has been reported for MRI. Our method demonstrates improved performance in PWML detection on US images, achieving recall and precision rates of 0.84 and 0.70, respectively, representing increases of 2% and 10% over the best published US models. Moreover, by reducing the task to a slightly simpler problem (detection of MRI-visible PWML), the model achieves 0.82 recall and 0.89 precision, which is equivalent to the latest method in MRI.
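Lesion-level recall and precision like those reported above require matching predicted lesions to ground truth; a common sketch matches each predicted centre to the nearest unmatched ground-truth lesion within a tolerance. Centre-distance matching and the tolerance value are assumptions here, not the paper's criterion.

```python
# Greedy centre-distance matching of predicted lesions to ground truth,
# yielding the TP/FP/FN counts behind detection recall and precision.

def match_lesions(preds, gts, tol=5.0):
    matched = set()
    tp = 0
    for px, py in preds:
        best, best_d = None, tol
        for i, (gx, gy) in enumerate(gts):
            d = ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
            if i not in matched and d <= best_d:
                best, best_d = i, d
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp, fp, fn

# Three predictions against two true lesions: two hits, one false alarm.
tp, fp, fn = match_lesions([(1, 1), (10, 10), (40, 40)], [(2, 2), (11, 9)])
```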

SA<sup>2</sup>Net: Scale-adaptive structure-affinity transformation for spine segmentation from ultrasound volume projection imaging.

Xie H, Huang Z, Zuo Y, Ju Y, Leung FHF, Law NF, Lam KM, Zheng YP, Ling SH

PubMed · Sep 25 2025
Spine segmentation, based on ultrasound volume projection imaging (VPI), plays a vital role for intelligent scoliosis diagnosis in clinical applications. However, this task faces several significant challenges. Firstly, the global contextual knowledge of spines may not be well-learned if we neglect the high spatial correlation of different bone features. Secondly, the spine bones contain rich structural knowledge regarding their shapes and positions, which deserves to be encoded into the segmentation process. To address these challenges, we propose a novel scale-adaptive structure-aware network (SA<sup>2</sup>Net) for effective spine segmentation. First, we propose a scale-adaptive complementary strategy to learn the cross-dimensional long-distance correlation features for spinal images. Second, motivated by the consistency between multi-head self-attention in Transformers and semantic level affinity, we propose structure-affinity transformation to transform semantic features with class-specific affinity and combine it with a Transformer decoder for structure-aware reasoning. In addition, we adopt a feature mixing loss aggregation method to enhance model training. This method improves the robustness and accuracy of the segmentation process. The experimental results demonstrate that our SA<sup>2</sup>Net achieves superior segmentation performance compared to other state-of-the-art methods. Moreover, the adaptability of SA<sup>2</sup>Net to various backbones enhances its potential as a promising tool for advanced scoliosis diagnosis using intelligent spinal image analysis.
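The multi-head self-attention affinity that SA²Net's structure-affinity transformation builds on reduces to softmax(QKᵀ/√d) over token pairs; a single-head, toy-dimension sketch for illustration only:

```python
# Pairwise token affinities from scaled dot-product attention:
# scores = QKᵀ/√d, then a row-wise softmax so each row sums to 1.
import math

def attention_affinity(Q, K):
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K]
              for qr in Q]
    out = []
    for row in scores:
        m = max(row)                       # subtract max for stability
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

# Orthogonal tokens: each attends mostly to itself.
A = attention_affinity([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

The paper's contribution is to align these attention affinities with class-specific semantic affinity, which this sketch does not attempt.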

FetalDenseNet: multi-scale deep learning for enhanced early detection of fetal anatomical planes in prenatal ultrasound.

Dey SK, Howlader A, Haider MS, Saha T, Setu DM, Islam T, Siddiqi UR, Rahman MM

PubMed · Sep 24 2025
The study aims to improve the classification of fetal anatomical planes using Deep Learning (DL) methods to enhance the accuracy of fetal ultrasound interpretation. Five Convolutional Neural Network (CNN) architectures (VGG16, ResNet50, InceptionV3, DenseNet169, and MobileNetV2) are evaluated on a large-scale, clinically validated dataset of 12,400 ultrasound images from 1,792 patients. Preprocessing methods, including scaling, normalization, label encoding, and augmentation, are applied to the dataset, and the dataset is split into 80% for training and 20% for testing. Each model was fine-tuned and evaluated based on its classification accuracy for comparison. DenseNet169 achieved the highest classification accuracy of 92% among all the tested models. The study shows that CNN-based models, particularly DenseNet169, significantly improve diagnostic accuracy in fetal ultrasound interpretation. This advancement reduces error rates and provides support for clinical decision-making in prenatal care.
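The 80/20 split described above is usually done per class so both sets keep the plane distribution; a minimal sketch, assuming a stratified split (the abstract does not say whether stratification was used) and with illustrative plane names:

```python
# Per-class (stratified) 80/20 split: shuffle each class's indices
# separately, then cut each at the train fraction.
import random

def stratified_split(items, labels, train_frac=0.8, seed=0):
    rng = random.Random(seed)
    train, test = [], []
    for cls in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == cls]
        rng.shuffle(idx)
        cut = int(len(idx) * train_frac)
        train += [items[i] for i in idx[:cut]]
        test += [items[i] for i in idx[cut:]]
    return train, test

# Two hypothetical plane classes, five images each.
labels = ["abdomen"] * 5 + ["brain"] * 5
items = list(range(10))
tr, te = stratified_split(items, labels)
```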