Page 3 of 56552 results

Deep learning powered breast ultrasound to improve characterization of breast masses: a prospective study.

Singla V, Garg D, Negi S, Mehta N, Pallavi T, Choudhary S, Dhiman A

pubmed · Sep 25 2025
Background: The diagnostic performance of ultrasound (US) is heavily reliant on the operator's expertise. Advances in artificial intelligence (AI) have introduced deep learning (DL) tools that detect morphology beyond human perception, providing automated interpretations.
Purpose: To evaluate Smart-Detect (S-Detect), a DL tool, for its potential to enhance diagnostic precision and standardize US assessments among radiologists with varying levels of experience.
Material and Methods: This prospective observational study was conducted between May and November 2024. US and S-Detect analyses were performed by a breast imaging fellow. Images were independently analyzed by five radiologists with varying experience in breast imaging (<1 year to 15 years). Each radiologist assessed the images twice: without and then with S-Detect. ROC analyses compared diagnostic performance. True downgrades and upgrades were calculated to determine the biopsy reduction achievable with AI assistance. Kappa statistics assessed inter-radiologist agreement before and after incorporating S-Detect.
Results: The study analyzed 230 breast masses from 216 patients. S-Detect demonstrated high specificity (92.7%), PPV (92.9%), NPV (87.9%), and accuracy (90.4%). It enhanced less experienced radiologists' performance, increasing sensitivity (85% to 93.33%), specificity (54.5% to 73.64%), and accuracy (70.43% to 83.91%; P < 0.001). AUC increased significantly for the less experienced radiologists (0.698 to 0.835; P < 0.001), with no significant gains for the expert radiologist. S-Detect also reduced variability in assessment between radiologists, increasing kappa agreement (0.459 to 0.696), and enabled significant downgrades, reducing unnecessary biopsies.
Conclusion: The DL tool improves diagnostic accuracy, bridges the expertise gap, reduces reliance on invasive procedures, and enhances consistency in clinical decisions among radiologists.
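The agreement statistic reported above (kappa rising from 0.459 to 0.696 with S-Detect) is Cohen's kappa, which corrects raw agreement between two readers for agreement expected by chance. A minimal numpy sketch of the two-rater case, for illustration only and not the study's own analysis code:

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical (e.g. benign/malignant) labels."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)  # observed agreement
    # chance agreement: sum over labels of P(rater A says l) * P(rater B says l)
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for inter-reader studies.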

Artificial intelligence applications in thyroid cancer care.

Pozdeyev N, White SL, Bell CC, Haugen BR, Thomas J

pubmed · Sep 25 2025
Artificial intelligence (AI) has created tremendous opportunities to improve thyroid cancer care. We searched the PubMed database through May 31, 2025, using the query "artificial intelligence thyroid cancer". We highlight a set of high-impact publications selected for technical innovation, large generalizable training datasets, and independent and/or prospective validation of AI. We review the key applications of AI for diagnosing and managing thyroid cancer. Our primary focus is on using computer vision to evaluate thyroid nodules on thyroid ultrasound, the area of thyroid AI that has gained the most attention from researchers and will likely have the greatest clinical impact. We also highlight AI for detecting and predicting thyroid cancer neck lymph node metastases, digital cyto- and histopathology, large language models for unstructured data analysis, patient education, and other clinical applications. We discuss how thyroid AI technology has evolved and cite the most impactful research studies. Finally, we balance our excitement about the potential of AI to improve clinical care for thyroid cancer against current limitations, such as the lack of high-quality, independent prospective validation of AI in clinical trials, the uncertain added value of AI software, unknown performance on non-papillary thyroid cancer types, and the complexity of clinical implementation. AI promises to improve thyroid cancer diagnosis, reduce healthcare costs, and enable personalized management, but high-quality, independent prospective validation in clinical trials is still lacking and is necessary for the clinical community's broad adoption of this technology.

End-to-end CNN-based deep learning enhances breast lesion characterization using quantitative ultrasound (QUS) spectral parametric images.

Osapoetra LO, Moslemi A, Moore-Palhares D, Halstead S, Alberico D, Hwang A, Sannachi L, Curpen B, Czarnota GJ

pubmed · Sep 25 2025
QUS spectral parametric imaging offers a fast and accurate method for breast lesion characterization. This study explored the use of deep CNNs to classify breast lesions from QUS spectral parametric images, aiming to improve upon radiomics and conventional machine learning. Predictive models were developed using transfer learning with pre-trained CNNs to distinguish malignant from benign lesions. The dataset included 276 participants: 184 malignant cases (median age, 51 years [IQR: 27-81 years]) and 92 benign cases (median age, 46 years [IQR: 18-75 years]). QUS spectral parametric imaging was applied to the US RF data, yielding 1764 images of QUS spectral parameters (MBF, SS, and SI) and QUS scattering parameters (ASD and AAC). The data were randomly split into 60% training, 20% validation, and 20% test sets, stratified by lesion subtype, and the split was repeated five times. The number of convolutional blocks was optimized, and the final convolutional layer was fine-tuned. Models tested included ResNet, Inception-v3, Xception, and EfficientNet. Xception-41 achieved a recall of 86 ± 3%, specificity of 87 ± 5%, balanced accuracy of 87 ± 3%, and an AUC of 0.93 ± 0.02 on the test sets. EfficientNetV2-M showed similar performance, with a recall of 91 ± 1%, specificity of 81 ± 7%, balanced accuracy of 86 ± 3%, and an AUC of 0.92 ± 0.02. The CNN models outperformed radiomics and conventional machine learning (p-values < 0.05). This study demonstrated the capability of end-to-end CNN-based models for accurate characterization of breast masses from QUS spectral parametric images.
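The 60/20/20 split stratified by lesion subtype, repeated five times, can be sketched in plain numpy; `stratified_split` is a hypothetical helper for illustration, not the authors' pipeline:

```python
import numpy as np

def stratified_split(labels, fracs=(0.6, 0.2, 0.2), seed=0):
    """Return index arrays for train/val/test, stratified by class label.
    fracs is assumed to sum to 1; repeating with seeds 0..4 gives five repeats."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    parts = ([], [], [])
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        n = len(idx)
        n_tr = int(round(fracs[0] * n))
        n_va = int(round(fracs[1] * n))
        parts[0].append(idx[:n_tr])            # train
        parts[1].append(idx[n_tr:n_tr + n_va])  # validation
        parts[2].append(idx[n_tr + n_va:])      # test (remainder)
    return tuple(np.concatenate(p) for p in parts)
```

Stratifying per class keeps the malignant/benign ratio constant across the three sets, which matters with the 2:1 class imbalance reported here.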

CACTUS: Multiview classifier for Punctate White Matter Lesions detection & segmentation in cranial ultrasound volumes.

Estermann F, Kaftandjian V, Guy P, Quetin P, Delachartre P

pubmed · Sep 25 2025
Punctate white matter lesions (PWML) are the most common white matter injuries found in preterm neonates, and several studies indicate a connection between these lesions and negative long-term outcomes. Automated detection of PWML through ultrasound (US) imaging could assist clinicians in diagnosis more effectively and at a lower cost than MRI. However, this task is highly challenging because of the lesions' small size and low contrast, and because the number of lesions can vary significantly between subjects. In this work, we propose a two-phase approach: (1) segmentation using a vision transformer to increase the detection rate of lesions, and (2) multi-view classification leveraging cross-attention to reduce false positives and enhance precision. We also investigate multiple postprocessing approaches to ensure prediction quality and compare our results with what has been reported for MRI. Our method demonstrates improved performance in PWML detection on US images, achieving recall and precision of 0.84 and 0.70, respectively, representing an increase of 2% and 10% over the best published US models. Moreover, by reducing the task to a slightly simpler problem (detection of MRI-visible PWML), the model achieves 0.82 recall and 0.89 precision, equivalent to the latest MRI-based method.
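The multi-view fusion step relies on cross-attention, where queries derived from one view attend over the features of another view. A toy numpy sketch of scaled dot-product cross-attention; the paper's module also has learned query/key/value projections, which are omitted here for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats):
    """Each query vector attends over all context vectors from the other view.
    query_feats: (n_q, d), context_feats: (n_c, d) -> output (n_q, d)."""
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)  # (n_q, n_c)
    return softmax(scores, axis=-1) @ context_feats
```

Intuitively, a candidate lesion detected in one view can confirm or suppress itself by attending to corroborating evidence in the other view, which is how the classifier cuts false positives.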

Nuclear Diffusion Models for Low-Rank Background Suppression in Videos

Tristan S. W. Stevens, Oisín Nolan, Jean-Luc Robert, Ruud J. G. van Sloun

arxiv preprint · Sep 25 2025
Video sequences often contain structured noise and background artifacts that obscure dynamic content, posing challenges for accurate analysis and restoration. Robust principal component methods address this by decomposing data into low-rank and sparse components; still, the sparsity assumption often fails to capture the rich variability present in real video data. To overcome this limitation, a hybrid framework that integrates low-rank temporal modeling with diffusion posterior sampling is proposed. The proposed method, Nuclear Diffusion, is evaluated on a real-world medical imaging problem, cardiac ultrasound dehazing, and demonstrates improved dehazing performance compared to traditional RPCA in terms of contrast enhancement (gCNR) and signal preservation (KS statistic). These results highlight the potential of combining model-based temporal models with deep generative priors for high-fidelity video restoration.
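The RPCA baseline referenced above decomposes a data matrix D (e.g. frames stacked as columns) into a low-rank background L plus a sparse component S. A toy alternating-thresholding sketch under default parameters chosen for illustration; the classical formulation solves this with an augmented Lagrangian, so this simplification is not the paper's baseline implementation:

```python
import numpy as np

def svd_threshold(M, tau):
    """Singular value soft-thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Elementwise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, lam=None, mu=None, n_iter=100):
    """Alternating sketch of D ≈ L (low-rank) + S (sparse)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = m * n / (4.0 * np.abs(D).sum())
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_threshold(D - S, 1.0 / mu)  # low-rank background update
        S = shrink(D - L, lam / mu)         # sparse foreground update
    return L, S
```

In the dehazing setting, L models the slowly varying haze/background and S the dynamic cardiac content; the paper's contribution is replacing the sparsity prior on S with a diffusion prior.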

SA²Net: Scale-adaptive structure-affinity transformation for spine segmentation from ultrasound volume projection imaging.

Xie H, Huang Z, Zuo Y, Ju Y, Leung FHF, Law NF, Lam KM, Zheng YP, Ling SH

pubmed · Sep 25 2025
Spine segmentation based on ultrasound volume projection imaging (VPI) plays a vital role in intelligent scoliosis diagnosis in clinical applications. However, this task faces several significant challenges. First, the global contextual knowledge of spines may not be well learned if the high spatial correlation of different bone features is neglected. Second, the spine bones contain rich structural knowledge regarding their shapes and positions, which deserves to be encoded into the segmentation process. To address these challenges, we propose a novel scale-adaptive structure-aware network (SA²Net) for effective spine segmentation. First, we propose a scale-adaptive complementary strategy to learn the cross-dimensional long-distance correlation features of spinal images. Second, motivated by the consistency between multi-head self-attention in Transformers and semantic-level affinity, we propose a structure-affinity transformation that transforms semantic features with class-specific affinity and combines them with a Transformer decoder for structure-aware reasoning. In addition, we adopt a feature-mixing loss aggregation method to enhance model training, improving the robustness and accuracy of the segmentation process. The experimental results demonstrate that SA²Net achieves superior segmentation performance compared to other state-of-the-art methods. Moreover, the adaptability of SA²Net to various backbones enhances its potential as a promising tool for advanced scoliosis diagnosis using intelligent spinal image analysis.

FetalDenseNet: multi-scale deep learning for enhanced early detection of fetal anatomical planes in prenatal ultrasound.

Dey SK, Howlader A, Haider MS, Saha T, Setu DM, Islam T, Siddiqi UR, Rahman MM

pubmed · Sep 24 2025
The study aims to improve the classification of fetal anatomical planes using deep learning (DL) methods to enhance the accuracy of fetal ultrasound interpretation. Five convolutional neural network (CNN) architectures, VGG16, ResNet50, InceptionV3, DenseNet169, and MobileNetV2, were evaluated on a large-scale, clinically validated dataset of 12,400 ultrasound images from 1,792 patients. Preprocessing steps, including scaling, normalization, label encoding, and augmentation, were applied, and the dataset was split into 80% for training and 20% for testing. Each model was fine-tuned and evaluated on its classification accuracy. DenseNet169 achieved the highest classification accuracy, 92%, among all tested models. The study shows that CNN-based models, particularly DenseNet169, significantly improve diagnostic accuracy in fetal ultrasound interpretation. This advancement reduces error rates and supports clinical decision-making in prenatal care.
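The scaling and normalization steps described above can be sketched in numpy. This is an illustrative stand-in, not the paper's pipeline: the resize here is nearest-neighbor for brevity (a real pipeline would use an image library), and per-image standardization is one common choice among several:

```python
import numpy as np

def preprocess(img, size=(224, 224)):
    """Nearest-neighbor resize, scale to [0, 1], then standardize per image."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    out = img[rows][:, cols].astype(float) / 255.0
    return (out - out.mean()) / (out.std() + 1e-8)

def augment_flip(img, rng):
    """Random horizontal flip, a simple augmentation for ultrasound frames."""
    return img[:, ::-1] if rng.random() < 0.5 else img
```

Standardizing each frame makes the network less sensitive to gain settings that vary between ultrasound machines and operators.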

TCF-Net: A Hierarchical Transformer Convolution Fusion Network for Prostate Cancer Segmentation in Transrectal Ultrasound Images.

Lu X, Zhou Q, Xiao Z, Guo Y, Peng Q, Zhao S, Liu S, Huang J, Yang C, Yuan Y

pubmed · Sep 24 2025
Accurate prostate segmentation from transrectal ultrasound (TRUS) images is key to the computer-aided diagnosis of prostate cancer. However, this task faces serious challenges, including various types of interference, variable prostate shapes, and insufficient datasets. To address these challenges, a region-adaptive transformer convolution fusion network (TCF-Net) for accurate and robust segmentation of TRUS images is proposed. As a high-performance segmentation network, TCF-Net contains a hierarchical encoder-decoder structure with two main modules: (1) a region-adaptive transformer-based encoder to identify and localize prostate regions, which learns the relationship between objects and pixels and helps the model overcome interference and prostate shape variation; and (2) a convolution-based decoder to improve applicability to small datasets. In addition, a patch-based fusion module is proposed to introduce an inductive bias for fine prostate segmentation. TCF-Net is trained and evaluated on a challenging clinical TRUS dataset collected from the First Affiliated Hospital of Jinan University in China, containing 1000 TRUS images from 135 patients. Experimental results show that TCF-Net achieves an mIoU of 94.4%, exceeding other state-of-the-art (SOTA) models by more than 1%.
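The mIoU figure reported here is the mean intersection-over-union across classes. A minimal numpy sketch of the metric, for illustration rather than the paper's evaluation code:

```python
import numpy as np

def miou(pred, target, n_classes=2):
    """Mean intersection-over-union across classes for integer label maps."""
    ious = []
    for c in range(n_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union:  # skip classes absent from both maps
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

For binary prostate segmentation this averages the foreground and background IoU, so it penalizes both missed prostate tissue and over-segmentation of the surrounding anatomy.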

Deep learning and radiomics integration of photoacoustic/ultrasound imaging for non-invasive prediction of luminal and non-luminal breast cancer subtypes.

Wang M, Mo S, Li G, Zheng J, Wu H, Tian H, Chen J, Tang S, Chen Z, Xu J, Huang Z, Dong F

pubmed · Sep 24 2025
This study aimed to develop a deep learning radiomics integrated model (DLRN), which combines photoacoustic/ultrasound (PA/US) imaging with clinical and radiomics features to distinguish between luminal and non-luminal breast cancer (BC) in a preoperative setting. A total of 388 BC patients were included, with 271 in the training group and 117 in the testing group. Radiomics and deep learning features were extracted from PA/US images using Pyradiomics and ResNet50, respectively. Feature selection was performed using independent-sample t-tests, Pearson correlation analysis, and LASSO regression to build a deep learning radiomics (DLR) model. Based on the results of univariate and multivariate logistic regression analyses, the DLR model was combined with valuable clinical features to construct the DLRN model. Model efficacy was assessed using AUC, accuracy, sensitivity, specificity, and NPV. The DLR model comprised 3 radiomic features and 6 deep learning features, which, combined with significant clinical predictors, formed the DLRN model. In the testing set, the AUC of the DLRN model (0.924 [0.877-0.972]) was significantly higher than that of the DLR (AUC 0.847 [0.758-0.936]; p = 0.026), Rad (AUC 0.717 [0.597-0.838]; p < 0.001), and clinical (AUC 0.820 [0.745-0.895]; p = 0.002) models, and higher, though not significantly, than the DL model (AUC 0.822 [0.725-0.919]; p = 0.06). These findings indicate that the integrated DLRN model exhibited the most favorable predictive performance among all models evaluated. The DLRN model effectively integrates PA/US imaging with clinical data, showing potential for preoperative molecular subtype prediction and for guiding personalized treatment strategies for BC patients.
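The final integration step, combining selected radiomics, deep learning, and clinical features in a logistic regression, can be sketched with a plain gradient-descent fit. This is a toy stand-in for the study's fitted model; the feature blocks and helper names are illustrative:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=3000):
    """Plain gradient-descent logistic regression with an intercept term."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)  # gradient of the log-loss
    return w

def predict_proba(X, w):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Feature blocks would be concatenated column-wise before fitting, e.g.:
# features = np.hstack([radiomic_feats, dl_feats, clinical_feats])
```

Keeping the final combiner a simple logistic model makes the per-feature coefficients inspectable, which is one reason nomogram-style integrated models are popular in clinical prediction.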

Interpretable Machine Learning Model for Pulmonary Hypertension Risk Prediction: Retrospective Cohort Study.

Jiang H, Gao H, Wang D, Zeng Q, Hao X, Cheng Z

pubmed · Sep 24 2025
Pulmonary hypertension (PH) is a progressive disorder characterized by elevated pulmonary artery pressure and increased pulmonary vascular resistance, ultimately leading to right heart failure. Early detection is critical for improving patient outcomes. The diagnosis of PH primarily relies on right heart catheterization, but its invasive nature significantly limits its clinical use. Echocardiography, the most common noninvasive screening and diagnostic tool for PH, provides valuable patient information. This study aims to identify key PH predictors from echocardiographic parameters, laboratory tests, and demographic data using machine learning, ultimately constructing a predictive model to support early noninvasive diagnosis of PH. This study compiled comprehensive datasets comprising echocardiography measurements, clinical laboratory data, and fundamental demographic information from patients with PH and matched controls. The final analytical cohort consisted of 895 participants with 85 evaluated variables. Recursive feature elimination was used to select the most relevant echocardiographic variables, which were subsequently integrated into a composite ultrasound index using the machine learning technique XGBoost (Extreme Gradient Boosting). LASSO (least absolute shrinkage and selection operator) regression was applied to select potential predictive variables from the laboratory tests. The ultrasound index and the selected laboratory tests were then combined to construct a logistic regression model for the predictive diagnosis of PH. The model's performance was rigorously evaluated using receiver operating characteristic curves, calibration plots, and decision curve analysis to ensure its clinical relevance and accuracy. Both internal and external validation were used to assess the performance of the constructed model.
A total of 16 echocardiographic parameters (right atrium diameter, pulmonary artery diameter, left atrium diameter, tricuspid valve reflux degree, right ventricular diameter, E/E' [ratio of mitral valve early diastolic inflow velocity (E) to mitral annulus early diastolic velocity (E')], interventricular septal thickness, left ventricular diameter, ascending aortic diameter, left ventricular ejection fraction, left ventricular outflow tract velocity, mitral valve reflux degree, pulmonary valve outflow velocity, mitral valve inflow velocity, aortic valve reflux degree, and left ventricular posterior wall thickness) combined with 2 laboratory biomarkers (prothrombin time activity and cystatin C) were identified as optimal predictors, forming a high-performance PH prediction model. The diagnostic model demonstrated high predictive accuracy, with an area under the receiver operating characteristic curve of 0.997 in the internal validation and 0.974 in the external validation. Both calibration plots and decision curve analysis validated the model's predictive accuracy and clinical applicability, with optimal performance observed at higher risk stratification cutoffs. This model enhances early PH diagnosis through a noninvasive approach and demonstrates strong predictive accuracy. It facilitates early intervention and personalized treatment, with potential applications in broader cardiovascular disease management.
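The validation AUCs reported above (0.997 internal, 0.974 external) are areas under the ROC curve, which equal the probability that a randomly chosen PH case is scored above a randomly chosen control (the Mann-Whitney U interpretation). A minimal numpy sketch of that rank-based computation, for illustration only:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: P(random positive scored
    above random negative), with ties counted as half."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos = s[y == 1]
    neg = s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise form makes explicit why AUC is insensitive to the choice of decision threshold, which is also why the study supplements it with calibration plots and decision curve analysis.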