Page 22 of 56556 results

Gout Diagnosis From Ultrasound Images Using a Patch-Wise Attention Deep Network.

Zhao Y, Xiao L, Liu H, Li Y, Ning C, Liu M

PubMed · Jul 29 2025
The rising global prevalence of gout necessitates advancements in diagnostic methodologies. Ultrasonographic imaging of the foot has become an important diagnostic modality for gout because of its non-invasiveness, cost-effectiveness, and real-time imaging capabilities. This study aims to develop and validate a deep learning-based artificial intelligence (AI) model for automated gout diagnosis using ultrasound images. Ultrasound images were primarily acquired at the first metatarsophalangeal joint (MTP1) from 598 cases in two institutions: 520 from Institution 1 and 78 from Institution 2. From Institution 1's dataset, 66% of cases were randomly allocated for model training, while the remaining 34% constituted the internal test set. The dataset from Institution 2 served as an independent external validation cohort. A novel deep learning model integrating a patch-wise attention mechanism and multi-scale feature extraction was developed to enhance the detection of subtle sonographic features and optimize diagnostic performance. The proposed model demonstrated robust diagnostic efficacy, achieving an accuracy of 87.88%, a sensitivity of 87.85%, a specificity of 87.93%, and an area under the curve (AUC) of 93.43%. Additionally, the model generates interpretable visual heatmaps to localize gout-related pathological features, thereby facilitating interpretation for clinical decision-making. In summary, the deep learning-based AI model developed here for automated gout detection from ultrasound images outperformed competing models, and the features it highlights align closely with expert assessments, demonstrating its potential to assist in the ultrasound-based diagnosis of gout.
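The abstract does not include code, but the core idea of patch-wise attention (scoring image patches and re-weighting them so subtle local features dominate the pooled representation) can be sketched in a few lines of NumPy. The patch size, the variance-based scorer, and all names below are illustrative stand-ins for the paper's learned components:

```python
import numpy as np

def patch_attention(img, patch=8):
    """Toy patch-wise attention: split an image into non-overlapping patches,
    score each one, softmax the scores, and return the attention weights plus
    a weighted pooled descriptor. A real model would learn the scorer."""
    h, w = img.shape
    ph, pw = h // patch, w // patch
    patches = (img[:ph * patch, :pw * patch]
               .reshape(ph, patch, pw, patch)
               .swapaxes(1, 2)
               .reshape(ph * pw, -1))
    scores = patches.var(axis=1)            # stand-in for a learned scorer
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over patches
    descriptor = (weights[:, None] * patches).sum(axis=0)
    return weights, descriptor
```

High-scoring patches dominate the pooled descriptor, which is the same intuition the paper's interpretable heatmaps visualize over gout-related features.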

Deep sensorless tracking of ultrasound probe orientation during freehand transperineal biopsy with spatial context for symmetry disambiguation.

Soormally C, Beitone C, Troccaz J, Voros S

PubMed · Jul 29 2025
Diagnosis of prostate cancer requires histopathology of tissue samples. Following an MRI to identify suspicious areas, a biopsy is performed under ultrasound (US) guidance. In existing assistance systems, 3D US information is generally available (acquired before the biopsy session and/or between samplings). However, without registration between 2D images and 3D volumes, the urologist must rely on cognitive navigation. This work introduces a deep learning model to track the orientation of real-time US slices relative to a reference 3D US volume using only image and volume data. The dataset comprises 515 3D US volumes collected from 51 patients during routine transperineal biopsy. To generate 2D image streams, volumes are resampled to simulate three-degree-of-freedom rotational movements around the rectal entrance. The proposed model comprises two ResNet-based sub-modules to address the symmetry ambiguity arising from complex out-of-plane movement of the probe. The first sub-module predicts the unsigned relative orientation between consecutive slices, while the second leverages a custom similarity model and a spatial context volume to determine the sign of this relative orientation. From the sub-modules' predictions, slice orientations along the navigated trajectory can then be derived in real time. Results demonstrate that registration error remains below 2.5 mm in 92% of cases over a 5-second trajectory, and 80% over a 25-second trajectory. These findings show that accurate, sensorless 2D/3D US registration given a spatial context is achievable with limited drift over extended navigation. This highlights the potential of AI-driven biopsy assistance to increase the accuracy of freehand biopsy.
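Once the two sub-modules emit, for each consecutive slice pair, an unsigned rotation magnitude and a disambiguated sign, the absolute orientation along the trajectory follows by simple accumulation. A minimal sketch of that final step (the names and the scalar single-axis simplification are assumptions; the paper tracks three rotational degrees of freedom):

```python
import numpy as np

def integrate_orientation(theta0, unsigned_deltas, signs):
    """Accumulate signed inter-slice rotations into absolute slice orientations.

    unsigned_deltas: per-step |relative orientation| (first sub-module).
    signs: +1/-1 per step (symmetry-disambiguation sub-module).
    Returns the orientation after each step, starting from theta0.
    """
    deltas = np.asarray(signs, dtype=float) * np.asarray(unsigned_deltas, dtype=float)
    return theta0 + np.cumsum(deltas)
```

Because each step's error adds up, this accumulation is also where the drift measured over the 5- and 25-second trajectories originates.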

Performance of AI-Based software in predicting malignancy risk in breast lesions identified on targeted ultrasound.

Lima IRM, Cruz RM, de Lima Rodrigues CL, Lago BM, da Cunha RF, Damião SQ, Wanderley MC, Bitencourt AGV

PubMed · Jul 27 2025
Targeted ultrasound is commonly used to identify lesions characterized on magnetic resonance imaging (MRI) that were not recognized on initial mammography or ultrasound, and is especially valuable for guiding percutaneous biopsies. Although artificial intelligence (AI) algorithms have been used to differentiate benign from malignant breast lesions on ultrasound, their application in classifying lesions on targeted ultrasound has not yet been studied. This study aimed to evaluate the performance of AI-based software in predicting malignancy risk in breast lesions identified on targeted ultrasound. This was a retrospective, cross-sectional, single-center study that included patients with breast lesions identified on MRI who underwent targeted ultrasound and percutaneous ultrasound-guided biopsy. The ultrasound findings were analyzed using AI-based software and subsequently correlated with the pathological results. A total of 334 lesions were evaluated, including 183 mass and 151 non-mass lesions. On histological analysis, 257 (76.9%) lesions were benign and 77 (23.1%) malignant. Both the AI software and the radiologists demonstrated high sensitivity in predicting the malignancy risk of the lesions. Specificity was higher when the radiologist used the AI software than with the radiologist's evaluation alone (p < 0.001). All lesions classified as BI-RADS 2 or 3 on targeted ultrasound by the radiologist or the AI software (n = 72; 21.6%) showed benign pathology results. The AI software, when integrated into the radiologist's evaluation, demonstrated high diagnostic accuracy and improved specificity for both mass and non-mass lesions on targeted ultrasound, supporting more accurate biopsy decisions and potentially reducing false positives without missing cancers.

A novel hybrid deep learning approach combining deep feature attention and statistical validation for enhanced thyroid ultrasound segmentation.

Banerjee T, Singh DP, Swain D, Mahajan S, Kadry S, Kim J

PubMed · Jul 26 2025
An effective diagnosis system and suitable treatment planning require the precise segmentation of thyroid nodules in ultrasound imaging. The advancement of imaging technologies has not resolved traditional imaging challenges, which include noise issues, limited contrast, and dependency on operator choices, thus highlighting the need for automated, reliable solutions. The researchers developed TATHA, an innovative deep learning architecture dedicated to improving thyroid ultrasound image segmentation accuracy. The model is evaluated using the digital database of thyroid ultrasound images, which includes 99 cases across three subsets containing 134 labelled images for training, validation, and testing. It incorporates pre-processing procedures that reduce speckle noise and enhance contrast, while edge detection provides high-quality input for segmentation. TATHA outperforms U-Net, PSPNet, and Vision Transformers across various datasets and cross-validation folds, achieving superior Dice scores, accuracy, and AUC results. The distributed thyroid segmentation framework generates reliable predictions by combining results from multiple feature extraction units. The findings confirm that these advancements make TATHA an essential tool for clinicians and researchers in thyroid imaging and clinical applications.
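The abstract does not specify which speckle-reduction and contrast-enhancement operations TATHA uses. As a rough illustration of that kind of pre-processing stage, the sketch below applies a 3x3 median filter (a common speckle reducer) followed by a percentile contrast stretch; both choices are assumptions, not the paper's method:

```python
import numpy as np

def preprocess(img, lo=2, hi=98):
    """Toy ultrasound pre-processing: 3x3 median filter for speckle noise,
    then a percentile-based contrast stretch to [0, 1]."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # stack the 9 shifted views of the image and take their pixel-wise median
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    denoised = np.median(windows, axis=0)
    a, b = np.percentile(denoised, [lo, hi])
    return np.clip((denoised - a) / max(b - a, 1e-8), 0.0, 1.0)
```

The output keeps the input's shape with intensities normalized to [0, 1], ready to feed a segmentation network.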

Artificial intelligence-powered software outperforms interventional cardiologists in assessment of IVUS-based stent optimization.

Rubio PM, Garcia-Garcia HM, Galo J, Chaturvedi A, Case BC, Mintz GS, Ben-Dor I, Hashim H, Waksman R

PubMed · Jul 26 2025
Optimal stent deployment assessed by intravascular ultrasound (IVUS) is associated with improved outcomes after percutaneous coronary intervention (PCI). However, IVUS remains underutilized due to its time-consuming analysis and reliance on operator expertise. AVVIGO™+, an FDA-approved artificial intelligence (AI) software, offers automated lesion assessment, but its performance for stent evaluation has not been thoroughly investigated. This study assessed whether AVVIGO™+ provides a superior evaluation of the IVUS-based stent expansion index (% stent expansion = minimum stent area (MSA) / distal reference lumen area) and of geographic miss (i.e., >50% plaque burden (PB) at the stent edges) compared to the current gold-standard method: frame-by-frame visual assessment by interventional cardiologists (IC), who select the MSA and the reference frame with the largest lumen area within 5 mm of the stent edge, following expert consensus. This retrospective study included 60 patients (47,997 IVUS frames) who underwent IVUS-guided PCI, independently analyzed by IC and AVVIGO™+. Assessments included MSA, stent expansion index, and PB at the proximal and distal reference segments. For expansion, a threshold of 80% was used to define suboptimal results. The time required for expansion analysis was recorded for both methods. Concordance and absolute and relative differences were evaluated. AVVIGO™+ consistently identified lower mean expansion (70.3%) vs. IC (91.2%) (p < 0.0001), primarily due to detecting frames with smaller MSA values (5.94 vs. 7.19 mm², p = 0.0053). This led to 25 discordant cases in which AVVIGO™+ reported suboptimal expansion while IC classified the result as adequate. The analysis time was significantly shorter with AVVIGO™+ (0.76 ± 0.39 min) vs. IC (1.89 ± 0.62 min) (p < 0.0001), representing a 59.7% reduction. For geographic miss, AVVIGO™+ reported higher PB than IC at both the distal (51.8% vs. 43.0%, p < 0.0001) and proximal (50.0% vs. 43.0%, p = 0.0083) segments. When applying the 50% PB threshold, AVVIGO™+ identified PB ≥50% not seen by IC in 12 cases (6 distal, 6 proximal). AVVIGO™+ demonstrated improved detection of suboptimal stent expansion and geographic miss compared to interventional cardiologists, while also significantly reducing analysis time. These findings suggest that AI-based platforms may offer a more reliable and efficient approach to IVUS-guided stent optimization, with potential to enhance consistency in clinical practice.
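The two metrics the abstract defines are straightforward to compute once MSA, the distal reference lumen area, and the edge plaque burdens are measured. A minimal sketch (function names are mine; the 80% expansion and 50% PB cut-offs follow the abstract):

```python
def expansion_index(msa_mm2: float, distal_ref_lumen_mm2: float) -> float:
    """Stent expansion index in percent: MSA / distal reference lumen area."""
    return 100.0 * msa_mm2 / distal_ref_lumen_mm2

def suboptimal_expansion(msa_mm2: float, distal_ref_lumen_mm2: float,
                         threshold: float = 80.0) -> bool:
    """Expansion below the 80% threshold is classed as suboptimal."""
    return expansion_index(msa_mm2, distal_ref_lumen_mm2) < threshold

def geographic_miss(pb_distal_pct: float, pb_proximal_pct: float,
                    threshold: float = 50.0) -> bool:
    """Plaque burden >50% at either stent edge flags geographic miss."""
    return pb_distal_pct > threshold or pb_proximal_pct > threshold
```

For example, an MSA of 4.0 mm² against a 5.0 mm² distal reference gives exactly 80% expansion, i.e., just at the adequacy boundary.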

A novel approach for breast cancer detection using a Nesterov accelerated adam optimizer with an attention mechanism.

Saber A, Emara T, Elbedwehy S, Hassan E

PubMed · Jul 25 2025
Image-based automatic breast tumor detection has become a significant research focus, driven by recent advancements in machine learning (ML) algorithms. Traditional disease detection methods often involve manual feature extraction from images, a process requiring extensive expertise from specialists and pathologists. This labor-intensive approach is not only time-consuming but also impractical for widespread application. However, advancements in digital technologies and computer vision have enabled convolutional neural networks (CNNs) to learn features automatically, thereby overcoming these challenges. This paper presents a deep neural network model based on the MobileNet-V2 architecture, enhanced with a convolutional block attention mechanism for identifying tumor types in ultrasound images. The attention module improves the MobileNet-V2 model's performance by highlighting disease-affected areas within the images. The proposed model refines features extracted by MobileNet-V2 using the Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimizer. This integration enhances convergence and stability, leading to improved classification accuracy. The proposed approach was evaluated on the BUSI ultrasound image dataset. Experimental results demonstrated strong performance, achieving an accuracy of 99.1%, sensitivity of 99.7%, specificity of 99.5%, precision of 97.7%, and an area under the curve (AUC) of 1.0 using an 80-20 data split. Additionally, under 10-fold cross-validation, the model achieved an accuracy of 98.7%, sensitivity of 99.1%, specificity of 98.3%, precision of 98.4%, F1-score of 98.04%, and an AUC of 0.99.
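As a rough illustration of the channel half of a convolutional block attention module (CBAM-style, as used to augment MobileNet-V2 here), the toy sketch below gates feature channels using average- and max-pooled statistics. The random matrices stand in for learned MLP parameters, so only the shapes and data flow are meaningful:

```python
import numpy as np

def channel_attention(feat, reduction=4, seed=0):
    """Toy CBAM-style channel attention for a (C, H, W) feature map.
    Random weights replace the learned shared MLP; a real block trains them."""
    c = feat.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.1, size=(c // reduction, c))  # squeeze
    w2 = rng.normal(scale=0.1, size=(c, c // reduction))  # excite
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    # shared MLP applied to both pooled vectors, summed, then sigmoid-gated
    logits = w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0)
    gate = 1.0 / (1.0 + np.exp(-logits))     # per-channel weights in (0, 1)
    return feat * gate[:, None, None]
```

A full CBAM block would follow this with an analogous spatial-attention map, which is what lets the network emphasize disease-affected regions of the ultrasound image.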

Carotid and femoral bifurcation plaques detected by ultrasound as predictors of cardiovascular events.

Blinc A, Nicolaides AN, Poredoš P, Paraskevas KI, Heiss C, Müller O, Rammos C, Stanek A, Jug B

PubMed · Jul 25 2025
Risk factor-based algorithms give a good estimate of cardiovascular (CV) risk at the population level but are often inaccurate at the individual level. Detecting preclinical atherosclerotic plaques in the carotid and common femoral arterial bifurcations by ultrasound is a simple, non-invasive way of detecting atherosclerosis in the individual and thus more accurately estimating his/her risk of future CV events. The presence of plaques in these bifurcations is independently associated with increased risk of CV death and myocardial infarction, even after adjusting for traditional risk factors, while ultrasonographic characteristics of vulnerable plaque are mostly associated with increased risk for ipsilateral ischaemic stroke. The predictive value of carotid and femoral plaques for CV events increases in proportion to plaque burden, and especially with plaque progression over time. Assessing the burden of carotid and/or common femoral bifurcation plaques enables reclassification of a significant number of individuals deemed low-risk by risk factor-based algorithms into intermediate or high CV risk, and of intermediate-risk individuals into low or high CV risk. Ongoing multimodality imaging studies, supplemented by clinical and genetic data and aided by machine learning/artificial intelligence analysis, are expected to advance our understanding of atherosclerosis progression from the asymptomatic into the symptomatic phase and to personalize prevention.

Multimodal prediction based on ultrasound for response to neoadjuvant chemotherapy in triple negative breast cancer.

Lyu M, Yi S, Li C, Xie Y, Liu Y, Xu Z, Wei Z, Lin H, Zheng Y, Huang C, Lin X, Liu Z, Pei S, Huang B, Shi Z

PubMed · Jul 25 2025
Pathological complete response (pCR) can guide surgical strategy and postoperative treatments in triple-negative breast cancer (TNBC). In this study, we developed a Breast Cancer Response Prediction (BCRP) model to predict pCR in patients with TNBC. The BCRP model integrated multi-dimensional longitudinal quantitative imaging features, clinical factors, and features from the Breast Imaging Reporting and Data System (BI-RADS). The multi-dimensional longitudinal quantitative imaging features, including deep learning features and radiomics features, were extracted from multiview B-mode and colour Doppler ultrasound images before and after treatment. The BCRP model achieved areas under the receiver operating characteristic curves (AUCs) of 0.94 [95% confidence interval (CI), 0.91-0.98] and 0.84 [95% CI, 0.75-0.92] in the training and external test cohorts, respectively. Additionally, a low BCRP score was an independent risk factor for event-free survival (P < 0.05). The BCRP model showed promising ability to predict response to neoadjuvant chemotherapy in TNBC and could provide valuable information for survival.

Deep Learning-Based Multi-View Echocardiographic Framework for Comprehensive Diagnosis of Pericardial Disease

Jeong, S., Moon, I., Jeon, J., Jeong, D., Lee, J., kim, J., Lee, S.-A., Jang, Y., Yoon, Y. E., Chang, H.-J.

medRxiv preprint · Jul 25 2025
Background: Pericardial disease exhibits a wide clinical spectrum, ranging from mild effusions to life-threatening tamponade or constrictive pericarditis. While transthoracic echocardiography (TTE) is the primary diagnostic modality, its effectiveness is limited by operator dependence and incomplete evaluation of functional impact. Existing artificial intelligence models focus primarily on effusion detection, lacking comprehensive disease assessment. Methods: We developed a deep learning (DL)-based framework that sequentially assesses pericardial disease: (1) morphological changes, including pericardial effusion amount (normal/small/moderate/large) and pericardial thickening or adhesion (yes/no), using five B-mode views, and (2) hemodynamic significance (yes/no), incorporating additional inputs from Doppler and inferior vena cava measurements. The developmental dataset comprises 2,253 TTEs from multiple Korean institutions (225 for internal testing), and the independent external test set consists of 274 TTEs. Results: In the internal test set, the model achieved diagnostic accuracy of 81.8-97.3% for pericardial effusion classification, 91.6% for pericardial thickening/adhesion, and 86.2% for hemodynamic significance. Corresponding accuracy in the external test set was 80.3-94.2%, 94.5%, and 85.5%, respectively. Areas under the receiver operating characteristic curves (AUROCs) for the three tasks were 0.92-0.99, 0.90, and 0.79 in the internal test set, and 0.95-0.98, 0.85, and 0.76 in the external test set. Sensitivity for detecting pericardial thickening/adhesion and hemodynamic significance was modest (66.7% and 68.8% in the internal test set), but improved substantially when cases with poor image quality were excluded (77.3% and 80.8%). Similar performance gains were observed in subgroups with complete target views and a higher number of available video clips.
Conclusions: This study presents the first DL-based TTE model capable of comprehensive evaluation of pericardial disease, integrating both morphological and functional assessments. The proposed framework demonstrated strong generalizability and aligned with the real-world diagnostic workflow. However, caution is warranted when interpreting results under suboptimal imaging conditions.

TextSAM-EUS: Text Prompt Learning for SAM to Accurately Segment Pancreatic Tumor in Endoscopic Ultrasound

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation.
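The 0.86% trainable-parameter figure comes from LoRA's bookkeeping: a frozen d×k weight matrix gains low-rank adapters with only r(d+k) trainable entries. A back-of-envelope helper (the dimensions in the example are hypothetical, not SAM's actual layer shapes):

```python
def lora_fraction(d: int, k: int, r: int) -> float:
    """Fraction of parameters that are trainable when a frozen d x k weight
    matrix receives rank-r LoRA adapters A (d x r) and B (r x k)."""
    trainable = r * (d + k)
    total = d * k + trainable
    return trainable / total
```

For d = k = 1000 and r = 4 this gives roughly 0.79%, showing how sub-1% tuning budgets arise naturally at small ranks.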