
De-speckling of medical ultrasound image using metric-optimized knowledge distillation.

Khalifa M, Hamza HM, Hosny KM

PubMed | Jul 3, 2025
Ultrasound imaging provides real-time views of internal organs, which are essential for accurate diagnosis and treatment. However, speckle noise, caused by wave interactions with tissues, creates a grainy texture that hides crucial details. This noise varies with image intensity, which limits the effectiveness of traditional denoising methods. We introduce the Metric-Optimized Knowledge Distillation (MK) model, a deep-learning approach that utilizes Knowledge Distillation (KD) for denoising ultrasound images. Our method transfers knowledge from a high-performing teacher network to a smaller student network designed for this task. By leveraging KD, the model removes speckle noise while preserving the key anatomical details needed for accurate diagnosis. A key innovation of our paper is the metric-guided training strategy: during training, we repeatedly compute the evaluation metrics used to assess the model and incorporate them into the loss function, enabling the model to optimally reduce noise and enhance image quality. We evaluate our proposed method against state-of-the-art despeckling techniques, including DnCNN and other recent models. The results demonstrate that our approach achieves superior noise reduction and image-quality preservation, making it a valuable tool for enhancing the diagnostic utility of ultrasound images.
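The metric-guided idea above folds the evaluation metrics back into the training loss. A minimal sketch of one such combination, MSE plus a PSNR-shortfall penalty; the abstract does not specify which metrics or weights the MK model actually uses, so the choices here are illustrative assumptions:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def metric_guided_loss(ref, img, weight=0.01, target_psnr=40.0):
    """MSE plus a penalty for falling short of a target PSNR.

    Hypothetical combination: the metrics, target, and weight are
    assumptions for illustration, not taken from the paper.
    """
    mse = float(np.mean((ref - img) ** 2))
    shortfall = max(0.0, target_psnr - psnr(ref, img))
    return mse + weight * shortfall

clean = np.zeros((8, 8))
noisy = clean + 0.1  # uniform perturbation as a stand-in for speckle
print(metric_guided_loss(clean, noisy) > metric_guided_loss(clean, clean))  # True
```

In a real training loop the metric terms would be differentiable surrogates evaluated on batches rather than numpy scalars.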

Cross-validation of an artificial intelligence tool for fracture classification and localization on conventional radiography in Dutch population.

Ruitenbeek HC, Sahil S, Kumar A, Kushawaha RK, Tanamala S, Sathyamurthy S, Agrawal R, Chattoraj S, Paramasamy J, Bos D, Fahimi R, Oei EHG, Visser JJ

PubMed | Jul 3, 2025
The aim of this study is to validate the effectiveness of an AI tool trained on Indian data in a Dutch medical center and to assess its ability to classify and localize fractures. Conventional radiographs acquired between January 2019 and November 2022 were analyzed using a multitask deep neural network. The tool, trained on Indian data, identified and localized fractures in 17 body parts. The reference standard was based on radiology reports resulting from routine clinical workflow and confirmed by an experienced musculoskeletal radiologist. The analysis included both patient-wise and fracture-wise evaluations, employing binary and Intersection over Union (IoU) metrics to assess fracture detection and localization accuracy. In total, 14,311 radiographs (median age, 48 years (range 18-98), 7265 male) were analyzed and categorized by body part: clavicle, shoulder, humerus, elbow, forearm, wrist, hand and finger, pelvis, hip, femur, knee, lower leg, ankle, foot and toe. 4156/14,311 (29%) had fractures. The AI tool demonstrated overall patient-wise sensitivity, specificity, and AUC of 87.1% (95% CI: 86.1-88.1%), 87.1% (95% CI: 86.4-87.7%), and 0.92 (95% CI: 0.91-0.93), respectively. The fracture detection rate was 60% overall, ranging from 7% for rib fractures to 90% for clavicle fractures. This study validates a fracture detection AI tool on a Western-European dataset, originally trained on Indian data. While classification performance is robust on real clinical data, fracture-wise analysis reveals variability in localization accuracy, underscoring the need for refinement in fracture localization. AI may help by enabling optimal use of limited resources and personnel. This study evaluates an AI tool designed to aid in detecting fractures, possibly reducing reading time or optimizing radiology workflow by prioritizing fracture-positive cases. Cross-validation on a consecutive Dutch cohort confirms this AI tool's clinical robustness.
The tool detected fractures with 87% sensitivity, 87% specificity, and 0.92 AUC. It localized 60% of fractures overall, highest for the clavicle (90%) and lowest for ribs (7%).
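Fracture-wise localization in this study is scored with Intersection over Union. A minimal sketch of box IoU (the 0.5 acceptance threshold mentioned in the comment is a common convention, not taken from the paper):

```python
def box_iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

# A predicted fracture box counts as localized when its IoU with the
# reference box clears a chosen threshold (0.5 is a common default).
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175 ≈ 0.143
```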

Integrating MobileNetV3 and SqueezeNet for Multi-class Brain Tumor Classification.

Kantu S, Kaja HS, Kukkala V, Aly SA, Sayed K

PubMed | Jul 3, 2025
Brain tumors pose a critical health threat requiring timely and accurate classification for effective treatment. Traditional MRI analysis is labor-intensive and prone to variability, necessitating reliable automated solutions. This study explores lightweight deep learning models for multi-class brain tumor classification across four categories: glioma, meningioma, pituitary tumors, and no tumor. We investigate the performance of MobileNetV3 and SqueezeNet individually, and a feature-fusion hybrid model that combines their embedding layers. We utilized a publicly available MRI dataset containing 7023 images with a consistent internal split (65% training, 17% validation, 18% test) to ensure reliable evaluation. MobileNetV3 offers deep semantic understanding through its expressive features, while SqueezeNet provides minimal computational overhead. Their feature-level integration creates a balanced approach between diagnostic accuracy and deployment efficiency. Experiments conducted with consistent hyperparameters and preprocessing showed MobileNetV3 achieved the highest test accuracy (99.31%) while maintaining a low parameter count (3.47M), making it suitable for real-world deployment. Grad-CAM visualizations were employed for model explainability, highlighting tumor-relevant regions and helping visualize the specific areas contributing to predictions. Our proposed models outperform several baseline architectures like VGG16 and InceptionV3, achieving high accuracy with significantly fewer parameters. These results demonstrate that well-optimized lightweight networks can deliver accurate and interpretable brain tumor classification.
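The hybrid model above fuses the embedding layers of the two backbones. A minimal sketch of feature-level fusion by normalize-and-concatenate; whether the paper normalizes before concatenation, and the exact embedding widths, are our assumptions:

```python
import numpy as np

def fuse_embeddings(feat_a, feat_b):
    """L2-normalize each backbone's embedding, then concatenate.

    Normalizing first keeps one backbone from dominating the fused
    vector; this choice is an assumption, not stated in the paper.
    """
    a = feat_a / (np.linalg.norm(feat_a) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b) + 1e-12)
    return np.concatenate([a, b])

# e.g. a 1280-d MobileNetV3 embedding fused with a 512-d SqueezeNet one
fused = fuse_embeddings(np.random.rand(1280), np.random.rand(512))
print(fused.shape)  # (1792,)
```

The fused vector would then feed a small classification head over the four tumor classes.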

Content-based X-ray image retrieval using fusion of local neighboring patterns and deep features for lung disease detection.

Prakash A, Singh VP

PubMed | Jul 3, 2025
This paper introduces a Content-Based Medical Image Retrieval (CBMIR) system for detecting and retrieving lung disease cases to assist doctors and radiologists in clinical decision-making. The system combines texture-based features using Local Binary Patterns (LBP) with deep learning-based features extracted from pretrained CNN models, including VGG-16, DenseNet121, and InceptionV3. The objective is to identify the optimal fusion of texture and deep features to enhance the image retrieval performance. Various similarity measures, including Euclidean, Manhattan, and cosine similarities, were evaluated, with Cosine Similarity demonstrating the best performance, achieving an average precision of 65.5%. For COVID-19 cases, VGG-16 achieved a precision of 52.5%, while LBP performed best for the normal class with 85% precision. The fusion of LBP, VGG-16, and DenseNet121 excelled in pneumonia cases, with a precision of 93.5%. Overall, VGG-16 delivered the highest average precision of 74.0% across all classes, followed by LBP at 72.0%. The fusion of texture (LBP) and deep features from all CNN models achieved 86% accuracy for the retrieval of the top 10 images, supporting healthcare professionals in making more informed clinical decisions.
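Retrieval with cosine similarity, the best-performing measure here, amounts to ranking stored feature vectors against the query; feature extraction itself (LBP plus CNN descriptors) is omitted in this sketch:

```python
import numpy as np

def retrieve_top_k(query, database, k=10):
    """Rank stored feature vectors by cosine similarity to the query.

    `database` is an (n_images, n_features) array of precomputed
    descriptors (e.g. LBP histograms concatenated with CNN features).
    """
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

db = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx, scores = retrieve_top_k(np.array([1.0, 0.0]), db, k=2)
print(idx)  # [0 2]
```

Precision at top-10, as reported in the abstract, is then the fraction of the 10 returned images whose class matches the query's.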

Predicting Ten-Year Clinical Outcomes in Multiple Sclerosis with Radiomics-Based Machine Learning Models.

Tranfa M, Petracca M, Cuocolo R, Ugga L, Morra VB, Carotenuto A, Elefante A, Falco F, Lanzillo R, Moccia M, Scaravilli A, Brunetti A, Cocozza S, Quarantelli M, Pontillo G

PubMed | Jul 3, 2025
Identifying patients with multiple sclerosis (pwMS) at higher risk of clinical progression is essential to inform clinical management. We aimed to build prognostic models using machine learning (ML) algorithms predicting long-term clinical outcomes based on a systematic mapping of volumetric, radiomic, and macrostructural disconnection features from routine brain MRI scans of pwMS. In this longitudinal monocentric study, 3T structural MRI scans of pwMS were retrospectively analyzed. Based on a ten-year clinical follow-up (average duration=9.4±1.1 years), patients were classified according to confirmed disability progression (CDP) and cognitive impairment (CI) as assessed through the Expanded Disability Status Scale (EDSS) and the Brief International Cognitive Assessment of Multiple Sclerosis (BICAMS) battery, respectively. 3D-T1w and FLAIR images were automatically segmented to obtain volumes, disconnection scores (estimated based on lesion masks and normative tractography data), and radiomic features from 116 gray matter regions defined according to the Automated Anatomical Labelling (AAL) atlas. Three ML algorithms (Extra Trees, Logistic Regression, and Support Vector Machine) were used to build models predicting long-term CDP and CI based on MRI-derived features. Feature selection was performed on the training set with a multi-step process, and models were validated with a holdout approach, randomly splitting the patients into training (75%) and test (25%) sets. We studied 177 pwMS (M/F = 51/126; mean±SD age: 35.2±8.7 years). Long-term CDP and CI were observed in 71 and 55 patients, respectively. In the CDP class prediction analysis, feature selection identified 13-, 12-, and 10-feature subsets, obtaining an accuracy on the test set of 0.71, 0.69, and 0.67 for the Extra Trees, Logistic Regression, and Support Vector Machine classifiers, respectively. Similarly, for the CI prediction, subsets of 16, 17, and 19 features were selected, with accuracy values of 0.69, 0.64, and 0.62 on the test set, respectively. There were no significant differences in accuracy between ML models for CDP (p=0.65) or CI (p=0.31). Building on quantitative features derived from conventional MRI scans, we obtained long-term prognostic models, potentially informing patient stratification and clinical decision-making. MS, multiple sclerosis; pwMS, people with MS; HC, healthy controls; ML, machine learning; DD, disease duration; EDSS, Expanded Disability Status Scale; TLV, total lesion volume; CDP, confirmed disability progression; CI, cognitive impairment; BICAMS, Brief International Cognitive Assessment of Multiple Sclerosis.
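The holdout validation described above (75% training, 25% test) can be sketched as a seeded random split of patient indices; the seed and rounding convention are our choices, not the study's:

```python
import numpy as np

def holdout_split(n_patients, test_frac=0.25, seed=0):
    """Random holdout split of patient indices (seed/rounding are assumptions)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_patients)
    n_test = int(round(n_patients * test_frac))
    return idx[n_test:], idx[:n_test]  # train, test

train, test = holdout_split(177)  # the study's 177 pwMS
print(len(train), len(test))  # 133 44
```

Feature selection would then be fit on the training indices only, to avoid leaking test-set information into the model.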

Recent Advances in Applying Machine Learning to Proton Radiotherapy.

Wildman VL, Wynne J, Momin S, Kesarwala AH, Yang X

PubMed | Jul 3, 2025
In radiation oncology, precision and timeliness of both planning and treatment are paramount values of patient care. Machine learning has increasingly been applied to various aspects of photon radiotherapy to reduce manual error and improve the efficiency of clinical decision making; however, applications to proton therapy remain an emerging field in comparison. This systematic review aims to comprehensively cover all current and potential applications of machine learning to the proton therapy clinical workflow, an area that has not been extensively explored in the literature. PubMed and Embase were utilized to identify studies pertinent to machine learning in proton therapy between 2019 and 2024. An initial search on PubMed was made with the search strategy "'proton therapy', 'machine learning', 'deep learning'". A subsequent search on Embase was made with "("proton therapy") AND ("machine learning" OR "deep learning")". In total, 38 relevant studies have been summarized and incorporated. It is observed that U-Net architectures are prevalent in the patient pre-screening process, while convolutional neural networks play an important role in dose and range prediction. Both image quality improvement and transformation between modalities to decrease extraneous radiation are popular targets of various models. To adaptively improve treatments, advanced architectures such as general deep inception or deep cascaded convolutional neural networks improve online dose verification and range monitoring. With the rising clinical usage of proton therapy, machine learning models have been increasingly proposed to facilitate both treatment and discovery. Significantly improving patient screening, planning, image quality, and dose and range calculation, machine learning is advancing the precision and personalization of proton therapy.

Differentiated thyroid cancer and positron emission computed tomography: when, how and why?

Coca Pelaz A, Rodrigo JP, Zafereo M, Nixon I, Guntinas-Lichius O, Randolph G, Civantos FJ, Pace-Asciak P, Jara MA, Kuker R, Ferlito A

PubMed | Jul 3, 2025
Fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) has become an indispensable tool in oncology, offering both metabolic and anatomical insights into tumor behavior. Most differentiated thyroid carcinomas (DTC) are indolent, and therefore FDG PET/CT is not routinely incorporated into management. However, in biologically aggressive DTCs, FDG PET/CT plays a crucial role in detecting recurrence and metastases. This narrative review, based on PubMed articles from the last 25 years, explores the evolving role of FDG PET/CT, focusing on its utility in recurrence detection, staging, and follow-up of radioactive iodine (RAI)-refractory cases. Current guidelines recommend FDG PET/CT primarily for high-risk patients with elevated thyroglobulin levels and negative RAI scans (TENIS syndrome). We also examine advancements in PET imaging, novel radiotracers, and theragnostic approaches that enhance diagnostic accuracy and treatment monitoring. While FDG PET/CT has proven valuable in biologically aggressive DTC, its routine use remains limited by cost, accessibility, and concerns regarding radiation exposure in younger patients requiring repeated imaging studies. Future developments in molecular imaging, including novel tracers and artificial intelligence-driven analysis, are expected to refine its role, leading to more personalized and effective management, though economic and reimbursement challenges remain important considerations for broader adoption.

A Chain of Diagnosis Framework for Accurate and Explainable Radiology Report Generation.

Jin H, Che H, He S, Chen H

PubMed | Jul 3, 2025
Despite the progress of radiology report generation (RRG), existing works face two challenges: 1) performance in clinical efficacy is unsatisfactory, especially for descriptions of lesion attributes; 2) the generated text lacks explainability, making it difficult for radiologists to trust the results. To address these challenges, we focus on a trustworthy RRG model, which not only generates accurate descriptions of abnormalities but also provides the basis for its predictions. To this end, we propose a framework named chain of diagnosis (CoD), which maintains a chain of diagnostic steps for clinically accurate and explainable RRG. It first generates question-answer (QA) pairs via diagnostic conversation to extract key findings, then prompts a large language model with QA diagnoses for accurate generation. To enhance explainability, a diagnosis grounding module is designed to match QA diagnoses and generated sentences, where the diagnoses act as a reference. Moreover, a lesion grounding module is designed to locate abnormalities in the image, further improving the working efficiency of radiologists. To facilitate label-efficient training, we propose an omni-supervised learning strategy with clinical consistency to leverage various types of annotations from different datasets. Our efforts lead to 1) an omni-labeled RRG dataset with QA pairs and lesion boxes; 2) an evaluation tool for assessing the accuracy of reports in describing lesion location and severity; 3) extensive experiments demonstrating the effectiveness of CoD, where it consistently outperforms both specialist and generalist models on two RRG benchmarks and shows promising explainability by accurately grounding generated sentences to QA diagnoses and images.

Beyond Recanalization: Machine Learning-Based Insights into Post-Thrombectomy Vascular Morphology in Stroke Patients.

Deshpande A, Laksari K, Tahsili-Fahadan P, Latour LL, Luby M

PubMed | Jul 3, 2025
Many stroke patients have poor outcomes despite successful endovascular therapy (EVT). We hypothesized that machine learning (ML)-based analysis of vascular changes post-EVT could identify macrovascular perfusion deficits such as residual hypoperfusion and distal emboli. Patients with anterior circulation large vessel occlusion (LVO) stroke, pre- and post-EVT MRI, and successful recanalization (mTICI 2b/3) were included. An ML algorithm extracted vascular features from pre- and 24-hour post-EVT MRA. A ≥100% increase in ipsilateral arterial branch length was considered significant. Perfusion deficits were defined using PWI, MTT, or distal clot presence; early neurological improvement (ENI) by a 24-hour NIHSS decrease ≥4 or NIHSS 0-1. Among 44 patients (median age 63), 71% had complete reperfusion. Those with distal clot had smaller arterial length increases (51% vs. 134%, p=0.05). ENI patients showed greater arterial length increases (161% vs. 67%, p=0.023). ML-based vascular analysis post-EVT correlates with perfusion deficits and may guide adjunctive therapy. ABBREVIATIONS: EVT = Endovascular Thrombectomy, LVO = Large Vessel Occlusion, ENI = Early Neurological Improvement, AIS = Acute Ischemic Stroke, mTICI = Modified Thrombolysis in Cerebral Infarction.

ComptoNet: a Compton-map guided deep learning framework for multi-scatter estimation in multi-source stationary CT.

Xia Y, Zhang L, Xing Y, Chen Z, Gao H

PubMed | Jul 3, 2025
Multi-source stationary computed tomography (MSS-CT) offers significant advantages in medical and industrial applications due to its gantryless scan architecture and capability of simultaneous multi-source emission. However, the lack of anti-scatter grid deployment in MSS-CT leads to severe forward and cross scatter contamination, necessitating accurate and efficient scatter correction. In this work, we propose ComptoNet, an innovative decoupled deep learning framework that integrates Compton-scattering physics with deep learning for scatter estimation in MSS-CT. The core innovation lies in the Compton-map, a representation of large-angle Compton scatter signals outside the scan field of view. ComptoNet employs a dual-network architecture: a Conditional Encoder-Decoder Network (CED-Net) guided by reference Compton-maps and spare detector data for cross scatter estimation, and a Frequency U-Net with attention mechanisms for forward scatter correction. Experiments on Monte Carlo-simulated data demonstrate ComptoNet's superior performance, achieving a mean absolute percentage error (MAPE) of 0.84% on scatter estimation. After correction, CT images show nearly artifact-free quality, validating ComptoNet's robustness in mitigating scatter-induced errors across diverse photon counts and phantoms.
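The reported scatter-estimation error is MAPE. A minimal sketch of the metric, applied here to illustrative numbers rather than the paper's data:

```python
import numpy as np

def mape(estimated, reference):
    """Mean absolute percentage error, in percent."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(estimated - reference) / np.abs(reference))

# illustrative scatter estimates vs. ground-truth values (not paper data)
print(round(mape([99.0, 101.0], [100.0, 100.0]), 6))  # 1.0
```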