Page 8 of 45441 results

DNP-Guided Contrastive Reconstruction with a Reverse Distillation Transformer for Medical Anomaly Detection

Luhu Li, Bowen Lin, Mukhtiar Khan, Shujun Fu

arXiv preprint, Aug 27 2025
Anomaly detection in medical images is challenging due to limited annotations and a domain gap compared to natural images. Existing reconstruction methods often rely on frozen pre-trained encoders, which limits adaptation to domain-specific features and reduces localization accuracy. Prototype-based learning offers interpretability and clustering benefits but suffers from prototype collapse, where few prototypes dominate training, harming diversity and generalization. To address this, we propose a unified framework combining a trainable encoder with prototype-guided reconstruction and a novel Diversity-Aware Alignment Loss. The trainable encoder, enhanced by a momentum branch, enables stable domain-adaptive feature learning. A lightweight Prototype Extractor mines informative normal prototypes to guide the decoder via attention for precise reconstruction. Our loss enforces balanced prototype use through diversity constraints and per-prototype normalization, effectively preventing collapse. Experiments on multiple medical imaging benchmarks show significant improvements in representation quality and anomaly localization, outperforming prior methods. Visualizations and prototype assignment analyses further validate the effectiveness of our anti-collapse mechanism and enhanced interpretability.
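The abstract does not spell out the Diversity-Aware Alignment Loss, but the collapse problem it targets is easy to quantify: if a few prototypes absorb most assignments, the entropy of prototype usage drops. A minimal sketch of that diagnostic (the `prototype_usage_entropy` helper is hypothetical, not the paper's loss):

```python
import numpy as np

def prototype_usage_entropy(assignments, n_prototypes):
    """Shannon entropy of prototype usage across a batch.
    Low entropy signals collapse (a few prototypes dominate).
    Hypothetical helper for illustration, not the paper's loss."""
    counts = np.bincount(assignments, minlength=n_prototypes).astype(float)
    probs = counts / counts.sum()
    probs = probs[probs > 0]          # drop unused prototypes to avoid log(0)
    return float(-np.sum(probs * np.log(probs)))

# Balanced usage reaches the maximum entropy log(n_prototypes);
# collapsed usage falls far below it.
balanced = np.array([0, 1, 2, 3] * 25)       # each of 4 prototypes used 25x
collapsed = np.array([0] * 97 + [1, 2, 3])   # prototype 0 dominates
```

A diversity constraint of the kind described would push training toward the high-entropy regime rather than letting assignments concentrate.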

A robust deep learning framework for cerebral microbleeds recognition in GRE and SWI MRI.

Hassanzadeh T, Sachdev S, Wen W, Sachdev PS, Sowmya A

PubMed, Aug 27 2025
Cerebral microbleeds (CMB) are small hypointense lesions visible on gradient echo (GRE) or susceptibility-weighted imaging (SWI) MRI, serving as critical biomarkers for various cerebrovascular and neurological conditions. Accurate quantification of CMB is essential, as their number correlates with the severity of conditions such as small vessel disease, stroke risk and cognitive decline. Current detection methods depend on manual inspection, which is time-consuming and prone to variability. Automated detection using deep learning presents a transformative solution but faces challenges due to the heterogeneous appearance of CMB, high false-positive rates, and similarity to other artefacts. This study investigates the application of deep learning techniques to public (ADNI and AIBL) and private (OATS and MAS) datasets, leveraging GRE and SWI MRI modalities to enhance CMB detection accuracy, reduce false positives, and ensure robustness in both clinical and normal cases (i.e., scans without cerebral microbleeds). A 3D convolutional neural network (CNN) was developed for automated detection, complemented by a You Only Look Once (YOLO)-based approach to address false-positive cases in more complex scenarios. The pipeline incorporates extensive preprocessing and validation, demonstrating robust performance across a diverse range of datasets. The proposed method performs strongly across all four datasets: ADNI (balanced accuracy 0.953, AUC 0.955, precision 0.954, sensitivity 0.920, F1-score 0.930); AIBL (balanced accuracy 0.968, AUC 0.956, precision 0.956, sensitivity 0.938, F1-score 0.946); MAS (balanced accuracy 0.889, AUC 0.889, precision 0.948, sensitivity 0.779, F1-score 0.851); and OATS (balanced accuracy 0.930, AUC 0.930, precision 0.949, sensitivity 0.862, F1-score 0.900). These results highlight the potential of deep learning models to improve early diagnosis and support treatment planning for conditions associated with CMB.
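All of the figures reported here (balanced accuracy, precision, sensitivity, F1-score) derive from confusion-matrix counts. A small sketch of the arithmetic, using made-up counts rather than the study's data:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard detection metrics from confusion-matrix counts
    (true/false positives and negatives). Illustrative only; the
    study's per-dataset evaluation protocol is not reproduced here."""
    sensitivity = tp / (tp + fn)          # recall on CMB-positive cases
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return {
        "balanced_accuracy": (sensitivity + specificity) / 2,
        "precision": precision,
        "sensitivity": sensitivity,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }

metrics = detection_metrics(tp=92, fp=4, tn=98, fn=8)  # made-up counts
```

Balanced accuracy is the natural headline metric here because scans without microbleeds far outnumber positive cases, and plain accuracy would reward always predicting "no CMB".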

Application of artificial intelligence in medical imaging for tumor diagnosis and treatment: a comprehensive approach.

Huang J, Xiang Y, Gan S, Wu L, Yan J, Ye D, Zhang J

PubMed, Aug 26 2025
This narrative review provides a comprehensive and structured overview of recent advances in the application of artificial intelligence (AI) to medical imaging for tumor diagnosis and treatment. By synthesizing evidence from recent literature and clinical reports, we highlight the capabilities, limitations, and translational potential of AI techniques across key imaging modalities such as CT, MRI, and PET. Deep learning (DL) and radiomics have facilitated automated lesion detection, tumour segmentation, and prognostic assessments, improving early cancer detection across various malignancies, including breast, lung, and prostate cancers. AI-driven multi-modal imaging fusion integrates radiomics, genomics, and clinical data, refining precision oncology strategies. Additionally, AI-assisted radiotherapy planning and adaptive dose optimisation have enhanced therapeutic efficacy while minimising toxicity. However, challenges persist regarding data heterogeneity, model generalisability, regulatory constraints, and ethical concerns. The lack of standardised datasets and explainable AI (XAI) frameworks hinders clinical adoption. Future research should focus on improving AI interpretability, fostering multi-centre dataset interoperability, and integrating AI with molecular imaging and real-time clinical decision support. Addressing these challenges will ensure AI's seamless integration into clinical oncology, optimising cancer diagnosis, prognosis, and treatment outcomes.

Development of a deep learning method to identify acute ischaemic stroke lesions on brain CT.

Fontanella A, Li W, Mair G, Antoniou A, Platt E, Armitage P, Trucco E, Wardlaw JM, Storkey A

PubMed, Aug 26 2025
CT is commonly used to image patients with ischaemic stroke but radiologist interpretation may be delayed. Machine learning techniques can provide rapid automated CT assessment but are usually developed from annotated images which necessarily limits the size and representation of development data sets. We aimed to develop a deep learning (DL) method using CT brain scans that were labelled but not annotated for the presence of ischaemic lesions. We designed a convolutional neural network-based DL algorithm to detect ischaemic lesions on CT. Our algorithm was trained using routinely acquired CT brain scans collected for a large multicentre international trial. These scans had previously been labelled by experts for acute and chronic appearances. We explored the impact of ischaemic lesion features, background brain appearances and timing of CT (baseline or 24-48 hour follow-up) on DL performance. Of 5772 CT scans from 2347 patients (median age 82), 54% showed visible ischaemic lesions according to experts. Our DL method achieved 72% accuracy in detecting ischaemic lesions. Detection was better for larger (80% accuracy) or multiple (87% accuracy for two, 100% for three or more) lesions and with follow-up scans (76% accuracy vs 67% at baseline). Chronic brain conditions reduced accuracy, particularly non-stroke lesions and old stroke lesions (32% and 31% error rates, respectively). DL methods can be designed for ischaemic lesion detection on CT using the vast quantities of routinely collected brain scans without the need for lesion annotation. Ultimately, this should lead to more robust and widely applicable methods.

Anatomy-aware transformer-based model for precise rectal cancer detection and localization in MRI scans.

Li S, Zhang Y, Hong Y, Yuan W, Sun J

PubMed, Aug 25 2025
Rectal cancer is a major cause of cancer-related mortality, requiring accurate diagnosis via MRI scans. However, detecting rectal cancer in MRI scans is challenging due to image complexity and the need for precise localization. While transformer-based object detection has excelled in natural images, applying these models to medical data is hindered by limited medical imaging resources. To address this, we propose the Spatially Prioritized Detection Transformer (SP DETR), which incorporates a Spatially Prioritized (SP) Decoder to constrain anchor boxes to regions of interest (ROI) based on anatomical maps, focusing the model on areas most likely to contain cancer. Additionally, the SP cross-attention mechanism refines the learning of anchor box offsets. To improve small cancer detection, we introduce the Global Context-Guided Feature Fusion Module (GCGFF), leveraging a transformer encoder for global context and a Globally-Guided Semantic Fusion Block (GGSF) to enhance high-level semantic features. Experimental results show that our model significantly improves detection accuracy, especially for small rectal cancers, demonstrating the effectiveness of integrating anatomical priors with transformer-based models for clinical applications.

Application of artificial intelligence in the diagnosis of scaphoid fractures: impact of automated detection of scaphoid fractures in a real-life study.

Hernáiz Ferrer AI, Bortolotto C, Carone L, Preda EM, Fichera C, Lionetti A, Gambini G, Fresi E, Grassi FA, Preda L

PubMed, Aug 23 2025
We evaluated the diagnostic performance of two AI software programs (BoneView and RBfracture) in assisting non-specialist radiologists (NSRs) in detecting scaphoid fractures using conventional wrist radiographs (X-rays). We retrospectively analyzed 724 radiographs from 264 patients with wrist trauma. Patients were classified into two groups: Group 1 included cases with a definitive diagnosis by a specialist radiologist (SR) based on X-rays (either scaphoid fracture or not), while Group 2 comprised cases indeterminate for the SR, requiring a CT scan for a final diagnosis. Indeterminate cases were defined as negative or doubtful X-rays in patients with persistent clinical symptoms. The X-rays were evaluated by AI and two NSRs, independently and in combination. We compared their diagnostic performances using sensitivity, specificity, area under the curve (AUC), and Cohen's kappa for diagnostic agreement. Group 1 included 174 patients, with 80 cases (45.97%) of scaphoid fractures. Group 2 comprised 90 patients: 44 with uncertain diagnoses and 46 with negative X-rays but persistent symptoms. Scaphoid fractures were identified in 51 patients (56.67%) in Group 2 after further CT imaging. In Group 1, AI performed similarly to NSRs (AUC: BoneView 0.83, RBfracture 0.84, NSR1 0.88, NSR2 0.90), without significant contribution of AI to the performance of NSRs. In Group 2, performances were lower (AUC: BoneView 0.62, RBfracture 0.65, NSR1 0.46, NSR2 0.63), but AI assistance significantly improved NSR performance (NSR2 + BoneView AUC = 0.75, p = 0.003; NSR2 + RBfracture AUC = 0.72, p = 0.030). Diagnostic agreement between NSR1 with AI support and SR was moderate (kappa = 0.576), and substantial for NSR2 (kappa = 0.712). AI tools may effectively assist NSRs, especially in complex scaphoid fracture cases.
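Cohen's kappa, used here for diagnostic agreement, corrects the raw agreement rate for the agreement two raters would reach by chance. A compact sketch of the computation (the example ratings are illustrative, not the study's data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' labels over the same cases
    (e.g. NSR-with-AI vs. SR): observed agreement corrected for
    chance agreement. Illustrative sketch of the standard formula."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies
    expected = sum((rater_a.count(label) / n) * (rater_b.count(label) / n)
                   for label in labels)
    return (observed - expected) / (1 - expected)
```

On the common Landis-Koch scale, 0.41-0.60 counts as moderate and 0.61-0.80 as substantial agreement, which matches the abstract's reading of kappa = 0.576 and 0.712.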

ESR Essentials: lung cancer screening with low-dose CT-practice recommendations by the European Society of Thoracic Imaging.

Revel MP, Biederer J, Nair A, Silva M, Jacobs C, Snoeckx A, Prokop M, Prosch H, Parkar AP, Frauenfelder T, Larici AR

PubMed, Aug 23 2025
Low-dose CT lung cancer screening (LCS) reduces the risk of death from lung cancer by at least 21% in high-risk participants and should be offered to people aged between 50 and 75 with at least 20 pack-years of smoking. Iterative reconstruction or deep learning algorithms should be used to keep the effective dose below 1 mSv. Deep learning algorithms are required to facilitate the detection of nodules and the measurement of their volumetric growth. Only solid nodules larger than 500 mm³, those with spiculations, bubble-like lucencies, or pleural indentation, and complex cysts should be investigated further. Short-term follow-up at 3 or 6 months is required for solid nodules of 100 to 500 mm³. A watchful waiting approach is recommended for most subsolid nodules, to limit the risk of overtreatment. Finally, the description of additional findings must be limited if LCS is to be cost-effective. KEY POINTS: Low-dose CT screening reduces the risk of death from lung cancer by at least 21% in high-risk individuals, with a greater benefit in women. Quality assurance of screening is essential to control radiation dose and the number of false positives. Screening with low-dose CT scans detects incidental findings of variable clinical relevance; only those of importance should be reported.
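The volume thresholds quoted above lend themselves to a simple triage rule. This toy sketch encodes only the thresholds mentioned in the abstract; the full ESTI recommendations contain many more rules (subsolid nodules, growth rates, quality assurance) than this:

```python
def triage_solid_nodule(volume_mm3, suspicious_morphology=False):
    """Toy triage of a SOLID nodule using only the thresholds quoted
    in the abstract. `suspicious_morphology` stands in for spiculations,
    bubble-like lucencies, or pleural indentation. Not a clinical tool."""
    if volume_mm3 > 500 or suspicious_morphology:
        return "investigate further"
    if volume_mm3 >= 100:
        return "follow-up CT at 3-6 months"
    return "routine screening interval"
```

The point of hard volumetric cut-offs (rather than diameter estimates) is why the abstract insists on deep learning volumetry: small measurement differences move nodules across management categories.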

Diagnostic value of artificial intelligence-based software for the detection of pediatric upper extremity fractures.

Mollica F, Metz C, Anders MS, Wismayer KK, Schmid A, Niehues SM, Veldhoen S

PubMed, Aug 23 2025
Fractures in children are common in emergency care, and accurate diagnosis is crucial to avoid complications affecting skeletal development. Limited access to pediatric radiology specialists emphasizes the potential of artificial intelligence (AI)-based diagnostic tools. This study evaluates the performance of the AI software BoneView® for detecting fractures of the upper extremity in children aged 2-18 years. A retrospective analysis was conducted using radiographic data from 826 pediatric patients presenting to the university's pediatric emergency department. Independent assessments by two experienced pediatric radiologists served as reference standard. The diagnostic accuracy of the AI tool compared to the reference standard was evaluated, and performance parameters, e.g., sensitivity, specificity, and positive and negative predictive values, were calculated. The AI tool achieved an overall sensitivity of 89% and specificity of 91% for detecting fractures of the upper extremities. Significantly poorer performance compared to the reference standard was observed for the shoulder, elbow, hand, and fingers, while no significant difference was found for the wrist, clavicle, upper arm, and forearm. The software performed best for wrist fractures (sensitivity: 96%; specificity: 94%) and worst for elbow fractures (sensitivity: 87%; specificity: 65%). The assessed software provides diagnostic support in pediatric emergency radiology. While its overall performance is robust, limitations in specific anatomical regions underscore the need for further training of the underlying algorithms. The results suggest that AI can complement clinical expertise but should not replace radiological assessment.
Question: There is no comprehensive analysis of an AI-based tool for the diagnosis of pediatric fractures focusing on the upper extremities.
Findings: The AI-based software demonstrated solid overall diagnostic accuracy in the detection of upper limb fractures in children, with performance differing by anatomical region.
Clinical relevance: AI-based fracture detection can support pediatric emergency radiology, especially where expert interpretation is limited. However, further algorithm training is needed for certain anatomical regions and for detecting associated findings such as joint effusions to maximize clinical benefit.
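Sensitivity and specificity are properties of the tool alone, but the predictive values such studies also report depend on how common fractures are in the population scanned. Bayes' rule makes that dependence explicit; the 30% prevalence below is illustrative, not a figure from the study:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from sensitivity and
    specificity via Bayes' rule. Prevalence is the fraction of truly
    fractured cases among those imaged."""
    p_positive_test = (sensitivity * prevalence
                       + (1 - specificity) * (1 - prevalence))
    ppv = sensitivity * prevalence / p_positive_test
    npv = specificity * (1 - prevalence) / (1 - p_positive_test)
    return ppv, npv

# Overall sensitivity/specificity quoted in the abstract, toy prevalence
ppv, npv = predictive_values(sensitivity=0.89, specificity=0.91, prevalence=0.30)
```

This is why the same software can look more or less useful across emergency departments with different case mixes, even when its sensitivity and specificity are unchanged.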

Development and validation of a keypoint region-based convolutional neural network to automate thoracic Cobb angle measurements using whole-spine standing radiographs.

Dagli MM, Sussman JH, Gujral J, Budihal BR, Kerr M, Yoon JW, Ozturk AK, Cahill PJ, Anari J, Winkelstein BA, Welch WC

PubMed, Aug 23 2025
Adolescent idiopathic scoliosis (AIS) affects a significant portion of the adolescent population, leading to severe spinal deformities if untreated. Diagnosis, surgical planning, and assessment of outcomes are determined primarily by the Cobb angle on anteroposterior spinal radiographs. Screening for scoliosis enables early interventions and improved outcomes. However, screenings are often conducted through school entities where a trained radiologist may not be available to accurately interpret the imaging results. In this study, we developed an artificial intelligence tool utilizing a keypoint region-based convolutional neural network (KR-CNN) for automated thoracic Cobb angle (TCA) measurement. The KR-CNN was trained on 609 whole-spine radiographs of AIS patients and validated using our institutional AIS registry, which included 83 patients who underwent posterior spinal fusion with both preoperative and postoperative anteroposterior X-ray images. The KR-CNN model demonstrated superior performance metrics, including a mean absolute error (MAE) of 2.22, mean squared error (MSE) of 9.1, symmetric mean absolute percentage error (SMAPE) of 4.29, and intraclass correlation coefficient (ICC) of 0.98, outperforming existing methods. This method will enable fast and accurate screening for AIS and assessment of postoperative outcomes and provides a development framework for further automation and validation of spinopelvic measurements.
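The measurement the KR-CNN automates is, geometrically, the angle between the most-tilted upper and lower vertebral endplate lines. A sketch of that final step from predicted keypoints (the coordinates in the test are made up; the paper's contribution is predicting the keypoints themselves):

```python
import math

def cobb_angle(endplate_a, endplate_b):
    """Cobb angle in degrees between two endplate lines, each given
    as a pair of (x, y) keypoints. Geometry-only sketch; handles
    endplates traced in either direction."""
    def line_angle(endplate):
        (x1, y1), (x2, y2) = endplate
        return math.degrees(math.atan2(y2 - y1, x2 - x1))
    # Lines are undirected, so reduce the difference to the [0, 90] range
    diff = abs(line_angle(endplate_a) - line_angle(endplate_b)) % 180.0
    return min(diff, 180.0 - diff)
```

Against this definition, the reported mean absolute error of 2.22 is well inside the 5-degree inter-rater variability commonly accepted for manual Cobb measurement.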

Towards expert-level autonomous carotid ultrasonography with large-scale learning-based robotic system.

Jiang H, Zhao A, Yang Q, Yan X, Wang T, Wang Y, Jia N, Wang J, Wu G, Yue Y, Luo S, Wang H, Ren L, Chen S, Liu P, Yao G, Yang W, Song S, Li X, He K, Huang G

PubMed, Aug 23 2025
Carotid ultrasound requires skilled operators due to small vessel dimensions and high anatomical variability, exacerbating sonographer shortages and diagnostic inconsistencies. Prior automation attempts, including rule-based approaches with manual heuristics and reinforcement learning trained in simulated environments, demonstrate limited generalizability and fail to complete real-world clinical workflows. Here, we present UltraBot, a fully learning-based autonomous carotid ultrasound robot, achieving human-expert-level performance through four innovations: (1) A unified imitation learning framework for acquiring anatomical knowledge and scanning operational skills; (2) A large-scale expert demonstration dataset (247,000 samples, a 100× scale-up), enabling embodied foundation models with strong generalization; (3) A comprehensive scanning protocol ensuring full anatomical coverage for biometric measurement and plaque screening; (4) Clinically oriented validation showing over 90% success rates, expert-level accuracy, and up to 5.5× higher reproducibility across diverse unseen populations. Overall, we show that large-scale deep learning offers a promising pathway toward autonomous, high-precision ultrasonography in clinical practice.