Automated neuroradiological support systems for multiple cerebrovascular disease markers - A systematic review and meta-analysis.

Phitidis J, O'Neil AQ, Whiteley WN, Alex B, Wardlaw JM, Bernabeu MO, Hernández MV

pubmed · Jun 1 2025
Cerebrovascular diseases (CVD) can lead to stroke and dementia. Stroke is the second leading cause of death worldwide, and dementia incidence is increasing year on year. Several markers of CVD are visible on brain imaging, including white matter hyperintensities (WMH), acute and chronic ischaemic stroke lesions (ISL), lacunes, enlarged perivascular spaces (PVS), acute and chronic haemorrhagic lesions, and cerebral microbleeds (CMB). Brain atrophy also occurs in CVD. These markers are important for patient management and intervention, since they indicate elevated risk of future stroke and dementia. We systematically reviewed automated systems designed to support radiologists reporting on these CVD imaging findings. We considered commercially available software and research publications which identify at least two CVD markers. In total, we included 29 commercial products and 13 research publications. Two distinct types of commercial support system were available: those which identify acute stroke lesions (haemorrhagic and ischaemic) from computed tomography (CT) scans, mainly for the purpose of patient triage; and those which measure WMH and atrophy regionally and longitudinally. In research, WMH and ISL were the markers most frequently analysed together, from magnetic resonance imaging (MRI) scans; lacunes and PVS were each targeted only twice and CMB only once. For stroke, commercially available systems largely support the emergency setting, whilst research systems also consider follow-up and routine scans. The systems that quantify WMH and atrophy are focused on neurodegenerative disease support, where these CVD markers are also of significance. There are currently no openly validated systems, commercial or research, that perform a comprehensive joint analysis of all CVD markers (WMH, ISL, lacunes, PVS, haemorrhagic lesions, CMB, and atrophy).

Brain tumor segmentation with deep learning: Current approaches and future perspectives.

Verma A, Yadav AK

pubmed · Jun 1 2025
Accurate brain tumor segmentation from MRI images is critical in the medical domain, as it directly impacts the efficacy of diagnostic and treatment plans. Accurate segmentation of the tumor region can be challenging, especially when noise and abnormalities are present. This research provides a systematic review of automatic brain tumor segmentation techniques, with a specific focus on the design of network architectures. The review categorizes existing methods into unsupervised and supervised learning techniques, and, within supervised techniques, into machine learning and deep learning approaches. Deep learning techniques are thoroughly reviewed, with a particular focus on CNN-based, U-Net-based, transfer learning-based, transformer-based, and hybrid transformer-based methods. This survey encompasses a broad spectrum of automatic segmentation methodologies, from traditional machine learning approaches to advanced deep learning frameworks. It provides an in-depth comparison of performance metrics, model efficiency, and robustness across multiple datasets, particularly the BraTS dataset. The study further examines multi-modal MRI imaging and its influence on segmentation accuracy, addressing domain adaptation, class imbalance, and generalization challenges. The analysis highlights the current challenges in computer-aided diagnostic (CAD) systems, examining how different models and imaging sequences impact performance. Recent advancements in deep learning, especially the widespread use of U-Net architectures, have significantly enhanced medical image segmentation. This review critically evaluates these developments, focusing on the iterative improvements in U-Net models that have driven progress in brain tumor segmentation. Furthermore, it explores various techniques for improving U-Net performance in medical applications, focusing on its potential for improving diagnostic and treatment planning procedures. The efficiency of these automated segmentation approaches is rigorously evaluated on the BraTS dataset, a benchmark from the annual Multimodal Brain Tumor Segmentation Challenge (MICCAI). This evaluation provides insights into the current state of the art and identifies key areas for future research and development.
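
The review's central object, the U-shaped encoder-decoder with skip connections, is compact enough to sketch directly. Below is a minimal two-level U-Net in PyTorch; the channel widths, depth, and the four-modality/four-class BraTS-style interface are illustrative assumptions, not taken from any paper surveyed here.

```python
# Minimal 2-D U-Net sketch (illustrative sizes, not any specific surveyed model).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BatchNorm and ReLU: the standard U-Net unit.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=4):  # assumed: 4 MRI modalities, 4 labels
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 skip channels + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 4, 128, 128))  # -> (1, 4, 128, 128)
```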

Development and validation of a 3-D deep learning system for diabetic macular oedema classification on optical coherence tomography images.

Zhu H, Ji J, Lin JW, Wang J, Zheng Y, Xie P, Liu C, Ng TK, Huang J, Xiong Y, Wu H, Lin L, Zhang M, Zhang G

pubmed · May 31 2025
To develop and validate an automated diabetic macular oedema (DME) classification system based on images from different three-dimensional optical coherence tomography (3-D OCT) devices. A multicentre, platform-based development study using retrospective and cross-sectional data. Data were subjected to a two-level grading system by trained graders and a retina specialist, and categorised into three types: no DME, non-centre-involved DME, and centre-involved DME (CI-DME). A 3-D convolutional neural network algorithm was used to develop the DME classification system. The deep learning (DL) performance was compared with that of diabetic retinopathy experts. Data were collected from the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Chaozhou People's Hospital, and The Second Affiliated Hospital of Shantou University Medical College from January 2010 to December 2023. 7790 volumes of 7146 eyes from 4254 patients were annotated, of which 6281 images were used as the development set and 1509 images as the external validation set, split by centre. Accuracy, F1-score, sensitivity, specificity, area under the receiver operating characteristic curve (AUROC), and Cohen's kappa were calculated to evaluate the performance of the DL algorithm. In classifying DME versus non-DME, our model achieved AUROCs of 0.990 (95% CI 0.983 to 0.996) and 0.916 (95% CI 0.902 to 0.930) on the hold-out testing and external validation datasets, respectively. In distinguishing CI-DME from non-centre-involved DME, our model achieved AUROCs of 0.859 (95% CI 0.812 to 0.906) and 0.881 (95% CI 0.859 to 0.902), respectively. In addition, our system showed performance (Cohen's κ: 0.85 and 0.75) comparable to that of the retina experts (Cohen's κ: 0.58-0.92 and 0.70-0.71). Our DL system achieved high accuracy in multiclass DME classification on 3-D OCT images, and can be applied to population-based DME screening.
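
For readers unfamiliar with volumetric classifiers, here is a minimal 3-D CNN sketch in PyTorch for the three-way grading described above (no DME / non-centre-involved DME / CI-DME). The layer sizes and input volume shape are illustrative assumptions, not the study's architecture.

```python
# Minimal 3-D CNN classifier sketch (illustrative, not the study's network).
import torch
import torch.nn as nn

class OCTVolumeNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling handles variable volume sizes
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        return self.classifier(self.features(x).flatten(1))

# e.g. an assumed 64-slice OCT volume resampled to 128x128 per B-scan
logits = OCTVolumeNet()(torch.randn(2, 1, 64, 128, 128))  # -> (2, 3)
```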

Accelerated proton resonance frequency-based magnetic resonance thermometry by optimized deep learning method.

Xu S, Zong S, Mei CS, Shen G, Zhao Y, Wang H

pubmed · May 31 2025
Proton resonance frequency (PRF)-based magnetic resonance (MR) thermometry plays a critical role in thermal ablation therapies delivered by focused ultrasound (FUS). For clinical applications, accurate and rapid temperature feedback is essential to ensure both the safety and effectiveness of these treatments. This work aims to improve the temporal resolution of dynamic MR temperature map reconstruction using an enhanced deep learning method, thereby supporting the real-time monitoring required for effective FUS treatments. Five classical neural network architectures (cascade net, complex-valued U-Net, shifted-window transformer for MRI, real-valued U-Net, and U-Net with residual blocks), along with training-optimization methods, were applied to reconstruct temperature maps from 2-fold and 4-fold undersampled k-space data. The training enhancements included pre-training/training-phase data augmentation, knowledge distillation, and a novel amplitude-phase decoupling loss function. Phantom and ex vivo tissue heating experiments were conducted using a FUS transducer. Ground truth was the complex MR images with accurate temperature changes; the datasets were manually undersampled to simulate acceleration. Separate testing datasets were used to evaluate real-time performance and temperature accuracy. Furthermore, our proposed deep learning-based rapid reconstruction approach was validated on a clinical dataset obtained from patients with uterine fibroids, demonstrating its clinical applicability. Acceleration factors of 1.9 and 3.7 were achieved for 2× and 4× k-space undersampling, respectively. The deep learning-based reconstruction using ResUNet with the four optimizations showed superior performance. For 2-fold acceleration, the RMSE of temperature map patches was 0.89°C and 1.15°C for the phantom and ex vivo testing datasets, respectively. The Dice coefficient for the 43°C isotherm-enclosed regions was 0.81, and Bland-Altman analysis indicated a bias of -0.25°C with limits of agreement of ±2.16°C. In the 4-fold undersampling case, these evaluation metrics showed approximately a 10% reduction in accuracy. Additionally, the Dice coefficients measuring the overlap between the reconstructed temperature maps (using the optimized ResUNet) and the ground truth in regions where the temperature exceeded the 43°C threshold were 0.77 and 0.74 for the 2× and 4× undersampling scenarios, respectively. This study demonstrates that deep learning-based reconstruction significantly enhances the accuracy and efficiency of MR thermometry, particularly in the context of FUS-based clinical treatments for uterine fibroids. The approach could also be extended to other applications, such as essential tremor and prostate cancer treatments, where MRI-guided FUS plays a critical role.
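
The physics the reconstructed images feed into is worth stating explicitly: in PRF thermometry, the temperature change is proportional to the phase difference between a dynamic image and a pre-heating baseline, ΔT = Δφ / (2π · γ · α · B0 · TE), with thermal coefficient α ≈ -0.01 ppm/°C. The sketch below implements this standard relation; the field strength, echo time, and test arrays are illustrative assumptions.

```python
# PRF temperature-change sketch; B0, TE, and the demo arrays are assumptions.
import numpy as np

GAMMA_HZ_PER_T = 42.58e6   # proton gyromagnetic ratio (Hz/T)
ALPHA = -0.01e-6           # PRF thermal coefficient (~ -0.01 ppm/degC)

def prf_temperature_change(img_dynamic, img_baseline, b0=3.0, te=0.012):
    """Temperature change map (degC) from complex images; TE in seconds."""
    # Phase difference via the complex conjugate product avoids wrapping
    # artefacts from subtracting wrapped phase maps directly.
    dphi = np.angle(img_dynamic * np.conj(img_baseline))
    return dphi / (2 * np.pi * GAMMA_HZ_PER_T * ALPHA * b0 * te)

baseline = np.exp(1j * np.random.rand(64, 64))
# Simulate a uniform 10 degC rise and recover it from the phase difference.
heated = baseline * np.exp(1j * 2 * np.pi * GAMMA_HZ_PER_T * ALPHA * 3.0 * 0.012 * 10.0)
print(prf_temperature_change(heated, baseline).mean())  # ~10 degC
```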

Pretraining Deformable Image Registration Networks with Random Images

Junyu Chen, Shuwen Wei, Yihao Liu, Aaron Carass, Yong Du

arxiv preprint · May 30 2025
Recent advances in deep learning-based medical image registration have shown that training deep neural networks (DNNs) does not necessarily require medical images. Previous work showed that DNNs trained on randomly generated images with carefully designed noise and contrast properties can still generalize well to unseen medical data. Building on this insight, we propose using registration between random images as a proxy task for pretraining a foundation model for image registration. Empirical results show that our pretraining strategy improves registration accuracy, reduces the amount of domain-specific data needed to achieve competitive performance, and accelerates convergence during downstream training, thereby enhancing computational efficiency.
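
As one hedged illustration of what "registration between random images" could look like in practice, the sketch below generates a smoothed-noise image and a randomly warped copy to form a (fixed, moving) pretraining pair. The generation recipe is an assumption for illustration; the paper's actual noise and contrast design may differ.

```python
# Random proxy-data sketch for registration pretraining (recipe is assumed).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_image(shape=(128, 128), smooth=4.0):
    # Smoothed uniform noise, normalized to [0, 1], as a label-free "image".
    img = gaussian_filter(np.random.rand(*shape), smooth)
    img -= img.min()
    return img / img.max()

def random_warp(img, magnitude=8.0, smooth=12.0):
    # Apply a smooth random displacement field to the sampling grid.
    grid = np.meshgrid(*map(np.arange, img.shape), indexing="ij")
    flow = [gaussian_filter(np.random.randn(*img.shape), smooth) * magnitude
            for _ in img.shape]
    coords = [g + f for g, f in zip(grid, flow)]
    return map_coordinates(img, coords, order=1, mode="nearest")

fixed = random_image()
moving = random_warp(fixed)  # one (fixed, moving) pretraining pair
```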

ACM-UNet: Adaptive Integration of CNNs and Mamba for Efficient Medical Image Segmentation

Jing Huang, Yongkang Zhao, Yuhan Li, Zhitao Dai, Cheng Chen, Qiying Lai

arxiv preprint · May 30 2025
The U-shaped encoder-decoder architecture with skip connections has become a prevailing paradigm in medical image segmentation due to its simplicity and effectiveness. Many recent works aim to improve this framework by designing more powerful encoders and decoders, employing advanced convolutional neural networks (CNNs) for local feature extraction, Transformers or state space models (SSMs) such as Mamba for global context modeling, or hybrid combinations of both; however, these methods often struggle to fully utilize pretrained vision backbones (e.g., ResNet, ViT, VMamba) due to structural mismatches. To bridge this gap, we introduce ACM-UNet, a general-purpose segmentation framework that retains a simple UNet-like design while effectively incorporating pretrained CNNs and Mamba models through a lightweight adapter mechanism. This adapter resolves architectural incompatibilities and enables the model to harness the complementary strengths of CNNs and SSMs, namely fine-grained local detail extraction and long-range dependency modeling. Additionally, we propose a hierarchical multi-scale wavelet transform module in the decoder to enhance feature fusion and reconstruction fidelity. Extensive experiments on the Synapse and ACDC benchmarks demonstrate that ACM-UNet achieves state-of-the-art performance while remaining computationally efficient. Notably, it reaches an 85.12% Dice score and 13.89 mm HD95 on the Synapse dataset at 17.93G FLOPs, showcasing its effectiveness and scalability. Code is available at: https://github.com/zyklcode/ACM-UNet.
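
One plausible reading of the "lightweight adapter mechanism" is a per-stage projection that maps a pretrained backbone's channel widths onto those a UNet-style decoder expects. The sketch below shows that idea in PyTorch; the module name, 1x1-convolution design, and channel sizes are illustrative assumptions, not the authors' exact component.

```python
# Hypothetical FeatureAdapter sketch; not the ACM-UNet implementation.
import torch
import torch.nn as nn

class FeatureAdapter(nn.Module):
    def __init__(self, backbone_ch, decoder_ch):
        super().__init__()
        # 1x1 projection + normalization to reconcile channel widths.
        self.proj = nn.Conv2d(backbone_ch, decoder_ch, kernel_size=1)
        self.norm = nn.BatchNorm2d(decoder_ch)
        self.act = nn.GELU()

    def forward(self, feat):
        return self.act(self.norm(self.proj(feat)))

# e.g. adapt an assumed 768-channel stage of a pretrained backbone to the
# 256 channels a mid-level decoder stage expects, usable as a skip input.
adapter = FeatureAdapter(768, 256)
skip = adapter(torch.randn(1, 768, 14, 14))  # -> (1, 256, 14, 14)
```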

CCTA-derived coronary plaque burden offers enhanced prognostic value over CAC scoring in suspected CAD patients.

Dahdal J, Jukema RA, Maaniitty T, Nurmohamed NS, Raijmakers PG, Hoek R, Driessen RS, Twisk JWR, Bär S, Planken RN, van Royen N, Nijveldt R, Bax JJ, Saraste A, van Rosendael AR, Knaapen P, Knuuti J, Danad I

pubmed · May 30 2025
To assess the prognostic utility of coronary artery calcium (CAC) scoring and coronary computed tomography angiography (CCTA)-derived quantitative plaque metrics for predicting adverse cardiovascular outcomes. The study enrolled 2404 patients with suspected coronary artery disease (CAD) but without a prior history of CAD. All participants underwent CAC scoring and CCTA, with plaque metrics quantified using an artificial intelligence (AI)-based tool (Cleerly, Inc.). Percent atheroma volume (PAV) and non-calcified plaque volume percentage (NCPV%), reflecting total plaque burden and the proportion of non-calcified plaque volume normalized to vessel volume, were evaluated. The primary endpoint was a composite of all-cause mortality and non-fatal myocardial infarction (MI). Cox proportional hazards models, adjusted for clinical risk factors and early revascularization, were employed for analysis. During a median follow-up of 7.0 years, 208 patients (8.7%) experienced the primary endpoint, including 73 cases of MI (3%). The model incorporating PAV demonstrated superior discriminatory power for the composite endpoint (AUC = 0.729) compared to CAC scoring (AUC = 0.706, P = 0.016). In MI prediction, PAV (AUC = 0.791) significantly outperformed CAC (AUC = 0.699, P < 0.001), with NCPV% showing the highest prognostic accuracy (AUC = 0.814, P < 0.001). AI-driven assessment of coronary plaque burden enhances prognostic accuracy for future adverse cardiovascular events, highlighting the critical role of comprehensive plaque characterization in refining risk stratification strategies.
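
The two plaque metrics are simple ratios, as the definitions above imply: both PAV and NCPV% normalize a plaque volume to the vessel volume. A minimal sketch, with illustrative volumes in mm³:

```python
# PAV and NCPV% as ratios per the abstract's definitions; volumes are made up.
def percent_atheroma_volume(total_plaque_mm3, vessel_mm3):
    return 100.0 * total_plaque_mm3 / vessel_mm3

def ncpv_percent(noncalcified_plaque_mm3, vessel_mm3):
    return 100.0 * noncalcified_plaque_mm3 / vessel_mm3

print(percent_atheroma_volume(310.0, 2900.0))  # ~10.7% PAV
print(ncpv_percent(180.0, 2900.0))             # ~6.2% NCPV%
```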

Beyond the LUMIR challenge: The pathway to foundational registration models

Junyu Chen, Shuwen Wei, Joel Honkamaa, Pekka Marttinen, Hang Zhang, Min Liu, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Benedikt Wiestler, Tim Hable, Jin Kim, Dan Ruan, Frederic Madesta, Thilo Sentker, Wiebke Heyer, Lianrui Zuo, Yuwei Dai, Jing Wu, Jerry L. Prince, Harrison Bai, Yong Du, Yihao Liu, Alessa Hering, Reuben Dorent, Lasse Hansen, Mattias P. Heinrich, Aaron Carass

arxiv preprint · May 30 2025
Medical image challenges have played a transformative role in advancing the field, catalyzing algorithmic innovation and establishing new performance standards across diverse clinical applications. Image registration, a foundational task in neuroimaging pipelines, has similarly benefited from the Learn2Reg initiative. Building on this foundation, we introduce the Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge, a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. Distinct from prior challenges that leveraged anatomical label maps for supervision, LUMIR removes this dependency by providing over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling through self-supervision. In addition to evaluating performance on 590 held-out test subjects, LUMIR introduces a rigorous suite of zero-shot generalization tasks, spanning out-of-domain imaging modalities (e.g., FLAIR, T2-weighted, T2*-weighted), disease populations (e.g., Alzheimer's disease), acquisition protocols (e.g., 9.4T MRI), and species (e.g., macaque brains). A total of 1,158 subjects and over 4,000 image pairs were included for evaluation. Performance was assessed using both segmentation-based metrics (Dice coefficient, 95th percentile Hausdorff distance) and landmark-based registration accuracy (target registration error). Across both in-domain and zero-shot tasks, deep learning-based methods consistently achieved state-of-the-art accuracy while producing anatomically plausible deformation fields. The top-performing deep learning-based models demonstrated diffeomorphic properties and inverse consistency, outperforming several leading optimization-based methods, and showing strong robustness to most domain shifts, the exception being a drop in performance on out-of-domain contrasts.
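
For reference, the segmentation-based metrics named above can be implemented compactly. The sketch below gives one straightforward SciPy version of the Dice coefficient and a surface-based 95th percentile Hausdorff distance for binary label maps; the challenge's official implementations may differ in detail.

```python
# Dice and HD95 sketch for binary masks (one common formulation, assumed).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask):
    # Boundary voxels: mask minus its erosion.
    return mask & ~binary_erosion(mask)

def hd95(a, b):
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    da = distance_transform_edt(~sb)[sa]  # surface-of-a -> surface-of-b distances
    db = distance_transform_edt(~sa)[sb]  # and vice versa (symmetric)
    return np.percentile(np.concatenate([da, db]), 95)

a = np.zeros((32, 32), dtype=bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), dtype=bool); b[10:26, 10:26] = True
print(dice(a, b), hd95(a, b))
```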

Comparative analysis of natural language processing methodologies for classifying computed tomography enterography reports in Crohn's disease patients.

Dai J, Kim MY, Sutton RT, Mitchell JR, Goebel R, Baumgart DC

pubmed · May 30 2025
Imaging is crucial to assess disease extent, activity, and outcomes in inflammatory bowel disease (IBD). Artificial intelligence (AI) image interpretation requires automated exploitation of studies at scale as an initial step. Here we evaluate natural language processing methods for classifying Crohn's disease (CD) on computed tomography enterography (CTE). From our population-representative IBD registry, CTE reports from a sample of CD patients (male: 44.6%; median age: 50, IQR 37-60) and controls (n = 981 each) were extracted and split into training (n = 1568), development (n = 196), and testing (n = 198) datasets, each with reports of around 200 words and balanced label counts. Predictive classification was evaluated with CNN, Bi-LSTM, BERT-110M, LLaMA-3.3-70B-Instruct, and DeepSeek-R1-Distill-LLaMA-70B. While our custom IBDBERT, fine-tuned on expert IBD knowledge (i.e., the ACG, AGA, and ECCO guidelines), outperformed rule- and rationale-extraction-based classifiers in predictive performance (accuracy 88.6% with a pre-tuning learning rate of 0.00001, AUC 0.945), LLaMA, but not DeepSeek, achieved overall superior results (accuracy 91.2% vs. 88.9%, F1 0.907 vs. 0.874).
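
As a hedged illustration of the transformer-based arm of this comparison, the sketch below fine-tunes a generic BERT encoder on labelled report text with Hugging Face Transformers. The base model, example reports, and batch setup are illustrative assumptions, not the IBDBERT configuration from the study; only the 0.00001 learning rate is taken from the abstract.

```python
# Generic BERT fine-tuning sketch for report classification (assumed setup).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # CD vs. control
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # cf. 0.00001 above

# Hypothetical report snippets; real CTE reports run ~200 words each.
reports = ["Terminal ileum shows mural thickening and hyperenhancement ...",
           "Normal enhancement pattern of the small bowel ..."]
labels = torch.tensor([1, 0])

batch = tokenizer(reports, padding=True, truncation=True, max_length=256,
                  return_tensors="pt")
loss = model(**batch, labels=labels).loss  # one training step
loss.backward()
optimizer.step()
```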

The value of artificial intelligence in PSMA PET: a pathway to improved efficiency and results.

Dadgar H, Hong X, Karimzadeh R, Ibragimov B, Majidpour J, Arabi H, Al-Ibraheem A, Khalaf AN, Anwar FM, Marafi F, Haidar M, Jafari E, Zarei A, Assadi M

pubmed · May 30 2025
This systematic review investigates the potential of artificial intelligence (AI) in improving the accuracy and efficiency of prostate-specific membrane antigen positron emission tomography (PSMA PET) scans for detecting metastatic prostate cancer. A comprehensive literature search was conducted across Medline, Embase, and Web of Science, adhering to PRISMA guidelines. Key search terms included "artificial intelligence," "machine learning," "deep learning," "prostate cancer," and "PSMA PET." The PICO framework guided the selection of studies focusing on AI's application in evaluating PSMA PET scans for staging lymph node and distant metastasis in prostate cancer patients. Inclusion criteria prioritized original English-language articles published up to October 2024, excluding studies using non-PSMA radiotracers, those analyzing only the CT component of PSMA PET-CT, studies focusing solely on intra-prostatic lesions, and non-original research articles. The review included 22 studies, with a mix of prospective and retrospective designs. AI algorithms employed included machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs). The studies explored various applications of AI, including improving diagnostic accuracy, sensitivity, differentiation from benign lesions, standardization of reporting, and predicting treatment response. Results showed high sensitivity (62% to 97%) and accuracy (AUC up to 98%) in detecting metastatic disease, but also significant variability in positive predictive value (39.2% to 66.8%). AI demonstrates significant promise in enhancing PSMA PET scan analysis for metastatic prostate cancer, offering improved efficiency and potentially better diagnostic accuracy. However, the variability in performance and the "black box" nature of some algorithms highlight the need for larger prospective studies, improved model interpretability, and the continued involvement of experienced nuclear medicine physicians in interpreting AI-assisted results. AI should be considered a valuable adjunct, not a replacement, for expert clinical judgment.
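
One concrete reason for the wide PPV range reported across the reviewed studies is that PPV depends on disease prevalence in the scanned cohort, not only on a model's sensitivity and specificity. A short worked example, with all numbers illustrative:

```python
# PPV from sensitivity, specificity, and prevalence (Bayes' rule); values assumed.
def ppv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true-positive rate in the cohort
    fp = (1 - specificity) * (1 - prevalence)  # false-positive rate in the cohort
    return tp / (tp + fp)

print(ppv(0.90, 0.85, 0.10))  # ~0.40 at low prevalence
print(ppv(0.90, 0.85, 0.40))  # ~0.80 at high prevalence
```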