Page 209 of 314 (3139 results)

Enhancing Lesion Detection in Inflammatory Myelopathies: A Deep Learning-Reconstructed Double Inversion Recovery MRI Approach.

Fang Q, Yang Q, Wang B, Wen B, Xu G, He J

PubMed · Jun 3, 2025
The imaging of inflammatory myelopathies has advanced significantly over time, with MRI techniques playing a pivotal role in enhancing lesion detection. However, the impact of deep learning (DL)-based reconstruction on 3D double inversion recovery (DIR) imaging for inflammatory myelopathies remains unassessed. This study aimed to compare the acquisition time, image quality, diagnostic confidence, and lesion detection rates among sagittal T2WI, standard DIR, and DL-reconstructed DIR in patients with inflammatory myelopathies. In this observational study, patients diagnosed with inflammatory myelopathies were recruited between June 2023 and March 2024. Each patient underwent sagittal conventional TSE sequences and standard 3D DIR (T2WI and standard 3D DIR served as references for comparison), followed by an undersampled accelerated deep learning double inversion recovery (DIR<sub>DL</sub>) examination. Three neuroradiologists evaluated the images using a 4-point Likert scale (from 1 to 4) for overall image quality, perceived SNR, sharpness, artifacts, and diagnostic confidence. Acquisition times and lesion detection rates were also compared among the acquisition protocols. A total of 149 participants were evaluated (mean age, 40.6 [SD, 16.8] years; 71 women). The median acquisition time for DIR<sub>DL</sub> was significantly shorter than for standard DIR (151 seconds [interquartile range, 148-155 seconds] versus 298 seconds [interquartile range, 288-301 seconds]; <i>P</i> < .001), a 49% time reduction. DIR<sub>DL</sub> images scored higher in overall quality, perceived SNR, and artifact reduction (all <i>P</i> < .001). There were no significant differences in sharpness (<i>P</i> = .07) or diagnostic confidence (<i>P</i> = .06) between the standard DIR and DIR<sub>DL</sub> protocols. Additionally, DIR<sub>DL</sub> detected 37% more lesions than T2WI (300 versus 219; <i>P</i> < .001).
DIR<sub>DL</sub> significantly reduces acquisition time and improves image quality compared with standard DIR, without compromising diagnostic confidence. Additionally, DIR<sub>DL</sub> enhances lesion detection in patients with inflammatory myelopathies, making it a valuable tool in clinical practice. These findings underscore the potential for incorporating DIR<sub>DL</sub> into future imaging guidelines.

A Review of Intracranial Aneurysm Imaging Modalities, from CT to State-of-the-Art MR.

Allaw S, Khabaz K, Given TC, Montas D, Alcazar-Felix RJ, Srinath A, Kass-Hout T, Carroll TJ, Hurley MC, Polster SP

PubMed · Jun 3, 2025
Traditional guidance for intracranial aneurysm (IA) management is dichotomized by rupture status. Fundamental to the management of ruptured aneurysms is the detection and treatment of SAH, along with securing the aneurysm by the safest technique. Unruptured aneurysms, on the other hand, first require a careful assessment of their natural history versus treatment risk, including an imaging assessment of aneurysm size, location, and morphology, along with additional evidence-based risk factors such as smoking, hypertension, and family history. Unfortunately, a large proportion of ruptured aneurysms fall in the lower-risk size category (<7 mm), putting a premium on discovering a more refined noninvasive biomarker to detect and stratify aneurysm instability before rupture. In this review of aneurysm work-up, we cover the gamut of established imaging modalities (eg, CT, CTA, DSA, FLAIR, 3D TOF-MRA, contrast-enhanced MRA) as well as more novel MR techniques (MR vessel wall imaging, dynamic contrast-enhanced MRI, computational fluid dynamics). Additionally, we evaluate the current landscape of artificial intelligence software and its integration into diagnostic and risk-stratification pipelines for IAs. These advanced MR techniques, increasingly complemented by artificial intelligence models, offer a paradigm shift by evaluating factors beyond size and morphology, including vessel wall inflammation, permeability, and hemodynamics. We also provide our institution's scan parameters for many of these modalities as a reference. Ultimately, this review provides an organized, up-to-date summary of the available modalities and sequences for IA imaging to help build protocols focused on IA characterization.

Ultra-High-Resolution Photon-Counting-Detector CT with a Dedicated Denoising Convolutional Neural Network for Enhanced Temporal Bone Imaging.

Chang S, Benson JC, Lane JI, Bruesewitz MR, Swicklik JR, Thorne JE, Koons EK, Carlson ML, McCollough CH, Leng S

PubMed · Jun 3, 2025
Ultra-high-resolution (UHR) photon-counting-detector (PCD) CT improves image resolution but increases noise, necessitating smoother reconstruction kernels that reduce resolution below the 0.125-mm maximum spatial resolution. To address this issue, a denoising convolutional neural network (CNN) was developed to reduce noise in images reconstructed with the sharpest available kernel while preserving resolution for enhanced temporal bone visualization. With institutional review board approval, the CNN was trained on 6 patient cases of clinical temporal bone imaging (1885 images) and tested on 20 independent cases using a dual-source PCD-CT (NAEOTOM Alpha). Images were reconstructed using quantum iterative reconstruction at strength 3 (QIR3) with both a clinical routine kernel (Hr84) and the sharpest available head kernel (Hr96). The CNN was applied to images reconstructed with the Hr96 kernel at QIR1. For each case, three series of images (Hr84-QIR3, Hr96-QIR3, and Hr96-CNN) were randomized for review by 2 neuroradiologists, who assessed overall quality and delineation of the modiolus, stapes footplate, and incudomallear joint. The CNN reduced noise by 80% compared with Hr96-QIR3 and by 50% relative to Hr84-QIR3, while maintaining high resolution. Compared with the conventional method at the same kernel (Hr96-QIR3), Hr96-CNN significantly decreased image noise (from 204.63 to 47.35 HU) and improved the structural similarity index (from 0.72 to 0.99). Hr96-CNN images ranked higher than Hr84-QIR3 and Hr96-QIR3 in overall quality (<i>P</i> < .001). Readers preferred Hr96-CNN for all 3 structures. The proposed CNN significantly reduced image noise in UHR PCD-CT, enabling use of the sharpest kernel. This combination greatly enhanced diagnostic image quality and anatomic visualization.

Patient-specific prediction of glioblastoma growth via reduced order modeling and neural networks.

Cerrone D, Riccobelli D, Gazzoni S, Vitullo P, Ballarin F, Falco J, Acerbi F, Manzoni A, Zunino P, Ciarletta P

PubMed · Jun 3, 2025
Glioblastoma (GBL) is among the most aggressive brain tumors in adults, characterized by patient-specific invasion patterns driven by the underlying brain microstructure. In this work, we present a proof-of-concept for a mathematical model of GBL growth, enabling real-time prediction and patient-specific parameter identification from longitudinal neuroimaging data. The framework exploits a diffuse-interface mathematical model to describe the tumor evolution and a reduced-order modeling strategy, relying on proper orthogonal decomposition, trained on synthetic data derived from patient-specific brain anatomies reconstructed from magnetic resonance imaging and diffusion tensor imaging. A neural network surrogate learns the inverse mapping from tumor evolution to model parameters, achieving significant computational speed-up while preserving high accuracy. To ensure robustness and interpretability, we perform both global and local sensitivity analyses, identifying the key biophysical parameters governing tumor dynamics and assessing the stability of the inverse problem solution. These results establish a methodological foundation for future clinical deployment of patient-specific digital twins in neuro-oncology.
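The reduced-order strategy above relies on proper orthogonal decomposition (POD), which in its simplest form builds a low-dimensional basis from the SVD of a snapshot matrix and projects states onto it. A minimal sketch with synthetic snapshots (the data, function names, and energy threshold below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Compute a POD basis from a snapshot matrix (n_dof x n_snapshots),
    keeping the fewest left singular vectors that capture the requested
    fraction of the total singular-value energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s[:r]

# Synthetic "tumor field" snapshots: 500 degrees of freedom, 40 time points,
# built from two spatial modes with different temporal amplitudes
t = np.linspace(0, 1, 40)
x = np.linspace(0, 1, 500)[:, None]
snaps = np.sin(np.pi * x) * t + 0.5 * np.cos(3 * np.pi * x) * t**2

basis, sv = pod_basis(snaps)
coords = basis.T @ snaps          # reduced coordinates (projection)
recon = basis @ coords            # reconstruction from the reduced model
err = np.linalg.norm(snaps - recon) / np.linalg.norm(snaps)
print(basis.shape[1], f"{err:.2e}")
```

Because the synthetic snapshots have exact rank 2, the basis recovers both modes and the reconstruction error is at machine precision; on real simulation data the basis size trades accuracy against speed-up.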

Artificial intelligence vs human expertise: A comparison of plantar fascia thickness measurements through MR imaging.

Alyanak B, Çakar İ, Dede BT, Yıldızgören MT, Bağcıer F

PubMed · Jun 3, 2025
This study aims to evaluate the reliability of plantar fascia thickness measurements performed by ChatGPT-4 using magnetic resonance imaging (MRI) compared to those obtained by an experienced clinician. In this retrospective, single-center study, foot MRI images from the hospital archive were analysed. Plantar fascia thickness was measured under both blinded and non-blinded conditions by an experienced clinician and ChatGPT-4 at two separate time points. Measurement reliability was assessed using the intraclass correlation coefficient (ICC), mean absolute error (MAE), and mean relative error (MRE). A total of 41 participants (32 females, 9 males) were included. The average plantar fascia thickness measured by the clinician was 4.20 ± 0.80 mm and 4.25 ± 0.92 mm under blinded and non-blinded conditions, respectively, while ChatGPT-4's measurements were 6.47 ± 1.30 mm and 6.46 ± 1.31 mm, respectively. Human evaluators demonstrated excellent agreement (ICC = 0.983-0.989), whereas ChatGPT-4 exhibited low reliability (ICC = 0.391-0.432). In thin plantar fascia cases, ChatGPT-4's error rate was higher, with MAE = 2.70 mm and MRE = 77.17% under blinded conditions, and MAE = 2.91 mm and MRE = 87.02% under non-blinded conditions. ChatGPT-4 demonstrated lower reliability in plantar fascia thickness measurements compared to an experienced clinician, with increased error rates in thin structures. These findings highlight the limitations of AI-based models in medical image analysis and emphasize the need for further refinement before clinical implementation.
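The agreement metrics used above, MAE and MRE, are straightforward to compute against a reference rater. A minimal sketch with hypothetical thickness values (not the study's data):

```python
import numpy as np

def mae_mre(reference, measured):
    """Mean absolute error (same units as input) and mean relative error (%)
    of `measured` against a `reference` rater."""
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    abs_err = np.abs(measured - reference)
    mae = abs_err.mean()
    mre = (abs_err / reference).mean() * 100.0
    return mae, mre

# Hypothetical plantar fascia thicknesses in mm (illustrative only)
clinician = np.array([3.8, 4.1, 4.5, 4.0, 4.6])
model     = np.array([6.2, 6.5, 6.4, 6.6, 6.3])
mae, mre = mae_mre(clinician, model)
print(f"MAE = {mae:.2f} mm, MRE = {mre:.1f}%")
```

Note that MRE normalizes by the reference value, which is why thin fasciae inflate the relative error even when the absolute error is stable.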

Radiomics-Based Differentiation of Primary Central Nervous System Lymphoma and Solitary Brain Metastasis Using Contrast-Enhanced T1-Weighted Imaging: A Retrospective Machine Learning Study.

Xia X, Qiu J, Tan Q, Du W, Gou Q

PubMed · Jun 3, 2025
To develop and evaluate radiomics-based models using contrast-enhanced T1-weighted imaging (CE-T1WI) for the non-invasive differentiation of primary central nervous system lymphoma (PCNSL) and solitary brain metastasis (SBM), aiming to improve diagnostic accuracy and support clinical decision-making. This retrospective study included 324 patients pathologically diagnosed with PCNSL (n=115) or SBM (n=209) between January 2014 and December 2024. Tumor regions were manually segmented on CE-T1WI, and a comprehensive set of 1561 radiomic features was extracted. A two-step feature selection approach based on least absolute shrinkage and selection operator (LASSO) regression was used to identify the most important features. Multiple machine learning classifiers were trained and validated to assess diagnostic performance, evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity. The effectiveness of the radiomics-based models was further assessed using decision curve analysis, with a risk threshold of 0.5 to balance false positives and false negatives. A total of 23 features were identified through LASSO regression. All classifiers demonstrated robust performance in terms of AUC and accuracy, with 15 of 20 classifiers achieving AUC values exceeding 0.9. In 10-fold cross-validation, the artificial neural network (ANN) classifier achieved the highest AUC (0.9305), followed by the support vector machine with polynomial kernel (SVMPOLY) classifier (0.9226). Notably, on the independent test set, the support vector machine with radial basis function kernel (SVMRBF) classifier performed best, with an AUC of 0.9310 and the highest accuracy, 0.8780.
The selected models-SVMRBF, SVMPOLY, ensemble learning with LDA (ELDA), ANN, random forest (RF), and gradient boosting with random undersampling boosting (GBRUSB)-all showed significant clinical utility, with standardized net benefits (sNBs) surpassing 0.6. These results underline the potential of radiomics-based models to reliably distinguish PCNSL from SBM. Radiomics-driven models based on CE-T1WI have demonstrated encouraging potential for accurately distinguishing between PCNSL and SBM. The SVMRBF classifier showed the greatest diagnostic efficacy of all classifiers tested, indicating its potential clinical utility in differential diagnosis.
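The pipeline described above, LASSO-based feature selection followed by an RBF-kernel SVM evaluated with 10-fold cross-validated AUC, can be sketched with scikit-learn. The synthetic feature matrix below stands in for a radiomics table and is not the study's data:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a radiomics matrix: many features, few informative
X, y = make_classification(n_samples=300, n_features=200, n_informative=15,
                           random_state=0)

# Step 1: LASSO-based feature selection (nonzero coefficients are kept);
# step 2: SVM with a radial basis function kernel
pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),
    SVC(kernel="rbf", random_state=0),
)
auc = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc").mean()
print(f"10-fold mean AUC: {auc:.3f}")
```

Wrapping selection and classification in one pipeline keeps the LASSO fit inside each cross-validation fold, avoiding the optimistic bias of selecting features on the full dataset first.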

Radiomics and deep learning characterisation of liver malignancies in CT images - A systematic review.

Yahaya BS, Osman ND, Karim NKA, Appalanaido GK, Isa IS

PubMed · Jun 3, 2025
Computed tomography (CT) has been widely used as an effective tool for liver imaging due to its high spatial resolution and its ability to differentiate tissue densities, both of which contribute to comprehensive image analysis. Recent advancements in artificial intelligence (AI) have expanded the role of machine learning (ML) in managing liver cancers by predicting or classifying tumours using mathematical algorithms. Deep learning (DL), a subset of ML, extended these capabilities through convolutional neural networks (CNNs) that analyse large datasets automatically. This review examines the methods, achievements, limitations, and performance outcomes of ML-based radiomics and DL models for liver malignancies in CT imaging. A systematic search for full-text articles in English on CT radiomics and DL in liver cancer analysis was conducted in the PubMed, Scopus, Science Citation Index, and Cochrane Library databases between 2020 and 2024 using the keywords machine learning, radiomics, deep learning, computed tomography, and liver cancer, along with associated MeSH terms. PRISMA guidelines were used to identify and screen studies for inclusion. A total of 49 studies were included: 17 radiomics, 24 DL, and 8 combined DL/radiomics studies. Radiomics has predominantly been utilised for predictive analysis, while DL has been extensively applied to automatic liver and tumour segmentation, with a recent surge in studies integrating both techniques. Despite the growing popularity of DL methods, classical radiomics models remain relevant and are often preferred when performance is similar, owing to their lower computational and data requirements. Model performance keeps improving, but challenges such as data scarcity and the lack of standardised protocols persist.

Artificial intelligence in bone metastasis analysis: Current advancements, opportunities and challenges.

Afnouch M, Bougourzi F, Gaddour O, Dornaika F, Ahmed AT

PubMed · Jun 3, 2025
Artificial intelligence is transforming medical imaging, particularly the analysis of bone metastases (BM), a serious complication of advanced cancers. Machine learning and deep learning techniques offer new opportunities to improve the detection, recognition, and segmentation of bone metastases. Yet challenges such as limited data, interpretability, and clinical validation remain. Following PRISMA guidelines, we reviewed artificial intelligence methods and applications for bone metastasis analysis across major imaging modalities, including CT, MRI, PET, SPECT, and bone scintigraphy. The survey covers traditional machine learning models and modern deep learning architectures such as CNNs and transformers. We also examined available datasets and their role in developing artificial intelligence in this field. Artificial intelligence models have achieved strong performance across tasks and modalities, with convolutional neural network (CNN) and transformer architectures performing particularly well. However, limitations persist, including data imbalance, overfitting risks, and the need for greater transparency. Clinical translation is also challenged by regulatory and validation hurdles. Artificial intelligence holds strong potential to improve BM diagnosis and streamline radiology workflows. To reach clinical maturity, future work must address data diversity, model explainability, and large-scale validation, which are critical steps for these tools to be trusted and integrated into routine oncology care.

SASWISE-UE: Segmentation and synthesis with interpretable scalable ensembles for uncertainty estimation.

Chen W, McMillan AB

PubMed · Jun 2, 2025
This paper introduces an efficient sub-model ensemble framework aimed at enhancing the interpretability of medical deep learning models, thus increasing their clinical applicability. By generating uncertainty maps, the framework enables end users to evaluate the reliability of model outputs. We developed a strategy to generate diverse models from a single well-trained checkpoint, facilitating the training of a model family. This involves producing multiple outputs from a single input, fusing them into a final output, and estimating uncertainty from the disagreement among outputs. Implemented using U-Net and UNETR models for segmentation and synthesis tasks, the approach was tested on CT body segmentation and MR-CT synthesis datasets. It achieved a mean Dice coefficient of 0.814 in segmentation and a mean absolute error of 88.17 HU in synthesis (improved from 89.43 HU by pruning). Additionally, the framework was evaluated under image corruption and data undersampling, maintaining the correlation between uncertainty and error, which highlights its robustness. These results suggest that the proposed approach not only maintains the performance of well-trained models but also enhances interpretability through effective uncertainty estimation, applicable to both convolutional and transformer models across a range of imaging tasks.
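The fuse-and-disagree idea, producing multiple outputs from sub-models, averaging them into a final output, and reading uncertainty off their disagreement, can be sketched in a few lines. The array shapes and probability maps below are hypothetical stand-ins, not the paper's models:

```python
import numpy as np

def fuse_and_uncertainty(outputs):
    """Fuse sub-model outputs (n_models, H, W) by averaging, and estimate a
    per-pixel uncertainty map as the standard deviation across sub-models."""
    outputs = np.asarray(outputs, dtype=float)
    fused = outputs.mean(axis=0)
    uncertainty = outputs.std(axis=0)
    return fused, uncertainty

# Hypothetical sub-model probability maps for a 4x4 segmentation:
# a "true" lesion mask perturbed differently by each of 5 sub-models
rng = np.random.default_rng(1)
base = np.zeros((4, 4))
base[1:3, 1:3] = 1.0
outputs = np.clip(base + 0.2 * rng.standard_normal((5, 4, 4)), 0.0, 1.0)

fused, unc = fuse_and_uncertainty(outputs)
print(fused.shape, unc.max() > 0)
```

Pixels where the sub-models agree get near-zero uncertainty, so thresholding the map flags the regions an end user should double-check.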

ViTU-net: A hybrid deep learning model with patch-based LSB approach for medical image watermarking and authentication using a hybrid metaheuristic algorithm.

Nanammal V, Rajalakshmi S, Remya V, Ranjith S

PubMed · Jun 2, 2025
In modern healthcare, telemedicine, health records, and AI-driven diagnostics depend on medical image watermarking to secure chest X-rays for pneumonia diagnosis, ensuring data integrity, confidentiality, and authenticity. A 2024 study found that over 70% of healthcare institutions faced medical image data breaches. Yet current methods falter in imperceptibility, robustness against attacks, and deployment efficiency. ViTU-Net integrates cutting-edge techniques to address these multifaceted challenges in medical image security and analysis. The model's core component, the Vision Transformer (ViT) encoder, efficiently captures global dependencies and spatial information, while the U-Net decoder enhances image reconstruction; both components leverage the Adaptive Hierarchical Spatial Attention (AHSA) module for improved spatial processing. Additionally, the patch-based LSB embedding mechanism embeds reversible fragile watermarks within each patch of the segmented region of non-interest (RONI), guided dynamically by adaptive masks derived from the attention mechanism, minimizing the impact on diagnostic accuracy while maximizing precision and ensuring optimal use of spatial information. The hybrid metaheuristic optimization algorithm, TuniBee Fusion, dynamically optimizes watermarking parameters, striking a balance between exploration and exploitation and thereby enhancing watermarking efficiency and robustness. The incorporation of advanced cryptographic techniques, including SHA-512 hashing and AES encryption, fortifies the model's security, ensuring the authenticity and confidentiality of watermarked medical images. A PSNR of 60.7 dB, along with an NCC of 0.9999 and an SSIM of 1.00, underscores its effectiveness in preserving image quality, security, and diagnostic accuracy. Robustness analysis against a spectrum of attacks validates ViTU-Net's resilience in real-world scenarios.
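The patch-based LSB embedding step, writing watermark bits into the least significant bit plane of pixels inside a non-diagnostic patch, can be sketched as plain LSB embedding. This is a minimal illustration under that assumption, not the paper's full ViTU-Net scheme with attention-derived masks:

```python
import numpy as np

def embed_lsb(patch, bits):
    """Embed watermark bits into the least significant bit of a uint8 patch.
    The watermark is fragile: any pixel tampering flips recovered bits."""
    flat = patch.flatten()  # flatten() copies, so the input stays unmodified
    assert len(bits) <= flat.size
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(patch.shape)

def extract_lsb(patch, n_bits):
    """Recover the first n_bits watermark bits from the patch's LSB plane."""
    return (patch.flatten()[:n_bits] & 1).astype(np.uint8)

# Hypothetical 8x8 non-diagnostic patch and a 16-bit watermark fragment
rng = np.random.default_rng(42)
patch = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = rng.integers(0, 2, size=16, dtype=np.uint8)

marked = embed_lsb(patch, bits)
recovered = extract_lsb(marked, 16)
max_change = int(np.abs(marked.astype(int) - patch.astype(int)).max())
print(np.array_equal(recovered, bits), max_change)
```

Each embedded pixel changes by at most 1 gray level, which is why LSB schemes score very high PSNR while remaining sensitive to tampering.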
