
AI-Based screening for thoracic aortic aneurysms in routine breast MRI.

Bounias D, Führes T, Brock L, Graber J, Kapsner LA, Liebert A, Schreiter H, Eberle J, Hadler D, Skwierawska D, Floca R, Neher P, Kovacs B, Wenkel E, Ohlmeyer S, Uder M, Maier-Hein K, Bickelhaupt S

PubMed · Jun 12 2025
Prognosis for thoracic aortic aneurysms is significantly worse for women than men, with a higher mortality rate observed among female patients. The increasing use of breast magnetic resonance imaging (MRI) offers a unique opportunity for simultaneous detection of both breast cancer and thoracic aortic aneurysms. We retrospectively validate a fully automated artificial neural network (ANN) pipeline on 5057 breast MRI examinations from public (Duke University Hospital/EA1141 trial) and in-house (Erlangen University Hospital) data. The ANN, benchmarked against 3D ground-truth segmentations, clinical reports, and a multireader panel, demonstrates high technical robustness (Dice/clDice 0.88-0.91/0.97-0.99) across different vendors and field strengths. The ANN improves aneurysm detection rates by 3.5-fold compared with routine clinical readings, highlighting its potential to improve early diagnosis and patient outcomes. Notably, a higher odds ratio (OR = 2.29, CI: 0.55-9.61) for thoracic aortic aneurysms is observed in women with breast cancer or a history of breast cancer, suggesting potential further benefit from integrated simultaneous assessment for cancer and aortic aneurysms.
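The pipeline's technical robustness is reported as Dice and clDice overlap scores. For reference, here is a minimal NumPy sketch of the volumetric Dice coefficient on binary masks; the arrays and the perturbation are synthetic stand-ins, and clDice (which additionally compares mask skeletons) is omitted:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volumetric Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D masks standing in for aorta segmentations (hypothetical shapes).
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.5
pred = truth.copy()
pred[:2] = ~pred[:2]  # flip a slab to mimic a segmentation error
print(f"Dice = {dice(pred, truth):.3f}")
```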

Summary Report of the SNMMI AI Task Force Radiomics Challenge 2024.

Boellaard R, Rahmim A, Eertink JJ, Duehrsen U, Kurch L, Lugtenburg PJ, Wiegers SE, Zwezerijnen GJC, Zijlstra JM, Heymans MW, Buvat I

PubMed · Jun 12 2025
In medical imaging, challenges are competitions that aim to provide a fair comparison of different methodologic solutions to a common problem. Challenges typically focus on addressing real-world problems, such as segmentation, detection, and prediction tasks, using various types of medical images and associated data. Here, we describe the organization and results of such a challenge to compare machine-learning models for predicting survival in patients with diffuse large B-cell lymphoma using a baseline ¹⁸F-FDG PET/CT radiomics dataset. Methods: This challenge aimed to predict progression-free survival (PFS) in patients with diffuse large B-cell lymphoma, either as a binary outcome (shorter than 2 y versus longer than 2 y) or as a continuous outcome (survival in months). All participants were provided with a radiomic training dataset, including the ground-truth survival for designing a predictive model, and a radiomic test dataset without ground truth. Figures of merit (FOMs) used to assess model performance were the root-mean-square error for continuous outcomes and the C-index for 1-, 2-, and 3-y PFS binary outcomes. The challenge was endorsed and initiated by the Society of Nuclear Medicine and Molecular Imaging AI Task Force. Results: Nineteen models for predicting PFS as a continuous outcome were received from 15 teams. Among those models, external validation identified 6 models showing performance similar to that of a simple general linear reference model using SUV and total metabolic tumor volume (TMTV) only. Twelve models for predicting binary outcomes were submitted by 9 teams. External validation showed that 1 model had higher, but nonsignificant, C-index values compared with those obtained by a simple logistic regression model using SUV and TMTV. Conclusion: Some of the radiomics-based machine-learning models developed by participants showed better FOMs than did simple linear or logistic regression models based on SUV and TMTV only, although the differences in observed FOMs were nonsignificant. This suggests that, for the challenge dataset, there was limited or no added value from sophisticated radiomic features and machine learning when developing models for outcome prediction.
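The C-index figure of merit rewards correct risk ranking: among comparable patient pairs, the one with the shorter observed survival should carry the higher predicted risk. A from-scratch sketch on hypothetical PFS data (the challenge's actual scoring code is not part of the abstract):

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index: fraction of comparable pairs in which
    the higher-risk patient has the shorter observed survival."""
    conc = ties = comp = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if the earlier time is an observed event.
            if time[i] < time[j] and event[i]:
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / comp if comp else float("nan")

# Hypothetical PFS data: months, event flag (1 = progression), model risk score.
time = np.array([6, 14, 22, 30, 40])
event = np.array([1, 1, 0, 1, 0])
risk = np.array([0.9, 0.7, 0.4, 0.5, 0.1])
print(f"C-index = {c_index(time, event, risk):.3f}")
```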

Application of Deep Learning Accelerated Image Reconstruction in T2-Weighted Turbo Spin-Echo Imaging of the Brain at 7T.

Liu Z, Zhou X, Tao S, Ma J, Nickel D, Liebig P, Mostapha M, Patel V, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed · Jun 12 2025
Prolonged imaging times and motion sensitivity at 7T necessitate advancements in image acceleration techniques. This study evaluates a 7T deep learning (DL)-based image reconstruction using a deep neural network trained on 7T data, applied to T2-weighted turbo spin-echo imaging. Raw k-space data from 30 consecutive clinical 7T brain MRI patients were reconstructed using both DL and standard methods. Qualitative assessments included overall image quality, artifacts, sharpness, structural conspicuity, and noise level, while quantitative metrics evaluated contrast-to-noise ratio (CNR) and image noise. DL-based reconstruction consistently outperformed standard methods across all qualitative metrics (P < .001), with a mean CNR increase of 50.8% (95% CI: 43.0%-58.6%) and a mean noise reduction of 35.1% (95% CI: 32.7%-37.6%). These findings demonstrate that DL-based reconstruction at 7T significantly enhances image quality without introducing adverse effects, offering a promising tool for addressing the challenges of ultra-high-field MRI.
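CNR definitions vary between studies and the abstract does not spell out the one used here, so the sketch below assumes the common form |μ_A − μ_B| / σ_background over two tissue ROIs and a background ROI (all data synthetic):

```python
import numpy as np

def cnr(img, roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs, using the
    standard deviation of a background ROI as the noise estimate."""
    signal = abs(img[roi_a].mean() - img[roi_b].mean())
    return signal / img[noise_roi].std()

# Hypothetical slice with two tissue patches and a background patch.
rng = np.random.default_rng(1)
img = rng.normal(100, 5, (256, 256))
img[50:80, 50:80] += 40       # tissue A
img[150:180, 150:180] += 10   # tissue B
a = (slice(50, 80), slice(50, 80))
b = (slice(150, 180), slice(150, 180))
bg = (slice(0, 30), slice(0, 30))
print(f"CNR = {cnr(img, a, b, bg):.1f}")
```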

Improving the Robustness of Deep Learning Models in Predicting Hematoma Expansion from Admission Head CT.

Tran AT, Abou Karam G, Zeevi D, Qureshi AI, Malhotra A, Majidi S, Murthy SB, Park S, Kontos D, Falcone GJ, Sheth KN, Payabvash S

PubMed · Jun 12 2025
Robustness against input data perturbations is essential for deploying deep learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans designed to increase a deep learning model's prediction error. Testing deep learning model performance on adversarial images provides a measure of robustness, and including adversarial images in the training set can improve it. In this study, we examined adversarial training and input modifications to improve the robustness of deep learning models in predicting hematoma expansion (HE) from admission head CTs of patients with acute intracerebral hemorrhage (ICH). We used a multicenter cohort of n = 890 patients for cross-validation/training and a cohort of n = 684 consecutive patients with ICH from 2 stroke centers for independent validation. Fast gradient sign method (FGSM) and projected gradient descent (PGD) adversarial attacks were applied for training and testing. We developed and tested 4 different models to predict ≥3 mL, ≥6 mL, ≥9 mL, and ≥12 mL HE in an independent validation cohort, evaluated with the area under the receiver operating characteristic curve (AUC). We examined varying mixtures of adversarial and nonperturbed (clean) scans for training, as well as adding the hyperparameter-free Otsu multithreshold segmentation as an additional model input. When deep learning models trained solely on clean scans were tested with PGD and FGSM adversarial images, the average HE prediction AUC decreased from 0.8 to 0.67 and 0.71, respectively. Overall, the best-performing strategy for improving model robustness was training with a 5:3 mix of clean and PGD adversarial scans plus the Otsu multithreshold segmentation as model input, which increased the average AUC to 0.77 against both PGD and FGSM adversarial attacks. Adversarial training with FGSM improved robustness against attacks of the same type but offered limited cross-attack robustness against PGD-type images. Adversarial training and inclusion of threshold-based segmentation as an additional input can improve deep learning model robustness in predicting HE from admission head CTs in acute ICH.
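FGSM and PGD are standard white-box attacks; below is a PyTorch sketch of their textbook forms. The study's ε, step size, and iteration count are not given, so the values here are placeholders, and the toy model merely stands in for the HE classifier:

```python
import torch

def fgsm(model, x, y, eps, loss_fn=torch.nn.functional.cross_entropy):
    """One-step fast gradient sign attack: perturb each voxel by eps
    in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Projected gradient descent: iterated FGSM with the perturbation
    clipped back into the eps-ball around the original scan."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, alpha)
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv.detach()

# Toy stand-in for the HE classifier and CT patches (hypothetical shapes).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 2))
x = torch.randn(4, 1, 32, 32)
y = torch.randint(0, 2, (4,))
x_adv = pgd(model, x, y, eps=0.03, alpha=0.01, steps=5)
print(x_adv.shape)
```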

CT derived fractional flow reserve: Part 2 - Critical appraisal of the literature.

Rodriguez-Lozano PF, Waheed A, Evangelou S, Kolossváry M, Shaikh K, Siddiqui S, Stipp L, Lakshmanan S, Wu EH, Nurmohamed NS, Orbach A, Baliyan V, de Matos JFRG, Trivedi SJ, Madan N, Villines TC, Ihdayhid AR

PubMed · Jun 12 2025
The integration of computed tomography-derived fractional flow reserve (CT-FFR), utilizing computational fluid dynamics and artificial intelligence (AI) in routine coronary computed tomographic angiography (CCTA), presents a promising approach to enhance evaluations of functional lesion severity. Extensive evidence underscores the diagnostic accuracy, prognostic significance, and clinical relevance of CT-FFR, prompting recent clinical guidelines to recommend its combined use with CCTA for selected individuals with intermediate stenosis on CCTA and stable or acute chest pain. This manuscript critically examines the existing clinical evidence, evaluates the diagnostic performance, and outlines future perspectives for integrating noninvasive assessments of coronary anatomy and physiology. Furthermore, it serves as a practical guide for medical imaging professionals by addressing common pitfalls and challenges associated with CT-FFR while proposing potential solutions to facilitate its successful implementation in clinical practice.

Exploring the limit of image resolution for human expert classification of vascular ultrasound images in giant cell arteritis and healthy subjects: the GCA-US-AI project.

Bauer CJ, Chrysidis S, Dejaco C, Koster MJ, Kohler MJ, Monti S, Schmidt WA, Mukhtyar CB, Karakostas P, Milchert M, Ponte C, Duftner C, de Miguel E, Hocevar A, Iagnocco A, Terslev L, Døhn UM, Nielsen BD, Juche A, Seitz L, Keller KK, Karalilova R, Daikeler T, Mackie SL, Torralba K, van der Geest KSM, Boumans D, Bosch P, Tomelleri A, Aschwanden M, Kermani TA, Diamantopoulos A, Fredberg U, Inanc N, Petzinna SM, Albarqouni S, Behning C, Schäfer VS

PubMed · Jun 12 2025
Prompt diagnosis of giant cell arteritis (GCA) with ultrasound is crucial for preventing severe ocular and other complications, yet expertise in ultrasound performance is scarce. The development of an artificial intelligence (AI)-based assistant that facilitates ultrasound image classification and helps to diagnose GCA early promises to close the existing gap. To inform development of the planned AI, this study investigates the minimum image resolution required for human experts to reliably classify ultrasound images of arteries commonly affected by GCA for the presence or absence of GCA. Thirty-one international experts in GCA ultrasonography participated in a web-based exercise. They were asked to classify 10 ultrasound images for each of 5 vascular segments as GCA, normal, or not able to classify. The following segments were assessed: (1) superficial common temporal artery, (2) its frontal and (3) parietal branches (all in transverse view), (4) axillary artery in transverse view, and (5) axillary artery in longitudinal view. Identical images were shown at different resolutions, namely 32 × 32, 64 × 64, 128 × 128, 224 × 224, and 512 × 512 pixels, resulting in a total of 250 images to be classified by every study participant. Classification performance improved with increasing resolution up to a threshold, plateauing at 224 × 224 pixels. At 224 × 224 pixels, the overall classification sensitivity was 0.767 (95% CI, 0.737-0.796), and specificity was 0.862 (95% CI, 0.831-0.888). A resolution of 224 × 224 pixels ensures reliable human expert classification and aligns with the input requirements of many common AI-based architectures. Thus, the results of this study substantially guide the projected AI development.
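The resolution sweep itself is straightforward to reproduce: resample one frame to each of the five study resolutions. A Pillow sketch on a synthetic stand-in frame (real images would be loaded from file):

```python
import numpy as np
from PIL import Image

SIZES = (32, 64, 128, 224, 512)

# Synthetic stand-in for a 512 x 512 ultrasound frame (hypothetical data).
frame = Image.fromarray(
    (np.random.default_rng(2).random((512, 512)) * 255).astype(np.uint8)
)

# One resized copy per study resolution: identical content, varying detail.
series = {s: frame.resize((s, s), Image.BICUBIC) for s in SIZES}
for s, im in series.items():
    print(s, im.size)
```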

Multimodal deep learning for enhanced breast cancer diagnosis on sonography.

Wei TR, Chang A, Kang Y, Patel M, Fang Y, Yan Y

PubMed · Jun 12 2025
This study introduces a novel multimodal deep learning model tailored for the differentiation of benign and malignant breast masses using dual-view breast ultrasound images (radial and anti-radial views) in conjunction with corresponding radiology reports. The proposed multimodal architecture includes specialized image and text encoders for independent feature extraction, along with a transformation layer that aligns the multimodal features for the subsequent classification task. The model achieved an area under the curve of 85% and outperformed unimodal models by 6% and 8% in Youden index. Additionally, our multimodal model surpassed zero-shot predictions generated by prominent foundation models such as CLIP and MedCLIP. In direct comparison with classification results based on physician-assessed ratings, our model exhibited clear superiority, highlighting its practical significance in diagnostics. By integrating both image and text modalities, this study exemplifies the potential of multimodal deep learning in enhancing diagnostic performance, laying the foundation for developing robust and transparent AI-assisted solutions.
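The described layout — separate image and text encoders feeding a shared transformation layer — can be sketched in a few lines of PyTorch. The encoder choices below (a small CNN with the two views stacked as channels, a mean-pooled embedding bag for report tokens) are illustrative assumptions, not the paper's actual networks:

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Minimal two-branch fusion: independent image/text encoders,
    a shared projection ("transformation layer"), then a classifier."""
    def __init__(self, vocab=5000, dim=128):
        super().__init__()
        self.img_enc = nn.Sequential(          # two views as input channels
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
        self.txt_enc = nn.EmbeddingBag(vocab, dim)  # mean-pooled report tokens
        self.fuse = nn.Linear(2 * dim, dim)         # align/merge modalities
        self.head = nn.Linear(dim, 2)               # benign vs. malignant

    def forward(self, images, tokens):
        z = torch.cat([self.img_enc(images), self.txt_enc(tokens)], dim=1)
        return self.head(torch.relu(self.fuse(z)))

model = MultimodalClassifier()
images = torch.randn(4, 2, 224, 224)       # radial + anti-radial views
tokens = torch.randint(0, 5000, (4, 32))   # toy report token ids
print(model(images, tokens).shape)         # -> torch.Size([4, 2])
```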

NeuroEmo: A neuroimaging-based fMRI dataset to extract temporal affective brain dynamics for Indian movie video clips stimuli using dynamic functional connectivity approach with graph convolution neural network (DFC-GCNN).

Abgeena A, Garg S, Goyal N, P C JR

PubMed · Jun 12 2025
fMRI, a non-invasive neuroimaging technique, can detect emotional brain activation patterns. It allows researchers to observe functional changes in the brain, making it a valuable tool for emotion recognition. To improve emotion recognition systems, it is crucial to understand the neural mechanisms behind emotional processing in the brain. Although such studies have been conducted worldwide, research on fMRI-based emotion recognition in the Indian population remains scarce, limiting the generalizability of existing models. To address this gap, a culturally relevant neuroimaging dataset (https://openneuro.org/datasets/ds005700) has been created for identifying five emotional states (calm, afraid, delighted, depressed, and excited) in a diverse group of Indian participants. To ensure cultural relevance, emotional stimuli were derived from Bollywood movie clips. This study outlines the fMRI task design, experimental setup, data collection procedures, preprocessing steps, statistical analysis using the general linear model (GLM), and region-of-interest (ROI)-based dynamic functional connectivity (DFC) extraction using a parcellation based on the Power et al. (2011) functional atlas. A supervised emotion classification model is proposed using a graph convolutional neural network (GCNN), where graph structures were constructed from DFC matrices at varying thresholds. The DFC-GCNN model achieved 95% classification accuracy across 5-fold cross-validation, highlighting emotion-specific connectivity dynamics in key affective regions, including the amygdala, prefrontal cortex, and anterior insula. These findings emphasize the significance of temporal variability in emotional state classification. By introducing a culturally specific neuroimaging dataset and a GCNN-based emotion recognition framework, this research enhances the applicability of graph-based models for identifying region-wise connectivity patterns in fMRI data. It also offers novel insights into cross-cultural differences in emotional processing at the neural level. Furthermore, the high spatial and temporal resolution of the fMRI dataset provides a valuable resource for future studies in emotional neuroscience and related disciplines.
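The graph-construction step — turning windowed ROI correlations into thresholded adjacency matrices for the GCNN — can be sketched as follows. The window length, step, and threshold are placeholders, and the 264 ROIs match the Power et al. (2011) atlas mentioned above:

```python
import numpy as np

def dfc_graphs(ts, win=30, step=5, thresh=0.4):
    """Sliding-window dynamic functional connectivity: one thresholded
    adjacency matrix (graph) per window of the ROI time series."""
    n_t, n_roi = ts.shape
    graphs = []
    for start in range(0, n_t - win + 1, step):
        corr = np.corrcoef(ts[start:start + win].T)   # ROI-by-ROI correlation
        adj = (np.abs(corr) > thresh).astype(float)
        np.fill_diagonal(adj, 0)                      # no self-loops
        graphs.append(adj)
    return np.stack(graphs)

# Hypothetical data: 200 TRs x 264 ROIs (Power et al. 2011 atlas size).
ts = np.random.default_rng(3).standard_normal((200, 264))
print(dfc_graphs(ts).shape)   # (n_windows, 264, 264)
```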

Accelerating Diffusion: Task-Optimized latent diffusion models for rapid CT denoising.

Jee J, Chang W, Kim E, Lee K

PubMed · Jun 12 2025
Computed tomography (CT) systems are indispensable for diagnostics but pose risks due to radiation exposure. Low-dose CT (LDCT) mitigates these risks but introduces noise and artifacts that compromise diagnostic accuracy. While deep learning methods, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have been applied to LDCT denoising, challenges persist, including difficulties in preserving fine details and risks of model collapse. Recently, the Denoising Diffusion Probabilistic Model (DDPM) has addressed the limitations of traditional methods and demonstrated exceptional performance across various tasks. Despite these advancements, its high computational cost during training and extended sampling time significantly hinder practical clinical applications. Additionally, DDPM's reliance on random Gaussian noise can reduce optimization efficiency and performance in task-specific applications. To overcome these challenges, this study proposes a novel LDCT denoising framework that integrates the Latent Diffusion Model (LDM) with the Cold Diffusion Process. LDM reduces computational costs by conducting the diffusion process in a low-dimensional latent space while preserving critical image features. The Cold Diffusion Process replaces Gaussian noise with a CT denoising task-specific degradation approach, enabling efficient denoising with fewer time steps. Experimental results demonstrate that the proposed method outperforms DDPM in key metrics, including PSNR, SSIM, and RMSE, while achieving up to 2× faster training and 14× faster sampling. These advancements highlight the proposed framework's potential as an effective and practical solution for real-world clinical applications.
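The cold-diffusion idea is to replace Gaussian noising with a deterministic, task-specific degradation and invert it with a restoration network. A toy image-space sketch under the assumption that degradation linearly blends a clean scan toward its low-dose counterpart (the paper additionally works in an autoencoder's latent space, which is omitted here):

```python
import torch

def degrade(x_clean, x_ldct, t, T=50):
    """Cold-diffusion-style deterministic degradation: blend the clean
    scan toward its low-dose counterpart as t goes from 0 to T."""
    alpha = t / T
    return (1 - alpha) * x_clean + alpha * x_ldct

def cold_sample(restore, x_ldct, T=50):
    """Naive cold-diffusion sampling: repeatedly predict the clean image
    and re-degrade one level back toward t = 0."""
    x = x_ldct
    for t in range(T, 0, -1):
        x0_hat = restore(x)                 # network's clean-image estimate
        x = degrade(x0_hat, x_ldct, t - 1)  # re-degrade to level t - 1
    return x

restore = torch.nn.Identity()        # stand-in for the trained denoiser
x_ldct = torch.randn(1, 1, 64, 64)   # toy low-dose patch
print(cold_sample(restore, x_ldct).shape)
```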

A machine learning approach for personalized breast radiation dosimetry in CT: Integrating radiomics and deep neural networks.

Tzanis E, Stratakis J, Damilakis J

PubMed · Jun 11 2025
To develop a machine learning-based workflow for patient-specific breast radiation dosimetry in CT. Two hundred eighty-six chest CT examinations, with corresponding right and left breast contours, were retrospectively collected from the radiotherapy department at our institution to develop and validate breast segmentation U-Nets. Additionally, Monte Carlo simulations were performed for each CT scan to determine radiation doses to the breasts. The derived breast doses, along with predictors such as X-ray tube current and radiomic features, were then used to train deep neural networks (DNNs) for breast dose prediction. The breast segmentation models achieved a mean Dice similarity coefficient of 0.92, with precision and sensitivity scores above 0.90 for both breasts, indicating high segmentation accuracy. The DNNs demonstrated close alignment with ground truth values, with mean predicted doses of 5.05 ± 0.50 mGy for the right breast and 5.06 ± 0.55 mGy for the left breast, compared to ground truth values of 5.03 ± 0.57 mGy and 5.02 ± 0.61 mGy, respectively. The mean absolute percentage errors were 4.01 % (range: 3.90 %-4.12 %) for the right breast and 4.82 % (range: 4.56 %-5.11 %) for the left breast. The mean inference time was 30.2 ± 4.3 s. Statistical analysis showed no significant differences between predicted and actual doses (p ≥ 0.07). This study presents an automated, machine learning-based workflow for breast radiation dosimetry in CT, integrating segmentation and dose prediction models. The models and code are available at: https://github.com/eltzanis/ML-based-Breast-Radiation-Dosimetry-in-CT.
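The reported error metric is the mean absolute percentage error between DNN predictions and the Monte Carlo reference doses; a one-liner with hypothetical dose values:

```python
import numpy as np

def mape(pred, truth):
    """Mean absolute percentage error between predicted and
    reference breast doses (in mGy)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return 100.0 * np.mean(np.abs(pred - truth) / truth)

pred = np.array([5.05, 4.90, 5.20])   # hypothetical predicted doses
truth = np.array([5.03, 5.10, 5.00])  # hypothetical Monte Carlo doses
print(f"MAPE = {mape(pred, truth):.2f}%")
```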