Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed · Aug 1, 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features like muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model to synthetic MR images created with an in-house trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial 3D nnU-Net V2, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female; median age 56, interquartile range 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and 2D and 3D nnU-Net V2 models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff distance metrics, including 95% confidence intervals for the cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). Furthermore, the 2D body part ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice of the ensemble models across body parts (2D = 0.971, 3D = 0.969, P = ns) and body regions (2D = 0.935, 3D = 0.955, P < 0.001) indicates stable performance across all classes. The presented approach enables efficient, automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
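
The abstract does not restate how the per-class confidence intervals were derived; a common recipe is per-patient Dice plus a percentile bootstrap. A minimal sketch under that assumption (NumPy; array names are hypothetical):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Sørensen-Dice coefficient for two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def mean_dice_with_ci(per_case_scores, n_boot=10_000, alpha=0.05, seed=0):
    """Mean Dice over patients with a percentile-bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_case_scores, dtype=float)
    boots = [rng.choice(scores, size=scores.size, replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (lo, hi)
```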

Deep Learning-Based Signal Amplification of T1-Weighted Single-Dose Images Improves Metastasis Detection in Brain MRI.

Haase R, Pinetz T, Kobler E, Bendella Z, Zülow S, Schievelkamp AH, Schmeel FC, Panahabadi S, Stylianou AM, Paech D, Foltyn-Dumitru M, Wagner V, Schlamp K, Heussel G, Holtkamp M, Heussel CP, Vahlensieck M, Luetkens JA, Schlemmer HP, Haubold J, Radbruch A, Effland A, Deuschl C, Deike K

PubMed · Aug 1, 2025
Double-dose contrast-enhanced brain imaging improves tumor delineation and detection of occult metastases but is limited by concerns about the effects of gadolinium-based contrast agents on patients and the environment. The purpose of this study was to test the benefit of deep learning-based contrast signal amplification in true single-dose T1-weighted (T-SD) images, creating artificial double-dose (A-DD) images, for metastasis detection in brain magnetic resonance imaging. In this prospective, multicenter study, a deep learning-based method originally trained on noncontrast, low-dose, and T-SD brain images was applied to T-SD images of 30 participants (mean age ± SD, 58.5 ± 11.8 years; 23 women) acquired externally between November 2022 and June 2023. Four readers with different levels of experience independently reviewed T-SD and A-DD images for metastases, with 4 weeks between readings. A reference reader reviewed additionally acquired true double-dose images to determine any metastases present. Performances were compared using mid-p McNemar tests for sensitivity and Wilcoxon signed rank tests for false-positive findings. All readers found more metastases using A-DD images. The 2 experienced neuroradiologists achieved the same level of sensitivity using T-SD images (62 of 91 metastases, 68.1%). While the increase in sensitivity using A-DD images was only descriptive for 1 of them (A-DD: 65 of 91 metastases, +3.3%, P = 0.424), the second neuroradiologist benefited significantly, with a sensitivity increase of 12.1% (73 of 91 metastases, P = 0.008). The 2 less experienced readers (1 resident and 1 fellow) both found significantly more metastases on A-DD images (resident, T-SD: 61.5%, A-DD: 68.1%, P = 0.039; fellow, T-SD: 58.2%, A-DD: 70.3%, P = 0.008). They were therefore able to use A-DD images to raise their sensitivity to the neuroradiologists' initial level on regular T-SD images. False-positive findings did not differ significantly between sequences, although readers showed descriptively more false-positive findings on A-DD images. The benefit in sensitivity applied particularly to metastases ≤5 mm (5.7%-17.3% increase in sensitivity). A-DD images can improve the detectability of brain metastases without a significant loss of precision and could therefore represent a potentially valuable addition to regular single-dose brain imaging.
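
The sensitivity comparisons rest on the mid-p McNemar test over paired reads. A minimal sketch of that test (SciPy; the discordant-pair counts b and c would be tabulated from the two reading sessions):

```python
from scipy.stats import binom

def midp_mcnemar(b: int, c: int) -> float:
    """Two-sided mid-p McNemar test for paired binary outcomes.
    b: lesions detected only on A-DD; c: lesions detected only on T-SD."""
    n = b + c
    k = min(b, c)
    p_exact = 2.0 * binom.cdf(k, n, 0.5)    # exact conditional binomial test
    p_mid = p_exact - binom.pmf(k, n, 0.5)  # mid-p: subtract the point probability
    return min(p_mid, 1.0)

# e.g., 8 metastases found only on A-DD and 2 only on T-SD:
# p = midp_mcnemar(8, 2)
```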

First comparison between artificial intelligence-guided coronary computed tomography angiography versus single-photon emission computed tomography testing for ischemia in clinical practice.

Cho GW, Sayed S, D'Costa Z, Karlsberg DW, Karlsberg RP

PubMed · Aug 1, 2025
Noninvasive cardiac testing with coronary computed tomography angiography (CCTA) and single-photon emission computed tomography (SPECT) is becoming an alternative to invasive angiography for the evaluation of obstructive coronary artery disease. We aimed to evaluate whether a novel artificial intelligence (AI)-assisted CCTA program is comparable to SPECT imaging for ischemic testing. CCTA images were analyzed using an AI convolutional neural network machine-learning-based model, atherosclerosis imaging-quantitative computed tomography (AI-QCT) ISCHEMIA. A total of 183 patients (75 females and 108 males, with an average age of 60.8 ± 12.3 years) were selected. All patients underwent AI-QCT ISCHEMIA-augmented CCTA, with 60 undergoing concurrent SPECT and 16 having invasive coronary angiograms. Eight studies were excluded from analysis due to incomplete data or coronary anomalies. A total of 175 patients (95%) had CCTA scans deemed acceptable for AI-QCT ISCHEMIA interpretation. Compared to invasive angiography, AI-QCT ISCHEMIA-driven CCTA showed a sensitivity of 75% and specificity of 70% for predicting coronary ischemia, versus 70% and 53%, respectively, for SPECT. The negative predictive value was high for female patients when using AI-QCT ISCHEMIA compared to SPECT (91% vs. 68%, P = 0.042). Areas under the receiver operating characteristic curves were similar between both modalities (0.81 for AI-CCTA, 0.75 for SPECT, P = 0.526). When comparing both modalities, the correlation coefficient was r = 0.71 (P < 0.04). AI-powered CCTA is a viable alternative to SPECT for detecting myocardial ischemia in patients with low- to intermediate-risk coronary artery disease, with significantly correlated results across the two modalities. For patients who underwent confirmatory invasive angiography, the results of AI-CCTA and SPECT imaging were comparable. Future research focusing on prospective studies involving larger and more diverse patient populations is warranted to further investigate the benefits offered by AI-driven CCTA.
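
The headline numbers here are standard confusion-matrix statistics against the invasive-angiography reference. A minimal sketch of how they fall out of paired binary calls (scikit-learn; the example arrays are hypothetical):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def diagnostic_metrics(y_ref, y_test):
    """Sensitivity, specificity, PPV, and NPV of binary ischemia calls
    (1 = ischemia) against a reference standard such as invasive angiography."""
    tn, fp, fn, tp = confusion_matrix(y_ref, y_test, labels=[0, 1]).ravel()
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Hypothetical paired reads for six patients:
y_angio = np.array([1, 1, 0, 0, 1, 0])
y_ccta  = np.array([1, 0, 0, 1, 1, 0])
print(diagnostic_metrics(y_angio, y_ccta))
```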

Deep Learning Reconstruction Combined With Conventional Acceleration Improves Image Quality of 3 T Brain MRI and Does Not Impact Quantitative Diffusion Metrics.

Wilpert C, Russe MF, Weiss J, Voss C, Rau S, Strecker R, Reisert M, Bedin R, Urbach H, Zaitsev M, Bamberg F, Rau A

PubMed · Aug 1, 2025
Deep learning reconstruction of magnetic resonance imaging (MRI) makes it possible either to improve the image quality of accelerated sequences or to generate high-resolution data. We evaluated the interaction of conventional acceleration and Deep Resolve Boost (DRB)-based reconstruction of a single-shot echo-planar imaging (ssEPI) diffusion-weighted imaging (DWI) sequence on image quality features in 3 T brain MRI and compared it with a state-of-the-art DWI sequence. In this prospective study, 24 patients received a standard-of-care ssEPI DWI and 5 additional adapted ssEPI DWI sequences, 3 of them with DRB reconstruction. Qualitative analysis encompassed rating of image quality, noise, sharpness, and artifacts. Quantitative analysis compared apparent diffusion coefficient (ADC) values region-wise between the different DWI sequences. Intraclass correlations, paired-samples t tests, Wilcoxon signed rank tests, and weighted Cohen κ were used. Compared with the reference standard, the acquisition time of the accelerated DWI was reduced by up to 50% (from 75 to 39 seconds; P < 0.001). All tested DRB-reconstructed sequences showed significantly improved image quality and sharpness and reduced noise (P < 0.001). The highest image quality was observed for the combination of conventional acceleration and DL reconstruction. In singular slices, more artifacts were observed for DRB-reconstructed sequences (P < 0.001). While ADC values were in general highly consistent, increasing differences in ADC values were noted with increasing acceleration and application of DRB. Falsely pathological ADCs were rarely observed near the frontal poles and optic chiasm, attributable to susceptibility-related artifacts from the adjacent sinuses. In this comparative study, we found that the combination of conventional acceleration and DRB reconstruction improves image quality and enables faster acquisition of ssEPI DWI. Nevertheless, a tradeoff between increased acceleration, with the risk of stronger artifacts, and high resolution, with longer acquisition time, needs to be considered, especially for application in cerebral MRI.
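
The ADC values being compared come from the standard mono-exponential diffusion model, S_b = S_0 * exp(-b * ADC); for a two-b-value acquisition the map has a closed form. A minimal sketch (NumPy; the b-value of 1000 s/mm2 is a common brain-DWI choice and an assumption, not stated in the abstract):

```python
import numpy as np

def adc_two_point(s0: np.ndarray, sb: np.ndarray, b: float = 1000.0) -> np.ndarray:
    """ADC map from b=0 and b=b images under S_b = S_0 * exp(-b * ADC),
    i.e. ADC = ln(S_0 / S_b) / b, in mm^2/s."""
    eps = 1e-6  # guard against zero signal before the log
    return np.log(np.clip(s0, eps, None) / np.clip(sb, eps, None)) / b
```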

Multimodal multiphasic preoperative image-based deep-learning predicts HCC outcomes after curative surgery.

Hui RW, Chiu KW, Lee IC, Wang C, Cheng HM, Lu J, Mao X, Yu S, Lam LK, Mak LY, Cheung TT, Chia NH, Cheung CC, Kan WK, Wong TC, Chan AC, Huang YH, Yuen MF, Yu PL, Seto WK

PubMed · Aug 1, 2025
HCC recurrence frequently occurs after curative surgery. Histological microvascular invasion (MVI) predicts recurrence but cannot provide preoperative prognostication, whereas clinical prediction scores have variable performance. Recurr-NET, a multimodal multiphasic residual-network random survival forest deep-learning model incorporating preoperative CT and clinical parameters, was developed to predict HCC recurrence. Preoperative triphasic CT scans were retrieved from patients with resected histology-confirmed HCC from 4 centers in Hong Kong (internal cohort). The internal cohort was randomly divided in an 8:2 ratio into training and internal validation sets. External testing was performed in an independent cohort from Taiwan. Among 1231 patients (age 62.4 years, 83.1% male, 86.8% viral hepatitis, and median follow-up 65.1 months), cumulative HCC recurrence rates at years 2 and 5 were 41.8% and 56.4%, respectively. Recurr-NET achieved excellent accuracy in predicting recurrence from years 1 to 5 (internal cohort AUROC 0.770-0.857; external AUROC 0.758-0.798), significantly outperforming MVI (internal AUROC 0.518-0.590; external AUROC 0.557-0.615) and multiple clinical risk scores (ERASL-PRE, ERASL-POST, DFT, and Shim scores; internal AUROC 0.523-0.587, external AUROC 0.524-0.620) (all p < 0.001). Recurr-NET was superior to MVI in stratifying recurrence risks at year 2 (internal: 72.5% vs. 50.0% for MVI; external: 65.3% vs. 46.6%) and year 5 (internal: 86.4% vs. 62.5%; external: 81.4% vs. 63.8%) (all p < 0.001). Recurr-NET was also superior to MVI in stratifying liver-related and all-cause mortality (all p < 0.001). The performance of Recurr-NET remained robust in subgroup analyses. Recurr-NET accurately predicted HCC recurrence, outperforming MVI and clinical prediction scores, highlighting its potential in preoperative prognostication.
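
An illustrative re-creation of the general recipe the abstract names (residual network image embeddings fused with clinical covariates, feeding a random survival forest); this is our sketch under stated assumptions, not the authors' code, and all names are hypothetical:

```python
import numpy as np
import torch
import torchvision

# A residual network embeds each contrast phase; phase embeddings are
# concatenated with clinical covariates for a downstream survival model.
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()  # expose 512-d penultimate features
backbone.eval()

@torch.no_grad()
def embed_phase(slices: torch.Tensor) -> np.ndarray:
    """slices: (N, 3, H, W) pseudo-RGB CT slices from one contrast phase."""
    return backbone(slices).mean(dim=0).numpy()  # average over slices -> (512,)

def patient_features(phases: list, clinical: np.ndarray) -> np.ndarray:
    """Concatenate triphasic image embeddings with clinical parameters."""
    return np.concatenate([embed_phase(p) for p in phases] + [clinical])

# Survival head (scikit-survival), fit on stacked patient features X:
#   from sksurv.ensemble import RandomSurvivalForest
#   from sksurv.util import Surv
#   rsf = RandomSurvivalForest(n_estimators=500, random_state=0)
#   rsf.fit(X, Surv.from_arrays(event=recurred, time=months_to_event))
```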

Reference charts for first-trimester placental volume derived using OxNNet.

Mathewlynn S, Starck LN, Yin Y, Soltaninejad M, Swinburne M, Nicolaides KH, Syngelaki A, Contreras AG, Bigiotti S, Woess EM, Gerry S, Collins S

PubMed · Aug 1, 2025
To establish a comprehensive reference range for OxNNet-derived first-trimester placental volume (FTPV), based on values observed in healthy pregnancies. Data were obtained from the First Trimester Placental Ultrasound Study, an observational cohort study in which three-dimensional placental ultrasound imaging was performed between 11 + 2 and 14 + 1 weeks' gestation, alongside otherwise routine care. A subgroup of singleton pregnancies resulting in term live birth, without neonatal unit admission or major chromosomal or structural abnormality, was included. Exclusion criteria were fetal growth restriction, maternal diabetes mellitus, hypertensive disorders of pregnancy, and other maternal medical conditions (e.g. chronic hypertension, antiphospholipid syndrome, systemic lupus erythematosus). Placental images were processed using the OxNNet toolkit, a software solution based on a fully convolutional neural network, for automated placental segmentation and volume calculation. Quantile regression and the lambda-mu-sigma (LMS) method were applied to model the distribution of FTPV, using both crown-rump length (CRL) and gestational age as predictors. Model fit was assessed using the Akaike information criterion (AIC), and centile curves were constructed for visual inspection. The cohort comprised 2547 cases. The distribution of FTPV across gestational ages was positively skewed, with variation in the distribution at different gestational timepoints. In model comparisons, the LMS method yielded lower AIC values than the quantile regression models. For predicting FTPV from CRL, the LMS model with the Sinh-Arcsinh distribution achieved the lowest AIC value; for gestational-age-based prediction, the LMS model with the Box-Cox Cole and Green original distribution achieved the lowest AIC value. The LMS models were therefore selected to construct centile charts for FTPV based on both CRL and gestational age. Evaluation of the centile charts revealed strong agreement between predicted and observed centiles, with minimal deviations. Both models demonstrated excellent calibration, and the Z-scores derived using each model were confirmed to be normally distributed. This study established reference ranges for FTPV based on both CRL and gestational age in healthy pregnancies. The LMS method provided the best model fit, demonstrating excellent calibration and minimal deviations between predicted and observed centiles. These findings should facilitate the exploration of FTPV as a potential biomarker for adverse pregnancy outcome and provide a foundation for future research into its clinical applications. © 2025 The Author(s). Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of the International Society of Ultrasound in Obstetrics and Gynecology.
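
For readers unfamiliar with LMS centiles: once L (skewness), M (median), and S (coefficient of variation) are fitted at a given CRL or gestational age, a measured volume converts to a z-score via the Box-Cox transform of the Cole and Green formulation. A minimal sketch (SciPy; the Sinh-Arcsinh variant used for the CRL model adds extra shape parameters and is omitted here):

```python
import numpy as np
from scipy.stats import norm

def lms_zscore(y: float, L: float, M: float, S: float) -> float:
    """Box-Cox LMS z-score: z = ((y/M)**L - 1) / (L*S),
    with the limit ln(y/M)/S as L -> 0."""
    if abs(L) < 1e-8:
        return np.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

def lms_centile(y: float, L: float, M: float, S: float) -> float:
    """Centile (0-100) of measurement y under fitted LMS parameters."""
    return 100.0 * norm.cdf(lms_zscore(y, L, M, S))
```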

M4CXR: Exploring Multitask Potentials of Multimodal Large Language Models for Chest X-Ray Interpretation.

Park J, Kim S, Yoon B, Hyun J, Choi K

PubMed · Aug 1, 2025
The rapid evolution of artificial intelligence, especially in large language models (LLMs), has significantly impacted various domains, including healthcare. In chest X-ray (CXR) analysis, previous studies have employed LLMs, but with limitations: they either underutilize the LLMs' capability for multitask learning or lack clinical accuracy. This article presents M4CXR, a multimodal LLM designed to enhance CXR interpretation. The model is trained on a visual instruction-following dataset that integrates various task-specific datasets in a conversational format. As a result, the model supports multiple tasks such as medical report generation (MRG), visual grounding, and visual question answering (VQA). M4CXR achieves state-of-the-art clinical accuracy in MRG by employing a chain-of-thought (CoT) prompting strategy, in which it first identifies findings in CXR images and subsequently generates the corresponding report. The model adapts to various MRG scenarios depending on the available inputs, such as single-image, multi-image, and multi-study contexts. In addition to MRG, M4CXR performs visual grounding at a level comparable to specialized models and demonstrates outstanding performance in VQA. Both quantitative and qualitative assessments reveal M4CXR's versatility in MRG, visual grounding, and VQA, while consistently maintaining clinical accuracy.
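
To make the CoT strategy concrete, here is a hypothetical two-turn prompt in the find-then-report style the abstract describes; the message schema and wording are illustrative assumptions, not M4CXR's actual interface:

```python
# Hypothetical chain-of-thought conversation for report generation:
# turn 1 elicits findings, turn 2 conditions the report on them.
cot_turns = [
    {"role": "user", "content": [
        {"type": "image", "path": "cxr_0001.png"},  # placeholder image
        {"type": "text",
         "text": "Step 1: List every abnormal finding visible on this chest X-ray."},
    ]},
    # ...the model's findings are appended here as an assistant turn...
    {"role": "user", "content": [
        {"type": "text",
         "text": "Step 2: Based on the findings you listed, write the Findings "
                 "and Impression sections of the radiology report."},
    ]},
]
```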

Weakly Supervised Intracranial Aneurysm Detection and Segmentation in MR angiography via Multi-task UNet with Vesselness Prior

Erin Rainville, Amirhossein Rasoulian, Hassan Rivaz, Yiming Xiao

arXiv preprint · Aug 1, 2025
Intracranial aneurysms (IAs) are abnormal dilations of cerebral blood vessels that, if ruptured, can lead to life-threatening consequences. However, their small size and soft contrast in radiological scans often make accurate and efficient detection and morphological analysis difficult, both of which are critical in the clinical care of the disorder. Furthermore, the lack of large public datasets with voxel-wise expert annotations poses challenges for developing deep learning algorithms to address these issues. We therefore propose a novel weakly supervised 3D multi-task UNet that integrates vesselness priors to jointly perform aneurysm detection and segmentation in time-of-flight MR angiography (TOF-MRA). Specifically, to robustly guide IA detection and segmentation, we employ the popular Frangi vesselness filter to derive soft cerebrovascular priors, used both as network input and in an attention block; segmentation is performed from the decoder and detection from an auxiliary branch. We train our model on the Lausanne dataset with coarse ground-truth segmentations and evaluate it on the test set with refined labels from the same database. To further assess our model's generalizability, we also validate it externally on the ADAM dataset. Our results demonstrate the superior performance of the proposed technique over SOTA techniques for aneurysm segmentation (Dice = 0.614, 95% HD = 1.38 mm) and detection (false-positive rate = 1.47, sensitivity = 92.9%).
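
The Frangi vesselness prior is available off the shelf. A minimal sketch of deriving a soft prior from a TOF-MRA volume and stacking it as an extra input channel (scikit-image; the sigma range is an assumption, not taken from the paper):

```python
import numpy as np
from skimage.filters import frangi

def vesselness_prior(tof_mra: np.ndarray) -> np.ndarray:
    """Soft cerebrovascular prior via Frangi's multiscale vesselness filter.
    TOF-MRA vessels are bright, hence black_ridges=False."""
    v = frangi(tof_mra, sigmas=range(1, 5), black_ridges=False)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)  # rescale to [0, 1]

# Stack image and prior as a 2-channel network input:
# net_input = np.stack([tof_mra, vesselness_prior(tof_mra)], axis=0)
```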

FOCUS-DWI improves prostate cancer detection through deep learning reconstruction with IQMR technology.

Zhao Y, Xie XL, Zhu X, Huang WN, Zhou CW, Ren KX, Zhai RY, Wang W, Wang JW

PubMed · Aug 1, 2025
This study explored the effects of Intelligent Quick Magnetic Resonance (IQMR) image post-processing on image quality in Field of View Optimized and Constrained Single-Shot Diffusion-Weighted Imaging (FOCUS-DWI) sequences for prostate cancer detection, and assessed its efficacy in distinguishing malignant from benign lesions. Clinical data and MRI images from 62 patients with prostate masses (31 benign and 31 malignant) were retrospectively analyzed. Axial T2-weighted imaging with fat saturation (T2WI-FS) and FOCUS-DWI sequences were acquired, and the FOCUS-DWI images were processed using the IQMR post-processing system to generate IQMR-FOCUS-DWI images. Two independent radiologists performed subjective scoring, Prostate Imaging Reporting and Data System (PI-RADS) grading, diagnosis of benign versus malignant lesions, and diagnostic confidence scoring for the FOCUS-DWI and IQMR-FOCUS-DWI images. Additionally, quantitative analyses of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were conducted using T2WI-FS as the reference standard. The apparent diffusion coefficients (ADCs) of malignant and benign lesions were compared between the two imaging sequences. Spearman correlation coefficients were calculated to evaluate the associations between diagnostic confidence scores and diagnostic accuracy rates for the two sequences, as well as between the ADC values of malignant lesions and Gleason grade in the two sequences. Receiver operating characteristic (ROC) curves were used to assess the efficacy of ADC in distinguishing lesions. The qualitative analysis revealed that IQMR-FOCUS-DWI images showed significantly better noise suppression, reduced geometric distortion, and enhanced overall quality relative to the FOCUS-DWI images (P < 0.001). There was no significant difference in PI-RADS scores between IQMR-FOCUS-DWI and FOCUS-DWI images (P = 0.0875), while the diagnostic confidence scores of the IQMR-FOCUS-DWI sequences were markedly higher than those of the FOCUS-DWI sequences (P = 0.0002). The diagnoses of benign and malignant prostate lesions from the FOCUS-DWI sequences were consistent with the pathological results (P < 0.05), as were those from the IQMR-FOCUS-DWI sequences (P < 0.05). The quantitative analysis indicated that PSNR, SSIM, and ADC values were markedly greater in IQMR-FOCUS-DWI images than in FOCUS-DWI images (P < 0.01). In both imaging sequences, benign lesions exhibited ADC values markedly greater than those of malignant lesions (P < 0.001). The diagnostic confidence scores of both sequences were significantly positively correlated with diagnostic accuracy. In malignant lesions, the ADC values of the FOCUS-DWI sequences showed moderate negative correlations with Gleason grade, while the ADC values of the IQMR-FOCUS-DWI sequences were strongly negatively associated with Gleason grade. ROC curves indicated superior diagnostic performance of IQMR-FOCUS-DWI (AUC = 0.941) compared to FOCUS-DWI (AUC = 0.832) for differentiating prostate lesions (P = 0.0487). IQMR-FOCUS-DWI significantly enhances image quality and improves diagnostic accuracy for benign and malignant prostate lesions compared to conventional FOCUS-DWI.
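
The quantitative image-quality comparison relies on two standard reference-based metrics. A minimal sketch of computing both against the T2WI-FS reference (scikit-image; assumes co-registered, equally sized grayscale arrays):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_vs_reference(test_img, ref_img):
    """PSNR (dB) and SSIM of a DWI image against the T2WI-FS reference."""
    data_range = float(ref_img.max() - ref_img.min())
    psnr = peak_signal_noise_ratio(ref_img, test_img, data_range=data_range)
    ssim = structural_similarity(ref_img, test_img, data_range=data_range)
    return psnr, ssim
```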

LesiOnTime -- Joint Temporal and Clinical Modeling for Small Breast Lesion Segmentation in Longitudinal DCE-MRI

Mohammed Kamran, Maria Bernathova, Raoul Varga, Christian Singer, Zsuzsanna Bago-Horvath, Thomas Helbich, Georg Langs, Philipp Seeböck

arXiv preprint · Aug 1, 2025
Accurate segmentation of small lesions in breast dynamic contrast-enhanced MRI (DCE-MRI) is critical for early cancer detection, especially in high-risk patients. While recent deep learning methods have advanced lesion segmentation, they primarily target large lesions and neglect the valuable longitudinal and clinical information routinely used by radiologists. In real-world screening, detecting subtle or emerging lesions requires radiologists to compare across timepoints and consider previous radiology assessments, such as the BI-RADS score. We propose LesiOnTime, a novel 3D segmentation approach that mimics clinical diagnostic workflows by jointly leveraging longitudinal imaging and BI-RADS scores. The key components are: (1) a Temporal Prior Attention (TPA) block that dynamically integrates information from previous and current scans; and (2) a BI-RADS Consistency Regularization (BCR) loss that enforces latent-space alignment for scans with similar radiological assessments, thus embedding domain knowledge into the training process. Evaluated on a curated in-house longitudinal dataset of high-risk patients with DCE-MRI, our approach outperforms state-of-the-art single-timepoint and longitudinal baselines by 5% in terms of Dice. Ablation studies demonstrate that both TPA and BCR contribute complementary performance gains. These results highlight the importance of incorporating temporal and clinical context for reliable early lesion segmentation in real-world breast cancer screening. Our code is publicly available at https://github.com/cirmuw/LesiOnTime
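
The authors' code is linked above; for intuition, here is an illustrative latent-alignment loss in the spirit of BCR, pulling together embeddings of scans that share a BI-RADS score (PyTorch; the exact formulation is our assumption, not the paper's):

```python
import torch
import torch.nn.functional as F

def bcr_style_loss(latents: torch.Tensor, birads: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between L2-normalized embeddings of scan pairs
    with identical BI-RADS scores; self-pairs are excluded.
    latents: (B, D) scan embeddings; birads: (B,) integer scores."""
    z = F.normalize(latents, dim=1)
    same = (birads.unsqueeze(0) == birads.unsqueeze(1)).float()
    same.fill_diagonal_(0)               # drop self-pairs
    dist2 = torch.cdist(z, z, p=2) ** 2  # pairwise squared distances
    return (dist2 * same).sum() / same.sum().clamp(min=1.0)
```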