Page 118 of 220 · 2194 results

Current trends in glioma tumor segmentation: A survey of deep learning modules.

Shoushtari FK, Elahi R, Valizadeh G, Moodi F, Salari HM, Rad HS

PubMed · Jun 2, 2025
Multiparametric Magnetic Resonance Imaging (mpMRI) is the gold standard for diagnosing brain tumors, especially gliomas, which are difficult to segment due to their heterogeneity and varied sub-regions. While manual segmentation is time-consuming and error-prone, Deep Learning (DL) automates the process with greater accuracy and speed. We conducted ablation studies on surveyed articles to evaluate the impact of "add-on" modules, which address challenges like spatial information loss, class imbalance, and overfitting, on glioma segmentation performance. Advanced modules, such as atrous (dilated) convolutions, inception, attention, transformer, and hybrid modules, significantly enhance segmentation accuracy, efficiency, multiscale feature extraction, and boundary delineation, while lightweight modules reduce computational complexity. Experiments on the Brain Tumor Segmentation (BraTS) dataset (comprising low- and high-grade gliomas) confirm their robustness, with top-performing models achieving high Dice scores for tumor sub-regions. This survey underscores the need for optimal module selection and placement to balance speed, accuracy, and interpretability in glioma segmentation. Future work should focus on improving model interpretability, lowering computational costs, and boosting generalizability. Tools like NeuroQuant® and Raidionics demonstrate potential for clinical translation. Further refinement could enable regulatory approval, advancing precision in brain tumor diagnosis and treatment planning.
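The Dice score used to rank these segmentation models compares a predicted mask against the reference segmentation. A minimal NumPy sketch (the function name and toy masks are ours, not from the survey):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A ∩ B| / (|A| + |B|); eps guards the empty-mask case
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: one overlapping voxel out of 2 + 1 labeled voxels
score = dice([[1, 1], [0, 0]], [[1, 0], [0, 0]])  # 2*1 / (2+1) ≈ 0.667
```

In BraTS practice this is computed per sub-region (whole tumor, tumor core, enhancing tumor) and averaged across cases.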

Direct parametric reconstruction in dynamic PET using deep image prior and a novel parameter magnification strategy.

Hong X, Wang F, Sun H, Arabi H, Lu L

PubMed · Jun 2, 2025
Multiple parametric imaging in positron emission tomography (PET) is challenging due to the noisy dynamic data and the complex mapping to kinetic parameters. Although methods like direct parametric reconstruction have been proposed to improve image quality, limitations persist, particularly for nonlinear and small-value micro-parameters (e.g., k<sub>2</sub>, k<sub>3</sub>). This study presents a novel unsupervised deep learning approach to reconstruct and improve the quality of these micro-parameters. We proposed a direct parametric image reconstruction model, DIP-PM, integrating deep image prior (DIP) with a parameter magnification (PM) strategy. The model employs a U-Net generator to predict multiple parametric images using a CT image prior, with each output channel subsequently magnified by a factor to adjust its intensity. The model was optimized with a log-likelihood loss computed between the measured projection data and the forward-projected data. Two tracer datasets were simulated for evaluation: <sup>82</sup>Rb data using the 1-tissue compartment (1 TC) model and <sup>18</sup>F-FDG data using the 2-tissue compartment (2 TC) model, with 10-fold magnification applied to the 1 TC k<sub>2</sub> and the 2 TC k<sub>3</sub>, respectively. DIP-PM was compared with the indirect method, the direct algorithm (OTEM), and the DIP method without parameter magnification (DIP-only). Performance was assessed on phantom data using peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), and the structural similarity index (SSIM), as well as on a real <sup>18</sup>F-FDG scan from a male subject. For the 1 TC model, OTEM performed well in K<sub>1</sub> reconstruction, but both the indirect and OTEM methods showed high noise and poor performance in k<sub>2</sub>. The DIP-only method suppressed noise in k<sub>2</sub> but failed to reconstruct fine structures in the myocardium.
DIP-PM outperformed the other methods with well-preserved detailed structures, particularly in k<sub>2</sub>, achieving the best metrics (PSNR: 19.00, NRMSE: 0.3002, SSIM: 0.9289). For the 2 TC model, traditional methods exhibited high noise and blurred structures in estimating all nonlinear parameters (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>), while DIP-based methods significantly improved image quality. DIP-PM outperformed all methods in k<sub>3</sub> (PSNR: 21.89, NRMSE: 0.4054, SSIM: 0.8797), and consequently produced the most accurate 2 TC K<sub>i</sub> images (PSNR: 22.74, NRMSE: 0.4897, SSIM: 0.8391). On real FDG data, DIP-PM also showed evident advantages in estimating K<sub>1</sub>, k<sub>2</sub>, and k<sub>3</sub> while preserving myocardial structures. The results underscore the efficacy of DIP-based direct parametric imaging in generating and improving the quality of PET parametric images. This study suggests that the proposed DIP-PM method with the parameter magnification strategy can enhance the fidelity of nonlinear micro-parameter images.
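The parameter magnification strategy can be sketched as predicting magnified channels and dividing them back by per-channel factors, with a Poisson log-likelihood on the projections as the optimization target. The channel layout, factors, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Illustrative per-channel magnification: 10x for a small-valued
# micro-parameter channel (e.g., k2 in the 1 TC model), 1x for K1.
MAG = np.array([1.0, 10.0])

def demagnify(net_output, mag=MAG):
    """Divide each predicted parametric channel (shape: C x H x W) by its
    magnification factor to recover physical kinetic parameters."""
    return net_output / mag[:, None, None]

def neg_poisson_loglik(measured, expected, eps=1e-8):
    """Negative Poisson log-likelihood between measured projection data and
    forward-projected data (constant terms dropped)."""
    expected = np.clip(expected, eps, None)
    return float(np.sum(expected - measured * np.log(expected)))
```

Magnifying the small-valued channels puts their gradients on a scale comparable to K<sub>1</sub>, which is the intuition behind the 10-fold factors reported above.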

Efficiency and Quality of Generative AI-Assisted Radiograph Reporting.

Huang J, Wittbrodt MT, Teague CN, Karl E, Galal G, Thompson M, Chapa A, Chiu ML, Herynk B, Linchangco R, Serhal A, Heller JA, Abboud SF, Etemadi M

PubMed · Jun 2, 2025
Diagnostic imaging interpretation involves distilling multimodal clinical information into text form, a task well-suited to augmentation by generative artificial intelligence (AI). However, to our knowledge, impacts of AI-based draft radiological reporting remain unstudied in clinical settings. To prospectively evaluate the association of radiologist use of a workflow-integrated generative model capable of providing draft radiological reports for plain radiographs across a tertiary health care system with documentation efficiency, the clinical accuracy and textual quality of final radiologist reports, and the model's potential for detecting unexpected, clinically significant pneumothorax. This prospective cohort study was conducted from November 15, 2023, to April 24, 2024, at a tertiary care academic health system. The association between use of the generative model and radiologist documentation efficiency was evaluated for radiographs documented with model assistance compared with a baseline set of radiographs without model use, matched by study type (chest or nonchest). Peer review was performed on model-assisted interpretations. Flagging of pneumothorax requiring intervention was performed on radiographs prospectively. The primary outcomes were association of use of the generative model with radiologist documentation efficiency, assessed by difference in documentation time with and without model use using a linear mixed-effects model; for peer review of model-assisted reports, the difference in Likert-scale ratings using a cumulative-link mixed model; and for flagging pneumothorax requiring intervention, sensitivity and specificity. A total of 23 960 radiographs (11 980 each with and without model use) were used to analyze documentation efficiency. 
Interpretations with model assistance (mean [SE], 159.8 [27.0] seconds) were faster than those in the baseline set without model use (mean [SE], 189.2 [36.2] seconds) (P = .02), representing a 15.5% documentation efficiency increase. Peer review of 800 studies showed no difference in clinical accuracy (χ2 = 0.68; P = .41) or textual quality (χ2 = 3.62; P = .06) between model-assisted and nonmodel interpretations. Moreover, the model flagged studies containing a clinically significant, unexpected pneumothorax with a sensitivity of 72.7% and a specificity of 99.9% among 97 651 studies screened. In this prospective cohort study of clinical use of a generative model for draft radiological reporting, model use was associated with improved radiologist documentation efficiency while maintaining clinical quality, and it demonstrated potential to detect studies containing a pneumothorax requiring immediate intervention. This study suggests the potential for radiologist and generative AI collaboration to improve clinical care delivery.
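The 15.5% efficiency figure follows directly from the reported mean documentation times, and the flagging metrics are ordinary sensitivity calculations; a quick check (function names are ours):

```python
def efficiency_gain_pct(baseline_s, assisted_s):
    """Relative reduction in documentation time, in percent."""
    return 100.0 * (baseline_s - assisted_s) / baseline_s

def sensitivity(tp, fn):
    """Fraction of true positives among all actual positives."""
    return tp / (tp + fn)

gain = efficiency_gain_pct(189.2, 159.8)  # ≈ 15.5 (%)
```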

Referenceless 4D Flow Cardiovascular Magnetic Resonance with deep learning.

Trenti C, Ylipää E, Ebbers T, Carlhäll CJ, Engvall J, Dyverfeldt P

PubMed · Jun 2, 2025
Despite its potential to improve the assessment of cardiovascular diseases, 4D Flow CMR is hampered by long scan times. 4D Flow CMR is conventionally acquired with three motion encodings and one reference encoding, as the 3-dimensional velocity data are obtained by subtracting the phase of the reference from the phase of the motion encodings. In this study, we aim to use deep learning to predict the reference encoding from the three motion encodings for cardiovascular 4D Flow. A U-Net was trained with adversarial learning (U-Net<sub>ADV</sub>) and with a velocity frequency-weighted loss function (U-Net<sub>VEL</sub>) to predict the reference encoding from the three motion encodings obtained with a non-symmetric velocity-encoding scheme. Whole-heart 4D Flow datasets from 126 patients with different types of cardiomyopathies were retrospectively included. The models were trained on 113 patients with 5-fold cross-validation and tested on 13 patients. Flow volumes in the aorta and pulmonary artery, mean and maximum velocity, and total and maximum turbulent kinetic energy at peak systole in the cardiac chambers and main vessels were assessed. For both models, 3-dimensional velocity data reconstructed with the deep learning-predicted reference encoding agreed well with the velocities obtained with the reference encoding acquired at the scanner. U-Net<sub>ADV</sub> performed more consistently throughout the cardiac cycle and across the test subjects, while U-Net<sub>VEL</sub> performed better for systolic velocities. Overall, the largest error across flow volumes and maximum and mean velocities was -6.031% for maximum velocities in the right ventricle for U-Net<sub>ADV</sub>, and -6.92% for mean velocities in the right ventricle for U-Net<sub>VEL</sub>.
For total turbulent kinetic energy, the highest errors were in the left ventricle (-77.17%) for U-Net<sub>ADV</sub> and in the right ventricle (24.96%) for U-Net<sub>VEL</sub>, while for maximum turbulent kinetic energy the highest errors were in the pulmonary artery for both models, with values of -15.5% for U-Net<sub>ADV</sub> and 15.38% for U-Net<sub>VEL</sub>. Deep learning-enabled referenceless 4D Flow CMR permits velocity and flow volume quantification comparable to conventional 4D Flow. Omitting the reference encoding reduces the amount of acquired data by 25%, allowing shorter scan times or improved resolution, which is valuable for utilization in the clinical routine.
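The phase-subtraction step that the networks make referenceless can be sketched as below; the VENC value and the wrap handling are illustrative assumptions, not the study's reconstruction code:

```python
import numpy as np

VENC = 150.0  # velocity-encoding limit in cm/s (assumed for illustration)

def velocity_from_phases(phase_motion, phase_ref, venc=VENC):
    """One velocity component from the phase difference between a motion
    encoding and the (acquired or DL-predicted) reference encoding.
    A phase difference of ±pi maps to ±VENC."""
    # wrap the difference to (-pi, pi] via the complex exponential
    dphi = np.angle(np.exp(1j * (phase_motion - phase_ref)))
    return venc * dphi / np.pi
```

With a predicted reference encoding, the same subtraction runs on three motion encodings instead of four acquisitions, which is where the 25% data reduction comes from.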

Validation of a Dynamic Risk Prediction Model Incorporating Prior Mammograms in a Diverse Population.

Jiang S, Bennett DL, Colditz GA

PubMed · Jun 2, 2025
For breast cancer risk prediction to be clinically useful, it must be accurate and applicable to diverse groups of women across multiple settings. To examine whether a dynamic risk prediction model incorporating prior mammograms, previously validated in Black and White women, could predict future risk of breast cancer across a racially and ethnically diverse population in a population-based screening program. This prognostic study included women aged 40 to 74 years with 1 or more screening mammograms drawn from the British Columbia Breast Screening Program from January 1, 2013, to December 31, 2019, with follow-up via linkage to the British Columbia Cancer Registry through June 2023. This provincial, organized screening program offers screening mammography with full field digital mammography (FFDM) every 2 years. Data were analyzed from May to August 2024. FFDM-based, artificial intelligence-generated mammogram risk score (MRS), including up to 4 years of prior mammograms. The primary outcomes were 5-year risk of breast cancer (measured with the area under the receiver operating characteristic curve [AUROC]) and absolute risk of breast cancer calibrated to the US Surveillance, Epidemiology, and End Results incidence rates. Among 206 929 women (mean [SD] age, 56.1 [9.7] years; of 118 093 with data on race, there were 34 266 East Asian; 1946 Indigenous; 6116 South Asian; and 66 742 White women), there were 4168 pathology-confirmed incident breast cancers diagnosed through June 2023. Mean (SD) follow-up time was 5.3 (3.0) years. Using up to 4 years of prior mammogram images in addition to the most current mammogram, a 5-year AUROC of 0.78 (95% CI, 0.77-0.80) was obtained based on analysis of images alone. Performance was consistent across subgroups defined by race and ethnicity in East Asian (AUROC, 0.77; 95% CI, 0.75-0.79), Indigenous (AUROC, 0.77; 95% CI, 0.71-0.83), and South Asian (AUROC, 0.75; 95% CI, 0.71-0.79) women.
Stratification by age gave a 5-year AUROC of 0.76 (95% CI, 0.74-0.78) for women aged 50 years or younger and 0.80 (95% CI, 0.78-0.82) for women older than 50 years. There were 18 839 participants (9.0%) with a 5-year risk greater than 3%, and the positive predictive value was 4.9% with an incidence of 11.8 per 1000 person-years. A dynamic MRS generated from both current and prior mammograms showed robust performance across diverse racial and ethnic populations in a province-wide screening program starting from age 40 years, reflecting improved accuracy for racially and ethnically diverse populations.
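The AUROC reported throughout can be computed from risk scores and observed outcomes via the rank (Mann-Whitney) formulation; a self-contained sketch on toy data (names ours):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case receives a higher risk score than a randomly chosen
    negative case (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

value = auroc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])  # 3 of 4 pairs ordered correctly
```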

Machine Learning Methods Based on Chest CT for Predicting the Risk of COVID-19-Associated Pulmonary Aspergillosis.

Liu J, Zhang J, Wang H, Fang C, Wei L, Chen J, Li M, Wu S, Zeng Q

PubMed · Jun 1, 2025
To develop and validate a machine learning model based on chest CT and clinical risk factors to predict secondary Aspergillus infection in hospitalized COVID-19 patients. This retrospective study included 291 COVID-19 patients with complete clinical data between December 2022 and March 2024, 82 of whom developed secondary Aspergillus infection after admission. Patients were divided into training (n=162), internal validation (n=69), and external validation (n=60) cohorts. Least absolute shrinkage and selection operator (LASSO) regression was applied to select the most significant image features extracted from chest CT. Univariate and multivariate logistic regression analyses were performed to develop a multifactorial model, which integrated chest CT with clinical risk factors, to predict secondary Aspergillus infection in hospitalized COVID-19 patients. The performance of the constructed models was assessed with the receiver operating characteristic curve and the area under the curve (AUC). The clinical application value of the models was comprehensively evaluated using decision curve analysis (DCA). Eleven radiomics features and seven clinical risk factors were selected to develop the prediction models. The multifactorial model demonstrated favorable predictive performance, with the highest AUC values of 0.98 (95% CI, 0.96-1.00) in the training cohort, 0.98 (95% CI, 0.96-1.00) in the internal validation cohort, and 0.87 (95% CI, 0.75-0.99) in the external validation cohort, significantly superior to the models that relied solely on chest CT or clinical risk factors. Hosmer-Lemeshow tests of the calibration curves showed no significant differences in the training cohort (p=0.359) or the internal validation cohort (p=0.941), suggesting good calibration of the multifactorial model. DCA indicated that the multifactorial model exhibited better performance than the others.
The multifactorial model can serve as a reliable tool for predicting the risk of COVID-19-associated pulmonary aspergillosis.
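A multifactorial model of this kind combines a chest-CT radiomics signature with clinical covariates in a logistic regression; the sketch below uses placeholder weights, not the study's fitted coefficients:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multifactorial_risk(radiomics_score, clinical_factors, w_rad, w_clin, bias):
    """Predicted probability of secondary Aspergillus infection from a
    chest-CT radiomics score plus clinical risk factors (logistic model;
    weights here are illustrative placeholders)."""
    z = bias + w_rad * radiomics_score + float(np.dot(w_clin, clinical_factors))
    return sigmoid(z)

# Seven clinical factors, mirroring the abstract's count; zeros -> baseline risk
p = multifactorial_risk(0.0, np.zeros(7), 1.0, np.zeros(7), 0.0)  # 0.5
```

In the study itself, the radiomics score would come from the eleven LASSO-selected CT features, and the weights from the multivariate logistic fit.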

GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal.

Zhang Y, Liu G, Liu Y, Xie S, Gu J, Huang Z, Ji X, Lyu T, Xi Y, Zhu S, Yang J, Chen Y

PubMed · Jun 1, 2025
In Computed Tomography (CT) imaging, ring artifacts caused by inconsistent detector response can significantly degrade the reconstructed images, negatively impacting subsequent applications. The new generation of CT systems based on photon-counting detectors is affected by ring artifacts even more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model to characterize the ring artifacts. In this context, this study proposes a global dependency-enhanced dual-domain parallel neural network for Ring Artifact Removal (RAR). First, because the features of ring artifacts differ between Cartesian and polar coordinates, a parallel architecture is adopted so that the network can extract and exploit latent features from both domains to improve ring artifact removal. Moreover, ring artifacts are globally correlated in both Cartesian and polar coordinate systems, but convolutional neural networks have inherent shortcomings in modeling long-range dependency. To tackle this problem, this study introduces the Mamba mechanism to achieve a global receptive field without incurring high computational complexity. It enables effective capture of long-range dependency, thereby enhancing model performance in image restoration and artifact reduction. Experiments on simulated data validate the effectiveness of the dual-domain parallel neural network and the Mamba mechanism, and results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
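The dual-domain intuition, that concentric rings in Cartesian coordinates become vertical stripes in polar coordinates, can be illustrated with simple nearest-neighbor resampling (the network itself would use a learned or differentiable variant; this shows only the coordinate idea):

```python
import numpy as np

def to_polar(img, n_r=None, n_theta=360):
    """Resample a square image onto an (r, theta) grid around its center.
    Concentric ring artifacts map to vertical stripes, which are easier
    to separate from anatomy."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_r = n_r or min(h, w) // 2
    r = np.arange(n_r)
    th = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = cy + r[:, None] * np.sin(th)[None, :]
    xs = cx + r[:, None] * np.cos(th)[None, :]
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    return img[yi, xi]  # shape (n_r, n_theta)
```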

Evolution of Cortical Lesions and Function-Specific Cognitive Decline in People With Multiple Sclerosis.

Krijnen EA, Jelgerhuis J, Van Dam M, Bouman PM, Barkhof F, Klawiter EC, Hulst HE, Strijbis EMM, Schoonheim MM

PubMed · Jun 1, 2025
Cortical lesions in multiple sclerosis (MS) severely affect cognition, but their longitudinal evolution and impact on specific cognitive functions remain understudied. This study investigates the evolution of function-specific cognitive functioning over 10 years in people with MS and assesses the influence of cortical lesion load and formation on these trajectories. In this prospectively collected study, people with MS underwent 3T MRI (T1 and fluid-attenuated inversion recovery) at 3 study visits between 2008 and 2022. Cognitive functioning was evaluated based on neuropsychological assessment reflecting 7 cognitive functions: attention; executive functioning (EF); information processing speed (IPS); verbal fluency; and verbal, visuospatial, and working memory. Cortical lesions were manually identified on artificial intelligence-generated double-inversion recovery images. Linear mixed models were constructed to assess the temporal relation between cortical lesion load and function-specific cognitive decline. In addition, analyses were stratified by MS disease stage: early and late relapsing-remitting MS (cutoff disease duration at 15 years) and progressive MS. The study included 223 people with MS (mean age, 47.8 ± 11.1 years; 153 women) and 62 healthy controls. All completed 5-year follow-up, and 37 healthy controls and 94 people with MS completed 10-year follow-up. At baseline, people with MS exhibited worse functioning of IPS and working memory. Over 10 years, cognitive decline was most severe in attention, verbal memory, and EF. At baseline, people with MS had a median cortical lesion count of 7 (range, 0-73), which was related to subsequent decline in attention (B [95% CI] = -0.22 [-0.40 to -0.03]) and verbal fluency (B [95% CI] = -0.23 [-0.37 to -0.09]). Over time, cortical lesion counts increased by a median of 4 (range, -2 to 71), particularly in late and progressive disease, and this increase was related to decline in verbal fluency (B [95% CI] = -0.33 [-0.51 to -0.15]).
The associations between (change in) cortical lesion load and cognitive decline were not modified by MS disease stage. Cognition worsened over 10 years, particularly affecting attention, verbal memory, and EF, while preexisting impairments were worst in other functions such as IPS. Worse baseline cognitive functioning was related to baseline cortical lesions, whereas baseline cortical lesions and cortical lesion formation were related to cognitive decline in functions less affected at baseline. Accumulating cortical damage thus causes cognitive impairment to spread to additional functions.

Information Geometric Approaches for Patient-Specific Test-Time Adaptation of Deep Learning Models for Semantic Segmentation.

Ravishankar H, Paluru N, Sudhakar P, Yalavarthy PK

PubMed · Jun 1, 2025
The test-time adaptation (TTA) of deep-learning-based semantic segmentation models, specific to individual patient data, was addressed in this study. Existing TTA methods in medical imaging are often unconstrained or require anatomical prior information or additional neural networks built during the training phase, making them less practical and prone to performance deterioration. In this study, a novel framework based on information geometric principles was proposed to achieve generic, off-the-shelf, regularized patient-specific adaptation of models at test time. By considering the pre-trained and adapted models as part of statistical neuromanifolds, test-time adaptation was treated as constrained functional regularization using information geometric measures, leading to improved generalization and patient optimality. The efficacy of the proposed approach was shown on three challenging problems: 1) improving the generalization of state-of-the-art models for segmenting COVID-19 anomalies in Computed Tomography (CT) images; 2) cross-institutional brain tumor segmentation from magnetic resonance (MR) images; and 3) segmentation of retinal layers in Optical Coherence Tomography (OCT) images. Further, it was demonstrated that robust patient-specific adaptation can be achieved without significant additional computational burden, making this the first approach of its kind based on information geometric principles.
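One concrete instance of such functional regularization is to minimize the entropy of the adapted prediction while penalizing its KL divergence from the pre-trained model's output; KL is one common information-geometric choice, not necessarily the paper's exact measure, and all names below are ours:

```python
import numpy as np

def kl_div(p, q, eps=1e-8):
    """Per-pixel KL divergence between class-probability distributions
    (last axis indexes classes)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def tta_loss(adapted_probs, pretrained_probs, lam=0.1):
    """Illustrative TTA objective: prediction entropy (confidence on the
    test patient) plus a KL penalty keeping the adapted model close to
    the pre-trained one on the statistical manifold."""
    entropy = -np.sum(
        adapted_probs * np.log(np.clip(adapted_probs, 1e-8, 1.0)), axis=-1
    )
    return float(np.mean(entropy + lam * kl_div(adapted_probs, pretrained_probs)))
```

The KL term is what makes the adaptation constrained: with lam large the adapted model cannot drift far from the pre-trained one, which is the failure mode of unconstrained TTA.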

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

PubMed · Jun 1, 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27 and acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound and manually segmented 2D ultrasound images. The ultrasound methods were compared to MRI (gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
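The volume computation from tracked, segmented 2D slices reduces to summing slice areas times inter-slice spacing; a simplified sketch that ignores the out-of-plane geometry the position-tracking data would normally correct for:

```python
import numpy as np

def volume_from_tracked_slices(masks, pixel_area_mm2, spacings_mm):
    """Approximate volume (mm^3) from binary 2D segmentations: each slice
    contributes its segmented area times the tracked spacing to the next
    slice. Ignores probe tilt, which real tracking data accounts for."""
    areas = np.array([np.asarray(m).sum() * pixel_area_mm2 for m in masks])
    return float(np.sum(areas * np.asarray(spacings_mm, dtype=float)))

# Two fully segmented 2x2 slices, 1 mm^2 pixels, 1 mm apart -> 8 mm^3
v = volume_from_tracked_slices(
    [np.ones((2, 2)), np.ones((2, 2))], 1.0, [1.0, 1.0]
)
```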