
Early prediction of adverse outcomes in liver cirrhosis using a CT-based multimodal deep learning model.

Xie N, Liang Y, Luo Z, Hu J, Ge R, Wan X, Wang C, Zou G, Guo F, Jiang Y

PubMed · Jun 27, 2025
Early-stage cirrhosis frequently presents without symptoms, making timely identification of high-risk patients challenging. We aimed to develop a deep learning-based triple-modal fusion liver cirrhosis network (TMF-LCNet) for the prediction of adverse outcomes, offering a promising tool to enhance early risk assessment and improve clinical management strategies. This retrospective study included 243 patients with early-stage cirrhosis across two centers. Adverse outcomes were defined as the development of severe complications such as ascites, hepatic encephalopathy, and variceal bleeding. TMF-LCNet was developed by integrating three types of data: non-contrast abdominal CT images, radiomic features extracted from the liver and spleen, and clinical text detailing laboratory parameters and adipose tissue composition measurements. TMF-LCNet was compared with conventional methods on the same dataset, and single-modality versions of TMF-LCNet were tested to determine the impact of each data type. Model effectiveness was measured using the area under the receiver operating characteristic curve (AUC) for discrimination, calibration curves for model fit, and decision curve analysis (DCA) for clinical utility. TMF-LCNet demonstrated superior predictive performance compared to conventional image-based, radiomics-based, and multimodal methods, achieving an AUC of 0.797 in the training cohort (n = 184) and 0.747 in the external test cohort (n = 59). Only TMF-LCNet exhibited robust model calibration in both cohorts. Of the three data types, the imaging modality contributed the most, as the image-only version of TMF-LCNet achieved performance closest to the complete version (AUC = 0.723 and 0.716, respectively; p > 0.05). This was followed by the text modality, with radiomics contributing the least, a pattern consistent with the clinical utility trends observed in DCA. TMF-LCNet represents an accurate and robust tool for predicting adverse outcomes in early-stage cirrhosis by integrating multiple data types. It holds potential for early identification of high-risk patients, guiding timely interventions, and ultimately improving patient prognosis.
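
A minimal PyTorch sketch of the kind of late-fusion design the abstract describes: three modality-specific branches embed image, radiomic, and text features, and a joint head produces the adverse-outcome logit. All dimensions, layer choices, and names here are hypothetical stand-ins, not the authors' TMF-LCNet architecture.

```python
import torch
import torch.nn as nn

class TripleModalFusion(nn.Module):
    """Hypothetical late-fusion head in the spirit of TMF-LCNet: image,
    radiomic, and clinical-text features are embedded separately,
    concatenated, and mapped to a binary adverse-outcome logit."""

    def __init__(self, img_dim=512, rad_dim=100, txt_dim=64, hidden=128):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.rad_branch = nn.Sequential(nn.Linear(rad_dim, hidden), nn.ReLU())
        self.txt_branch = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(3 * hidden, 1)

    def forward(self, img_feat, rad_feat, txt_feat):
        fused = torch.cat([self.img_branch(img_feat),
                           self.rad_branch(rad_feat),
                           self.txt_branch(txt_feat)], dim=1)
        return self.classifier(fused)

# Toy forward pass for a batch of 4 patients with random features.
model = TripleModalFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 100), torch.randn(4, 64))
```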

Association of Covert Cerebrovascular Disease With Falls Requiring Medical Attention.

Clancy Ú, Puttock EJ, Chen W, Whiteley W, Vickery EM, Leung LY, Luetmer PH, Kallmes DF, Fu S, Zheng C, Liu H, Kent DM

PubMed · Jun 27, 2025
The impact of covert cerebrovascular disease on falls in the general population is not well known. Here, we determine the time to a first fall following incidentally detected covert cerebrovascular disease during a clinical neuroimaging episode. This longitudinal cohort study assessed computed tomography (CT) and magnetic resonance imaging from 2009 to 2019 of patients aged >50 years registered with Kaiser Permanente Southern California, a healthcare organization combining health plan coverage with coordinated medical services, excluding those with prior stroke/dementia. We extracted evidence of incidental covert brain infarcts (CBI) and white matter hyperintensities/hypoattenuation (WMH) from imaging reports using natural language processing. We examined associations of CBI and WMH with falls requiring medical attention, using Cox proportional hazards regression models with adjustment for 12 variables, including age, sex, ethnicity, multimorbidity, polypharmacy, and incontinence. We assessed 241 050 patients, mean age 64.9 (SD, 10.42) years, 61.3% female, detecting covert cerebrovascular disease in 31.1% over a mean follow-up duration of 3.04 years. A recorded fall occurred in 21.2% (51 239/241 050) during follow-up. On CT, the single-fall incidence rate per 1000 person-years (p-y) was highest in individuals with both CBI and WMH (129.3 falls/1000 p-y [95% CI, 123.4-135.5]), followed by WMH (109.9 falls/1000 p-y [108.0-111.9]). On magnetic resonance imaging, the incidence rate was highest with both CBI and WMH (76.3 falls/1000 p-y [95% CI, 69.7-83.2]), followed by CBI (71.4 falls/1000 p-y [95% CI, 65.9-77.2]). The adjusted hazard ratio for a single index fall in individuals with CBI on CT was 1.13 (95% CI, 1.09-1.17); on magnetic resonance imaging, 1.17 (95% CI, 1.08-1.27). On CT, the risk for a single index fall increased incrementally for mild (1.37 [95% CI, 1.32-1.43]), moderate (1.57 [95% CI, 1.48-1.67]), and severe WMH (1.57 [95% CI, 1.45-1.70]). On magnetic resonance imaging, index fall risk similarly increased with WMH severity: mild (1.11 [95% CI, 1.07-1.17]), moderate (1.21 [95% CI, 1.13-1.28]), and severe WMH (1.34 [95% CI, 1.22-1.46]). In a large population with neuroimaging, CBI and WMH are independently associated with greater risks of an index fall. Increasing severities of WMH are associated incrementally with fall risk across imaging modalities.
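
As a pointer to the statistical machinery involved, below is a minimal adjusted Cox proportional hazards fit using the lifelines library. The toy data frame and column names are hypothetical stand-ins for the study's time-to-first-fall outcome and its 12 adjustment covariates.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Six synthetic patients; in the study, covariates also included sex,
# ethnicity, multimorbidity, polypharmacy, and incontinence.
df = pd.DataFrame({
    "years_to_fall": [1.2, 3.0, 0.8, 2.5, 3.0, 1.9],
    "fell":          [1,   0,   1,   1,   0,   1],  # event indicator
    "cbi":           [1,   0,   1,   0,   1,   0],  # covert brain infarct
    "wmh":           [1,   0,   0,   1,   1,   0],  # white matter hyperintensity
    "age":           [68,  55,  72,  61,  77,  59],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_fall", event_col="fell")
print(cph.hazard_ratios_)  # exp(coef) = adjusted hazard ratios
```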

A multi-view CNN model to predict resolving of new lung nodules on follow-up low-dose chest CT.

Wang J, Zhang X, Tang W, van Tuinen M, Vliegenthart R, van Ooijen P

PubMed · Jun 27, 2025
New, intermediate-sized nodules in lung cancer screening undergo follow-up CT, but some of these will resolve. We evaluated the performance of a multi-view convolutional neural network (CNN) in distinguishing resolving from non-resolving new, intermediate-sized lung nodules. This retrospective study utilized data on 344 intermediate-sized nodules (50-500 mm³) in 250 participants from the NELSON (Dutch-Belgian Randomized Lung Cancer Screening) trial. We implemented four-fold cross-validation for model training and testing. A multi-view CNN model was developed by combining three two-dimensional (2D) CNN models and one three-dimensional (3D) CNN model. We used 2D, 2.5D, and 3D models for comparison. The models' performance was evaluated using sensitivity, specificity, and area under the ROC curve (AUC). Specificity (the percentage of non-resolving nodules requiring follow-up that is correctly predicted) was maximized. Among all nodules, 18.3% (63) were resolving. The multi-view CNN model achieved an AUC of 0.81, with a mean sensitivity of 0.63 (SD, 0.15) and a mean specificity of 0.93 (SD, 0.02). The model significantly improved performance compared to the 2D, 2.5D, or 3D models (p < 0.05). Under the premise of specificity greater than 90% (meaning < 10% of non-resolving nodules are incorrectly identified as resolving), follow-up CT in 14% of individuals could be prevented. The multi-view CNN model achieved high specificity in discriminating new intermediate nodules that would need follow-up CT by identifying non-resolving nodules. After further validation and optimization, this model may assist with decision-making when new intermediate nodules are found in lung cancer screening. The multi-view CNN-based model has the potential to reduce unnecessary follow-up scans when new nodules are detected, aiding radiologists in making earlier, more informed decisions. Predicting the resolution of new intermediate lung nodules in lung cancer screening CT is a challenge. Our multi-view CNN model showed an AUC of 0.81, a specificity of 0.93, and a sensitivity of 0.63 at the nodule level. The multi-view model demonstrated a significant improvement in AUC compared to the three 2D models, one 2.5D model, and one 3D model.
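
The multi-view fusion the abstract outlines can be sketched in PyTorch as three 2D slice encoders plus a small 3D volume encoder feeding one classification head. Everything below (branch backbones, feature sizes, input shapes) is an illustrative assumption, not the authors' exact model.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_2d_branch():
    # Hypothetical 2D branch: ResNet-18 adapted to 1-channel CT slices,
    # with the classification layer removed to expose 512-d features.
    m = resnet18(weights=None)
    m.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    m.fc = nn.Identity()
    return m

class MultiViewCNN(nn.Module):
    """Three 2D branches (e.g., axial/coronal/sagittal slices) plus one
    tiny 3D branch, fused for a resolving vs. non-resolving logit."""

    def __init__(self):
        super().__init__()
        self.views = nn.ModuleList([make_2d_branch() for _ in range(3)])
        self.branch3d = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Linear(3 * 512 + 16, 1)

    def forward(self, slices, volume):  # slices: (B, 3, 1, H, W)
        feats = [net(slices[:, i]) for i, net in enumerate(self.views)]
        feats.append(self.branch3d(volume))
        return self.head(torch.cat(feats, dim=1))

model = MultiViewCNN()
logit = model(torch.randn(2, 3, 1, 64, 64), torch.randn(2, 1, 32, 64, 64))
```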

A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.

Liu Q, Jiang J, Wu K, Zhang Y, Sun N, Luo J, Ba T, Lv A, Liu C, Yin Y, Yang Z, Xu H

PubMed · Jun 27, 2025
To develop a deep learning model based on Residual Networks (ResNet) for the automated and accurate identification of contrast phases in abdominal CT images. A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for model development, and another independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), the portal venous phase (PVP), and the delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, or DP, then further classifying AP images into EAP or LAP. Model performance was evaluated and the two strategies were compared. In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP). The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy. Automated and accurate identification of contrast phases in abdominal CT images provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data. Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.
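
The two-step strategy amounts to a small cascade at inference time. A hedged sketch, assuming torchvision ResNet-50 backbones and untrained weights purely for illustration:

```python
import torch
from torchvision.models import resnet50

# Step 1 assigns AP / PVP / DP; step 2 refines AP into EAP / LAP.
step1 = resnet50(weights=None, num_classes=3).eval()
step2 = resnet50(weights=None, num_classes=2).eval()

STEP1_LABELS = ["AP", "PVP", "DP"]
STEP2_LABELS = ["EAP", "LAP"]

@torch.no_grad()
def classify_phase(image: torch.Tensor) -> str:  # image: (1, 3, H, W)
    coarse = STEP1_LABELS[step1(image).argmax(dim=1).item()]
    if coarse != "AP":
        return coarse
    return STEP2_LABELS[step2(image).argmax(dim=1).item()]

print(classify_phase(torch.randn(1, 3, 224, 224)))
```

One plausible reading of the accuracy gain is that the hard EAP/LAP distinction gets its own specialist model rather than competing inside a flat four-class head.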

Deep Learning-Based Prediction of PET Amyloid Status Using MRI.

Kim D, Ottesen JA, Kumar A, Ho BC, Bismuth E, Young CB, Mormino E, Zaharchuk G

PubMed · Jun 27, 2025
Identifying amyloid-beta (Aβ)-positive patients is essential for Alzheimer's disease (AD) clinical trials and disease-modifying treatments but currently requires PET or cerebrospinal fluid sampling. Previous MRI-based deep learning models, using only T1-weighted (T1w) images, have shown moderate performance. Multi-contrast MRI and PET-based quantitative Aβ deposition were retrospectively obtained from three public datasets: ADNI, OASIS3, and A4. Aβ positivity was defined using each dataset's recommended centiloid threshold. Two EfficientNet models were trained to predict amyloid positivity: one using only T1w images and another incorporating both T1w and T2-FLAIR. Model performance was assessed using an internal held-out test set, evaluating AUC, accuracy, sensitivity, and specificity. External validation was conducted using an independent cohort from the Stanford Alzheimer's Disease Research Center. DeLong's and McNemar's tests were used to compare AUC and accuracy, respectively. A total of 4,056 exams (mean [SD] age: 71.6 [6.3] years; 55% female; 55% amyloid-positive) were used for network development, and 149 exams were used for external testing (mean [SD] age: 72.1 [9.6] years; 58% female; 56% amyloid-positive). The multi-contrast model outperformed the single-modality model in the internal held-out test set (AUC: 0.67, 95% CI: 0.65-0.70, P < 0.001; accuracy: 0.63, 95% CI: 0.62-0.65, P < 0.001) compared to the T1w-only model (AUC: 0.61; accuracy: 0.59). Among cognitive subgroups, the highest performance (AUC: 0.71) was observed in mild cognitive impairment. The multi-contrast model also demonstrated consistent performance in the external test set (AUC: 0.65, 95% CI: 0.60-0.71, P = 0.014; accuracy: 0.62, 95% CI: 0.58-0.65, P < 0.001). The use of multi-contrast MRI, specifically incorporating T2-FLAIR in addition to T1w images, significantly improved the predictive accuracy of PET-determined amyloid status from MRI scans using a deep learning approach. Aβ = amyloid-beta; AD = Alzheimer's disease; AUC = area under the receiver operating characteristic curve; CN = cognitively normal; MCI = mild cognitive impairment; T1w = T1-weighted; T2-FLAIR = T2-weighted fluid-attenuated inversion recovery; FBP = 18F-florbetapir; FBB = 18F-florbetaben; SUVR = standard uptake value ratio.
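
One common way to realize a multi-contrast model of this kind is to stack the co-registered T1w and T2-FLAIR inputs as channels of a single EfficientNet. The sketch below (2D slices, torchvision backbone, hypothetical shapes) illustrates the idea and is not the authors' implementation:

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

# Two output classes: amyloid-negative vs. amyloid-positive.
model = efficientnet_b0(weights=None, num_classes=2)
# Swap the stem so the network accepts 2 input contrasts instead of RGB.
model.features[0][0] = nn.Conv2d(2, 32, kernel_size=3, stride=2,
                                 padding=1, bias=False)

t1w = torch.randn(1, 1, 224, 224)    # T1-weighted slice
flair = torch.randn(1, 1, 224, 224)  # T2-FLAIR slice, co-registered
logits = model(torch.cat([t1w, flair], dim=1))
```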

Cardiovascular disease classification using radiomics and geometric features from cardiac CT

Ajay Mittal, Raghav Mehta, Omar Todd, Philipp Seeböck, Georg Langs, Ben Glocker

arXiv preprint · Jun 27, 2025
Automatic detection and classification of cardiovascular disease (CVD) from computed tomography (CT) images play an important part in facilitating better-informed clinical decisions. However, most recent deep learning-based methods either work directly on raw CT data or pair it with anatomical cardiac structure segmentation by training an end-to-end classifier. As such, these approaches become much more difficult to interpret from a clinical perspective. To address this challenge, in this work, we break down the CVD classification pipeline into three components: (i) image segmentation, (ii) image registration, and (iii) downstream CVD classification. Specifically, we utilize the Atlas-ISTN framework and recent segmentation foundation models to generate anatomical structure segmentations and a normative healthy atlas. These are further utilized to extract clinically interpretable radiomic features as well as deformation-field-based geometric features (through atlas registration) for CVD classification. Our experiments on the publicly available ASOCA dataset show that utilizing these features leads to better CVD classification accuracy (87.50%) when compared against a classification model trained directly on raw CT images (67.50%). Our code is publicly available: https://github.com/biomedia-mira/grc-net
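
The interpretable feature pathway can be approximated with the pyradiomics library: extract features from each case's CT plus its cardiac segmentation, then fit a simple classifier on the stacked rows. The paths below are placeholders, and the deformation-field geometric features from atlas registration are omitted for brevity.

```python
from radiomics import featureextractor
from sklearn.linear_model import LogisticRegression

def extract_case_features(image_path: str, mask_path: str) -> list:
    """Radiomic feature vector for one case; the paths (hypothetical
    here) point at a CT volume and its structure segmentation."""
    extractor = featureextractor.RadiomicsFeatureExtractor()
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values, dropping the diagnostics entries.
    return [float(v) for k, v in result.items() if k.startswith("original_")]

# Downstream, an interpretable classifier on the stacked feature rows:
# X = [extract_case_features(img, seg) for img, seg in cases]
# clf = LogisticRegression(max_iter=1000).fit(X, y_cvd_labels)
```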

BrainMT: A Hybrid Mamba-Transformer Architecture for Modeling Long-Range Dependencies in Functional MRI Data

Arunkumar Kannan, Martin A. Lindquist, Brian Caffo

arXiv preprint · Jun 27, 2025
Recent advances in deep learning have made it possible to predict phenotypic measures directly from functional magnetic resonance imaging (fMRI) brain volumes, sparking significant interest in the neuroimaging community. However, existing approaches, primarily based on convolutional neural networks or transformer architectures, often struggle to model the complex relationships inherent in fMRI data, limited by their inability to capture long-range spatial and temporal dependencies. To overcome these shortcomings, we introduce BrainMT, a novel hybrid framework designed to efficiently learn and integrate long-range spatiotemporal attributes in fMRI data. Our framework operates in two stages: (1) a bidirectional Mamba block with a temporal-first scanning mechanism to capture global temporal interactions in a computationally efficient manner; and (2) a transformer block leveraging self-attention to model global spatial relationships across the deep features processed by the Mamba block. Extensive experiments on two large-scale public datasets, UK Biobank and the Human Connectome Project, demonstrate that BrainMT achieves state-of-the-art performance on both classification (sex prediction) and regression (cognitive intelligence prediction) tasks, outperforming existing methods by a significant margin. Our code and implementation details will be made publicly available at https://github.com/arunkumar-kannan/BrainMT-fMRI
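
To make the two-stage design concrete, here is a rough PyTorch sketch in which a bidirectional GRU stands in for the Mamba block (the temporal-first scan over each region's time series) and a standard TransformerEncoder models global spatial relations across the resulting tokens. This is an illustrative approximation under strong simplifications (ROI time series rather than voxel volumes), not the authors' code.

```python
import torch
import torch.nn as nn

class BrainMTSketch(nn.Module):
    """Stage 1: bidirectional recurrence over time per region (GRU as a
    stand-in for Mamba). Stage 2: self-attention across regions."""

    def __init__(self, d_model=32):
        super().__init__()
        self.temporal = nn.GRU(input_size=1, hidden_size=d_model,
                               bidirectional=True, batch_first=True)
        self.spatial = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=2 * d_model, nhead=4,
                                       batch_first=True),
            num_layers=2)
        self.head = nn.Linear(2 * d_model, 1)  # e.g., a phenotype score

    def forward(self, x):                       # x: (batch, regions, timepoints)
        B, R, T = x.shape
        seq = x.reshape(B * R, T, 1)            # temporal-first scanning
        _, h = self.temporal(seq)               # h: (2, B*R, d_model)
        tokens = h.permute(1, 0, 2).reshape(B, R, -1)
        return self.head(self.spatial(tokens).mean(dim=1))

out = BrainMTSketch()(torch.randn(2, 64, 100))  # 2 subjects, 64 ROIs, 100 TRs
```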

Deep learning for hydrocephalus prognosis: Advances, challenges, and future directions: A review.

Huang J, Shen N, Tan Y, Tang Y, Ding Z

PubMed · Jun 27, 2025
Diagnosis of hydrocephalus involves a careful review of the patient's history and a thorough neurological assessment. Diagnosis has traditionally depended on physicians' professional judgment grounded in clinical experience, but with the advancement of precision medicine and individualized treatment, such experience-based methods are no longer sufficient to keep pace with clinical requirements. In response, the medical community has actively pursued data-driven intelligent diagnostic solutions. Building prognosis prediction models for hydrocephalus has thus become a new focus, and deep learning-based prediction systems offer new technical advantages for clinical diagnosis and treatment decisions. Over the past several years, deep learning algorithms have demonstrated conspicuous advantages in medical image analysis. Studies have shown that convolutional neural networks can diagnose hydrocephalus from magnetic resonance imaging with accuracy reaching 90%, with sensitivity and specificity exceeding those of traditional methods. As deep learning has been widely adopted in medicine, its successful use in modeling hydrocephalus prognosis has also drawn extensive attention and recognition from scholars. This review explores the application of deep learning in hydrocephalus diagnosis and prognosis, focusing on image-based, biochemical, and structured data models. Highlighting recent advancements, challenges, and future trajectories, the study emphasizes deep learning's potential to enhance personalized treatment and improve outcomes.

Regional Cortical Thinning and Area Reduction Are Associated with Cognitive Impairment in Hemodialysis Patients.

Chen HJ, Qiu J, Qi Y, Guo Y, Zhang Z, Qin H, Wu F, Chen F

PubMed · Jun 27, 2025
Magnetic resonance imaging (MRI) has shown that patients with end-stage renal disease have decreased gray matter volume and density. However, cortical area and thickness in patients on hemodialysis are uncertain, and the relationship between patients' cognition and cortical alterations remains unclear. Thirty-six hemodialysis patients and 25 age- and sex-matched healthy controls were enrolled in this study and underwent brain MRI scans and neuropsychological assessments. According to the Desikan-Killiany atlas, the brain is divided into 68 regions. Using FreeSurfer software, we analyzed the differences in cortical area and thickness of each region between groups. Machine learning-based classification was also used to differentiate hemodialysis patients from healthy individuals. The patients exhibited decreased cortical thickness in the frontal and temporal regions, including the left banks of the superior temporal sulcus (bankssts), left lingual gyrus, left pars triangularis, bilateral superior temporal gyrus, and right pars opercularis, and decreased cortical area in the left rostral middle frontal gyrus, left superior frontal gyrus, right fusiform gyrus, right pars orbitalis, and right superior frontal gyrus. Decreased cortical thickness was associated with poorer scores on the neuropsychological tests and with increased uric acid and urea levels. The cortical thickness pattern differentiated the patients from the controls with 96.7% accuracy (97.5% sensitivity, 95.0% specificity, 97.5% precision, and AUC: 0.983) in the support vector machine analysis. Patients on hemodialysis exhibited decreased cortical area and thickness, which was associated with poorer cognition and with uremic toxin levels.
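
The classification setup is reproducible in a few lines of scikit-learn: one 68-dimensional vector of Desikan-Killiany regional thickness values per subject, standardized and fed to a linear SVM. The numbers below are synthetic; real inputs would come from FreeSurfer's aparc thickness output.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.5, 0.2, (36, 68)),   # hemodialysis patients
               rng.normal(2.6, 0.2, (25, 68))])  # healthy controls
y = np.array([1] * 36 + [0] * 25)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```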

Machine learning-based radiomic nomogram from unenhanced computed tomography and clinical data predicts bowel resection in incarcerated inguinal hernia.

Li DL, Zhu L, Liu SL, Wang ZB, Liu JN, Zhou XM, Hu JL, Liu RQ

PubMed · Jun 27, 2025
Early identification of bowel resection risk is crucial for patients with incarcerated inguinal hernia (IIH). However, the prompt detection of these risks remains a significant challenge. Advancements in radiomic feature extraction and machine learning algorithms have paved the way for innovative diagnostic approaches to assess IIH more effectively. To devise a radiomic-clinical model to evaluate bowel resection risk in IIH patients, thereby enhancing clinical decision-making processes. This single-center retrospective study analyzed 214 IIH patients randomized into training (n = 161) and test (n = 53) sets (3:1). Radiologists segmented hernia sac-trapped bowel volumes of interest (VOIs) on computed tomography images. Radiomic features extracted from the VOIs generated Rad-scores, which were combined with clinical data to construct a nomogram. The nomogram's performance was evaluated against standalone clinical and radiomic models in both cohorts. A total of 1561 radiomic features were extracted from the VOIs. After dimensionality reduction, 13 radiomic features were used with eight machine learning algorithms to develop the radiomic model. The logistic regression algorithm was ultimately selected for its effectiveness, showing an area under the curve (AUC) of 0.828 [95% confidence interval (CI): 0.753-0.902] in the training set and 0.791 (95% CI: 0.668-0.915) in the test set. The comprehensive nomogram, incorporating clinical indicators, showcased strong predictive capability for assessing bowel resection risk in IIH patients, with AUCs of 0.864 (95% CI: 0.800-0.929) and 0.800 (95% CI: 0.669-0.931) for the training and test sets, respectively. Decision curve analysis revealed the integrated model's superior performance over standalone clinical and radiomic approaches. This radiomic-clinical nomogram has proven effective in predicting bowel resection risk in IIH patients and can substantially aid clinical decision-making.
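
A schematic scikit-learn analogue of the radiomic arm follows: reduce the ~1561 extracted features to a small signature and fit logistic regression, reporting test-set AUC. The data are synthetic, and SelectKBest is a stand-in since the abstract does not name the dimensionality-reduction method; the full nomogram would additionally fold in the clinical indicators.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(214, 1561))       # radiomic features per patient
y = rng.integers(0, 2, size=214)       # bowel resection: yes/no

# Roughly the paper's 3:1 split of 214 patients.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=13),  # 13 retained features
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```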