Explainable CT-based deep learning model for predicting hematoma expansion including intraventricular hemorrhage growth.

Zhao X, Zhang Z, Shui J, Xu H, Yang Y, Zhu L, Chen L, Chang S, Du C, Yao Z, Fang X, Shi L

PubMed · Jul 18, 2025
Hematoma expansion (HE), including intraventricular hemorrhage (IVH) growth, significantly affects outcomes in patients with intracerebral hemorrhage (ICH). This study aimed to develop, validate, and interpret a deep learning model, HENet, for predicting three definitions of HE. Using CT scans and clinical data from 718 ICH patients across three hospitals, the multicenter retrospective study focused on revised hematoma expansion (RHE) definitions 1 and 2, and conventional HE (CHE). HENet's performance was compared with 2D models and physician predictions using two external validation sets. Results showed that HENet achieved high AUC values for RHE1, RHE2, and CHE predictions, surpassing physicians' predictions and 2D models in net reclassification index and integrated discrimination index for RHE1 and RHE2 outcomes. The Grad-CAM technique provided visual insights into the model's decision-making process. These findings suggest that integrating HENet into clinical practice could improve prediction accuracy and patient outcomes in ICH cases.
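
As an illustrative aside: the abstract credits Grad-CAM for the model's visual explanations. The sketch below shows how Grad-CAM can be applied to a generic 3D CNN in PyTorch; the tiny network, the hooked layer, and the input shape are assumptions for demonstration and do not reflect HENet itself.

```python
# Minimal Grad-CAM sketch for a 3D CNN classifier (PyTorch). The toy network
# below is illustrative only -- it is NOT the HENet model from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DCNN(nn.Module):
    """Toy 3D classifier standing in for a CT-based HE-prediction model."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        f = self.features(x)                                  # (B, 16, d, h, w)
        return self.head(F.adaptive_avg_pool3d(f, 1).flatten(1))

def grad_cam_3d(model, volume, target_class):
    """Return a relevance map at input resolution for one (1, 1, D, H, W) volume."""
    acts, grads = {}, {}
    layer = model.features[3]                                 # last Conv3d layer
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    model.zero_grad()
    model(volume)[0, target_class].backward()
    h1.remove(); h2.remove()

    a, g = acts["a"][0], grads["g"][0]                        # (C, d, h, w)
    weights = g.mean(dim=(1, 2, 3))                           # channel importance
    cam = F.relu((weights[:, None, None, None] * a).sum(0))
    cam = cam / (cam.max() + 1e-8)                            # normalize to [0, 1]
    cam = F.interpolate(cam[None, None], size=volume.shape[2:],
                        mode="trilinear", align_corners=False)[0, 0]
    return cam.detach()

if __name__ == "__main__":
    model = Tiny3DCNN().eval()
    ct = torch.randn(1, 1, 32, 64, 64)                        # placeholder CT volume
    print(grad_cam_3d(model, ct, target_class=1).shape)       # torch.Size([32, 64, 64])
```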

Performance of Machine Learning in Diagnosing KRAS (Kirsten Rat Sarcoma) Mutations in Colorectal Cancer: Systematic Review and Meta-Analysis.

Chen K, Qu Y, Han Y, Li Y, Gao H, Zheng D

PubMed · Jul 18, 2025
With the widespread application of machine learning (ML) in the diagnosis and treatment of colorectal cancer (CRC), several studies have investigated ML techniques for diagnosing KRAS (Kirsten rat sarcoma) mutations. Nevertheless, there is little evidence-based medicine to substantiate their efficacy. Our study systematically reviewed the performance of ML models developed with different modeling approaches in diagnosing KRAS mutations in CRC, aiming to provide evidence-based foundations for the development and refinement of future intelligent diagnostic tools. PubMed, Cochrane Library, Embase, and Web of Science were systematically searched, with a search cutoff date of December 22, 2024. The included studies were publicly published research papers that used ML to diagnose KRAS gene mutations in CRC. The risk of bias in the included models was evaluated with the PROBAST (Prediction Model Risk of Bias Assessment Tool). A meta-analysis of the models' concordance index (c-index) was performed, and a bivariate mixed-effects model was used to summarize sensitivity and specificity from diagnostic contingency tables. A total of 43 studies involving 10,888 patients were included. The modeling variables were derived from clinical characteristics, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography/computed tomography (PET/CT), and pathological histology. In the validation cohort, for the ML model developed from CT radiomic features, the c-index, sensitivity, and specificity were 0.87 (95% CI 0.84-0.90), 0.85 (95% CI 0.80-0.89), and 0.83 (95% CI 0.73-0.89), respectively. For the model developed from MRI radiomic features, the c-index, sensitivity, and specificity were 0.77 (95% CI 0.71-0.83), 0.78 (95% CI 0.72-0.83), and 0.73 (95% CI 0.63-0.81), respectively. For the ML model developed from PET/CT radiomic features, the c-index, sensitivity, and specificity were 0.84 (95% CI 0.77-0.90), 0.73, and 0.83, respectively. Notably, the deep learning (DL) model based on pathological images demonstrated a c-index, sensitivity, and specificity of 0.96 (95% CI 0.94-0.98), 0.83 (95% CI 0.72-0.91), and 0.87 (95% CI 0.77-0.92), respectively. The MRI-based DL model showed a c-index of 0.93 (95% CI 0.90-0.96), sensitivity of 0.85 (95% CI 0.75-0.91), and specificity of 0.83 (95% CI 0.77-0.88). ML is highly accurate in diagnosing KRAS mutations in CRC, and DL models based on MRI and pathological images exhibit particularly strong diagnostic accuracy. More broadly applicable DL-based diagnostic tools may be developed in the future. However, the clinical application of DL models remains relatively limited at present. Therefore, future research should focus on increasing sample sizes, improving model architectures, and developing more advanced DL models to facilitate the creation of highly efficient intelligent diagnostic tools for KRAS mutation diagnosis in CRC.
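
As an illustrative aside: the review pools sensitivity and specificity from per-study contingency tables. The sketch below shows a simplified univariate random-effects (DerSimonian-Laird) pooling of sensitivity on the logit scale; the bivariate mixed-effects model used in the paper pools sensitivity and specificity jointly and is typically fitted with dedicated packages, and the study counts below are invented.

```python
# Hedged sketch: univariate DerSimonian-Laird random-effects pooling of
# sensitivity from per-study 2x2 tables on the logit scale. The contingency
# tables are invented, not the review's data.
import math

def dersimonian_laird(effects, variances):
    """Pool logit-scale effects with a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]
    y_bar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_bar) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

# (TP, FP, FN, TN) per study -- purely illustrative values.
studies = [(45, 8, 7, 60), (30, 5, 10, 55), (80, 12, 9, 90), (22, 4, 6, 40)]

logit_sens, var_sens = [], []
for tp, fp, fn, tn in studies:
    tp, fn = tp + 0.5, fn + 0.5                     # continuity correction
    logit_sens.append(math.log(tp / fn))
    var_sens.append(1.0 / tp + 1.0 / fn)            # delta-method variance

pooled, se = dersimonian_laird(logit_sens, var_sens)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled sensitivity: {expit(pooled):.3f} "
      f"(95% CI {expit(lo):.3f}-{expit(hi):.3f})")
```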

SegMamba-V2: Long-range Sequential Modeling Mamba For General 3D Medical Image Segmentation.

Xing Z, Ye T, Yang Y, Cai D, Gai B, Wu XJ, Gao F, Zhu L

PubMed · Jul 18, 2025
The Transformer architecture has demonstrated remarkable results in 3D medical image segmentation due to its capability of modeling global relationships. However, it poses a significant computational burden when processing high-dimensional medical images. Mamba, a State Space Model (SSM), has recently emerged as a notable approach for modeling long-range dependencies in sequential data. Although a substantial amount of Mamba-based research has focused on natural language and 2D image processing, few studies have explored the capability of Mamba on 3D medical images. In this paper, we propose SegMamba-V2, a novel 3D medical image segmentation model, to effectively capture long-range dependencies within whole-volume features at each scale. To achieve this goal, we first devise a hierarchical scale downsampling strategy to enhance the receptive field and mitigate information loss during downsampling. Furthermore, we design a novel tri-orientated spatial Mamba block that extends the global dependency modeling process from one plane to three orthogonal planes to improve feature representation capability. Moreover, we collect and annotate a large-scale dataset (named CRC-2000) with fine-grained categories to facilitate benchmarking in 3D colorectal cancer (CRC) segmentation. We evaluate the effectiveness of SegMamba-V2 on CRC-2000 and three other large-scale 3D medical image segmentation datasets, covering various modalities, organs, and segmentation targets. Experimental results demonstrate that SegMamba-V2 outperforms state-of-the-art methods by a significant margin, indicating the universality and effectiveness of the proposed model on 3D medical image segmentation tasks. The code for SegMamba-V2 is publicly available at: https://github.com/ge-xing/SegMamba-V2.
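
As an illustrative aside: the tri-orientated spatial Mamba block extends sequence scanning from one plane to three orthogonal planes. The sketch below shows only the flatten/scan/unflatten bookkeeping for three orthogonal orderings of a 3D feature volume; a depthwise 1D convolution stands in for the Mamba SSM, which is not reimplemented here, and all shapes are illustrative.

```python
# Sketch of "tri-orientated" 3D scanning: flatten a (B, C, D, H, W) feature
# volume into sequences along three orthogonal axis orders, run a 1D sequence
# mixer on each, fold back, and fuse. A depthwise Conv1d is a placeholder for
# the Mamba SSM block.
import torch
import torch.nn as nn

class TriOrientedScan(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # One placeholder sequence mixer per scan orientation.
        self.mixers = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size=3, padding=1, groups=channels)
             for _ in range(3)]
        )

    def forward(self, x):                          # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        # Axis orders (D,H,W), (H,W,D), (W,D,H) give three different scan paths.
        orders = [(2, 3, 4), (3, 4, 2), (4, 2, 3)]
        out = 0
        for mixer, order in zip(self.mixers, orders):
            perm = (0, 1) + order
            seq = x.permute(*perm).reshape(b, c, -1)        # (B, C, L)
            seq = mixer(seq)
            vol = seq.reshape(b, c, *[x.shape[i] for i in order])
            inv = [perm.index(i) for i in range(5)]         # undo the permutation
            out = out + vol.permute(*inv)
        return out / 3.0

if __name__ == "__main__":
    feat = torch.randn(1, 16, 8, 16, 16)
    print(TriOrientedScan(16)(feat).shape)         # torch.Size([1, 16, 8, 16, 16])
```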

AI Prognostication in Nonsmall Cell Lung Cancer: A Systematic Review.

Augustin M, Lyons K, Kim H, Kim DG, Kim Y

PubMed · Jul 18, 2025
A systematic literature review was performed on the use of artificial intelligence (AI) algorithms in nonsmall cell lung cancer (NSCLC) prognostication. Studies were evaluated for the type of input data (histology and whether CT, PET, or MRI was used), cancer therapy intervention, prognostic performance, and comparisons with clinical prognostic systems such as TNM staging. Further comparisons were drawn between different types of AI, such as machine learning (ML) and deep learning (DL). Syntheses of therapeutic interventions and algorithm input modalities were performed for comparison purposes. The review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The initial database search identified 3880 results, which were reduced to 513 after automatic screening and to 309 after applying the exclusion criteria. The prognostic performance of AI for NSCLC has been investigated using histology and genetic data, as well as CT, PET, and MR imaging, for surgery, immunotherapy, and radiation therapy patients with and without chemotherapy. Studies per therapy intervention numbered 13 for immunotherapy, 10 for radiotherapy, 14 for surgery, and 34 for other, multiple, or no specific therapy. The results of this systematic review demonstrate that AI-based prognostication methods consistently achieve higher prognostic performance for NSCLC, especially when directly compared with traditional prognostication techniques such as TNM staging. DL-based approaches outperform ML-based prognostication techniques. DL-based prognostication demonstrates potential for personalized precision cancer therapy as a supplementary decision-making tool. Before it is fully adopted in clinical practice, it should be thoroughly validated through well-designed clinical trials.
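
As an illustrative aside: prognostic performance comparisons of this kind are usually reported with a concordance statistic. The sketch below computes a simplified Harrell's c-index for a hypothetical AI risk score and an ordinal TNM stage; the cohort, scores, and censoring pattern are invented for illustration.

```python
# Hedged sketch: Harrell's concordance index (c-index), a common metric for
# comparing AI risk scores against staging systems such as TNM in survival
# prognostication. All data below are invented.
from itertools import combinations

def harrell_c_index(times, events, risk_scores):
    """times: follow-up times; events: 1 = event observed, 0 = censored;
    risk_scores: higher score should mean worse prognosis."""
    concordant, permissible = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue                                   # tied times: skip pair
        first, second = (i, j) if times[i] < times[j] else (j, i)
        if events[first] == 0:
            continue                                   # earlier subject censored
        permissible += 1
        if risk_scores[first] > risk_scores[second]:
            concordant += 1.0
        elif risk_scores[first] == risk_scores[second]:
            concordant += 0.5
    return concordant / permissible if permissible else float("nan")

# Toy cohort: follow-up in months, event flags, and two competing risk scores.
t = [12, 30, 24, 6, 40, 18]
e = [1, 0, 1, 1, 0, 1]
ai_score = [0.9, 0.2, 0.6, 0.95, 0.1, 0.7]             # hypothetical DL risk score
tnm_stage = [3, 1, 2, 3, 1, 2]                          # hypothetical TNM stage (ordinal)

print("AI c-index :", round(harrell_c_index(t, e, ai_score), 3))
print("TNM c-index:", round(harrell_c_index(t, e, tnm_stage), 3))
```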

Diagnostic Performance of Artificial Intelligence in Detecting and Distinguishing Pancreatic Ductal Adenocarcinoma via Computed Tomography: A Systematic Review and Meta-Analysis.

Harandi H, Gouravani M, Alikarami S, Shahrabi Farahani M, Ghavam M, Mohammadi S, Salehi MA, Reynolds S, Dehghani Firouzabadi F, Huda F

PubMed · Jul 18, 2025
We conducted a systematic review and meta-analysis of the diagnostic performance of studies that used artificial intelligence (AI) algorithms to detect pancreatic ductal adenocarcinoma (PDAC) and distinguish it from other types of pancreatic lesions. We systematically searched for studies on pancreatic lesions and AI from January 2014 to May 2024. Data were extracted, and a meta-analysis was performed using contingency tables and a random-effects model to calculate pooled sensitivity and specificity. Quality assessment was performed using modified TRIPOD and PROBAST tools. We included 26 studies in this systematic review, with 22 studies included in the meta-analysis. The evaluation of AI algorithms' performance on internal validation exhibited a pooled sensitivity of 93% (95% confidence interval [CI], 90 to 95) and specificity of 95% (95% CI, 92 to 97). Externally validated AI algorithms demonstrated a combined sensitivity of 89% (95% CI, 85 to 92) and specificity of 91% (95% CI, 85 to 95). Subgroup analysis indicated that diagnostic performance differed by comparator group, image contrast, segmentation technique, and algorithm type, with contrast-enhanced imaging and specific AI models (e.g., random forest for sensitivity and CNN for specificity) demonstrating superior accuracy. Although potential biases should be further addressed, the results of this systematic review and meta-analysis show that AI models have the potential to be incorporated into clinical settings for detecting smaller tumors and identifying early signs of PDAC.

Deep learning-based automatic detection of pancreatic ductal adenocarcinoma ≤ 2 cm with high-resolution computed tomography: impact of the combination of tumor mass detection and indirect indicator evaluation.

Ozawa M, Sone M, Hijioka S, Hara H, Wakatsuki Y, Ishihara T, Hattori C, Hirano R, Ambo S, Esaki M, Kusumoto M, Matsui Y

PubMed · Jul 18, 2025
Detecting small pancreatic ductal adenocarcinomas (PDAC) is challenging because they are difficult to identify as distinct tumor masses. This study assessed the diagnostic performance of a three-dimensional convolutional neural network for the automatic detection of small PDAC using both automatic tumor mass detection and indirect indicator evaluation. High-resolution contrast-enhanced computed tomography (CT) scans from 181 patients diagnosed with PDAC (diameter ≤ 2 cm) between January 2018 and December 2023 were analyzed. The D/P ratio, defined as the ratio of the cross-sectional area of the main pancreatic duct (MPD) to that of the pancreatic parenchyma, was used as an indirect indicator. A total of 204 patient data sets, including 104 normal controls, were analyzed for automatic tumor mass detection and D/P ratio evaluation. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were evaluated for tumor mass detection. The sensitivity of PDAC detection was compared between the software and radiologists, and tumor localization accuracy was validated against endoscopic ultrasonography (EUS) findings. The sensitivity, specificity, PPV, and NPV were 77.0%, 76.0%, 75.5%, and 77.5%, respectively, for tumor mass detection; 87.0%, 94.2%, 93.5%, and 88.3%, respectively, for D/P ratio detection; and 96.0%, 70.2%, 75.6%, and 94.8%, respectively, for the combined tumor mass and D/P ratio detection. No significant difference was observed between the software's sensitivity and that of the radiologist's report (software, 96.0%; radiologist, 96.0%; p = 1). The concordance rate between software findings and EUS was 96.0%. Combining indirect indicator evaluation with tumor mass detection may improve small PDAC detection accuracy.
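
As an illustrative aside: the D/P ratio is an area ratio that can be computed directly from segmentation masks. The sketch below shows one way to derive it from a single axial slice; the masks, pixel spacing, and any decision cutoff are assumptions, not values from the study.

```python
# Hedged sketch: computing a D/P ratio (cross-sectional area of the main
# pancreatic duct divided by that of the pancreatic parenchyma) from binary
# segmentation masks on one axial CT slice. Masks, spacing, and cutoffs are
# illustrative assumptions.
import numpy as np

def dp_ratio(duct_mask, pancreas_mask, pixel_spacing_mm):
    """duct_mask / pancreas_mask: 2D boolean arrays for one axial slice;
    pixel_spacing_mm: (row, col) spacing in mm. Returns duct/parenchyma area ratio."""
    px_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]             # mm^2 per pixel
    duct_area = duct_mask.sum() * px_area
    parenchyma_area = np.logical_and(pancreas_mask, ~duct_mask).sum() * px_area
    if parenchyma_area == 0:
        return float("nan")
    return duct_area / parenchyma_area

# Toy slice: a rectangular "pancreas" blob with a small dilated "duct" inside it.
pancreas = np.zeros((128, 128), dtype=bool)
pancreas[40:100, 30:90] = True
duct = np.zeros_like(pancreas)
duct[66:72, 55:61] = True

print(f"D/P ratio: {dp_ratio(duct, pancreas, pixel_spacing_mm=(0.7, 0.7)):.3f}")
# A combined rule such as (tumor-mass detector positive) OR (D/P ratio above a
# chosen cutoff) mirrors the idea of pairing direct detection with an indirect
# indicator; the cutoff itself would have to come from training data.
```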

Deep learning reconstruction enhances image quality in contrast-enhanced CT venography for deep vein thrombosis.

Asari Y, Yasaka K, Kurashima J, Katayama A, Kurokawa M, Abe O

PubMed · Jul 18, 2025
This study aimed to evaluate and compare the diagnostic performance and image quality of deep learning reconstruction (DLR), hybrid iterative reconstruction (Hybrid IR), and filtered back projection (FBP) in contrast-enhanced CT venography for deep vein thrombosis (DVT). A retrospective analysis was conducted on 51 patients who underwent lower limb CT venography, including 20 with DVT lesions and 31 without. CT images were reconstructed using DLR, Hybrid IR, and FBP. Quantitative image quality metrics, such as contrast-to-noise ratio (CNR) and image noise, were measured. Three radiologists independently assessed DVT lesion detection, depiction of DVT lesions and normal structures, subjective image noise, artifacts, and overall image quality using scoring systems. Diagnostic performance was evaluated using sensitivity and the area under the receiver operating characteristic curve (AUC). The paired t-test and Wilcoxon signed-rank test were used to compare continuous variables and ordinal scales, respectively, between DLR and Hybrid IR and between DLR and FBP. DLR significantly improved CNR and reduced image noise compared with Hybrid IR and FBP (p < 0.001). AUC and sensitivity for DVT detection did not differ significantly across reconstruction methods. Two readers reported improved lesion visualization with DLR. DLR was also rated superior in overall image quality, depiction of normal structures, and noise suppression by all readers (p < 0.001). DLR enhances image quality and anatomical clarity in CT venography. These findings support the utility of DLR in improving diagnostic confidence and image interpretability in DVT assessment.
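
As an illustrative aside: CNR and image noise are ROI-based measurements. The sketch below shows a common definition (vessel-to-background contrast divided by background standard deviation) on a synthetic image; ROI placement and the exact definition used in the study may differ.

```python
# Hedged sketch: contrast-to-noise ratio (CNR) and image noise from ROI
# statistics, as commonly defined for CT image-quality comparisons.
import numpy as np

def roi_stats(image, mask):
    vals = image[mask]
    return float(vals.mean()), float(vals.std(ddof=1))

def cnr(image, vessel_mask, background_mask):
    """CNR = |mean_vessel - mean_background| / SD_background."""
    mean_v, _ = roi_stats(image, vessel_mask)
    mean_b, sd_b = roi_stats(image, background_mask)
    return abs(mean_v - mean_b) / sd_b

# Toy reconstruction: noisy HU-like background with a brighter "vein" region.
rng = np.random.default_rng(0)
img = rng.normal(60, 12, size=(256, 256))
img[100:120, 100:120] += 90                                 # contrast-filled vessel

vessel = np.zeros(img.shape, dtype=bool); vessel[104:116, 104:116] = True
bg = np.zeros(img.shape, dtype=bool); bg[30:60, 30:60] = True

print(f"image noise (SD of background ROI): {roi_stats(img, bg)[1]:.1f}")
print(f"CNR: {cnr(img, vessel, bg):.2f}")
```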

Feasibility and accuracy of the fully automated three-dimensional echocardiography right ventricular quantification software in children: validation against cardiac magnetic resonance.

Liu Q, Zheng Z, Zhang Y, Wu A, Lou J, Chen X, Yuan Y, Xie M, Zhang L, Sun P, Sun W, Lv Q

PubMed · Jul 18, 2025
Previous studies have confirmed that fully automated three-dimensional echocardiography (3DE) right ventricular (RV) quantification software can accurately assess adult RV function. However, data on its accuracy in children are scarce. This study aimed to test the accuracy of the software in children using cardiac magnetic resonance (MR) as the gold standard. This study prospectively enrolled 82 children who underwent both echocardiography and cardiac MR within 24 h. The RV end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were obtained using the novel 3DE-RV quantification software and compared with cardiac MR values across different groups. The novel 3DE-RV quantification software was feasible in all 82 children (100%). Fully automated analysis was achieved in 35% of patients, with an analysis time of 8 ± 2 s and 100% reproducibility. Manual editing was necessary in the remaining 65% of patients. The 3DE-derived RV volumes and EF correlated well with cardiac MR measurements (RVEDV, r=0.93; RVESV, r=0.90; RVEF, r=0.82; all P <0.001). Although the automated approach slightly underestimated RV volumes and overestimated RVEF compared with cardiac MR in the entire cohort, the bias was smaller in children with RVEF ≥ 45%, normal RV size, and good 3DE image quality. Fully automated 3DE-RV quantification software provided accurate and completely reproducible results in 35% of children without any adjustment. The RV volumes and EF measured using the automated 3DE method correlated well with those from cardiac MR, especially in children with RVEF ≥ 45%, normal RV size, and good 3DE image quality. Therefore, the novel automated 3DE method may enable rapid and accurate assessment of RV function in children with normal heart anatomy.
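
As an illustrative aside: the agreement analysis behind such validation studies combines the ejection fraction formula with correlation and bias statistics. The sketch below computes RVEF from EDV/ESV and a Pearson r plus Bland-Altman bias for paired 3DE and cardiac MR values; all numbers are invented.

```python
# Hedged sketch: ejection fraction from end-diastolic/end-systolic volumes and
# a simple Pearson-r plus Bland-Altman check of 3DE against cardiac MR.
# The paired measurements below are invented, not the study's data.
import numpy as np

def ejection_fraction(edv_ml, esv_ml):
    """RVEF (%) = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Hypothetical paired RVEF measurements (%): 3DE software vs cardiac MR.
rvef_3de = np.array([48.0, 52.5, 44.0, 60.2, 55.1, 47.3])
rvef_cmr = np.array([46.5, 51.0, 45.2, 58.0, 54.0, 46.0])

r = np.corrcoef(rvef_3de, rvef_cmr)[0, 1]
diff = rvef_3de - rvef_cmr
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                       # limits of agreement half-width

print(f"single-subject example RVEF: {ejection_fraction(120.0, 60.0):.1f}%")
print(f"Pearson r = {r:.2f}")
print(f"Bland-Altman bias = {bias:+.1f}% "
      f"(95% LoA {bias - loa:+.1f} to {bias + loa:+.1f})")
```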

Clinical Translation of Integrated PET-MRI for Neurodegenerative Disease.

Shepherd TM, Dogra S

PubMed · Jul 18, 2025
The prevalence of Alzheimer's disease and other dementias is increasing as populations live longer. Imaging is becoming a key component of the workup for patients with cognitive impairment or dementia. Integrated PET-MRI provides a unique opportunity for same-session multimodal characterization with many practical benefits to patients, referring physicians, radiologists, and researchers. The impact of integrated PET-MRI on clinical practice for early adopters of this technology can be profound. Classic imaging findings with integrated PET-MRI are illustrated for common neurodegenerative diseases and clinical-radiological syndromes. This review summarizes recent technical innovations that are being introduced into PET-MRI clinical practice and research for neurodegenerative disease. Recent MRI-based attenuation correction now performs similarly to PET-CT (e.g., whole-brain bias < 0.5%), such that early concerns about accurate PET tracer quantification with integrated PET-MRI appear resolved. Head motion is common in this patient population. MRI- and PET data-driven motion correction appear ready for routine use and should substantially improve PET-MRI image quality. PET-MRI by definition eliminates the ~50% of radiation that comes from CT. Multiple hardware and software techniques for improving image quality with lower counts are reviewed (including motion correction). These methods can lower radiation dose to patients (and staff), increase scanner throughput, and provide better temporal resolution for dynamic PET. Deep learning has been broadly applied to PET-MRI. Deep learning analysis of PET and MRI data may provide accurate classification of different stages of Alzheimer's disease or predict progression to dementia. Over the past 5 years, clinical imaging of neurodegenerative disease has changed owing to imaging research and the introduction of anti-amyloid immunotherapy; integrated PET-MRI is well suited to imaging these patients, and its use appears poised for rapid growth outside academic medical centers. Evidence level: 5. Technical efficacy: Stage 3.
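
As an illustrative aside: the "< 0.5% whole-brain bias" figure refers to the mean voxelwise percent difference of MR-based attenuation-corrected PET relative to a CT-based reference within a brain mask. The sketch below computes that metric on synthetic volumes; the arrays, mask, and simulated bias are assumptions.

```python
# Hedged sketch: whole-brain percent bias of an MR-attenuation-corrected PET
# image relative to a CT-based reference. Volumes and mask are synthetic.
import numpy as np

def whole_brain_bias(pet_mrac, pet_ctac, brain_mask):
    """Mean voxelwise percent difference within the brain mask."""
    ref = pet_ctac[brain_mask]
    test = pet_mrac[brain_mask]
    return float(np.mean((test - ref) / ref) * 100.0)

rng = np.random.default_rng(1)
ctac = rng.uniform(5000, 20000, size=(64, 64, 64))          # reference activity
mrac = ctac * rng.normal(1.002, 0.01, size=ctac.shape)      # small simulated bias
mask = np.zeros(ctac.shape, dtype=bool)
mask[16:48, 16:48, 16:48] = True                            # crude "brain" mask

print(f"whole-brain bias: {whole_brain_bias(mrac, ctac, mask):+.2f}%")
```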

Deep learning reconstruction for improving image quality of pediatric abdomen MRI using a 3D T1 fast spoiled gradient echo acquisition.

Zucker EJ, Milshteyn E, Machado-Rivas FA, Tsai LL, Roberts NT, Guidon A, Gee MS, Victoria T

PubMed · Jul 18, 2025
Deep learning (DL) reconstructions have shown utility for improving the image quality of abdominal MRI in adult patients, but literature in children is scarce. The aim was to compare image quality between three-dimensional fast spoiled gradient echo (SPGR) abdominal MRI acquisitions reconstructed conventionally and with a prototype method based on a commercial DL algorithm in a pediatric cohort. Pediatric patients (age < 18 years) who underwent abdominal MRI from 10/2023 to 3/2024, including gadolinium-enhanced accelerated 3D SPGR 2-point Dixon acquisitions (LAVA-Flex, GE HealthCare), were identified. Images were retrospectively generated using a prototype reconstruction method leveraging a commercial deep learning algorithm (AIR™ Recon DL, GE HealthCare) with the 75% noise reduction setting. For each case and reconstruction, three radiologists independently scored DL and non-DL image quality (overall and of selected structures) on a 5-point Likert scale (1, nondiagnostic; 5, excellent) and indicated their reconstruction preference. The signal-to-noise ratio (SNR) and mean number of edges (an inverse correlate of image sharpness) were also quantified. Image quality metrics and preferences were compared using Wilcoxon signed-rank, Fisher exact, and paired t-tests. Interobserver agreement was evaluated with the Kendall rank correlation coefficient (W). The final cohort consisted of 38 patients (23 male) with a mean ± standard deviation age of 8.6 ± 5.7 years. Mean image quality scores for evaluated structures ranged from 3.8 ± 1.1 to 4.6 ± 0.6 in the DL group, compared with 3.1 ± 1.1 to 3.9 ± 0.6 in the non-DL group (all P < 0.001). All radiologists preferred DL in most cases (32-37/38, P < 0.001). There was a 2.3-fold increase in SNR and a 3.9% reduction in the mean number of edges in DL compared with non-DL images (both P < 0.001). In all scored anatomic structures except the spine and the non-DL adrenals, interobserver agreement was moderate to substantial (W = 0.41-0.74, all P < 0.01). In a broad spectrum of pediatric patients undergoing contrast-enhanced Dixon abdominal MRI acquisitions, the prototype deep learning reconstruction was generally preferred over conventional reconstruction, with improved image quality across a wide range of structures.
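
As an illustrative aside: the two quantitative metrics in this study are ROI-based SNR and a mean-number-of-edges sharpness surrogate. The sketch below computes an SNR (signal ROI mean over noise ROI standard deviation) and an edge fraction from a simple gradient-magnitude threshold; the images, ROIs, threshold, and edge definition are assumptions, since the paper's exact edge metric is not described here.

```python
# Hedged sketch: ROI-based SNR and a gradient-threshold "edge" count surrogate.
# Images, ROIs, and the threshold are synthetic and purely illustrative.
import numpy as np

def snr(image, signal_mask, noise_mask):
    """SNR = mean(signal ROI) / SD(noise ROI)."""
    return image[signal_mask].mean() / image[noise_mask].std(ddof=1)

def edge_fraction(image, threshold):
    """Fraction of pixels whose gradient magnitude exceeds `threshold`."""
    gy, gx = np.gradient(image.astype(float))
    return float((np.hypot(gx, gy) > threshold).mean())

rng = np.random.default_rng(2)
conventional = rng.normal(100, 20, size=(256, 256))         # noisier reconstruction
denoised = rng.normal(100, 8, size=(256, 256))              # DL-style, lower noise

liver = np.zeros((256, 256), dtype=bool); liver[60:120, 60:120] = True
air = np.zeros((256, 256), dtype=bool); air[5:30, 5:30] = True

for name, img in [("conventional", conventional), ("DL-style", denoised)]:
    print(f"{name}: SNR={snr(img, liver, air):.1f}, "
          f"edge fraction={edge_fraction(img, threshold=30):.3f}")
```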