
A magnetic resonance imaging (MRI)-based deep learning radiomics model predicts recurrence-free survival in lung cancer patients after surgical resection of brain metastases.

Li B, Li H, Chen J, Xiao F, Fang X, Guo R, Liang M, Wu Z, Mao J, Shen J

PubMed · Jun 1 2025
To develop and validate a magnetic resonance imaging (MRI)-based deep learning radiomics model (DLRM) to predict recurrence-free survival (RFS) in lung cancer patients after surgical resection of brain metastases (BrMs). A total of 215 lung cancer patients with BrMs confirmed by surgical pathology were retrospectively included from five centres; 167 patients were assigned to the training cohort and 48 to the external test cohort. All patients underwent regular follow-up brain MRIs. Clinical and morphological MRI models for predicting RFS were built using univariate and multivariate Cox regression. Handcrafted and deep learning (DL) signatures were constructed from pretreatment BrMs MR images using the least absolute shrinkage and selection operator (LASSO) method. A DLRM was established by integrating the clinical and morphological MRI predictors with the handcrafted and DL signatures based on the multivariate Cox regression coefficients. The Harrell C-index, area under the receiver operating characteristic curve (AUC), and Kaplan-Meier survival analysis were used to evaluate model performance. The DLRM showed satisfactory performance in predicting RFS and 6- to 18-month intracranial recurrence after BrMs resection, achieving a C-index of 0.79 and AUCs of 0.84-0.90 in the training set, and a C-index of 0.74 and AUCs of 0.71-0.85 in the external test set. The DLRM outperformed the clinical model, morphological MRI model, handcrafted signature, DL signature, and combined clinical-morphological MRI model in predicting RFS (P < 0.05), and successfully stratified patients into high-risk and low-risk intracranial recurrence groups (P < 0.001). This MRI-based DLRM could predict RFS in lung cancer patients after surgical resection of BrMs.
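The Harrell C-index used to evaluate this model measures the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed recurrence times, accounting for censoring. A minimal sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's concordance index for right-censored survival data.

    time  : observed follow-up times
    event : 1 if recurrence was observed, 0 if censored
    risk  : model-predicted risk scores (higher = earlier recurrence expected)
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # a pair is comparable only if the shorter time is an event
        for j in range(n):
            if time[j] > time[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties in predicted risk count half
    return concordant / comparable

# toy example: risk perfectly inverse to survival time -> C-index of 1.0
print(harrell_c_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.1]))
```

A C-index of 0.5 corresponds to random ordering, 1.0 to perfect concordance, so the reported 0.74 on the external test set indicates genuine (if imperfect) discriminative ability.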

Automated Ensemble Multimodal Machine Learning for Healthcare.

Imrie F, Denner S, Brunschwig LS, Maier-Hein K, van der Schaar M

PubMed · Jun 1 2025
The application of machine learning in medicine and healthcare has led to numerous diagnostic and prognostic models. However, despite their success, current approaches generally issue predictions using data from a single modality. This stands in stark contrast with clinician decision-making, which draws on diverse information from multiple sources. While several multimodal machine learning approaches exist, significant challenges remain in developing multimodal systems, hindering clinical adoption. In this paper, we introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning. AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies. In an illustrative application using a multimodal skin lesion dataset, we highlight the importance of multimodal machine learning and the power of combining multiple fusion strategies using ensemble learning. We have open-sourced our framework as a tool for the community and hope it will accelerate the uptake of multimodal machine learning in healthcare and spur further innovation.
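One common fusion strategy in frameworks like this is late fusion, where unimodal models are trained separately and their predicted class probabilities are combined afterwards. A minimal sketch, assuming a weighted average of probabilities (the function and weights here are illustrative, not AutoPrognosis-M's actual API):

```python
import numpy as np

def late_fusion(prob_imaging, prob_tabular, weights=(0.5, 0.5)):
    """Combine class probabilities from an imaging model and a tabular model
    by a weighted average (late fusion), then renormalize."""
    p = weights[0] * np.asarray(prob_imaging) + weights[1] * np.asarray(prob_tabular)
    return p / p.sum(axis=-1, keepdims=True)

# two unimodal models disagree on a hypothetical 3-class skin-lesion case
p_img = [0.6, 0.3, 0.1]
p_tab = [0.2, 0.5, 0.3]
fused = late_fusion(p_img, p_tab)  # averaged: [0.4, 0.4, 0.2]
```

Early fusion (concatenating features before a joint model) and intermediate fusion (joining learned representations) are the usual alternatives an ensemble over strategies would draw on.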

A Survey of Surrogates and Health Care Professionals Indicates Support of Cognitive Motor Dissociation-Assisted Prognostication.

Heinonen GA, Carmona JC, Grobois L, Kruger LS, Velazquez A, Vrosgou A, Kansara VB, Shen Q, Egawa S, Cespedes L, Yazdi M, Bass D, Saavedra AB, Samano D, Ghoshal S, Roh D, Agarwal S, Park S, Alkhachroum A, Dugdale L, Claassen J

PubMed · Jun 1 2025
Prognostication of patients with acute disorders of consciousness is imprecise, but more accurate technology-supported predictions, such as cognitive motor dissociation (CMD), are emerging. CMD refers to the detection of willful brain activation following motor commands using functional magnetic resonance imaging or machine learning-supported analysis of the electroencephalogram in clinically unresponsive patients. CMD is associated with long-term recovery, but acceptance by surrogates and health care professionals is uncertain. The objective of this study was to determine receptiveness to using CMD to inform goals of care (GoC) decisions and research participation among health care professionals and surrogates of behaviorally unresponsive patients. This was a two-center study of surrogates of, and health care professionals caring for, unconscious patients with severe neurological injury who were enrolled in two prospective US-based studies. Participants completed a 13-item survey assessing demographics, religiosity, minimal acceptable level of recovery, enthusiasm for research participation, and receptiveness to using CMD to support GoC decisions. Completed surveys were obtained from 196 participants (133 health care professionals and 63 surrogates). Across all respondents, 93% indicated that they would want their loved one or the patient they cared for to participate in a research study that supports recovery of consciousness if CMD were detected, compared with 58% if CMD were not detected. Health care professionals were more likely than surrogates to change GoC with a positive (78% vs. 59%, p = 0.005) or negative (83% vs. 59%, p = 0.0002) CMD result. Participants who reported that religion was the most important part of their life were least likely to change GoC with or without CMD. Participants who identified as Black (odds ratio [OR] 0.12, 95% confidence interval [CI] 0.04-0.36) or Hispanic/Latino (OR 0.39, 95% CI 0.20-0.75) and those for whom religion was the most important part of their life (OR 0.18, 95% CI 0.05-0.64) were more likely to accept a lower minimum level of recovery. Receptiveness to technology-supported prognostication and enthusiasm for clinical trial participation were evident across a diverse spectrum of health care professionals and surrogate decision-makers. Education for surrogates and health care professionals should accompany the integration of technology-supported prognostication.
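Odds ratios with 95% confidence intervals like those reported above are conventionally computed from a 2x2 contingency table using a Wald interval on the log-odds scale. A minimal sketch with made-up counts (not this study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed with outcome,    b = exposed without outcome
    c = unexposed with outcome,  d = unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative: 10/20 vs. 30/15 -> OR 0.25, 95% CI roughly 0.09-0.67
or_, lo, hi = odds_ratio_ci(10, 20, 30, 15)
```

An OR below 1 with a CI excluding 1 (as in the reported 0.12, 0.39, and 0.18) indicates a statistically significant association.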

Assessing the diagnostic accuracy and prognostic utility of artificial intelligence detection and grading of coronary artery calcification on nongated computed tomography (CT) thorax.

Shear B, Graby J, Murphy D, Strong K, Khavandi A, Burnett TA, Charters PFP, Rodrigues JCL

PubMed · Jun 1 2025
This study assessed the diagnostic accuracy and prognostic implications of an artificial intelligence (AI) tool for coronary artery calcification (CAC) assessment on nongated, noncontrast thoracic computed tomography (CT). A single-centre retrospective analysis of 75 consecutive patients per age group (<40, 40-49, 50-59, 60-69, 70-79, 80-89, and ≥90 years) undergoing nongated, noncontrast CT (January-December 2015) was conducted. The AI analysis reported CAC presence and generated an Agatston score, and its performance was compared with baseline CT reports and a dedicated radiologist re-review. Interobserver variability between AI and radiologist assessments was measured using Cohen's κ. All-cause mortality was recorded, and its association with AI-detected CAC was tested. A total of 291 patients (mean age: 64 ± 19 years, 51% female) were included, with 80% (234/291) of AI reports passing radiologist quality assessment. CAC was reported in 7% (17/234) of initial clinical reports, 58% (135/234) on radiologist re-review, and 57% (134/234) by AI analysis. After manual quality assurance (QA) assessment, the AI tool demonstrated high sensitivity (96%), specificity (96%), positive predictive value (95%), and negative predictive value (97%) for CAC detection compared with radiologist re-review. Interobserver agreement was strong for CAC prevalence (κ = 0.92) and moderate for severity grading (κ = 0.60). AI-detected CAC presence and severity predicted all-cause mortality (p < 0.001). The AI tool proved feasible for analysing noncontrast, nongated thoracic CTs and offers prognostic insight if integrated into routine practice, although manual quality assessment remains essential. This AI tool represents a potential enhancement to CAC detection and reporting on routine noncardiac chest CT.
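The accuracy metrics and Cohen's κ reported here all derive from a 2x2 confusion matrix comparing the AI tool against the radiologist reference. A minimal sketch (counts are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and Cohen's kappa for a binary
    detector compared against a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)                                  # Cohen's kappa
    return sens, spec, ppv, npv, kappa

# illustrative confusion matrix for 234 quality-passed scans
metrics = diagnostic_metrics(128, 6, 6, 94)
```

κ corrects raw percentage agreement for the agreement expected by chance, which is why a prevalence-heavy task can show high raw agreement but a lower κ for severity grading.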

MedKAFormer: When Kolmogorov-Arnold Theorem Meets Vision Transformer for Medical Image Representation.

Wang G, Zhu Q, Song C, Wei B, Li S

PubMed · Jun 1 2025
Vision Transformers (ViTs) suffer from high parameter complexity because they rely on Multi-Layer Perceptrons (MLPs) for nonlinear representation. This issue is particularly challenging in medical image analysis, where labeled data is limited, leading to inadequate feature representation. Existing methods have attempted to optimize either the patch embedding stage or the non-embedding stage of ViTs, but they have struggled to balance effective modeling, parameter complexity, and data availability. Recently, the Kolmogorov-Arnold Network (KAN) was introduced as an alternative to MLPs, offering a potential solution to the large parameter count of ViTs. However, KAN cannot be directly integrated into ViTs due to challenges such as handling 2D structured data and the curse of dimensionality. To solve this problem, we propose MedKAFormer, the first ViT model to incorporate the Kolmogorov-Arnold (KA) theorem for medical image representation. It includes a Dynamic Kolmogorov-Arnold Convolution (DKAC) layer for flexible nonlinear modeling in the patch embedding stage. Additionally, it introduces a Nonlinear Sparse Token Mixer (NSTM) and a Nonlinear Dynamic Filter (NDF) in the non-embedding stage. These components provide comprehensive nonlinear representation while reducing model overfitting. MedKAFormer reduces parameter complexity by 85.61% compared to ViT-Base and achieves competitive results on 14 medical datasets across various imaging modalities and structures.
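The core idea distinguishing KAN layers from MLPs is that each input-output edge carries its own learnable univariate function rather than a scalar weight. A toy sketch of that edge-wise structure, using a fixed Gaussian radial basis for simplicity (standard KANs use B-splines; this is an illustration, not MedKAFormer's DKAC layer):

```python
import numpy as np

rng = np.random.default_rng(0)

class KANEdgeLayer:
    """Toy KAN-style layer: each (input, output) edge carries a learnable
    univariate function phi(x) = sum_k w_k * B_k(x), echoing the
    Kolmogorov-Arnold representation f(x) = sum_q Phi_q(sum_p phi_{q,p}(x_p))."""

    def __init__(self, d_in, d_out, n_basis=8):
        self.centers = np.linspace(-2, 2, n_basis)   # fixed basis grid
        self.width = 4 / (n_basis - 1)
        # one weight vector per edge: (d_in, d_out, n_basis)
        self.w = rng.normal(0, 0.1, size=(d_in, d_out, n_basis))

    def forward(self, x):                            # x: (batch, d_in)
        # Gaussian basis activations: (batch, d_in, n_basis)
        b = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        # evaluate phi on every edge, then sum over inputs -> (batch, d_out)
        return np.einsum('bik,iok->bo', b, self.w)

layer = KANEdgeLayer(d_in=4, d_out=3)
out = layer.forward(rng.normal(size=(2, 4)))         # shape (2, 3)
```

The parameter count scales with `d_in * d_out * n_basis`, which is why basis size and sparsity choices govern whether a KAN-style layer is actually cheaper than the MLP it replaces.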

Association of the characteristics of brain magnetic resonance imaging with genes related to disease onset in schizophrenia patients.

Lin J, Wang B, Chen S, Cao F, Zhang J, Lu Z

PubMed · Jun 1 2025
Schizophrenia (SCH) is a complex neurodevelopmental disorder whose pathogenesis is not fully elucidated. This article aims to reveal disease-specific brain structural and functional changes and their potential genetic basis by analyzing the characteristics of brain magnetic resonance imaging (MRI) in SCH patients and related gene expression patterns. Differentially expressed genes (DEGs) between SCH and healthy control (NC) groups in the GSE48072 dataset were identified and functionally analyzed, and a protein-protein interaction (PPI) network was constructed to screen for core genes (CGs). Meanwhile, MRI data from COBRE, the Human Connectome Project (HCP), the 1000 Functional Connectomes Project (FCP), and the Consortium for Reliability and Reproducibility (CoRR) were used to explore differences in brain activity patterns between SCH patients and the NC group using a 3D deep aggregation network (3D DANet) machine learning approach. A correlation analysis was performed between the identified CGs and MRI imaging characteristics. In total, 82 DEGs were collected from the GSE48072 dataset, primarily involved in cytotoxic granules, growth factor binding, and graft-versus-host disease pathways. The PPI network identified KLRD1, KLRF1, CD244, GZMH, GZMA, GZMB, PRF1, and SLAMF6 as CGs. SCH patients exhibited relatively enhanced activity in the frontoparietal attention network (FAN) and default mode network (DMN) across the four datasets, while most other networks showed a weakening trend. The 3D DANet demonstrated higher accuracy, specificity, and sensitivity in brain image classification. The correlation between DMN enhancement and genetic abnormalities was the strongest, followed by enhancement of the frontal and parietal attention networks; correlations with weakening of the sensory-motor and occipital networks were relatively weak. At the gene level, the strongest correlation was observed between MRI characteristics and the KLRD1 and CD244 genes. The granzyme-mediated programmed cell death signaling pathway is related to the pathogenesis of SCH, and CD244 may serve as a potential biological marker for diagnosing SCH. The 3D DANet method improved the detection precision of brain structural and functional changes in SCH patients, providing a new perspective for understanding the biological basis of the disease.

DeepValve: The first automatic detection pipeline for the mitral valve in Cardiac Magnetic Resonance imaging.

Monopoli G, Haas D, Singh A, Aabel EW, Ribe M, Castrini AI, Hasselberg NE, Bugge C, Five C, Haugaa K, Forsch N, Thambawita V, Balaban G, Maleckar MM

PubMed · Jun 1 2025
Mitral valve (MV) assessment is key to diagnosing valvular disease and to addressing its serious downstream complications. Cardiac magnetic resonance (CMR) has become an essential diagnostic tool in MV disease, offering detailed views of valve structure and function and overcoming the limitations of other imaging modalities. Automated detection of the MV leaflets in CMR could enable rapid and precise assessments that enhance diagnostic accuracy. To address this gap, we introduce DeepValve, the first deep learning (DL) pipeline for MV detection using CMR. Within DeepValve, we tested three valve detection models: a keypoint-regression model (UNET-REG), a segmentation model (UNET-SEG), and a hybrid model based on keypoint detection (DSNT-REG). We also propose metrics for evaluating the quality of MV detection, including Procrustes-based metrics (UNET-REG, DSNT-REG) and customized Dice-based metrics (UNET-SEG). We developed and tested our models on a clinical dataset comprising 120 CMR images from patients with confirmed MV disease (mitral valve prolapse and mitral annular disjunction). Our results show that DSNT-REG delivered the best regression performance, accurately locating the valve landmarks. UNET-SEG achieved satisfactory Dice and customized Dice scores, also accurately predicting valve location and topology. Overall, our work represents a critical first step towards automated MV assessment using DL in CMR, paving the way for improved clinical assessment in MV disease.
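Procrustes-based metrics compare predicted and reference landmark sets after factoring out translation, scale, and rotation, so only residual shape error is measured. A minimal sketch of a Procrustes disparity (an illustrative standard formulation, not necessarily DeepValve's exact metric):

```python
import numpy as np

def procrustes_disparity(X, Y):
    """Sum of squared differences between two landmark sets (n_points, dim)
    after optimally translating, scaling, and rotating Y onto X."""
    X0 = X - X.mean(0)
    Y0 = Y - Y.mean(0)
    X0 /= np.linalg.norm(X0)                 # remove scale
    Y0 /= np.linalg.norm(Y0)
    U, s, Vt = np.linalg.svd(X0.T @ Y0)      # optimal rotation via SVD
    R = (U @ Vt).T
    Y_aligned = s.sum() * (Y0 @ R)           # optimal scale is trace of S
    return np.sum((X0 - Y_aligned) ** 2)

# a rotated, scaled, and shifted copy of the same landmarks -> ~zero disparity
pts = np.array([[0.0, 0], [1, 0], [1, 1], [0, 1]])
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
d = procrustes_disparity(pts, 2 * pts @ Rz + 5)
```

Because the alignment is similarity-invariant, this kind of metric rewards correct valve shape and landmark ordering rather than absolute image coordinates.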

Artificial intelligence medical image-aided diagnosis system for risk assessment of adjacent segment degeneration after lumbar fusion surgery.

Dai B, Liang X, Dai Y, Ding X

PubMed · Jun 1 2025
The existing assessment of adjacent segment degeneration (ASD) risk after lumbar fusion surgery focuses on a single type of clinical information or imaging manifestation. In the early stages, obvious degeneration characteristics are difficult to detect, so patients' true risks cannot be fully revealed. Evaluation based on imaging alone ignores patients' clinical symptoms and changes in quality of life, limiting the understanding of the natural course of ASD and the comprehensive assessment of its risk factors, and hindering the development of effective prevention strategies. To improve the quality of postoperative management and effectively identify the characteristics of ASD, this paper studies the risk assessment of ASD after lumbar fusion surgery using an artificial intelligence (AI) medical image-aided diagnosis system. First, a collaborative attention mechanism extracts single-modal features and fuses the multi-modal features of computed tomography (CT) and magnetic resonance imaging (MRI) images. Then, the similarity matrix is weighted to achieve complementarity of multi-modal information, and the stability of feature extraction is improved through a residual network structure. Finally, a fully connected network (FCN) is combined with a multi-task learning framework to provide a more comprehensive assessment of ASD risk. Experimental results show that, compared with three advanced models, three-dimensional convolutional neural networks (3D-CNN), U-Net++, and deep residual networks (DRN), the proposed model's accuracy is 3.82%, 6.17%, and 6.68% higher, respectively; its precision is 0.56%, 1.09%, and 4.01% higher; and its recall is 3.41%, 4.85%, and 5.79% higher. These results indicate that the AI medical image-aided diagnosis system can help accurately identify the characteristics of ASD and effectively assess risk after lumbar fusion surgery.

Driving Knowledge to Action: Building a Better Future With Artificial Intelligence-Enabled Multidisciplinary Oncology.

Loaiza-Bonilla A, Thaker N, Chung C, Parikh RB, Stapleton S, Borkowski P

PubMed · Jun 1 2025
Artificial intelligence (AI) is transforming multidisciplinary oncology at an unprecedented pace, redefining how clinicians detect, classify, and treat cancer. From earlier and more accurate diagnoses to personalized treatment planning, AI's impact is evident across radiology, pathology, radiation oncology, and medical oncology. By leveraging vast and diverse data-including imaging, genomic, clinical, and real-world evidence-AI algorithms can uncover complex patterns, accelerate drug discovery, and help identify optimal treatment regimens for each patient. However, realizing the full potential of AI also necessitates addressing concerns regarding data quality, algorithmic bias, explainability, privacy, and regulatory oversight-especially in low- and middle-income countries (LMICs), where disparities in cancer care are particularly pronounced. This study provides a comprehensive overview of how AI is reshaping cancer care, reviews its benefits and challenges, and outlines ethical and policy implications in line with ASCO's 2025 theme, <i>Driving Knowledge to Action.</i> We offer concrete calls to action for clinicians, researchers, industry stakeholders, and policymakers to ensure that AI-driven, patient-centric oncology is accessible, equitable, and sustainable worldwide.

Virtual monochromatic image-based automatic segmentation strategy using deep learning method.

Chen L, Yu S, Chen Y, Wei X, Yang J, Guo C, Zeng W, Yang C, Zhang J, Li T, Lin C, Le X, Zhang Y

PubMed · Jun 1 2025
The image quality of single-energy CT (SECT) limits the accuracy of automatic segmentation. Dual-energy CT (DECT) may potentially improve automatic segmentation, yet the performance and strategy have not been investigated thoroughly. Based on DECT-generated virtual monochromatic images (VMIs), this study proposed a novel deep learning model (MIAU-Net) and evaluated its segmentation performance on head organs-at-risk (OARs). VMIs from 40 keV to 190 keV were retrospectively generated at intervals of 10 keV using the DECT scans of 46 patients. Images with expert delineation were used for training, validating, and testing MIAU-Net for automatic segmentation. The performance of MIAU-Net was compared with the existing U-Net, Attention U-Net, nnU-Net, and TransFuse methods based on the Dice Similarity Coefficient (DSC). Correlation analysis was performed to evaluate and optimize the impact of different virtual energies on segmentation accuracy. Using MIAU-Net, average DSCs across all virtual energy levels were 93.78%, 81.75%, 84.46%, 92.85%, 94.40%, and 84.75% for the brain stem, optic chiasm, lens, mandible, eyes, and optic nerves, respectively, higher than previous publications using SECT. MIAU-Net achieved the highest average DSC (88.84%) and the fewest parameters (14.54 M) among all tested models. The results suggest that 60-80 keV is the optimal VMI energy range for soft-tissue delineation, while 100 keV is optimal for skeleton segmentation. This work proposed and validated a novel deep learning model for DECT-based automatic segmentation, suggesting potential advantages and OAR-specific optimal energies when using VMIs for automatic delineation.
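The Dice Similarity Coefficient used throughout this comparison measures voxel overlap between a predicted and a reference mask: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch:

```python
import numpy as np

def dice(pred, ref):
    """Dice Similarity Coefficient between two binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty -> perfect match

pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, ref)  # 2*2 / (3+3) ≈ 0.667
```

DSC penalizes both false positives and false negatives symmetrically, which is why small structures such as the optic chiasm and lens typically score lower than large ones like the brain stem.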
