
Classification of Brain Tumors using Hybrid Deep Learning Models

Neerav Nemchand Gala

arXiv preprint · Aug 2, 2025
The use of Convolutional Neural Networks (CNNs) has greatly improved the interpretation of medical images. However, conventional CNNs typically demand extensive computational resources and large training datasets. To address these limitations, this study applied transfer learning to achieve strong classification performance using fewer training samples. Specifically, the study compared EfficientNetV2 with its predecessor, EfficientNet, and with ResNet50 in classifying brain tumors into three types: glioma, meningioma, and pituitary tumors. Results showed that EfficientNetV2 delivered superior performance compared to the other models. However, this improvement came at the cost of increased training time, likely due to the model's greater complexity.
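A minimal transfer-learning sketch of the setup described above is given below, assuming a torchvision EfficientNetV2-S backbone; the variant choice, freezing strategy, and hyperparameters are illustrative assumptions, not the study's exact configuration.

```python
# Hedged sketch: transfer learning for 3-class brain tumor classification.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained EfficientNetV2-S and freeze its feature extractor.
model = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the classification head with a 3-way output:
# glioma, meningioma, pituitary tumor.
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 3)

# Only the new head (and any unfrozen layers) receives gradient updates.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```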

Multimodal Attention-Aware Fusion for Diagnosing Distal Myopathy: Evaluating Model Interpretability and Clinician Trust

Mohsen Abbaspour Onari, Lucie Charlotte Magister, Yaoxin Wu, Amalia Lupi, Dario Creazzo, Mattia Tordin, Luigi Di Donatantonio, Emilio Quaia, Chao Zhang, Isel Grau, Marco S. Nobile, Yingqian Zhang, Pietro Liò

arXiv preprint · Aug 2, 2025
Distal myopathy represents a genetically heterogeneous group of skeletal muscle disorders with broad clinical manifestations, posing diagnostic challenges in radiology. To address this, we propose a novel multimodal attention-aware fusion architecture that combines features extracted from two distinct deep learning models, one capturing global contextual information and the other focusing on local details, representing complementary aspects of the input data. Uniquely, our approach integrates these features through an attention gate mechanism, enhancing both predictive performance and interpretability. Our method achieves a high classification accuracy on the BUSI benchmark and a proprietary distal myopathy dataset, while also generating clinically relevant saliency maps that support transparent decision-making in medical diagnosis. We rigorously evaluated interpretability through (1) functionally grounded metrics, coherence scoring against reference masks and incremental deletion analysis, and (2) application-grounded validation with seven expert radiologists. While our fusion strategy boosts predictive performance relative to single-stream and alternative fusion strategies, both quantitative and qualitative evaluations reveal persistent gaps in anatomical specificity and clinical usefulness of the interpretability. These findings highlight the need for richer, context-aware interpretability methods and human-in-the-loop feedback to meet clinicians' expectations in real-world diagnostic settings.
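A minimal sketch of an attention-gated fusion of two feature streams (one global, one local) follows; the layer sizes and sigmoid gating form are illustrative assumptions rather than the paper's exact architecture.

```python
# Hedged sketch: attention-gated fusion of a global and a local feature stream.
import torch
import torch.nn as nn

class AttentionGateFusion(nn.Module):
    def __init__(self, dim_global: int, dim_local: int, dim_out: int):
        super().__init__()
        self.proj_g = nn.Linear(dim_global, dim_out)
        self.proj_l = nn.Linear(dim_local, dim_out)
        # The gate produces per-feature weights in [0, 1] from both streams.
        self.gate = nn.Sequential(nn.Linear(2 * dim_out, dim_out), nn.Sigmoid())

    def forward(self, feat_global, feat_local):
        g = self.proj_g(feat_global)
        l = self.proj_l(feat_local)
        alpha = self.gate(torch.cat([g, l], dim=-1))
        # Convex combination of the two streams, weighted by the gate.
        return alpha * g + (1.0 - alpha) * l

fused = AttentionGateFusion(2048, 768, 512)(torch.randn(4, 2048), torch.randn(4, 768))
print(fused.shape)  # torch.Size([4, 512])
```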

Deep Learning in Myocarditis: A Novel Approach to Severity Assessment

Nishimori, M., Otani, T., Asaumi, Y., Ohta-Ogo, K., Ikeda, Y., Amemiya, K., Noguchi, T., Izumi, C., Shinohara, M., Hatakeyama, K., Nishimura, K.

medRxiv preprint · Aug 2, 2025
Background: Myocarditis is a life-threatening disease with significant hemodynamic risks during the acute phase. Although histopathological examination of myocardial biopsy specimens remains the gold standard for diagnosis, there is no established method for objectively quantifying cardiomyocyte damage. We aimed to develop an AI model to evaluate clinical myocarditis severity using comprehensive pathology data.

Methods: We retrospectively analyzed 314 patients (1076 samples) who underwent myocardial biopsy from 2002 to 2021 at the National Cerebrovascular Center. Among these patients, 158 were diagnosed with myocarditis based on the Dallas criteria. A Multiple Instance Learning (MIL) model served as a pre-trained classifier to detect myocarditis across whole-slide images. We then constructed two clinical severity-prediction models: (1) a logistic regression model (Model 1) using the density of inflammatory cells per unit area, and (2) a Transformer-based model (Model 2), which processed the top-ranked patches identified by the MIL model to predict clinically severe outcomes.

Results: Model 1 achieved an AUROC of 0.809, indicating a robust association between inflammatory cell density and severe myocarditis. In contrast, Model 2, the Transformer-based approach, yielded an AUROC of 0.993 and demonstrated higher accuracy and precision for severity prediction. Attention score visualizations showed that Model 2 captured both inflammatory cell infiltration and additional morphological features. These findings suggest that combining MIL with Transformer architectures enables more comprehensive identification of key histological markers associated with clinically severe disease.

Conclusions: Our results highlight that a Transformer-based AI model analyzing whole-slide pathology images can accurately assess clinical myocarditis severity. Moreover, simply quantifying the extent of inflammatory cell infiltration also correlates strongly with clinical outcomes. These methods offer a promising avenue for improving diagnostic precision, guiding treatment decisions, and ultimately enhancing patient management. Future prospective studies are warranted to validate these models in broader clinical settings and facilitate their integration into routine pathological workflows.

What is new?
- This is the first study to apply an AI model to the diagnosis and severity assessment of myocarditis.
- New evidence shows that inflammatory cell infiltration is related to the severity of myocarditis.
- Using information from the entire tissue, not just inflammatory cells, allows for a more accurate assessment of myocarditis severity.

What are the clinical implications?
- The AI model allows for an unprecedented histological evaluation of myocarditis severity, which can enhance early diagnosis and intervention strategies.
- Rapid and precise assessments of myocarditis severity by the AI model can support clinicians in making timely and appropriate treatment decisions, potentially improving patient outcomes.
- Incorporating this AI model into clinical practice may streamline diagnostic workflows and optimize the allocation of medical resources, enhancing overall patient care.
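The sketch below illustrates the general idea of feeding the top-ranked MIL patches into a Transformer encoder for severity prediction; the patch count, embedding size, and classification head are assumptions for illustration, not the study's implementation.

```python
# Hedged sketch: severity prediction from top-ranked MIL patches via a Transformer.
import torch
import torch.nn as nn

class PatchSeverityTransformer(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_layers=2, top_k=32):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        layer = nn.TransformerEncoderLayer(feat_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(feat_dim, 1)  # probability of a clinically severe course
        self.top_k = top_k

    def forward(self, patch_feats, mil_scores):
        # patch_feats: (B, N, feat_dim); mil_scores: (B, N) attention from the MIL model.
        idx = mil_scores.topk(self.top_k, dim=1).indices
        top = torch.gather(
            patch_feats, 1, idx.unsqueeze(-1).expand(-1, -1, patch_feats.size(-1))
        )
        tokens = torch.cat([self.cls_token.expand(top.size(0), -1, -1), top], dim=1)
        encoded = self.encoder(tokens)
        return torch.sigmoid(self.head(encoded[:, 0]))

probs = PatchSeverityTransformer()(torch.randn(2, 200, 512), torch.rand(2, 200))
```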

Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed paper · Aug 1, 2025
Predicting brain age from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization in new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap where there is a significant discrepancy between model performance on training data versus unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) in the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) in the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight anatomical regions used to predict age. These results highlight the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study makes valuable contributions to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.
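For readers unfamiliar with the term, the generalization gap quoted above can be expressed as the difference between in-distribution and out-of-distribution mean absolute error (MAE); the toy example below uses synthetic ages, not study data.

```python
# Hedged sketch: generalization gap as external-minus-internal MAE, in years.
import numpy as np

rng = np.random.default_rng(0)
true_internal = rng.uniform(45, 80, size=500)   # stand-in for held-out training-domain ages
true_external = rng.uniform(55, 90, size=300)   # stand-in for an external cohort (e.g. ADNI)
pred_internal = true_internal + rng.normal(0.0, 3.0, size=500)
pred_external = true_external + rng.normal(1.5, 4.0, size=300)

def mae(pred, true):
    return float(np.mean(np.abs(pred - true)))

gap = mae(pred_external, true_external) - mae(pred_internal, true_internal)
print(f"internal MAE = {mae(pred_internal, true_internal):.2f} y, "
      f"external MAE = {mae(pred_external, true_external):.2f} y, gap = {gap:.2f} y")
```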

Anatomical Considerations for Achieving Optimized Outcomes in Individualized Cochlear Implantation.

Timm ME, Avallone E, Timm M, Salcher RB, Rudnik N, Lenarz T, Schurzig D

PubMed paper · Aug 1, 2025
Machine learning models can assist with the selection of electrode arrays required for optimal insertion angles. Cochlear implantation is a successful therapy in patients with severe to profound hearing loss. The effectiveness of a cochlear implant depends on precise insertion and positioning of the electrode array within the cochlea, which is known for its variability in shape and size. Preoperative imaging such as CT or MRI plays a significant role in evaluating cochlear anatomy and planning the surgical approach to optimize outcomes. In this study, preoperative and postoperative CT and CBCT data from 558 cochlear implant patients were analyzed with respect to the influence of anatomical factors and insertion depth on the resulting insertion angle. Machine learning models can predict the insertion depths needed for optimal insertion angles, with performance improving when cochlear dimensions are included in the models. A simple linear regression using insertion depth alone explained 88% of the variability, whereas adding cochlear length or diameter and width improved the explained variance to 94%.
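A minimal sketch of the regression comparison described above, relating insertion angle to insertion depth alone versus depth plus cochlear dimensions, is given below; the synthetic data and coefficients are purely illustrative, not the study's measurements.

```python
# Hedged sketch: explained variance with and without cochlear dimensions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 558
depth = rng.uniform(18, 28, n)         # linear insertion depth (mm), illustrative range
diameter = rng.uniform(8.0, 10.5, n)   # basal turn diameter (mm), illustrative range
width = rng.uniform(6.0, 8.0, n)       # basal turn width (mm), illustrative range
angle = 30 * depth - 40 * diameter - 20 * width + rng.normal(0, 25, n)  # toy relation

model_depth = LinearRegression().fit(depth.reshape(-1, 1), angle)
model_full = LinearRegression().fit(np.column_stack([depth, diameter, width]), angle)

print("R² (depth only):",
      r2_score(angle, model_depth.predict(depth.reshape(-1, 1))))
print("R² (depth + cochlear dimensions):",
      r2_score(angle, model_full.predict(np.column_stack([depth, diameter, width]))))
```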

Establishing a Deep Learning Model That Integrates Pretreatment and Midtreatment Computed Tomography to Predict Treatment Response in Non-Small Cell Lung Cancer.

Chen X, Meng F, Zhang P, Wang L, Yao S, An C, Li H, Zhang D, Li H, Li J, Wang L, Liu Y

PubMed paper · Aug 1, 2025
Patients with identical stages or similar tumor volumes can vary significantly in their responses to radiation therapy (RT) due to individual characteristics, making personalized RT for non-small cell lung cancer (NSCLC) challenging. This study aimed to develop a deep learning model by integrating pretreatment and midtreatment computed tomography (CT) to predict the treatment response in NSCLC patients. We retrospectively collected data from 168 NSCLC patients across 3 hospitals. Data from Shanghai General Hospital (SGH, 35 patients) and Shanxi Cancer Hospital (SCH, 93 patients) were used for model training and internal validation, while data from Linfen Central Hospital (LCH, 40 patients) were used for external validation. Deep learning, radiomics, and clinical features were extracted to establish a varying time interval long short-term memory network for response prediction. Furthermore, we derived a model-deduced personalize dose escalation (DE) for patients predicted to have suboptimal gross tumor volume regression. The area under the receiver operating characteristic curve (AUC) and predicted absolute error were used to evaluate the predictive Response Evaluation Criteria in Solid Tumors classification and the proportion of gross tumor volume residual. DE was calculated as the biological equivalent dose using an /α/β ratio of 10 Gy. The model using only pretreatment CT achieved the highest AUC of 0.762 and 0.687 in internal and external validation respectively, whereas the model integrating both pretreatment and midtreatment CT achieved AUC of 0.869 and 0.798, with predicted absolute error of 0.137 and 0.185, respectively. We performed personalized DE for 29 patients. Their original biological equivalent dose was approximately 72 Gy, within the range of 71.6 Gy to 75 Gy. DE ranged from 77.7 to 120 Gy for 29 patients, with 17 patients exceeding 100 Gy and 8 patients reaching the model's preset upper limit of 120 Gy. Combining pretreatment and midtreatment CT enhances prediction performance for RT response and offers a promising approach for personalized DE in NSCLC.
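The biologically effective dose referenced above is conventionally computed with the linear-quadratic formula BED = n·d·(1 + d/(α/β)); the short sketch below uses the stated α/β of 10 Gy with illustrative fractionation schedules, not patient data from the study.

```python
# Hedged sketch: biologically effective dose (BED) with the linear-quadratic formula.
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

# Example: a conventional 60 Gy in 30 fractions vs. one hypothetical escalated schedule.
print(bed(30, 2.0))   # 72.0 Gy -- consistent with the ~72 Gy baseline quoted above
print(bed(33, 2.2))   # ~88.6 Gy, an illustrative escalation
```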

Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed paper · Aug 1, 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features like muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model to synthetic MR images created with an in-house trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female) with a median age of 56 (interquartile range, 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff distance metrics, including 95% confidence intervals for the cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). Furthermore, the body part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice scores of the ensemble models across body parts (2D = 0.971, 3D = 0.969; P = ns) and body regions (2D = 0.935, 3D = 0.955; P < 0.001) indicate stable performance across all classes. The presented approach facilitates efficient and automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
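For reference, the Sørensen-Dice coefficient reported above can be computed as in this small sketch; the masks are toy arrays, not study data.

```python
# Hedged sketch: Sørensen-Dice coefficient for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[15:45, 15:45] = True
print(f"Dice = {dice(pred, gt):.3f}")
```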

Multimodal multiphasic preoperative image-based deep-learning predicts HCC outcomes after curative surgery.

Hui RW, Chiu KW, Lee IC, Wang C, Cheng HM, Lu J, Mao X, Yu S, Lam LK, Mak LY, Cheung TT, Chia NH, Cheung CC, Kan WK, Wong TC, Chan AC, Huang YH, Yuen MF, Yu PL, Seto WK

PubMed paper · Aug 1, 2025
HCC recurrence frequently occurs after curative surgery. Histological microvascular invasion (MVI) predicts recurrence but cannot provide preoperative prognostication, whereas clinical prediction scores show variable performance. Recurr-NET, a multimodal multiphasic residual-network random survival forest deep-learning model incorporating preoperative CT and clinical parameters, was developed to predict HCC recurrence. Preoperative triphasic CT scans were retrieved from patients with resected histology-confirmed HCC from 4 centers in Hong Kong (internal cohort). The internal cohort was randomly divided in an 8:2 ratio into training and internal validation. External testing was performed in an independent cohort from Taiwan. Among 1231 patients (age 62.4 y, 83.1% male, 86.8% viral hepatitis, and median follow-up 65.1 mo), cumulative HCC recurrence rates at years 2 and 5 were 41.8% and 56.4%, respectively. Recurr-NET achieved excellent accuracy in predicting recurrence from years 1 to 5 (internal cohort AUROC 0.770-0.857; external AUROC 0.758-0.798), significantly outperforming MVI (internal AUROC 0.518-0.590; external AUROC 0.557-0.615) and multiple clinical risk scores (ERASL-PRE, ERASL-POST, DFT, and Shim scores) (internal AUROC 0.523-0.587, external AUROC 0.524-0.620), respectively (all p < 0.001). Recurr-NET was superior to MVI in stratifying recurrence risks at year 2 (internal: 72.5% vs. 50.0% in MVI; external: 65.3% vs. 46.6% in MVI) and year 5 (internal: 86.4% vs. 62.5% in MVI; external: 81.4% vs. 63.8% in MVI) (all p < 0.001). Recurr-NET was also superior to MVI in stratifying liver-related and all-cause mortality (all p < 0.001). The performance of Recurr-NET remained robust in subgroup analyses. Recurr-NET accurately predicted HCC recurrence, outperforming MVI and clinical prediction scores, highlighting its potential in preoperative prognostication.
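A hedged sketch of the general pattern, combining imaging-derived features with clinical parameters in a random survival forest (here via scikit-survival), is shown below; the feature names and synthetic data are illustrative, and this is not the Recurr-NET pipeline itself.

```python
# Hedged sketch: survival modeling from deep imaging features plus clinical data.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 200
deep_features = rng.normal(size=(n, 64))   # stand-in for CNN embeddings of triphasic CT
clinical = rng.normal(size=(n, 5))         # stand-in for clinical parameters
X = np.hstack([deep_features, clinical])

event = rng.random(n) < 0.5                # recurrence observed (True) or censored
time = rng.uniform(1, 60, n)               # months to recurrence / last follow-up
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0)
rsf.fit(X, y)
print("concordance index:", rsf.score(X, y))
```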

M4CXR: Exploring Multitask Potentials of Multimodal Large Language Models for Chest X-Ray Interpretation.

Park J, Kim S, Yoon B, Hyun J, Choi K

PubMed paper · Aug 1, 2025
The rapid evolution of artificial intelligence, especially in large language models (LLMs), has significantly impacted various domains, including healthcare. In chest X-ray (CXR) analysis, previous studies have employed LLMs, but with limitations: either underutilizing the LLMs' capability for multitask learning or lacking clinical accuracy. This article presents M4CXR, a multimodal LLM designed to enhance CXR interpretation. The model is trained on a visual instruction-following dataset that integrates various task-specific datasets in a conversational format. As a result, the model supports multiple tasks such as medical report generation (MRG), visual grounding, and visual question answering (VQA). M4CXR achieves state-of-the-art clinical accuracy in MRG by employing a chain-of-thought (CoT) prompting strategy, in which it identifies findings in CXR images and subsequently generates corresponding reports. The model is adaptable to various MRG scenarios depending on the available inputs, such as single-image, multi-image, and multi-study contexts. In addition to MRG, M4CXR performs visual grounding at a level comparable to specialized models and demonstrates outstanding performance in VQA. Both quantitative and qualitative assessments reveal M4CXR's versatility in MRG, visual grounding, and VQA, while consistently maintaining clinical accuracy.
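The chain-of-thought prompting strategy described above can be sketched as a two-step prompt, first eliciting findings and then conditioning the report on them; the prompt wording and the generate() call are illustrative assumptions, not M4CXR's actual interface.

```python
# Hedged sketch: a two-step chain-of-thought prompt for report generation.
def build_cot_prompts(image_token: str = "<image>"):
    findings_prompt = (
        f"{image_token}\n"
        "Step 1: List the radiographic findings visible in this chest X-ray."
    )
    report_template = (
        "Step 2: Using the findings below, write the Findings and Impression "
        "sections of a radiology report.\nFindings identified:\n{findings}"
    )
    return findings_prompt, report_template

findings_prompt, report_template = build_cot_prompts()
# findings = model.generate(image, findings_prompt)                          # hypothetical call
# report   = model.generate(image, report_template.format(findings=findings))  # hypothetical call
print(findings_prompt)
```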