Page 255 of 3093083 results

Optimizing MRI sequence classification performance: insights from domain shift analysis.

Mahmutoglu MA, Rastogi A, Brugnara G, Vollmuth P, Foltyn-Dumitru M, Sahm F, Pfister S, Sturm D, Bendszus M, Schell M

pubmed logopapers May 26 2025
MRI sequence classification becomes challenging in multicenter studies due to variability in imaging protocols, leading to unreliable metadata and requiring labor-intensive manual annotation. While numerous automated MRI sequence identification models are available, they frequently encounter the issue of domain shift, which detrimentally impacts their accuracy. This study addresses domain shift, particularly from adult to pediatric MRI data, by evaluating the effectiveness of pre-trained models under these conditions. This retrospective and multicentric study explored the efficiency of a pre-trained convolutional model (ResNet) and a CNN-Transformer hybrid model (MedViT) in handling domain shift. The study involved training ResNet-18 and MedViT models on an adult MRI dataset and testing them on a pediatric dataset, with expert domain knowledge adjustments applied to account for differences in sequence types. The MedViT model demonstrated superior performance compared to ResNet-18 and benchmark models, achieving an accuracy of 0.893 (95% CI 0.880-0.904). Expert domain knowledge adjustments further improved the MedViT model's accuracy to 0.905 (95% CI 0.893-0.916), showcasing its robustness in handling domain shift. Advanced neural network architectures like MedViT and expert domain knowledge on the target dataset significantly enhance the performance of MRI sequence classification models under domain shift conditions. By combining the strengths of CNNs and transformers, hybrid architectures offer enhanced robustness for reliable automated MRI sequence classification in diverse research and clinical settings.
Question: Domain shift between adult and pediatric MRI data limits deep learning model accuracy, requiring solutions for reliable sequence classification across diverse patient populations.
Findings: The MedViT model outperformed ResNet-18 in pediatric imaging; expert domain knowledge adjustment further improved accuracy, demonstrating robustness across diverse datasets.
Clinical relevance: This study enhances MRI sequence classification by leveraging advanced neural networks and expert domain knowledge to mitigate domain shift, boosting diagnostic precision and efficiency across diverse patient populations in multicenter environments.
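The "expert domain knowledge adjustment" can be pictured as a post hoc remapping of predicted sequence labels onto the label set that is valid for the target (pediatric) protocol. A minimal sketch, with purely illustrative sequence names and mapping rules (the paper's actual adjustment is not specified here):

```python
# Hypothetical sketch: expert domain knowledge adjustment as a label remapping
# applied after inference. Sequence names and the mapping are illustrative,
# not taken from the study.
ADULT_TO_PEDIATRIC = {
    "T1": "T1",
    "T1c": "T1c",
    "T2": "T2",
    "FLAIR": "T2",  # e.g. collapse a class absent from the pediatric protocol
}

def adjust_predictions(predictions):
    """Map raw model outputs onto the label set valid for the target domain."""
    return [ADULT_TO_PEDIATRIC.get(p, p) for p in predictions]

def accuracy(predictions, labels):
    """Fraction of predictions matching the reference labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)
```

The point of the sketch is that such an adjustment changes only the output space, not the trained network, which is why it can be applied on top of any pre-trained classifier.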

FROG: A Fine-Grained Spatiotemporal Graph Neural Network With Self-Supervised Guidance for Early Diagnosis of Alzheimer's Disease.

Zhang S, Wang Q, Wei M, Zhong J, Zhang Y, Song Z, Li C, Zhang X, Han Y, Li Y, Lv H, Jiang J

pubmed logopapers May 26 2025
Functional magnetic resonance imaging (fMRI) has demonstrated significant potential in the early diagnosis and study of pathological mechanisms of Alzheimer's disease (AD). To fit subtle cross-spatiotemporal interactions and learn pathological features from fMRI, we proposed a fine-grained spatiotemporal graph neural network with self-supervised learning (SSL) for diagnosis and biomarker extraction of early AD. First, considering the spatiotemporal interaction of the brain, we designed two masks that leverage the spatial correlation and temporal repeatability of fMRI. Afterwards, temporal gated inception convolution and graph scalable inception convolution were proposed for the spatiotemporal autoencoder to enhance subtle cross-spatiotemporal variation and learn noise-suppressed signals. Furthermore, a spatiotemporal scalable cosine error with high selectivity for signal reconstruction was designed in SSL to guide the autoencoder to fit the fine-grained pathological features in an unsupervised manner. A total of 5,687 samples from four cross-population cohorts were involved. The accuracy of our model was 5.1% higher than that of the state-of-the-art models, which included four AD diagnostic models, four SSL strategies, and three multivariate time series models. The neuroimaging biomarkers were precisely localized to the abnormal brain regions and correlated significantly with the cognitive scale and biomarkers (P < 0.001). Moreover, AD progression was reflected through the mask reconstruction error of our SSL strategy. The results demonstrate that our model can effectively capture spatiotemporal and pathological features, providing a novel and relevant framework for the early diagnosis of AD based on fMRI.
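The core of such a reconstruction objective is a cosine-style error between original and reconstructed signals. Below is a minimal numpy sketch of a plain cosine reconstruction error; the paper's "spatiotemporal scalable cosine error" adds scalability and selectivity terms that are not reproduced here:

```python
import numpy as np

# Illustrative sketch only: a plain cosine reconstruction error between
# original and reconstructed region-wise fMRI signals. Array layout
# (regions, timepoints) is an assumption for the sketch.
def cosine_error(x, x_hat, eps=1e-8):
    """1 - cosine similarity per region, averaged over regions."""
    num = (x * x_hat).sum(axis=1)
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(x_hat, axis=1) + eps
    return float(np.mean(1.0 - num / den))
```

A cosine-based error emphasizes the shape of the signal over its amplitude, which is one motivation for preferring it to a plain mean squared error when signals are noisy.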

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

pubmed logopapers May 26 2025
Because it is associated with a higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues (microvascular obstruction and myocardial infarction) from late gadolinium enhancement cardiac magnetic resonance images is crucially important. However, the lack of datasets, diverse shapes and locations, extreme class imbalance, and severe overlap between intensity distributions are the main challenges. We first release a late gadolinium enhancement cardiac magnetic resonance benchmark dataset, LGE-LVP, containing 140 patients with left ventricle myocardial infarction and concomitant microvascular obstruction. Then, a progressive deep learning model, LVPSegNet, is proposed to segment the left ventricle and its pathologies via adaptive region of interest extraction, sample augmentation, curriculum learning, and multiple receptive field fusion to address these challenges. Comprehensive comparisons with state-of-the-art models on internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches the clinicians' performance. Overall, the released LGE-LVP dataset, alongside the proposed LVPSegNet, offers a practical solution for automated segmentation of the left ventricle and its pathologies by providing data support and facilitating effective segmentation. The dataset and source code will be released via https://github.com/DFLAG-NEU/LVPSegNet.
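For context on the geometric metrics such comparisons typically rely on, the Dice coefficient is the standard overlap measure between a predicted and a reference segmentation mask. A minimal sketch (whether this is the paper's exact metric is an assumption):

```python
import numpy as np

# Minimal sketch of the Dice coefficient for binary segmentation masks.
def dice(pred, target, eps=1e-8):
    """Dice overlap between two binary masks of identical shape: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))
```

Dice is well suited to the extreme class imbalance the abstract mentions, since it ignores the (dominant) true-negative background entirely.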

Training a deep learning model to predict the anatomy irradiated in fluoroscopic x-ray images.

Guo L, Trujillo D, Duncan JR, Thomas MA

pubmed logopapers May 26 2025
Accurate patient dosimetry estimates from fluoroscopically guided interventions (FGIs) are hindered by limited knowledge of the specific anatomy that was irradiated. Current methods use data reported by the equipment to estimate the patient anatomy exposed during each irradiation event. We propose a deep learning algorithm to automatically match 2D fluoroscopic images with corresponding anatomical regions in computational phantoms, enabling more precise patient dose estimates. Our method involves two main steps: (1) simulating 2D fluoroscopic images, and (2) developing a deep learning algorithm to predict anatomical coordinates from these images. For part (1), we utilized DeepDRR for fast and realistic simulation of 2D x-ray images from 3D computed tomography datasets. We generated a diverse set of simulated fluoroscopic images from various regions with different field sizes. In part (2), we employed a Residual Neural Network (ResNet) architecture combined with metadata processing to effectively integrate patient-specific information (age and gender) to learn the transformation between 2D images and specific anatomical coordinates in each representative phantom. For the modified ResNet model, we defined an allowable error range of ± 10 mm. The proposed method achieved over 90% of predictions within ± 10 mm, with strong alignment between predicted and true coordinates as confirmed by Bland-Altman analysis. Most errors were within ± 2%, with outliers beyond ± 5% primarily in Z-coordinates for infant phantoms due to their limited representation in the training data. These findings highlight the model's accuracy and its potential for precise spatial localization, while emphasizing the need for improved performance in specific anatomical regions. In this work, a comprehensive simulated 2D fluoroscopy image dataset was developed, addressing the scarcity of real clinical datasets and enabling effective training of deep learning models.
The modified ResNet successfully achieved precise prediction of anatomical coordinates from the simulated fluoroscopic images, enabling the goal of more accurate patient-specific dosimetry.
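The reported evaluation (fraction of predictions within the ± 10 mm allowable error, plus Bland-Altman agreement) can be sketched as follows; array shapes and function names are assumptions, not the authors' code:

```python
import numpy as np

# Hedged sketch: scoring predicted anatomical coordinates against ground truth
# with a +/- 10 mm tolerance, and computing the Bland-Altman bias (mean
# difference). Inputs are (n_predictions, 3) arrays of coordinates in mm.
def within_tolerance(pred_mm, true_mm, tol=10.0):
    """Fraction of predictions whose error on every axis is within +/- tol mm."""
    ok = np.all(np.abs(pred_mm - true_mm) <= tol, axis=1)
    return float(ok.mean())

def bland_altman_bias(pred_mm, true_mm):
    """Mean difference (bias) between predicted and true coordinates."""
    return float(np.mean(pred_mm - true_mm))
```

A full Bland-Altman analysis would also report the limits of agreement (bias ± 1.96 standard deviations of the differences); only the bias term is shown here.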

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

pubmed logopapers May 26 2025
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multi-center efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and improved risk stratification in liver diseases, show potential to refine diagnostic accuracy and therapeutic planning. Ethical considerations, such as data privacy, regulatory compliance, and interdisciplinary collaboration, are essential for successful translation into clinical practice.
AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.

Multimodal integration of longitudinal noninvasive diagnostics for survival prediction in immunotherapy using deep learning.

Yeghaian M, Bodalal Z, van den Broek D, Haanen JBAG, Beets-Tan RGH, Trebeschi S, van Gerven MAJ

pubmed logopapers May 26 2025
Immunotherapies have revolutionized the landscape of cancer treatments. However, our understanding of response patterns in advanced cancers treated with immunotherapy remains limited. By leveraging routinely collected noninvasive longitudinal and multimodal data with artificial intelligence, we could unlock the potential to transform immunotherapy for cancer patients, paving the way for personalized treatment approaches. In this study, we developed a novel artificial neural network architecture, the multimodal transformer-based simple temporal attention (MMTSimTA) network, building upon a combination of recent successful developments. We integrated pre- and on-treatment blood measurements, prescribed medications, and CT-based volumes of organs from a large pan-cancer cohort of 694 patients treated with immunotherapy to predict mortality at 3, 6, 9, and 12 months. Different variants of our extended MMTSimTA network were implemented and compared to baseline methods, incorporating intermediate and late fusion-based integration methods. The strongest prognostic performance was demonstrated using a variant of the MMTSimTA model, with areas under the curve of 0.84 ± 0.04, 0.83 ± 0.02, 0.82 ± 0.02, and 0.81 ± 0.03 for 3-, 6-, 9-, and 12-month survival prediction, respectively. Our findings show that integrating noninvasive longitudinal data using our novel architecture yields an improved multimodal prognostic performance, especially in short-term survival prediction. Our study demonstrates that multimodal longitudinal integration of noninvasive data using deep learning may offer a promising approach for personalized prognostication in immunotherapy-treated cancer patients.
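Conceptually, simple temporal attention scores each time point of a longitudinal feature sequence and pools the sequence into one vector using softmax-normalized weights. A toy numpy sketch with a random stand-in scoring vector (not the trained MMTSimTA network):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def temporal_attention_pool(features, w):
    """features: (timepoints, dim); w: (dim,) scoring vector -> pooled (dim,).

    Each time point gets a scalar score (features @ w), scores are
    softmax-normalized, and the sequence is pooled as a weighted average.
    """
    scores = softmax(features @ w)  # one attention weight per time point
    return scores @ features        # convex combination over time

# e.g. 4 visits, 8 features each (illustrative shapes only)
feats = rng.normal(size=(4, 8))
pooled = temporal_attention_pool(feats, rng.normal(size=8))
```

In the full model the scoring vector is learned end to end, so the network can emphasize, say, on-treatment visits over the baseline visit when predicting mortality.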

Predicting Surgical Versus Nonsurgical Management of Acute Isolated Distal Radius Fractures in Patients Under Age 60 Using a Convolutional Neural Network.

Hsu D, Persitz J, Noori A, Zhang H, Mashouri P, Shah R, Chan A, Madani A, Paul R

pubmed logopapers May 26 2025
Distal radius fractures (DRFs) represent up to 20% of the fractures in the emergency department. Delays to surgery of more than 14 days are associated with poorer functional outcomes and increased health care utilization/costs. At our institution, the average time to surgery is more than 19 days because of the separation of surgical and nonsurgical care pathways and a lengthy referral process. To address this challenge, we aimed to create a convolutional neural network (CNN) capable of automating DRF x-ray analysis and triaging. We hypothesize that this model will accurately predict whether an acute isolated DRF in a patient under the age of 60 years will be treated surgically or nonsurgically at our institution based on the radiographic input. We included 163 patients under the age of 60 years who presented to the emergency department between 2018 and 2023 with an acute isolated DRF and who were referred for clinical follow-up. Radiographs taken within 4 weeks of injury were collected in posterior-anterior and lateral views and then preprocessed for model training. The surgeons' decision to treat surgically or nonsurgically at our institution was the reference standard for assessing the model prediction accuracy. We included 723 radiographic posterior-anterior and lateral pairs (385 surgical and 338 nonsurgical) for model training. The best-performing model (seven CNN layers, one fully connected layer, an image input size of 256 × 256 pixels, and a 1.5× weighting for volarly displaced fractures) achieved 88% accuracy and 100% sensitivity. Rates of true positives (100%), true negatives (72.7%), false positives (27.3%), and false negatives (0%) were calculated. After training based on institution-specific indications, a CNN-based algorithm can predict with 88% accuracy whether an acute isolated DRF in a patient under the age of 60 years will be treated surgically or nonsurgically.
By promptly identifying patients who would benefit from expedited surgical treatment pathways, this model can reduce referral times.
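The reported rates map directly onto confusion-matrix arithmetic. A minimal sketch of how accuracy, sensitivity, and specificity relate to the counts (any specific counts used with it are illustrative, not the study's test set):

```python
# Minimal sketch relating reported classification rates to confusion-matrix
# counts: tp/tn/fp/fn = true/false positives/negatives.
def metrics(tp, tn, fp, fn):
    """Return (accuracy, sensitivity, specificity) from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity
```

Note that 100% sensitivity with 0% false negatives is the same statement twice: sensitivity is defined as tp / (tp + fn).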

AI in Orthopedic Research: A Comprehensive Review.

Misir A, Yuce A

pubmed logopapers May 26 2025
Artificial intelligence (AI) is revolutionizing orthopedic research and clinical practice by enhancing diagnostic accuracy, optimizing treatment strategies, and streamlining clinical workflows. Recent advances in deep learning have enabled the development of algorithms that detect fractures, grade osteoarthritis, and identify subtle pathologies in radiographic and magnetic resonance images with performance comparable to expert clinicians. These AI-driven systems reduce missed diagnoses and provide objective, reproducible assessments that facilitate early intervention and personalized treatment planning. Moreover, AI has made significant strides in predictive analytics by integrating diverse patient data, including gait and imaging features, to forecast surgical outcomes, implant survivorship, and rehabilitation trajectories. Emerging applications in robotics, augmented reality, digital twin technologies, and exoskeleton control promise to further transform preoperative planning and intraoperative guidance. Despite these promising developments, challenges such as data heterogeneity, algorithmic bias, and the "black box" nature of many models, as well as issues with robust validation, remain. This comprehensive review synthesizes current developments, critically examines limitations, and outlines future directions for integrating AI into musculoskeletal care.

Methodological Challenges in Deep Learning-Based Detection of Intracranial Aneurysms: A Scoping Review.

Joo B

pubmed logopapers May 26 2025
Artificial intelligence (AI), particularly deep learning, has demonstrated high diagnostic performance in detecting intracranial aneurysms on computed tomography angiography (CTA) and magnetic resonance angiography (MRA). However, the clinical translation of these technologies remains limited due to methodological limitations and concerns about generalizability. This scoping review comprehensively evaluates 36 studies that applied deep learning to intracranial aneurysm detection on CTA or MRA, focusing on study design, validation strategies, reporting practices, and reference standards. Key findings include inconsistent handling of ruptured and previously treated aneurysms, underreporting of coexisting brain or vascular abnormalities, limited use of external validation, and an almost complete absence of prospective study designs. Only a minority of studies employed diagnostic cohorts that reflect real-world aneurysm prevalence, and few reported all essential performance metrics, such as patient-wise and lesion-wise sensitivity, specificity, and false positives per case. These limitations suggest that current studies remain at the stage of technical validation, with high risks of bias and limited clinical applicability. To facilitate real-world implementation, future research must adopt more rigorous designs, representative and diverse validation cohorts, standardized reporting practices, and greater attention to human-AI interaction.
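The metrics the review flags as underreported (patient-wise sensitivity, lesion-wise sensitivity, and false positives per case) can be computed as follows; the per-case data structure is an assumption made for the sketch:

```python
# Hedged illustration of detection metrics for aneurysm CAD studies.
# cases: list of dicts with 'lesions' (true aneurysms in the case),
# 'detected' (true aneurysms the model found), and 'fp' (false positives).
def detection_metrics(cases):
    """Return (patient-wise sensitivity, lesion-wise sensitivity, FPs per case)."""
    positives = [c for c in cases if c["lesions"] > 0]
    # patient-wise: a positive case counts as detected if any lesion is found
    patient_sens = sum(c["detected"] > 0 for c in positives) / len(positives)
    # lesion-wise: every individual aneurysm must be found
    lesion_sens = sum(c["detected"] for c in positives) / sum(
        c["lesions"] for c in positives
    )
    fp_per_case = sum(c["fp"] for c in cases) / len(cases)
    return patient_sens, lesion_sens, fp_per_case
```

The distinction matters: a model can reach 100% patient-wise sensitivity while missing every second aneurysm in multi-aneurysm patients, which is exactly the reporting gap the review highlights.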

Automated landmark-based mid-sagittal plane: reliability for 3-dimensional mandibular asymmetry assessment on head CT scans.

Alt S, Gajny L, Tilotta F, Schouman T, Dot G

pubmed logopapers May 26 2025
The determination of the mid-sagittal plane (MSP) on three-dimensional (3D) head imaging is key to the assessment of facial asymmetry. The aim of this study was to evaluate the reliability of an automated landmark-based MSP to quantify mandibular asymmetry on head computed tomography (CT) scans. A dataset of 368 CT scans, including orthognathic surgery patients, was automatically annotated with 3D cephalometric landmarks via a previously published deep learning-based method. Five of these landmarks were used to automatically construct an MSP orthogonal to the Frankfurt horizontal plane. The reliability of automatic MSP construction was compared with the reliability of manual MSP construction based on 6 manual localizations by 3 experienced operators on 19 randomly selected CT scans. The mandibular asymmetry of the 368 CT scans with respect to the MSP was calculated and compared with clinical expert judgment. The construction of the MSP was found to be highly reliable, both manually and automatically. The manual reproducibility 95% limit of agreement was less than 1 mm for y-axis translation and less than 1.1° for x- and z-axis rotation, and the automatic measurement lay within the confidence interval of the manual method. The automatic MSP construction was shown to be clinically relevant, with the mandibular asymmetry measures being consistent with the expertly assessed levels of asymmetry. The proposed automatic landmark-based MSP construction was found to be as reliable as manual construction and clinically relevant in assessing the mandibular asymmetry of 368 head CT scans. Once implemented in clinical software, fully automated landmark-based MSP construction could be used clinically to assess mandibular asymmetry on head CT scans.
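A landmark-based MSP of this kind can be sketched in numpy: fit a midline direction to midline landmarks, take its cross product with the horizontal plane's normal to force orthogonality, and read asymmetry as the signed distance of a landmark to the resulting plane. Landmark choices and coordinate conventions here are hypothetical, not the paper's five cephalometric points:

```python
import numpy as np

# Illustrative sketch: build a plane through midline landmarks, orthogonal to
# a given (Frankfurt-like) horizontal plane, then measure signed distance.
def midsagittal_plane(midline_pts, horizontal_normal):
    """Return (point, unit normal) of a plane through midline_pts (n, 3),
    constrained to be orthogonal to the plane with the given normal."""
    centroid = midline_pts.mean(axis=0)
    # midline direction = first principal component of the landmarks
    _, _, vt = np.linalg.svd(midline_pts - centroid)
    midline_dir = vt[0]
    normal = np.cross(midline_dir, horizontal_normal)
    return centroid, normal / np.linalg.norm(normal)

def signed_distance(point, plane_point, plane_normal):
    """Signed distance of a 3D point to the plane; sign encodes left/right."""
    return float(np.dot(point - plane_point, plane_normal))
```

A mandibular asymmetry measure could then, for example, compare the absolute distances of paired left/right mandibular landmarks to this plane.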
