Page 220 of 292 (2,917 results)

Functional connectome-based predictive modeling of suicidal ideation.

Averill LA, Tamman AJF, Fouda S, Averill CL, Nemati S, Ragnhildstveit A, Gosnell S, Akiki TJ, Salas R, Abdallah CG

PubMed · May 27, 2025
Suicide represents an egregious threat to society despite major advancements in medicine, in part due to limited knowledge of the biological mechanisms of suicidal behavior. We apply a connectome predictive modeling machine learning approach to identify a reproducible brain network associated with suicidal ideation, in the hopes of demonstrating possible targets for novel anti-suicidal therapeutics. Patients were recruited from an inpatient facility at The Menninger Clinic in Houston, Texas (N = 261; 181 with active and specific suicidal ideation); all had a current major depressive episode with recurrent major depressive disorder and underwent resting-state functional magnetic resonance imaging. The participants' ages ranged from 18 to 70 (mean ± SEM = 31.6 ± 0.8 years) and 136 (52%) were males. Using this approach, we found a robust and reproducible biomarker of suicidal ideation relative to controls without ideation, showing that increased suicidal ideation was associated with greater internal connectivity and reduced internetwork external connectivity in the central executive, default mode, and dorsal salience networks. We also found evidence for higher external connectivity between the ventral salience and sensorimotor/visual networks being associated with increased suicidal ideation. Overall, these observed differences may reflect reduced network integration and higher segregation of connectivity in individuals with increased suicide risk. Our findings provide avenues for future work to test novel drugs targeting these identified neural alterations, for instance drugs that increase network integration.
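The connectome-based predictive modeling (CPM) workflow the abstract describes — correlate each connectome edge with the behavioral score, select suprathreshold edges, sum them into a network-strength score, and fit a linear model under cross-validation — can be sketched as below. The data, edge count, and selection rule here are illustrative assumptions, not the authors' implementation (which used real fMRI connectomes and suicidal-ideation scores).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: 60 "subjects", 200 connectome "edges",
# with a handful of edges weakly coupled to the behavioral score.
n_sub, n_edge = 60, 200
X = rng.standard_normal((n_sub, n_edge))
y = X[:, :5].sum(axis=1) + rng.standard_normal(n_sub)

def cpm_loo(X, y, n_keep=10):
    """Leave-one-out connectome-based predictive modeling (positive network only)."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        tr = np.delete(np.arange(len(y)), i)
        Xtr, ytr = X[tr], y[tr]
        # Correlate every edge with behavior in the training fold.
        Xc = Xtr - Xtr.mean(0)
        yc = ytr - ytr.mean()
        r = (Xc * yc[:, None]).sum(0) / (
            np.sqrt((Xc ** 2).sum(0)) * np.sqrt((yc ** 2).sum()))
        # Keep the most positively correlated edges; sum them into one score.
        keep = np.argsort(r)[-n_keep:]
        slope, intercept = np.polyfit(Xtr[:, keep].sum(1), ytr, 1)
        preds[i] = slope * X[i, keep].sum() + intercept
    return preds

preds = cpm_loo(X, y)
r_obs = np.corrcoef(preds, y)[0, 1]
print(f"LOO predicted-vs-observed r = {r_obs:.2f}")
```

In practice the edge-selection step usually uses a p-value threshold rather than a fixed top-k, and separate positive and negative networks are modeled; the top-k rule above keeps the sketch short.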

A Deep Neural Network Framework for the Detection of Bacterial Diseases from Chest X-Ray Scans.

Jain S, Jindal H, Bharti M

PubMed · May 27, 2025
This research aims to develop an advanced deep-learning framework for detecting respiratory diseases, including COVID-19, pneumonia, and tuberculosis (TB), using chest X-ray scans. A Deep Neural Network (DNN)-based system was developed to analyze medical images and extract key features from chest X-rays. The system leverages various DNN learning algorithms to study X-ray scan color, curve, and edge-based features. The Adam optimizer is employed to minimize error rates and enhance model training. A dataset of 1800 chest X-ray images, consisting of COVID-19, pneumonia, TB, and typical cases, was evaluated across multiple DNN models. The highest accuracy was achieved using the VGG19 model. The proposed system demonstrated an accuracy of 94.72%, with a sensitivity of 92.73%, a specificity of 96.68%, and an F1-score of 94.66%. The error rate was 5.28% when trained with 80% of the dataset and tested on 20%. The VGG19 model showed significant accuracy improvements of 32.69%, 36.65%, 42.16%, and 8.1% over AlexNet, GoogleNet, InceptionV3, and VGG16, respectively. The prediction time was also remarkably low, ranging between 3 and 5 seconds. The proposed deep learning model efficiently detects respiratory diseases, including COVID-19, pneumonia, and TB, within seconds. The method ensures high reliability and efficiency by optimizing feature extraction and keeping system complexity manageable, making it a valuable tool for clinicians in rapid disease diagnosis.
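The four metrics the abstract reports (accuracy, sensitivity, specificity, F1) all derive from a confusion matrix; for a multi-class problem like this one they are typically computed per class in one-vs-rest fashion. A minimal sketch follows — the counts are invented for illustration and are not the study's data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical one-vs-rest counts for a 360-image test fold (20% of 1800).
acc, sens, spec, f1 = binary_metrics(tp=170, fp=8, tn=172, fn=10)
print(f"acc={acc:.4f} sens={sens:.4f} spec={spec:.4f} F1={f1:.4f}")
```

Note that the reported "error rate was 5.28%" is simply 1 − accuracy (100% − 94.72%).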

Fetal origins of adult disease: transforming prenatal care by integrating Barker's Hypothesis with AI-driven 4D ultrasound.

Andonotopo W, Bachnas MA, Akbar MIA, Aziz MA, Dewantiningrum J, Pramono MBA, Sulistyowati S, Stanojevic M, Kurjak A

PubMed · May 26, 2025
The fetal origins of adult disease, widely known as Barker's Hypothesis, suggest that adverse fetal environments significantly impact the risk of developing chronic diseases, such as diabetes and cardiovascular conditions, in adulthood. Recent advancements in 4D ultrasound (4D US) and artificial intelligence (AI) technologies offer a promising avenue for improving prenatal diagnostics and validating this hypothesis. These innovations provide detailed insights into fetal behavior and neurodevelopment, linking early developmental markers to long-term health outcomes. This study synthesizes contemporary developments in AI-enhanced 4D US, focusing on their roles in detecting fetal anomalies, assessing neurodevelopmental markers, and evaluating congenital heart defects. The integration of AI with 4D US allows for real-time, high-resolution visualization of fetal anatomy and behavior, surpassing the diagnostic precision of traditional methods. Despite these advancements, challenges such as algorithmic bias, data diversity, and real-world validation persist and require further exploration. Findings demonstrate that AI-driven 4D US improves diagnostic sensitivity and accuracy, enabling earlier detection of fetal abnormalities and optimization of clinical workflows. By providing a more comprehensive understanding of fetal programming, these technologies substantiate the links between early-life conditions and adult health outcomes, as proposed by Barker's Hypothesis. The integration of AI and 4D US has the potential to revolutionize prenatal care, paving the way for personalized maternal-fetal healthcare. Future research should focus on addressing current limitations, including ethical concerns and accessibility challenges, to promote equitable implementation. Such advancements could significantly reduce the global burden of chronic diseases and foster healthier generations.

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

PubMed · May 26, 2025
A critical limitation to deployment and utilization of Artificial Intelligence (AI) algorithms in radiology practice is the actual integration of algorithms directly into the clinical Picture Archiving and Communications Systems (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intra-prostatic findings on multiparametric Magnetic Resonance Imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and dedicated clinical PACS workflow were established within our institution for evaluation of real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, a consecutive cohort of patients undergoing diagnostic imaging at our institution and second, a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist. AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized. In phase one, clinical PACS deployment was successfully executed in 57/58 cases, and results were obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated stable model performance compared to independent validation studies. In phase two, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Detection rates for prostate cancer were 82.1% for ROI targets detected by both AI and radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone.
Integration of novel AI algorithms requiring multi-parametric input into clinical PACS environment is feasible and model outputs can be used for downstream clinical tasks.

Optimizing MRI sequence classification performance: insights from domain shift analysis.

Mahmutoglu MA, Rastogi A, Brugnara G, Vollmuth P, Foltyn-Dumitru M, Sahm F, Pfister S, Sturm D, Bendszus M, Schell M

PubMed · May 26, 2025
MRI sequence classification becomes challenging in multicenter studies due to variability in imaging protocols, leading to unreliable metadata and requiring labor-intensive manual annotation. While numerous automated MRI sequence identification models are available, they frequently encounter the issue of domain shift, which detrimentally impacts their accuracy. This study addresses domain shift, particularly from adult to pediatric MRI data, by evaluating the effectiveness of pre-trained models under these conditions. This retrospective and multicentric study explored the efficiency of a pre-trained convolutional model (ResNet) and a CNN-Transformer hybrid model (MedViT) in handling domain shift. The study involved training ResNet-18 and MedViT models on an adult MRI dataset and testing them on a pediatric dataset, with expert domain knowledge adjustments applied to account for differences in sequence types. The MedViT model demonstrated superior performance compared to ResNet-18 and benchmark models, achieving an accuracy of 0.893 (95% CI 0.880-0.904). Expert domain knowledge adjustments further improved the MedViT model's accuracy to 0.905 (95% CI 0.893-0.916), showcasing its robustness in handling domain shift. Advanced neural network architectures like MedViT and expert domain knowledge on the target dataset significantly enhance the performance of MRI sequence classification models under domain shift conditions. By combining the strengths of CNNs and transformers, hybrid architectures offer enhanced robustness for reliable automated MRI sequence classification in diverse research and clinical settings.

Question: Domain shift between adult and pediatric MRI data limits deep learning model accuracy, requiring solutions for reliable sequence classification across diverse patient populations.

Findings: The MedViT model outperformed ResNet-18 in pediatric imaging; expert domain knowledge adjustment further improved accuracy, demonstrating robustness across diverse datasets.

Clinical relevance: This study enhances MRI sequence classification by leveraging advanced neural networks and expert domain knowledge to mitigate domain shift, boosting diagnostic precision and efficiency across diverse patient populations in multicenter environments.

FROG: A Fine-Grained Spatiotemporal Graph Neural Network With Self-Supervised Guidance for Early Diagnosis of Alzheimer's Disease.

Zhang S, Wang Q, Wei M, Zhong J, Zhang Y, Song Z, Li C, Zhang X, Han Y, Li Y, Lv H, Jiang J

PubMed · May 26, 2025
Functional magnetic resonance imaging (fMRI) has demonstrated significant potential in the early diagnosis and study of pathological mechanisms of Alzheimer's disease (AD). To fit subtle cross-spatiotemporal interactions and learn pathological features from fMRI, we proposed a fine-grained spatiotemporal graph neural network with self-supervised learning (SSL) for diagnosis and biomarker extraction of early AD. First, considering the spatiotemporal interaction of the brain, we designed two masks that leverage the spatial correlation and temporal repeatability of fMRI. Afterwards, temporal gated inception convolution and graph scalable inception convolution were proposed for the spatiotemporal autoencoder to enhance subtle cross-spatiotemporal variation and learn noise-suppressed signals. Furthermore, a spatiotemporal scalable cosine error with high selectivity for signal reconstruction was designed in SSL to guide the autoencoder to fit the fine-grained pathological features in an unsupervised manner. A total of 5,687 samples from four cross-population cohorts were involved. The accuracy of our model was 5.1% higher than that of the state-of-the-art models, which included four AD diagnostic models, four SSL strategies, and three multivariate time series models. The neuroimaging biomarkers were precisely localized to the abnormal brain regions, and correlated significantly with the cognitive scale and biomarkers (P < 0.001). Moreover, AD progression was reflected through the mask reconstruction error of our SSL strategy. The results demonstrate that our model can effectively capture spatiotemporal and pathological features, providing a novel and relevant framework for the early diagnosis of AD based on fMRI.
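The masked-reconstruction SSL idea at the core of the abstract — hide part of the fMRI signal, reconstruct it, and score the reconstruction with a cosine-based error — can be illustrated on a toy ROI × time matrix. This sketch uses a plain temporal mask and a per-ROI cosine error; the authors' actual masks, network, and "spatiotemporal scalable cosine error" are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal((90, 120))  # toy fMRI: 90 ROIs x 120 time points

def temporal_mask(x, frac=0.3, rng=rng):
    """Zero out a random fraction of time points (columns): temporal masking."""
    m = np.ones(x.shape[1], bool)
    m[rng.choice(x.shape[1], int(frac * x.shape[1]), replace=False)] = False
    return x * m, m

def cosine_error(x, x_hat):
    """1 - cosine similarity per ROI time series, averaged; 0 = perfect."""
    num = (x * x_hat).sum(1)
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(x_hat, axis=1) + 1e-8
    return float(np.mean(1.0 - num / den))

masked, mask = temporal_mask(signal)
# A trained autoencoder would reconstruct the masked signal; here the
# "reconstruction" is the masked input itself, so the loss measures exactly
# what the mask destroyed.
loss = cosine_error(signal, masked)
print(f"reconstruction cosine error: {loss:.3f}")
```

In the paper, this reconstruction error does double duty: it drives unsupervised pretraining and, per the abstract, tracks AD progression at inference time.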

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

PubMed · May 26, 2025
Due to the association with higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues (microvascular obstruction and myocardial infarction) from late gadolinium enhancement cardiac magnetic resonance images is crucially important. However, a lack of datasets, diverse shapes and locations, extreme class imbalance, and severe overlap in intensity distributions are the main challenges. We first release a late gadolinium enhancement cardiac magnetic resonance benchmark dataset, LGE-LVP, containing 140 patients with left ventricle myocardial infarction and concomitant microvascular obstruction. Then, a progressive deep learning model, LVPSegNet, is proposed to segment the left ventricle and its pathologies via adaptive region of interest extraction, sample augmentation, curriculum learning, and multiple receptive field fusion to address these challenges. Comprehensive comparisons with state-of-the-art models on the internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches the clinicians' performance. Overall, the released LGE-LVP dataset alongside the proposed LVPSegNet offers a practical solution for automated segmentation of the left ventricle and its pathologies by providing data support and facilitating effective segmentation. The dataset and source codes will be released via https://github.com/DFLAG-NEU/LVPSegNet.
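Of the techniques listed, curriculum learning is the most self-contained to sketch: training samples are ordered easy-to-hard by some difficulty proxy and released in stages. The staging rule and the lesion-size proxy below are illustrative assumptions, not details from the paper.

```python
def curriculum_batches(samples, difficulty, n_stages=3):
    """Order samples easy-to-hard and release them in cumulative stages,
    as in curriculum learning: each stage adds the next-harder tranche."""
    ranked = [s for _, s in sorted(zip(difficulty, samples))]
    stage_size = -(-len(ranked) // n_stages)  # ceiling division
    return [ranked[: (i + 1) * stage_size] for i in range(n_stages)]

# Hypothetical difficulty proxy: smaller lesions are harder to segment,
# so difficulty = 1 / lesion_area (all numbers invented for illustration).
samples = ["caseA", "caseB", "caseC", "caseD", "caseE", "caseF"]
lesion_area = [120, 15, 60, 300, 8, 45]
difficulty = [1 / a for a in lesion_area]
for i, stage in enumerate(curriculum_batches(samples, difficulty), 1):
    print(f"stage {i}: {stage}")
```

The cumulative staging (each stage is a superset of the previous one) is one common variant; another is to anneal a per-sample loss weight instead of gating samples outright.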

Training a deep learning model to predict the anatomy irradiated in fluoroscopic x-ray images.

Guo L, Trujillo D, Duncan JR, Thomas MA

PubMed · May 26, 2025
Accurate patient dosimetry estimates from fluoroscopically-guided interventions (FGIs) are hindered by limited knowledge of the specific anatomy that was irradiated. Current methods use data reported by the equipment to estimate the patient anatomy exposed during each irradiation event. We propose a deep learning algorithm to automatically match 2D fluoroscopic images with corresponding anatomical regions in computational phantoms, enabling more precise patient dose estimates. Our method involves two main steps: (1) simulating 2D fluoroscopic images, and (2) developing a deep learning algorithm to predict anatomical coordinates from these images. For part (1), we utilized DeepDRR for fast and realistic simulation of 2D x-ray images from 3D computed tomography datasets. We generated a diverse set of simulated fluoroscopic images from various regions with different field sizes. In part (2), we employed a Residual Neural Network (ResNet) architecture combined with metadata processing to effectively integrate patient-specific information (age and gender) and learn the transformation between 2D images and specific anatomical coordinates in each representative phantom. For the modified ResNet model, we defined an allowable error range of ± 10 mm. The proposed method achieved over 90% of predictions within ± 10 mm, with strong alignment between predicted and true coordinates as confirmed by Bland-Altman analysis. Most errors were within ± 2%, with outliers beyond ± 5% primarily in Z-coordinates for infant phantoms due to their limited representation in the training data. These findings highlight the model's accuracy and its potential for precise spatial localization, while emphasizing the need for improved performance in specific anatomical regions. In this work, a comprehensive simulated 2D fluoroscopy image dataset was developed, addressing the scarcity of real clinical datasets and enabling effective training of deep-learning models.
The modified ResNet successfully achieved precise prediction of anatomical coordinates from the simulated fluoroscopic images, enabling the goal of more accurate patient-specific dosimetry.
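The headline metric above — the fraction of coordinate predictions falling within the ± 10 mm allowable-error range, supplemented by Bland-Altman agreement statistics — is straightforward to compute. The coordinates and error distribution below are synthetic stand-ins, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(2)
true_z = rng.uniform(0, 1700, size=500)        # synthetic Z-coordinates in mm
pred_z = true_z + rng.normal(0, 5, size=500)   # assumed model error ~ N(0, 5 mm)

def within_tolerance(y_true, y_pred, tol_mm=10.0):
    """Fraction of predictions whose absolute error is within +/- tol_mm."""
    return float(np.mean(np.abs(y_pred - y_true) <= tol_mm))

frac = within_tolerance(true_z, pred_z)
diff = pred_z - true_z
bias = float(np.mean(diff))                    # Bland-Altman mean difference
loa = 1.96 * float(np.std(diff))               # limits-of-agreement half-width
print(f"within ±10 mm: {frac:.1%}, bias: {bias:.2f} mm, ±LoA: {loa:.2f} mm")
```

A Bland-Altman plot would display `diff` against the pair means with horizontal lines at `bias ± loa`; near-zero bias with narrow limits is what "strong alignment" refers to.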

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

PubMed · May 26, 2025
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multi-center efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and improved risk stratification in liver diseases, show potential to refine diagnostic accuracy and therapeutic planning. Ethical considerations, such as data privacy, regulatory compliance, and interdisciplinary collaboration, are essential for successful translation into clinical practice.
AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.

Multimodal integration of longitudinal noninvasive diagnostics for survival prediction in immunotherapy using deep learning.

Yeghaian M, Bodalal Z, van den Broek D, Haanen JBAG, Beets-Tan RGH, Trebeschi S, van Gerven MAJ

PubMed · May 26, 2025
Immunotherapies have revolutionized the landscape of cancer treatments. However, our understanding of response patterns in advanced cancers treated with immunotherapy remains limited. By leveraging routinely collected noninvasive longitudinal and multimodal data with artificial intelligence, we could unlock the potential to transform immunotherapy for cancer patients, paving the way for personalized treatment approaches. In this study, we developed a novel artificial neural network architecture, multimodal transformer-based simple temporal attention (MMTSimTA) network, building upon a combination of recent successful developments. We integrated pre- and on-treatment blood measurements, prescribed medications, and CT-based volumes of organs from a large pan-cancer cohort of 694 patients treated with immunotherapy to predict mortality at 3, 6, 9, and 12 months. Different variants of our extended MMTSimTA network were implemented and compared to baseline methods, incorporating intermediate and late fusion-based integration methods. The strongest prognostic performance was demonstrated using a variant of the MMTSimTA model with area under the curves of 0.84 ± 0.04, 0.83 ± 0.02, 0.82 ± 0.02, 0.81 ± 0.03 for 3-, 6-, 9-, and 12-month survival prediction, respectively. Our findings show that integrating noninvasive longitudinal data using our novel architecture yields an improved multimodal prognostic performance, especially in short-term survival prediction. Our study demonstrates that multimodal longitudinal integration of noninvasive data using deep learning may offer a promising approach for personalized prognostication in immunotherapy-treated cancer patients.
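The "simple temporal attention" component named in the MMTSimTA architecture can be illustrated as softmax attention pooling over longitudinal visits: each time step gets a scalar score, and the pooled representation is the score-weighted average. The feature counts and the fixed (untrained) attention vector below are illustrative assumptions; the published model learns these weights jointly with the fusion network.

```python
import numpy as np

rng = np.random.default_rng(3)

def simple_temporal_attention(x, w):
    """Pool a (time, features) sequence into one vector with softmax attention:
    each time step t gets scalar score x_t . w, and the pooled vector is the
    attention-weighted average of the time steps."""
    scores = x @ w                    # (T,)
    a = np.exp(scores - scores.max()) # numerically stable softmax
    a /= a.sum()                      # attention weights over visits, sum to 1
    return a, a @ x                   # weights (T,), pooled features (F,)

# Toy longitudinal input: 4 visits x 6 features (e.g., blood values and
# CT-derived organ volumes); w would be learned in the real model.
x = rng.standard_normal((4, 6))
w = rng.standard_normal(6)
weights, pooled = simple_temporal_attention(x, w)
print("attention over visits:", np.round(weights, 3))
```

The appeal of this pooling over plain averaging is that visits carrying more prognostic signal can dominate the pooled vector, which is consistent with the reported gains on short-horizon survival prediction.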