Darrudi R, Hosseini A, Emami H, Roshanpoor A, Nahayati MA

PubMed · Sep 19, 2025
Multiple sclerosis (MS) diagnosis remains challenging due to its heterogeneous clinical manifestations and the absence of a definitive diagnostic test. Conventional magnetic resonance imaging, while central to diagnosis, faces limitations in specificity and inter-rater variability. Artificial intelligence offers promising solutions for enhancing medical imaging analysis in MS, yet its efficacy requires systematic validation. This systematic review and meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched Embase, PubMed, Web of Science, Scopus, Google Scholar, and gray literature (inception to January 5, 2025) for case-control studies applying AI to magnetic resonance imaging-based MS diagnosis. A random-effects model pooled sensitivity, specificity, and accuracy. Heterogeneity was assessed via the Q-statistic and I². Meta-regression evaluated the impact of pixel count. Meta-analysis revealed pooled sensitivity, specificity, and accuracy of 93%, 95%, and 94%, respectively, demonstrating the efficacy of AI models in MS diagnosis. Additionally, meta-regression analysis showed no significant correlation between the number of pixels and diagnostic performance parameters. Sensitivity analysis confirmed the robustness of the results, while publication bias assessment indicated no evidence of bias. AI-based algorithms show promise in augmenting traditional diagnostic approaches for MS, offering accurate and timely diagnosis. Further research is warranted to standardize AI methodologies and optimize their integration into clinical practice. This study contributes to the growing evidence supporting AI's role in enhancing diagnostics and patient care in MS.
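
As a worked illustration of the pooling step described above, the sketch below applies DerSimonian-Laird random-effects pooling to logit-transformed per-study sensitivities and reports Cochran's Q and I². The study counts are hypothetical placeholders, not data from the review.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling on logit sensitivities.
import numpy as np

tp = np.array([90, 45, 120, 60])        # per-study true positives (hypothetical)
pos = np.array([100, 50, 130, 65])      # per-study disease-positive counts

p = tp / pos
logit = np.log(p / (1 - p))
var = 1 / tp + 1 / (pos - tp)           # variance of each logit proportion
w = 1 / var                             # fixed-effect (inverse-variance) weights

fixed = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - fixed) ** 2)    # Cochran's Q statistic
df = len(p) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # tau^2

w_re = 1 / (var + tau2)                 # random-effects weights
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-pooled_logit))
i2 = max(0.0, (q - df) / q) * 100       # I^2 heterogeneity (%)
print(f"pooled sensitivity={pooled:.3f}, Q={q:.2f}, I2={i2:.1f}%")
```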

Singh, P., Kumar, S., Tyagi, R., Young, B. K., Jordan, B. K., Scottoline, B., Evers, P. D., Ostmo, S., Coyner, A. S., Lin, W.-C., Gupta, A., Erdogmus, D., Chan, R. V. P., McCourt, E. A., Barry, J. S., McEvoy, C. T., Chiang, M. F., Campbell, J. P., Kalpathy-Cramer, J.

medRxiv preprint · Sep 19, 2025
Importance: Bronchopulmonary dysplasia (BPD) and pulmonary hypertension (PH) are leading causes of morbidity and mortality in premature infants. Objective: To determine whether images obtained as part of retinopathy of prematurity (ROP) screening might contain features associated with BPD and PH in infants, and whether a multimodal model integrating imaging features with demographic risk factors might outperform a model based on demographic risk alone. Design: A deep learning model was used to study retinal images collected from patients enrolled in the multi-institutional Imaging and Informatics in Retinopathy of Prematurity (i-ROP) study. Setting: Seven neonatal intensive care units. Participants: 493 infants at risk for ROP undergoing routine ROP screening examinations from 2012 to 2020. Images were limited to <=34 weeks post-menstrual age (PMA) so as to precede the clinical diagnosis of BPD or PH. Exposure: BPD was diagnosed by the presence of an oxygen requirement at 36 weeks PMA, and PH was diagnosed by echocardiogram at 34 weeks. A support vector machine model was trained to predict BPD or PH diagnosis using: (A) image features alone (extracted using ResNet18), (B) demographics alone, or (C) image features concatenated with demographics. To reduce the possibility of confounding with ROP, secondary models were trained using only images without clinical signs of ROP. Main Outcome Measure: For both BPD and PH, we report performance on a held-out test set (99 patients from the BPD cohort and 37 patients from the PH cohort), assessed by the area under the receiver operating characteristic curve. Results: For BPD, the diagnostic accuracy of the multimodal model was 0.82 (95% CI: 0.72-0.90), compared with demographics 0.72 (0.60-0.82; P=0.07) or imaging 0.72 (0.61-0.82; P=0.002) alone. For PH, it was 0.91 (0.71-1.0) combined, compared with 0.68 (0.43-0.9; P=0.04) for demographics and 0.91 (0.78-1.0; P=0.4) for imaging alone. These associations remained even when models were trained on the subset of images without any clinical signs of ROP. Conclusions and Relevance: Retinal images obtained during ROP screening can be used to predict the diagnosis of BPD and PH in preterm infants, which may lead to earlier diagnosis and avoid the need for invasive diagnostic testing in the future. Key Points: Question: Can an artificial intelligence (AI) algorithm diagnose bronchopulmonary dysplasia (BPD) or pulmonary hypertension (PH) from retinal images of preterm infants obtained during retinopathy of prematurity (ROP) screening examinations? Findings: AI was able to predict the presence of both BPD and PH from retinal images with higher accuracy than could be achieved from baseline demographic risk alone. Meaning: Deploying AI models on images obtained during retinopathy of prematurity screening could lead to earlier diagnosis and avoid the need for more invasive diagnostic testing.
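
The modeling recipe above (a support vector machine over ResNet18 image embeddings concatenated with demographic covariates) can be sketched as follows. The features, covariates, and labels are synthetic stand-ins, and the feature dimension is an assumption; this is not the study's code.

```python
# Minimal sketch: SVM on image embeddings concatenated with demographics.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
img_feats = rng.normal(size=(n, 512))   # stand-in for ResNet18 embeddings
demo = rng.normal(size=(n, 4))          # stand-in demographic risk factors
y = (0.6 * demo[:, 0] + 0.4 * img_feats[:, 0] + rng.normal(size=n) > 0).astype(int)

X = np.hstack([img_feats, demo])        # multimodal concatenation (variant C)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```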

Zhi, Y.-C., Anguajibi, V., Oryema, J. B., Nabatte, B., Opio, C. K., Kabatereine, N. B., Chami, G. F.

medRxiv preprint · Sep 19, 2025
One in 25 deaths worldwide is related to liver disease, often with multiple hepatosplenic conditions. Yet little is understood about the risk factors for hepatosplenic multimorbidity, especially in the context of chronic infections. We present a novel Bayesian multitask learning framework to jointly model 45 hepatosplenic conditions assessed using point-of-care B-mode ultrasound for 3155 individuals aged 5-91 years within the SchistoTrack cohort across rural Uganda, where chronic intestinal schistosomiasis is endemic. We identified distinct and shared biomedical, socioeconomic, and spatial risk factors for individual conditions and for hepatosplenic multimorbidity, and introduced methods for measuring condition dependencies as risk factors. Notably, for gastro-oesophageal varices, we discovered key risk factors of older age, lower hemoglobin concentration, and severe schistosomal liver fibrosis. Our findings provide a compendium of risk factors to inform surveillance, triage, and follow-up, while our model enables improved prediction of hepatosplenic multimorbidity and, if validated on other systems, of general multimorbidity.
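
The paper's model is a custom Bayesian multitask framework; as a deliberately simplified, non-Bayesian illustration of the multitask idea (many conditions jointly modeled over shared risk-factor features), the sketch below fits one classifier per condition. All data are synthetic and the feature names in comments are assumptions.

```python
# Simplified multitask baseline: one logistic model per condition over
# shared risk-factor features. Not the paper's Bayesian framework.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(1)
n, n_conditions = 1000, 5               # the paper models 45 conditions
X = rng.normal(size=(n, 6))             # e.g. age, hemoglobin, fibrosis grade
W = rng.normal(size=(6, n_conditions))
Y = (X @ W + rng.normal(size=(n, n_conditions)) > 1).astype(int)

model = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
# Per-condition coefficients act as (non-Bayesian) risk-factor estimates.
for j, est in enumerate(model.estimators_):
    print(f"condition {j}: largest coefficient = {est.coef_.ravel().max():.2f}")
```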

Duraiswamy, A., Harris-Birtill, D.

medRxiv preprint · Sep 19, 2025
We overcome current limitations in Acute Myeloid Leukemia (AML) diagnosis by leveraging a transfer learning approach from Acute Lymphoblastic Leukemia (ALL) classification models, addressing the urgent need for more accurate and accessible AML diagnostic tools. AML has a poorer prognosis than ALL, with a 5-year relative survival rate of only 17-19% compared with ALL survival rates of up to 75%, making early and accurate detection of AML paramount. Current diagnostic methods rely heavily on manual microscopic examination and are often subjective, time-consuming, and prone to inter-observer variability. While machine learning has shown promise in cancer classification, its application to AML detection, particularly leveraging transfer learning from related cancers such as ALL, remains underexplored. We undertake a comprehensive review of state-of-the-art advancements in ALL and AML classification using deep learning algorithms and evaluate key approaches. The insights gained from this review inform the development of two novel machine learning pipelines designed to benchmark the effectiveness of the proposed transfer learning approach. Five pre-trained models are fine-tuned using ALL training data (a novel approach in this context) to optimize their potential for AML classification. The result is a best-in-class (BIC) model that surpasses current state-of-the-art (SOTA) performance in AML classification, advancing the accuracy of machine learning (ML)-driven cancer diagnostics. Author summary: Acute Myeloid Leukemia (AML) is an aggressive cancer with a poor prognosis. Early and accurate diagnosis is critical, but current methods are often subjective and time-consuming. We wanted to create a more accurate diagnostic tool by applying a technique called transfer learning from a similar cancer, Acute Lymphoblastic Leukemia (ALL). Two machine learning pipelines were developed. The first trained five different models on a large AML dataset to establish a baseline. The second first trained these models on an ALL dataset to "learn" from it before fine-tuning them on the AML data. Our experiments showed that the models that underwent the transfer learning process consistently outperformed the models trained on AML data alone. The MobileNetV2 model, in particular, was the best in class, outperforming all other models and surpassing the best-reported metrics for AML classification in the current literature. Our research demonstrates that transfer learning can enable highly accurate AML diagnostic models. The best-in-class model could potentially be used as an AML diagnostic tool, helping clinicians make faster and more accurate diagnoses and improving patient outcomes.
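
A minimal sketch of the two-stage transfer-learning recipe described above: fine-tune an ImageNet-pretrained MobileNetV2 on the source (ALL) task, then re-tune on the target (AML) task. The data loaders yield random tensors as stand-ins for the real blood-smear datasets, and the training schedule is illustrative only.

```python
# Two-stage fine-tuning: ImageNet -> ALL (source) -> AML (target).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

def fake_loader(n=32):  # placeholder for the ALL / AML image datasets
    x = torch.randn(n, 3, 224, 224)
    y = torch.randint(0, 2, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=8)

def tune(model, loader, epochs=1, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()

model = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, 2)  # binary: leukemic vs normal

tune(model, fake_loader())   # stage 1: ALL (source task)
tune(model, fake_loader())   # stage 2: AML (target task)
```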

Tang J, Yin X, Lai J, Luo K, Wu D

PubMed · Sep 18, 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and mass, which increases the risk of fragility fractures. Artificial intelligence can mine imaging features specific to different bone densities, shapes, and structures, and can fuse them with other multimodal features to improve prediction accuracy. This study aims to develop a multimodal model that fuses chest X-rays and clinical parameters for opportunistic screening of osteoporosis, and to compare the experimental results with existing methods. We used multimodal data, comprising chest X-ray images and clinical data, from 1780 patients at Chongqing Daping Hospital between January 2019 and August 2024. We adopted a probability fusion strategy to construct the multimodal model, using a convolutional neural network as the backbone for image processing and fine-tuning it with transfer learning for the specific task of this study. In addition, we introduced a gradient-based wavelet feature extraction method and combined it with an attention mechanism to assist feature fusion, which strengthened the model's focus on key regions of the image and further improved its ability to extract image features. The proposed multimodal model outperforms traditional methods on all 4 evaluation metrics: area under the curve (AUC), accuracy, sensitivity, and specificity. Compared with the X-ray-only model, the multimodal model significantly improved the AUC from 0.951 to 0.975 (P=.004), accuracy from 89.32% to 92.36% (P=.045), sensitivity from 89.82% to 91.23% (P=.03), and specificity from 88.64% to 93.92% (P=.008). While the multimodal model that fuses chest X-ray images and clinical data demonstrated superior performance compared with unimodal models and traditional methods, this study has several limitations: the dataset may not be large enough to capture the full diversity of the population, the retrospective design may introduce selection bias, and the lack of external validation limits the generalizability of the findings. Future studies should address these limitations by incorporating larger, more diverse datasets and conducting rigorous external validation to further establish the model's clinical utility.
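
The probability fusion strategy mentioned above can be illustrated with a late-fusion sketch in which the image model's and the clinical model's predicted probabilities are combined by a weighted average. The probabilities below are synthetic, and the simple weighting rule is an assumption rather than the paper's exact fusion scheme.

```python
# Late (probability-level) fusion of an image model and a clinical model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500)                       # osteoporosis labels (synthetic)
# stand-ins for CNN (chest X-ray) and clinical-model probabilities
p_img = np.clip(y * 0.7 + rng.normal(0.15, 0.20, 500), 0, 1)
p_clin = np.clip(y * 0.5 + rng.normal(0.25, 0.25, 500), 0, 1)

for w in (0.0, 0.5, 1.0):                         # w = weight on the image channel
    p_fused = w * p_img + (1 - w) * p_clin
    print(f"w={w:.1f}  AUC={roc_auc_score(y, p_fused):.3f}")
```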

Zhao X, Yang X, Song Z

PubMed · Sep 18, 2025
High-resolution (HR) magnetic resonance imaging (MRI) provides accurate and rich information that helps doctors detect subtle lesions, delineate tumor boundaries, evaluate small anatomical structures, and assess early-stage pathological changes that might be obscured at lower resolutions. However, acquiring HR MRI images often requires prolonged scanning, which causes physical and mental discomfort for patients. Slight patient movement can produce motion artifacts and blur the acquired image, affecting the accuracy of clinical diagnosis. To tackle these problems, we propose a novel method, the Mamba-enhanced Diffusion Model (MDM), for perception-aware blind super-resolution (SR) of MRI, which includes two key components: a kernel noise estimator and an SR reconstructor. Specifically, we propose a Perception-aware Blur Kernel Noise estimator (PBKN estimator), which takes advantage of the diffusion model to estimate the blur kernel from low-resolution (LR) images. Meanwhile, we construct a novel progressive feature reconstructor, which takes the estimated blur kernel and the content of the LR images as prior knowledge to reconstruct more accurate SR MRI images using the diffusion model. Moreover, we design a novel Semantic Information Fusion Mamba (SIF-Mamba) module for the SR reconstruction task. SIF-Mamba is specifically designed within the progressive feature reconstructor to capture the global context of MRI images and improve feature reconstruction. Extensive experiments demonstrate that the proposed MDM achieves better SR reconstruction than several leading methods. Our code is available at https://github.com/YXDBright/MDM.
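
Blind SR methods of this kind invert a degradation model in which the LR image is the HR image blurred by an unknown kernel, downsampled, and corrupted by noise. The sketch below simulates that forward model, with an assumed Gaussian kernel and 4x downsampling, which is what a kernel estimator such as the PBKN estimator must recover; it is not the MDM code, which lives at the linked repository.

```python
# Forward degradation model for blind SR: LR = downsample(HR * k) + noise.
import torch
import torch.nn.functional as F

def gaussian_kernel(size=9, sigma=1.5):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax**2) / (2 * sigma**2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

hr = torch.rand(1, 1, 128, 128)           # stand-in for an HR MRI slice
k = gaussian_kernel()                      # the "unknown" blur kernel
blurred = F.conv2d(hr, k, padding=4)       # HR convolved with k (same size)
lr = blurred[..., ::4, ::4]                # 4x downsampling
lr = lr + 0.01 * torch.randn_like(lr)      # additive acquisition noise
print(tuple(hr.shape), "->", tuple(lr.shape))  # (1,1,128,128) -> (1,1,32,32)
```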

Zhang C, Nan P, Song L, Wang Y, Su K, Zheng Q

PubMed · Sep 18, 2025
Brain age estimation plays a significant role in understanding the aging process and its relationship with neurodegenerative diseases. The aim of this study is to devise a unified multi-dimensional feature fusion model (MDFNet) that enhances brain age estimation from structural MRI alone by combining diverse representations: the whole brain, gray matter volume from tissue segmentation, node message passing over the brain network, edge-based graph path convolution over brain connectivity, and demographic data. The MDFNet was developed by devising and integrating a whole-brain-level Euclidean-convolution channel (WBEC-channel), a tissue-level Euclidean-convolution channel (TEC-channel), a graph-convolution channel based on node message passing (nodeGCN-channel), an edge-based graph path convolution channel on brain connectivity (edgeGCN-channel), and a multilayer perceptron channel for demographic data (MLP-channel) to enhance multi-dimensional feature fusion. The MDFNet was validated on 1872 healthy subjects from four public datasets and applied to an independent cohort of Alzheimer's disease (AD) patients. Interpretability analysis and normative modeling of the MDFNet in brain age estimation were also performed. The MDFNet achieved superior performance, with a mean absolute error (MAE) of 4.396 ± 0.244 years, a Pearson correlation coefficient (PCC) of 0.912 ± 0.002, and a Spearman's rank correlation (SRCC) of 0.819 ± 0.015, compared with state-of-the-art deep learning models. The AD group exhibited a significantly greater brain age gap (BAG) than the healthy group (P < 0.05), and normative modeling likewise showed significantly higher mean Z-scores in AD patients than in healthy subjects (P < 0.05). Interpretability was visualized at both the group and individual levels, enhancing the reliability of the MDFNet. The MDFNet enhanced brain age estimation solely from structural MRI by employing a multi-dimensional feature integration strategy.
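
Two downstream statistics reported above, the brain age gap (BAG, predicted minus chronological age) and normative Z-scores of BAG against the healthy distribution, can be computed as in this sketch. All ages are synthetic and the effect sizes are illustrative, not the study's results.

```python
# Brain age gap (BAG) and normative Z-scores from predicted vs true ages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
age_hc = rng.uniform(50, 85, 200)                  # healthy controls
age_ad = rng.uniform(55, 85, 80)                   # AD patients
pred_hc = age_hc + rng.normal(0, 4.4, 200)         # roughly unbiased model error
pred_ad = age_ad + rng.normal(5, 4.4, 80)          # AD brains predicted "older"

bag_hc, bag_ad = pred_hc - age_hc, pred_ad - age_ad
z_ad = (bag_ad - bag_hc.mean()) / bag_hc.std()     # normative Z-scores for AD

t, p = stats.ttest_ind(bag_ad, bag_hc)
print(f"mean BAG: AD={bag_ad.mean():.2f}y, HC={bag_hc.mean():.2f}y (p={p:.1e})")
print(f"mean AD Z-score={z_ad.mean():.2f}")
```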

Alhosanie TN, Hammo B, Klaib AF, Alshudifat A

PubMed · Sep 18, 2025
Meningiomas and schwannomas are benign tumors that affect the central nervous system, comprising up to one-third of intracranial neoplasms. Gamma Knife radiosurgery (GKRS), a form of stereotactic radiosurgery (SRS), is a type of radiation therapy. Although referred to as "surgery," GKRS does not involve incisions; the device delivers highly focused gamma rays to treat lesions or tumors, primarily in the brain. In radiation oncology, machine learning (ML) has been used in various aspects of care, including outcome prediction, quality control, treatment planning, and image segmentation. This review showcases the advantages of integrating artificial intelligence with Gamma Knife technology in treating schwannomas and meningiomas. The review adheres to PRISMA guidelines. We searched the PubMed, Scopus, and IEEE databases to identify studies published between 2021 and March 2025 that met our inclusion and exclusion criteria, focusing on AI algorithms applied to patients with vestibular schwannoma and meningioma treated with GKRS. Two reviewers participated in the data extraction and quality assessment process. A total of nine studies were reviewed. One notable deep learning (DL) model is a dual-pathway convolutional neural network (CNN) that integrates T1-weighted (T1W) and T2-weighted (T2W) MRI scans; tested on 861 patients who underwent GKRS, it achieved a Dice similarity coefficient (DSC) of 0.90. ML-based radiomics models have also demonstrated that certain radiomic features can predict the response of vestibular schwannomas and meningiomas to radiosurgery; among these, a neural network model exhibited the best performance. AI models were also employed to predict complications following GKRS, such as peritumoral edema. A Random Survival Forest (RSF) model was developed using clinical, semantic, and radiomics variables, achieving C-index scores of 0.861 and 0.780; this model enables the classification of patients into high-risk and low-risk categories for developing post-GKRS edema. AI and ML models show great potential in tumor segmentation, volumetric assessment, and predicting treatment outcomes for vestibular schwannomas and meningiomas treated with GKRS. However, their successful clinical implementation relies on overcoming challenges related to external validation, standardization, and computational demands. Future research should focus on large-scale, multi-institutional validation studies, integrating multimodal data, and developing cost-effective strategies for deploying AI technologies.
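
A minimal sketch of the risk-stratification step described above, assuming the scikit-survival implementation of a Random Survival Forest: fit on synthetic clinical/radiomic features, score with the C-index, and split patients at the median predicted risk into high- and low-risk groups.

```python
# Random Survival Forest risk stratification scored by the C-index.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))                         # clinical + radiomic features
time = np.exp(1 - X[:, 0] + rng.normal(0, 0.5, 200))   # months to edema (synthetic)
event = rng.random(200) < 0.7                          # True = edema observed
y = Surv.from_arrays(event=event, time=time)           # structured survival labels

rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X, y)
risk = rsf.predict(X)                                  # higher score = higher risk
cindex = concordance_index_censored(event, time, risk)[0]  # in-sample, for illustration
high_risk = risk > np.median(risk)                     # two-group stratification
print(f"C-index={cindex:.3f}, high-risk n={high_risk.sum()}")
```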

Huo S, Zhang W, Wang Y, Qi J, Wang Y, Bai C

PubMed · Sep 18, 2025
Background: Early diagnosis and accurate prediction of treatment response in esophageal squamous cell carcinoma (ESCC) remain major clinical challenges due to the lack of reliable and noninvasive biomarkers. Recently, artificial intelligence-driven endoscopic ultrasound image analysis has shown great promise in revealing genomic features associated with imaging phenotypes. Methods: A prospective study of 115 patients with ESCC was conducted. Deep features were extracted from endoscopic ultrasound using a ResNet50 convolutional neural network. Important features shared across three machine learning models (NN, GLM, DT) were used to construct an image-derived signature. Plasma levels of leukotriene B4 (LTB4) and other inflammatory markers were measured using enzyme-linked immunosorbent assay. Correlations between the signature and inflammation markers were analyzed, followed by logistic regression and subgroup analyses. Results: The endoscopic ultrasound image-derived signature, generated using deep learning algorithms, effectively distinguished esophageal cancer from normal esophageal tissue. Among all inflammatory markers, LTB4 exhibited the strongest negative correlation with the image signature and showed significantly higher expression in the healthy control group. Multivariate logistic regression analysis identified LTB4 as an independent risk factor for ESCC (odds ratio = 1.74, p = 0.037). Furthermore, LTB4 expression was significantly associated with patient sex, age, and chemotherapy response. Notably, higher LTB4 levels were linked to an increased likelihood of achieving a favorable therapeutic response. Conclusions: This study demonstrates that deep learning-derived endoscopic ultrasound image features can effectively distinguish ESCC from normal esophageal tissue. By integrating image features with serological data, the authors identified LTB4 as a key inflammation-related biomarker with significant diagnostic and therapeutic predictive value.
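
A minimal sketch of the deep-feature pipeline described above: 2048-dimensional penultimate-layer ResNet50 embeddings extracted from images, collapsed to a scalar signature, then correlated with a serum marker. The images, the LTB4 values, and the way the embedding is collapsed to one dimension are all placeholders, not the authors' method.

```python
# ResNet50 deep-feature extraction and correlation with a serum marker.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from scipy.stats import spearmanr

backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()                 # expose the 2048-d embedding
backbone.eval()

imgs = torch.rand(16, 3, 224, 224)          # stand-in for EUS frames
with torch.no_grad():
    feats = backbone(imgs)                  # shape (16, 2048)

signature = feats.mean(dim=1).numpy()       # toy scalar signature (illustrative)
ltb4 = torch.rand(16).numpy()               # placeholder plasma LTB4 levels
rho, p = spearmanr(signature, ltb4)
print(f"Spearman rho={rho:.2f} (p={p:.2f})")
```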

Zhang X, Ferry J, Hewson DW, Collins GS, Wiles MD, Zhao Y, Martindale APL, Tomaschek M, Bowness JS

PubMed · Sep 18, 2025
The application of artificial intelligence to enhance the clinical practice of ultrasound-guided regional anaesthesia is of increasing interest to clinicians, researchers and industry. The lack of standardised reporting for studies in this field hinders the comparability, reproducibility and integration of findings. We aimed to develop a consensus-based reporting guideline for research evaluating artificial intelligence applications for ultrasound scanning in regional anaesthesia. We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines. Review of published literature and expert consultation generated a preliminary list of candidate reporting items. An international, multidisciplinary, modified Delphi process was then undertaken, involving experts from clinical practice, academia and industry. Two rounds of expert consultation were conducted, in which participants evaluated each item for inclusion in a final reporting guideline, followed by an online discussion. A total of 67 experts participated in the first Delphi round, 63 in the second round and 25 in the roundtable consensus meeting. The GRAITE-USRA reporting guideline comprises 40 items addressing key aspects of reporting in artificial intelligence research for ultrasound scanning in regional anaesthesia. Specific items include ultrasound acquisition protocols and operator expertise, which are not covered in existing artificial intelligence reporting guidelines. The GRAITE-USRA reporting guideline provides a minimum set of recommendations for artificial intelligence-related research for ultrasound scanning in regional anaesthesia. Its adoption will promote consistent reporting standards, enhance transparency, improve study reproducibility and ultimately support the effective integration of evidence into clinical practice.