ViTaL: A Multimodality Dataset and Benchmark for Multi-pathological Ovarian Tumor Recognition

You Zhou, Lijiang Chen, Guangxia Cui, Wenpei Bai, Yu Guo, Shuchang Lyu, Guangliang Cheng, Qi Zhao

arXiv preprint · Jul 6, 2025
Ovarian tumors, as a common gynecological disease, can rapidly deteriorate into serious health crises when not detected early, posing a significant threat to women's health. Deep neural networks have the potential to identify ovarian tumors and thereby reduce mortality rates, but limited public datasets hinder their progress. To address this gap, we introduce ViTaL, an ovarian tumor pathological recognition dataset containing Visual, Tabular and Linguistic modality data from 496 patients across six pathological categories. The ViTaL dataset comprises three subsets corresponding to these modalities: visual data from 2216 two-dimensional ultrasound images, tabular data from the medical examinations of 496 patients, and linguistic data from the ultrasound reports of 496 patients. Merely distinguishing between benign and malignant ovarian tumors is insufficient in clinical practice. To enable multi-pathology classification of ovarian tumors, we propose ViTaL-Net, built on a Triplet Hierarchical Offset Attention Mechanism (THOAM) that minimizes the loss incurred during feature fusion of multi-modal data and effectively enhances the relevance and complementarity between information from different modalities. ViTaL-Net serves as a benchmark for multi-pathology, multi-modality classification of ovarian tumors. In our comprehensive experiments, the proposed method exhibited satisfactory performance, achieving accuracies exceeding 90% on the two most common pathological types of ovarian tumor and an overall performance of 85%. Our dataset and code are available at https://github.com/GGbond-study/vitalnet.
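For readers wanting a concrete picture of this kind of multimodal fusion, the sketch below shows a generic attention-based fusion of visual, tabular, and linguistic feature vectors in PyTorch. The dimensions, module names, and single multi-head attention layer are illustrative assumptions; this is not the paper's THOAM.

```python
# Minimal sketch of attention-based fusion of three modality feature vectors.
# All feature dimensions and layer choices are illustrative, not ViTaL-Net's.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, dim=256, num_classes=6):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.proj_visual = nn.Linear(512, dim)    # e.g. CNN image features
        self.proj_tabular = nn.Linear(32, dim)    # e.g. exam measurements
        self.proj_text = nn.Linear(768, dim)      # e.g. report embeddings
        # Standard multi-head attention over the three modality tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, visual, tabular, text):
        tokens = torch.stack(
            [self.proj_visual(visual),
             self.proj_tabular(tabular),
             self.proj_text(text)], dim=1)         # (B, 3, dim)
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.classifier(fused.mean(dim=1))  # pool modalities and classify

logits = MultimodalFusion()(torch.randn(2, 512), torch.randn(2, 32), torch.randn(2, 768))
```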

FB-Diff: Fourier Basis-guided Diffusion for Temporal Interpolation of 4D Medical Imaging

Xin You, Runze Yang, Chuyan Zhang, Zhongliang Jiang, Jie Yang, Nassir Navab

arXiv preprint · Jul 6, 2025
The temporal interpolation task for 4D medical imaging plays a crucial role in the clinical practice of respiratory motion modeling. Following a simplified linear-motion hypothesis, existing approaches adopt optical-flow-based models to interpolate intermediate frames. However, realistic respiratory motion is nonlinear and quasi-periodic, with specific frequencies. Motivated by this property, we resolve the temporal interpolation task from the frequency perspective and propose a Fourier basis-guided Diffusion model, termed FB-Diff. Specifically, because respiration follows a regular motion pattern, physiological motion priors are introduced to describe the general characteristics of temporal data distributions. A Fourier motion operator is then devised to extract Fourier bases by incorporating physiological motion priors and case-specific spectral information in the feature space of a Variational Autoencoder. Well-learned Fourier bases can better simulate respiratory motions with motion patterns of specific frequencies. Conditioned on starting and ending frames, the diffusion model further leverages the learned Fourier bases via a basis interaction operator, which promotes the temporal interpolation task in a generative manner. Extensive results demonstrate that FB-Diff achieves state-of-the-art (SOTA) perceptual performance with better temporal consistency while maintaining promising reconstruction metrics. Code is available.
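To make the frequency-domain intuition concrete, the sketch below keeps only the dominant Fourier components of a temporal feature sequence and reconstructs a quasi-periodic approximation. It loosely illustrates the "Fourier basis" idea; it is not the paper's Fourier motion operator, and the signal shape and number of retained bins are assumptions.

```python
# Minimal sketch: keep the k strongest frequency components of a temporal signal.
import torch

def dominant_fourier_bases(seq, k=3):
    """seq: (T, D) temporal features; returns a quasi-periodic approximation."""
    spectrum = torch.fft.rfft(seq, dim=0)                  # (T//2+1, D), complex
    power = spectrum.abs().mean(dim=1)                     # average power per frequency
    topk = torch.topk(power[1:], k).indices + 1            # skip the DC component
    basis = torch.zeros_like(spectrum)
    basis[topk] = spectrum[topk]                           # retain only dominant bins
    return torch.fft.irfft(basis, n=seq.shape[0], dim=0)   # back to the time domain

approx = dominant_fourier_bases(torch.randn(40, 64))
```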

Predicting Cardiopulmonary Exercise Testing Performance in Patients Undergoing Transthoracic Echocardiography - An AI Based, Multimodal Model

Alishetti, S., Pan, W., Beecy, A. N., Liu, Z., Gong, A., Huang, Z., Clerkin, K. J., Goldsmith, R. L., Majure, D. T., Kelsey, C., vanMaanan, D., Ruhl, J., Tesfuzigta, N., Lancet, E., Kumaraiah, D., Sayer, G., Estrin, D., Weinberger, K., Kuleshov, V., Wang, F., Uriel, N.

medRxiv preprint · Jul 6, 2025
Background and Aims: Transthoracic echocardiography (TTE) is a widely available tool for diagnosing and managing heart failure but has limited predictive value for survival. Cardiopulmonary exercise test (CPET) performance strongly correlates with survival in heart failure patients but is less accessible. We sought to develop an artificial intelligence (AI) algorithm using TTE and electronic medical records to predict a CPET peak oxygen consumption (peak VO2) ≤ 14 mL/kg/min. Methods: An AI model was trained to predict peak VO2 ≤ 14 mL/kg/min from TTE images, structured TTE reports, demographics, medications, labs, and vitals. The training set included patients with a TTE within 6 months of a CPET. Performance was retrospectively tested in a held-out group from the development cohort and in an external validation cohort. Results: 1,127 CPET studies paired with concomitant TTE were identified. The best performance was achieved by using all components (TTE images and all structured clinical data). The model performed well at predicting a peak VO2 ≤ 14 mL/kg/min, with an AUROC of 0.84 (development cohort) and 0.80 (external validation cohort). It performed consistently well using higher (≤ 18 mL/kg/min) and lower (≤ 12 mL/kg/min) cut-offs. Conclusions: This multimodal AI model effectively categorized patients into low- and high-risk groups based on predicted peak VO2, demonstrating the potential to identify previously unrecognized patients in need of advanced heart failure therapies where CPET is not available.
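The evaluation at multiple peak VO2 cut-offs can be reproduced schematically as below: a single continuous risk score is thresholded against each clinical cut-off and scored with AUROC. The data and risk score here are synthetic placeholders, not the study's model or cohort.

```python
# Minimal sketch of multi-cut-off AUROC evaluation for a peak VO2 risk score.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
peak_vo2 = rng.uniform(8, 25, size=500)            # measured CPET peak VO2 (mL/kg/min)
risk_score = -peak_vo2 + rng.normal(0, 3, 500)     # stand-in for a model's output

for cutoff in (12, 14, 18):
    labels = (peak_vo2 <= cutoff).astype(int)      # 1 = low peak VO2 at this cut-off
    print(f"cut-off {cutoff} mL/kg/min, AUROC = {roc_auc_score(labels, risk_score):.2f}")
```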

A CT-Based Deep Learning Radiomics Nomogram for Early Recurrence Prediction in Pancreatic Cancer: A Multicenter Study.

Guan X, Liu J, Xu L, Jiang W, Wang C

PubMed paper · Jul 6, 2025
Early recurrence (ER) following curative-intent surgery remains a major obstacle to improving long-term outcomes in patients with pancreatic cancer (PC). Accurate preoperative prediction of ER could significantly aid clinical decision-making and guide postoperative management. A retrospective cohort of 493 patients with histologically confirmed PC who underwent resection was analyzed. Contrast-enhanced computed tomography (CT) images were used for tumor segmentation, followed by radiomics and deep learning feature extraction. Four distinct feature selection algorithms were employed. Predictive models were constructed using random forest (RF) and support vector machine (SVM) classifiers, and model performance was evaluated by the area under the receiver operating characteristic curve (AUC). A comprehensive nomogram integrating feature scores and clinical factors was developed and validated. Among all constructed models, the Inte-SVM demonstrated superior classification performance. The nomogram, incorporating the Inte-feature score, CT-assessed lymph node status, and carbohydrate antigen 19-9 (CA19-9), yielded excellent predictive accuracy in the validation cohort (AUC = 0.920). Calibration curves showed strong agreement between predicted and observed outcomes, and decision curve analysis confirmed the clinical utility of the nomogram. The CT-based deep learning radiomics nomogram enabled accurate preoperative prediction of early recurrence in patients with pancreatic cancer and may serve as a valuable tool to assist clinicians in tailoring postoperative strategies and promoting personalized therapeutic approaches.
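A schematic version of the feature-selection plus SVM step is sketched below. The feature matrix is synthetic, and the specific selector (univariate ANOVA) and RBF kernel are illustrative choices, not the authors' exact pipeline.

```python
# Minimal sketch: feature selection + SVM classification scored by AUC.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(493, 200)                       # radiomics + deep features per patient
y = np.random.randint(0, 2, 493)                   # early-recurrence label (synthetic)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=20),    # keep the 20 most informative features
                      SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```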

Early warning and stratification of the elderly cardiopulmonary dysfunction-related diseases: multicentre prospective study protocol.

Zhou X, Jin Q, Xia Y, Guan Y, Zhang Z, Guo Z, Liu Z, Li C, Bai Y, Hou Y, Zhou M, Liao WH, Lin H, Wang P, Liu S, Fan L

PubMed paper · Jul 5, 2025
In China, there is a lack of standardised clinical imaging databases for the multidimensional evaluation of cardiopulmonary diseases. To address this gap, this protocol describes a project to build an integrated clinical imaging platform and a multicentre database for early warning and stratification of cardiopulmonary dysfunction in the elderly. The study employs a cross-sectional design, enrolling over 6000 elderly participants from five regions across China to evaluate cardiopulmonary function and related diseases. Based on clinical criteria, participants are categorised into three groups: a healthy cardiopulmonary function group, a decreased function group and an established cardiopulmonary disease group. All subjects will undergo comprehensive assessments including chest CT scans, echocardiography and laboratory examinations. Additionally, at least 50 subjects will undergo cardiopulmonary exercise testing (CPET). By leveraging artificial intelligence technology, multimodal data will be integrated to establish reference ranges for cardiopulmonary function in the elderly population and to develop early-warning models and severity-grading standards. The study has been approved by the local ethics committee of Shanghai Changzheng Hospital (approval number: 2022SL069A). All participants will provide written informed consent. Results will be disseminated through peer-reviewed publications and conferences.

Artifact-robust Deep Learning-based Segmentation of 3D Phase-contrast MR Angiography: A Novel Data Augmentation Approach.

Tamada D, Oechtering TH, Heidenreich JF, Starekova J, Takai E, Reeder SB

PubMed paper · Jul 5, 2025
This study presents a novel data augmentation approach to improve deep learning (DL)-based segmentation for 3D phase-contrast magnetic resonance angiography (PC-MRA) images affected by pulsation artifacts. Augmentation was achieved by simulating pulsation artifacts through the addition of periodic errors in k-space magnitude. The approach was evaluated on PC-MRA datasets from 16 volunteers, comparing DL segmentation with and without pulsation artifact augmentation to a level-set algorithm. Results demonstrate that DL methods significantly outperform the level-set approach and that pulsation artifact augmentation further improves segmentation accuracy, especially for images with lower velocity encoding. Quantitative analysis using Dice-Sørensen coefficient, Intersection over Union, and Average Symmetric Surface Distance metrics confirms the effectiveness of the proposed method. This technique shows promise for enhancing vascular segmentation in various anatomical regions affected by pulsation artifacts, potentially improving clinical applications of PC-MRA.
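The core augmentation idea, adding a periodic error to the k-space magnitude, can be sketched as follows. The sinusoidal modulation form, frequency, and amplitude are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch: simulate pulsation-like ghosting by perturbing k-space magnitude.
import numpy as np

def add_pulsation_artifact(image, freq=8, amplitude=0.3):
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ky = np.arange(kspace.shape[0])
    # Periodic magnitude error along the phase-encoding (row) direction.
    modulation = 1.0 + amplitude * np.sin(2 * np.pi * freq * ky / kspace.shape[0])
    kspace *= modulation[:, None]                   # scale magnitude, keep phase
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

corrupted = add_pulsation_artifact(np.random.rand(128, 128))  # placeholder image
```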

Unveiling knee morphology with SHAP: shaping personalized medicine through explainable AI.

Cansiz B, Arslan S, Gültekin MZ, Serbes G

PubMed paper · Jul 5, 2025
This study aims to enhance personalized medical assessment and the early detection of knee-related pathologies by examining the relationship between knee morphology and demographic factors such as age, gender, and body mass index, and to determine gender-specific reference values for knee morphological features using explainable artificial intelligence (XAI). A retrospective analysis was conducted on MRI data of 500 healthy knees from subjects aged 20-40 years. The study included knee morphological features such as Distal Femoral Width (DFW), Lateral Femoral Condylar Width (LFCW), Intercondylar Femoral Width (IFW), Anterior Cruciate Ligament Width (ACLW), and Anterior Cruciate Ligament Length (ACLL). Machine learning models, including Decision Trees, Random Forests, Light Gradient Boosting, Multilayer Perceptrons, and Support Vector Machines, were employed to predict gender from these features, and SHapley Additive exPlanations (SHAP) were used to analyze feature importance. The models demonstrated high classification performance, with 83.2% (±5.15) accuracy for classifying clusters based on morphological features and 88.06% (±4.8) accuracy for gender classification. These results confirm a strong correlation between knee morphology and gender. DFW was the most significant feature for gender prediction, with values below the 78-79 mm range indicating females and values above it indicating males. LFCW, IFW, ACLW, and ACLL also showed significant gender-based differences. The findings establish gender-specific reference values for knee morphological features and highlight the impact of gender on knee morphology. These reference values can improve the accuracy of diagnoses and treatment plans tailored to each gender, enhancing personalized medical care.
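The SHAP-based ranking described above can be reproduced schematically: train a tree model on the morphological features and average the absolute SHAP attributions per feature. The feature values and label rule below are synthetic placeholders, not the study's data.

```python
# Minimal sketch: SHAP feature importance for gender prediction from knee morphology.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

features = ["DFW", "LFCW", "IFW", "ACLW", "ACLL"]
X = pd.DataFrame(np.random.rand(500, 5) * 30 + 60, columns=features)  # synthetic mm-scale values
y = (X["DFW"] > 78).astype(int)                    # toy label mimicking the DFW cut-off

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)      # per-class attributions for tree models
sv = sv[1] if isinstance(sv, list) else sv[..., 1] # select the "male" class (handles SHAP versions)
importance = np.abs(sv).mean(axis=0)               # mean |SHAP| per feature
print(dict(zip(features, importance.round(3))))
```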

Artificial Intelligence in Prenatal Ultrasound: A Systematic Review of Diagnostic Tools for Detecting Congenital Anomalies

Dunne, J., Kumarasamy, C., Belay, D. G., Betran, A. P., Gebremedhin, A. T., Mengistu, S., Nyadanu, S. D., Roy, A., Tessema, G., Tigest, T., Pereira, G.

medRxiv preprint · Jul 5, 2025
Background: Artificial intelligence (AI) has shown promise in interpreting ultrasound imaging through flexible pattern recognition and algorithmic learning, but implementation in clinical practice remains limited. This study aimed to investigate the current application of AI in prenatal ultrasound to identify congenital anomalies, and to synthesise the challenges and opportunities for the advancement of AI-assisted ultrasound diagnosis. This comprehensive analysis addresses the clinical translation gap between AI performance metrics and practical implementation in prenatal care. Methods: Systematic searches were conducted in eight electronic databases (CINAHL Plus, Ovid/EMBASE, Ovid/MEDLINE, ProQuest, PubMed, Scopus, Web of Science and Cochrane Library) and Google Scholar from inception to May 2025. Studies were included if they applied an AI-assisted ultrasound diagnostic tool to identify a congenital anomaly during pregnancy. The review adhered to PRISMA guidelines for systematic reviews, and study quality was evaluated using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines. Findings: Of 9,918 records, 224 were identified for full-text review and 20 met the inclusion criteria. The majority of studies (11/20, 55%) were conducted in China, with most published after 2020 (16/20, 80%). All AI models were developed as assistive tools for anomaly detection or classification. Most models (85%) focused on single-organ systems: heart (35%), brain/cranial (30%), or facial features (20%), while three studies (15%) attempted multi-organ anomaly detection. Fifty percent of the included studies reported exceptionally high model performance, with both sensitivity and specificity exceeding 0.95 and AUC-ROC values ranging from 0.91 to 0.97. Most studies (75%) lacked external validation, with internal validation often limited to small training and testing datasets. Interpretation: While AI applications in prenatal ultrasound show potential, current evidence indicates significant limitations to their practical implementation. Much work is required to optimise these tools, including external validation of diagnostic models with demonstrated clinical utility to achieve real-world impact. Future research should prioritise larger-scale multi-centre studies, develop multi-organ anomaly detection capabilities rather than the current single-organ focus, and robustly evaluate AI tools in real-world clinical settings.

Quantifying features from X-ray images to assess early stage knee osteoarthritis.

Helaly T, Faisal TR, Moni ASB, Naznin M

PubMed paper · Jul 5, 2025
Knee osteoarthritis (KOA) is a progressive degenerative joint disease and a leading cause of disability worldwide. Manual diagnosis of KOA from X-ray images is subjective and prone to inter- and intra-observer variability, making early detection challenging. While deep learning (DL)-based models offer automation, they often require large labeled datasets, lack interpretability, and do not provide quantitative feature measurements. Our study presents an automated KOA severity assessment system that integrates a pretrained DL model with image processing techniques to extract and quantify key KOA imaging biomarkers. The pipeline includes contrast limited adaptive histogram equalization (CLAHE) for contrast enhancement, DexiNed-based edge extraction, and thresholding for noise reduction. We design customized algorithms that automatically detect and quantify joint space narrowing (JSN) and osteophytes from the extracted edges. The proposed model quantitatively assesses JSN and counts intercondylar osteophytes, contributing to severity classification. The system achieves accuracies of 88% for JSN detection, 80% for osteophyte identification, and 73% for KOA classification. Its key strength lies in eliminating the need for any expensive training process and, consequently, the dependency on labeled data except for validation. Additionally, it provides quantitative data that can support classification in other OA grading frameworks.
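A rough sketch of the preprocessing and joint-space measurement idea is shown below: CLAHE for contrast, an edge map (Canny used here as a stand-in for DexiNed), then a crude per-column gap estimate inside a central region of interest. The input image, thresholds, and ROI are illustrative; this is not the authors' algorithm.

```python
# Minimal sketch: CLAHE + edge map + a per-column joint-space-width proxy.
import cv2
import numpy as np

xray = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder for a knee radiograph
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(xray)
edges = cv2.Canny(enhanced, 50, 150)                       # stand-in for the DexiNed edge map

# Crude joint-space proxy: span between the first and last edge pixel per column
# inside a central band assumed to contain the tibiofemoral joint line.
roi = edges[edges.shape[0] // 3: 2 * edges.shape[0] // 3, :]
widths = []
for col in roi.T:                                          # iterate over image columns
    rows = np.flatnonzero(col)
    if len(rows) >= 2:
        widths.append(rows.max() - rows.min())             # gap proxy in pixels
print("median joint-space proxy (px):", np.median(widths) if widths else None)
```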

MRI-based detection of multiple sclerosis using an optimized attention-based deep learning framework.

Palaniappan R, Delshi Howsalya Devi R, Mathankumar M, Ilangovan K

PubMed paper · Jul 5, 2025
Multiple sclerosis (MS) is a chronic neurological disorder affecting millions worldwide, and early detection is vital to prevent long-term disability. Magnetic resonance imaging (MRI) plays a crucial role in MS diagnosis, yet differentiating MS lesions from other brain anomalies remains a complex challenge. This study develops and evaluates a novel deep learning framework, 2DRK-MSCAN, for the early and accurate detection of MS lesions using MRI data. The proposed approach is validated using three publicly available MRI-based brain tumor datasets and comprises three main stages. First, Gradient Domain Guided Filtering (GDGF) is applied during pre-processing to enhance image quality. Next, an EfficientNetV2L backbone embedded within a U-shaped encoder-decoder architecture facilitates precise segmentation and rich feature extraction. Finally, classification of MS lesions is performed using the 2DRK-MSCAN model, which incorporates deep diffusion residual kernels and multiscale snake convolutional attention mechanisms to improve detection accuracy and robustness. The proposed framework achieved 99.9% accuracy in cross-validation experiments, demonstrating its capability to distinguish MS lesions from other anomalies with high precision. The 2DRK-MSCAN framework offers a reliable and effective solution for early MS detection using MRI. While clinical validation is ongoing, the method shows promising potential for aiding timely intervention and improving patient care.
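For orientation, the sketch below reuses an EfficientNetV2-L backbone as an encoder with a lightweight classification head; the paper's U-shaped decoder, deep diffusion residual kernels, and snake convolutional attention are not reproduced here, and the input shape and class count are assumptions.

```python
# Minimal sketch: EfficientNetV2-L features as an encoder plus a classification head.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_l

class MSClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = efficientnet_v2_l(weights=None)   # encoder features only, no pretraining
        self.encoder = backbone.features             # outputs (B, 1280, H/32, W/32)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(1280, num_classes))

    def forward(self, x):
        return self.head(self.encoder(x))

logits = MSClassifier()(torch.randn(1, 3, 224, 224))  # MRI slice replicated to 3 channels
```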