
Toward diffusion MRI in the diagnosis and treatment of pancreatic cancer.

Lee J, Lin T, He Y, Wu Y, Qin J

pubmed logopapersMay 28 2025
Pancreatic cancer is a highly aggressive malignancy with rising incidence and mortality rates, often diagnosed at advanced stages. Conventional imaging methods, such as computed tomography (CT) and magnetic resonance imaging (MRI), struggle to assess tumor characteristics and vascular involvement, which are crucial for treatment planning. This paper explores the potential of diffusion magnetic resonance imaging (dMRI) in enhancing pancreatic cancer diagnosis and treatment. Diffusion-based techniques, such as diffusion-weighted imaging (DWI), diffusion tensor imaging (DTI), intravoxel incoherent motion (IVIM), and diffusion kurtosis imaging (DKI), combined with emerging AI-powered analysis, provide insights into tissue microstructure, allowing for earlier detection and improved evaluation of tumor cellularity. These methods may help assess prognosis and monitor therapy response by tracking diffusion and perfusion metrics. However, challenges remain, such as the lack of standardized protocols and robust data analysis pipelines. Ongoing research, including deep learning applications, aims to improve reliability, and dMRI shows promise in providing functional insights and improving patient outcomes. Further clinical validation is necessary to maximize its benefits.
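As background for the diffusion metrics mentioned above: the simplest quantitative DWI measure, the apparent diffusion coefficient (ADC), comes from the mono-exponential signal model S(b) = S0 · exp(-b · ADC). A minimal illustrative fit on synthetic data (values invented, not from the paper):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Least-squares ADC fit: log S = log S0 - b * ADC,
    so a linear fit of log-signal vs. b recovers ADC."""
    log_s = np.log(signals)
    slope, intercept = np.polyfit(b_values, log_s, 1)
    return -slope, np.exp(intercept)  # (ADC, S0)

# Synthetic example: S0 = 1000, ADC = 1.5e-3 mm^2/s
b = np.array([0.0, 200.0, 500.0, 800.0, 1000.0])
s = 1000.0 * np.exp(-b * 1.5e-3)
adc, s0 = fit_adc(b, s)
print(round(adc, 6), round(s0, 1))  # → 0.0015 1000.0
```

IVIM extends this to a bi-exponential model separating perfusion and diffusion fractions, and DKI adds a quadratic kurtosis term; both are fit in the same least-squares spirit.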

Chest Disease Detection In X-Ray Images Using Deep Learning Classification Method

Alanna Hazlett, Naomi Ohashi, Timothy Rodriguez, Sodiq Adewole

arxiv logopreprintMay 28 2025
In this work, we investigate the performance of multiple classification models in classifying chest X-ray images into four categories: COVID-19, pneumonia, tuberculosis (TB), and normal cases. We leveraged transfer learning techniques with state-of-the-art pre-trained Convolutional Neural Network (CNN) models. We fine-tuned these pre-trained architectures on a labeled dataset of medical X-ray images. The initial results are promising, with high accuracy and strong performance in key classification metrics such as precision, recall, and F1 score. We applied Gradient-weighted Class Activation Mapping (Grad-CAM) for model interpretability to provide visual explanations for classification decisions, improving trust and transparency in clinical applications.
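The precision, recall, and F1 metrics reported above are computed per class from a confusion matrix; a minimal sketch (the matrix values are invented, not the paper's results):

```python
import numpy as np

def per_class_metrics(conf):
    """Per-class precision, recall, and F1 from a confusion matrix
    whose rows are true classes and columns are predictions."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    precision = tp / conf.sum(axis=0)   # TP / (TP + FP)
    recall = tp / conf.sum(axis=1)      # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy 4-class example: COVID-19, pneumonia, TB, normal (100 each)
cm = [[90,  5,  3,  2],
      [ 4, 85,  6,  5],
      [ 2,  4, 92,  2],
      [ 1,  3,  2, 94]]
p, r, f = per_class_metrics(cm)
print(np.round(r, 3))
```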

Comparative Analysis of Machine Learning Models for Lung Cancer Mutation Detection and Staging Using 3D CT Scans

Yiheng Li, Francisco Carrillo-Perez, Mohammed Alawad, Olivier Gevaert

arxiv logopreprintMay 28 2025
Lung cancer is the leading cause of cancer mortality worldwide, and non-invasive methods for detecting key mutations and staging are essential for improving patient outcomes. Here, we compare the performance of two machine learning models - FMCIB+XGBoost, a supervised model with domain-specific pretraining, and Dinov2+ABMIL, a self-supervised model with attention-based multiple-instance learning - on 3D lung nodule data from the Stanford Radiogenomics and Lung-CT-PT-Dx cohorts. In the task of KRAS and EGFR mutation detection, FMCIB+XGBoost consistently outperformed Dinov2+ABMIL, achieving accuracies of 0.846 and 0.883 for KRAS and EGFR mutations, respectively. In cancer staging, Dinov2+ABMIL demonstrated competitive generalization, achieving an accuracy of 0.797 for T-stage prediction in the Lung-CT-PT-Dx cohort, suggesting SSL's adaptability across diverse datasets. Our results emphasize the clinical utility of supervised models in mutation detection and highlight the potential of SSL to improve staging generalization, while identifying areas for enhancement in mutation sensitivity.
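The ABMIL aggregator in Dinov2+ABMIL pools per-instance embeddings with learned attention weights, following the standard attention-based MIL formulation (a_k ∝ exp(wᵀ tanh(V h_k))). A minimal numpy sketch with illustrative random weights and dimensions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def abmil_pool(H, V, w):
    """Attention-based MIL pooling: a_k = softmax(w^T tanh(V h_k)),
    bag embedding z = sum_k a_k h_k. H has shape (n_instances, d)."""
    scores = np.tanh(H @ V.T) @ w      # (n_instances,)
    a = softmax(scores)                # attention weights sum to 1
    z = a @ H                          # (d,) bag-level embedding
    return z, a

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 16))   # 8 instance embeddings of dim 16
V = rng.normal(size=(32, 16))  # hidden projection (learned in practice)
w = rng.normal(size=32)        # attention vector (learned in practice)
z, a = abmil_pool(H, V, w)
print(z.shape, round(a.sum(), 6))
```

The bag embedding z then feeds a small classifier head; the attention weights a double as an interpretability signal over instances.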

Single Domain Generalization for Alzheimer's Detection from 3D MRIs with Pseudo-Morphological Augmentations and Contrastive Learning

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arxiv logopreprintMay 28 2025
Although Alzheimer's disease detection via MRIs has advanced significantly thanks to contemporary deep learning models, challenges such as class imbalance, protocol variations, and limited dataset diversity often hinder their generalization capacity. To address this issue, this article focuses on the single domain generalization setting, where, given data from a single domain, a model is designed and developed to achieve maximal performance on an unseen domain with a distinct distribution. Since brain morphology is known to play a crucial role in Alzheimer's diagnosis, we propose the use of learnable pseudo-morphological modules aimed at producing shape-aware, anatomically meaningful class-specific augmentations, in combination with a supervised contrastive learning module to extract robust class-specific representations. Experiments conducted across three datasets show improved performance and generalization capacity, especially under class imbalance and imaging protocol variations. The source code will be made available upon acceptance at https://github.com/zobia111/SDG-Alzheimer.
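The supervised contrastive module presumably follows the standard SupCon objective, which pulls same-class embeddings together in a normalized embedding space; a minimal numpy sketch of that loss (dimensions and data are illustrative, not the paper's):

```python
import numpy as np

def supcon_loss(Z, labels, tau=0.1):
    """Supervised contrastive loss over embeddings Z of shape (n, d):
    each anchor is attracted to all other samples sharing its label,
    relative to all non-anchor samples (temperature tau)."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit sphere
    sim = Z @ Z.T / tau
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & not_self[i]
        if not pos.any():
            continue
        log_den = np.log(np.exp(sim[i][not_self[i]]).sum())
        loss += -np.mean(sim[i][pos] - log_den)
    return loss / n

rng = np.random.default_rng(1)
Z = rng.normal(size=(6, 8))
y = np.array([0, 0, 0, 1, 1, 1])
print(round(supcon_loss(Z, y), 4))
```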

Distance Transform Guided Mixup for Alzheimer's Detection

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arxiv logopreprintMay 28 2025
Alzheimer's detection efforts aim to develop accurate models for early disease diagnosis. Significant advances have been achieved with convolutional neural networks and vision transformer based approaches. However, medical datasets suffer heavily from class imbalance, variations in imaging protocols, and limited dataset diversity, which hinder model generalization. To overcome these challenges, this study focuses on single-domain generalization by extending the well-known mixup method. The key idea is to compute the distance transform of MRI scans, separate them spatially into multiple layers and then combine layers stemming from distinct samples to produce augmented images. The proposed approach generates diverse data while preserving the brain's structure. Experimental results show generalization performance improvement across both ADNI and AIBL datasets.
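The key idea above — distance-transform the scan, split it into distance bands, and combine bands from distinct samples — can be sketched as follows. This is a simplified 2D toy: the band edges and alternation scheme are illustrative assumptions, and a real pipeline would use scipy.ndimage.distance_transform_edt rather than this brute-force version.

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform: for each foreground
    voxel, distance to the nearest background voxel. Background
    voxels keep distance 0. Fine only for tiny arrays."""
    bg = np.argwhere(~mask)
    dist = np.zeros(mask.shape)
    for p in np.argwhere(mask):
        dist[tuple(p)] = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()
    return dist

def layered_mix(img_a, img_b, dist, edges):
    """Split the image into distance bands and alternate bands
    between the two samples, preserving the overall structure."""
    out = np.empty_like(img_a)
    for k in range(len(edges) - 1):
        band = (dist >= edges[k]) & (dist < edges[k + 1])
        out[band] = (img_a if k % 2 == 0 else img_b)[band]
    return out

a = np.arange(36, dtype=float).reshape(6, 6)
b = 100.0 - a
mask = np.zeros((6, 6), dtype=bool)
mask[1:5, 1:5] = True               # toy "brain" mask
d = distance_transform(mask)
mixed = layered_mix(a, b, d, edges=[0.0, 1.0, 2.0, np.inf])
print(mixed.shape)
```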

Deep learning radiomics fusion model to predict visceral pleural invasion of clinical stage IA lung adenocarcinoma: a multicenter study.

Zhao J, Wang T, Wang B, Satishkumar BM, Ding L, Sun X, Chen C

pubmed logopapersMay 28 2025
To assess the predictive performance, risk stratification capabilities, and auxiliary diagnostic utility of radiomics, deep learning, and fusion models in identifying visceral pleural invasion (VPI) in lung adenocarcinoma. A total of 449 patients (female:male, 263:186; 59.8 ± 10.5 years) diagnosed with clinical stage IA lung adenocarcinoma (LAC) at two distinct hospitals were enrolled in the study and divided into a training cohort (n = 289) and an external test cohort (n = 160). Fusion models were constructed at the feature level and the decision level, respectively. A comprehensive analysis was conducted to assess the prediction ability and prognostic value of the radiomics, deep learning, and fusion models. The diagnostic performance of radiologists of varying seniority, with and without the assistance of the optimal model, was compared. The late fusion model demonstrated superior diagnostic performance (AUC = 0.812) compared to the clinical (AUC = 0.650), radiomics (AUC = 0.710), deep learning (AUC = 0.770), and early fusion (AUC = 0.586) models in the external test cohort. Multivariate Cox regression analysis showed that the VPI status predicted by the late fusion model was independently associated with patient disease-free survival (DFS) (p = 0.044). Furthermore, model assistance significantly improved radiologist performance, particularly for junior radiologists: the AUC increased by 0.133 (p < 0.001), reaching levels comparable to the senior radiologist without model assistance (AUC: 0.745 vs. 0.730, p = 0.790). The proposed decision-level (late fusion) model significantly reduces the risk of overfitting and demonstrates excellent robustness in multicenter external validation; it can predict VPI status in LAC, aid in prognostic stratification, and assist radiologists in achieving higher diagnostic performance.
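Decision-level (late) fusion combines each model's output probabilities rather than their intermediate features; a minimal sketch using simple averaging (the paper's actual combination rule is not specified in the abstract, and all numbers below are invented):

```python
import numpy as np

def late_fusion(prob_lists, weights=None):
    """Decision-level fusion: combine per-model predicted
    probabilities (each of shape (n_samples,)) by weighted
    averaging. Defaults to an unweighted mean."""
    P = np.stack(prob_lists)                 # (n_models, n_samples)
    if weights is None:
        weights = np.full(len(P), 1.0 / len(P))
    return weights @ P

# Three models' VPI probabilities for four patients (toy numbers)
p_clinical  = np.array([0.30, 0.70, 0.55, 0.20])
p_radiomics = np.array([0.40, 0.80, 0.45, 0.10])
p_deep      = np.array([0.35, 0.90, 0.60, 0.15])
fused = late_fusion([p_clinical, p_radiomics, p_deep])
print(np.round(fused, 3))
```

Because each base model is trained and frozen independently, the combiner has very few free parameters, which is consistent with the reduced overfitting risk the authors report for late fusion versus feature-level (early) fusion.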

Efficient feature extraction using light-weight CNN attention-based deep learning architectures for ultrasound fetal plane classification.

Sivasubramanian A, Sasidharan D, Sowmya V, Ravi V

pubmed logopapersMay 28 2025
Ultrasound fetal imaging is beneficial for monitoring prenatal development because it is affordable and non-intrusive. Nevertheless, fetal plane classification (FPC) remains challenging and time-consuming for obstetricians, since it depends on nuanced clinical aspects, which increases the difficulty of identifying relevant features of the fetal anatomy. Thus, to assist with accurate feature extraction, a lightweight artificial intelligence architecture leveraging convolutional neural networks and attention mechanisms is proposed to classify the largest benchmark ultrasound dataset. The approach fine-tunes lightweight EfficientNet feature-extraction backbones pre-trained on ImageNet1k to classify key fetal planes such as the brain, femur, thorax, cervix, and abdomen. Our methodology incorporates an attention mechanism to refine features and a 3-layer perceptron for classification, achieving superior performance with the highest Top-1 accuracy of 96.25%, Top-2 accuracy of 99.80% and F1-score of 0.9576. Importantly, the model has 40x fewer trainable parameters than existing benchmark ensemble or transformer pipelines, facilitating easy deployment on edge devices to help clinical practitioners with real-time FPC. The findings are also interpreted using Grad-CAM to carry out clinical correlation, aiding doctors with diagnostics and improving treatment plans for expectant mothers.

Integrating SEResNet101 and SE-VGG19 for advanced cervical lesion detection: a step forward in precision oncology.

Ye Y, Chen Y, Pan J, Li P, Ni F, He H

pubmed logopapersMay 28 2025
Cervical cancer remains a significant global health issue, with accurate differentiation between low-grade (LSIL) and high-grade squamous intraepithelial lesions (HSIL) crucial for effective screening and management. Current methods, such as Pap smears and HPV testing, often fall short in sensitivity and specificity. Deep learning models hold the potential to enhance the accuracy of cervical cancer screening but require thorough evaluation to ascertain their practical utility. This study compares the performance of two advanced deep learning models, SEResNet101 and SE-VGG19, in classifying cervical lesions using a dataset of 3,305 high-quality colposcopy images. We assessed the models based on their accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The SEResNet101 model demonstrated superior performance over SE-VGG19 across all evaluated metrics. Specifically, SEResNet101 achieved a sensitivity of 95%, a specificity of 97%, and an AUC of 0.98, compared to 89% sensitivity, 93% specificity, and an AUC of 0.94 for SE-VGG19. These findings suggest that SEResNet101 could significantly reduce both over- and under-treatment rates by enhancing diagnostic precision. Our results indicate that SEResNet101 offers a promising enhancement over existing screening methods, integrating advanced deep learning algorithms to significantly improve the precision of cervical lesion classification. This study advocates for the inclusion of SEResNet101 in clinical workflows to enhance cervical cancer screening protocols, thereby improving patient outcomes. Future work should focus on multicentric trials to validate these findings and facilitate widespread clinical adoption.
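The "SE" prefix in both models denotes squeeze-and-excitation channel attention: globally average-pool each channel, pass the result through a small two-layer bottleneck, and rescale the channels by the resulting gates. A minimal numpy sketch with illustrative random weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-excitation on a feature map x of shape (C, H, W).
    w1: (C//r, C) and w2: (C, C//r) are the bottleneck weights
    (reduction ratio r). Returns the channel-recalibrated map."""
    s = x.mean(axis=(1, 2))                  # squeeze: (C,)
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0))  # excitation gates in (0, 1)
    return x * e[:, None, None]              # rescale each channel

rng = np.random.default_rng(2)
C, r = 16, 4
x = rng.normal(size=(C, 8, 8))
w1 = rng.normal(size=(C // r, C)) * 0.1   # learned in practice
w2 = rng.normal(size=(C, C // r)) * 0.1   # learned in practice
y = se_block(x, w1, w2)
print(y.shape)
```

In SEResNet101 and SE-VGG19 such a block is inserted after convolutional stages, letting the network emphasize diagnostically informative channels at negligible parameter cost.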

C2 pars interarticularis length on the side of high-riding vertebral artery with implications for pars screw insertion.

Klepinowski T, Kałachurska M, Chylewski M, Żyłka N, Taterra D, Łątka K, Pala B, Poncyljusz W, Sagan L

pubmed logopapersMay 28 2025
C2 pars interarticularis length (C2PIL) required for pars screws has not been thoroughly studied in subjects with high-riding vertebral artery (HRVA). We aimed to measure C2PIL specifically on sides with HRVA, define short pars, determine optimal pars screw length, and incorporate C2PIL into HRVA clusters using machine learning algorithms. A clinical anatomical study based on cervical CT was conducted with a STROBE-compliant case-control design. HRVA was defined according to accepted criteria. Interobserver, intraobserver, and inter-software agreement coefficients for HRVA were adopted from our previous study. Sample size was estimated with the pwr package, and C2PIL was measured. The cut-off value and predictive statistics of C2PIL for HRVA were computed with the cutpointr package. Unsupervised machine learning clustering was applied with all three pars parameters. 345 potential screw insertion sites (PSIS) were grouped as HRVA (143 PSIS in 110 subjects) or controls (202 PSIS in 101 subjects). 68% of participants were female. The median C2PIL in the HRVA group was 13.7 mm with an interquartile range (IQR) of 1.7, whereas in controls it was 19.8 mm (IQR = 2.7). The optimal cut-off value of C2PIL discriminating HRVA was 16.06 mm, with a sensitivity of 96.5% and specificity of 99.3%. Therefore, clinically important short pars was defined as ≤ 16 mm, rounding to the nearest screw length. Two clusters were created incorporating the three parameters of the pars interarticularis. In preoperative planning, the identified C2PIL cut-off of ≤ 16 mm may assist surgeons in early recognition of HRVA. The average screw lengths of 14 mm for bicortical and 12 mm for safer unicortical purchase in HRVA cases may serve as practical intraoperative reference points, particularly in situations requiring rapid decision-making or when navigation systems are unavailable. Moreover, C2PIL complements the classic HRVA parameters within the dichotomized clustering framework.
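The cutpointr package selects cut-offs by maximizing a metric such as Youden's J (sensitivity + specificity − 1); a minimal sketch of that approach, with toy values loosely echoing the reported medians (not the study's data):

```python
import numpy as np

def youden_cutpoint(values, is_case):
    """Pick the threshold maximizing Youden's J = sens + spec - 1.
    Here a *low* value (short pars) indicates HRVA, so a sample is
    classified positive when value <= threshold."""
    best = (-1.0, None, None, None)
    for t in np.unique(values):
        pred = values <= t
        sens = (pred & is_case).sum() / is_case.sum()
        spec = (~pred & ~is_case).sum() / (~is_case).sum()
        j = sens + spec - 1
        if j > best[0]:
            best = (j, t, sens, spec)
    return best  # (J, threshold, sensitivity, specificity)

# Toy C2PIL values (mm): HRVA sides vs. control sides
hrva = np.array([12.5, 13.0, 13.7, 14.2, 15.9])
ctrl = np.array([17.5, 18.9, 19.8, 20.4, 22.1])
vals = np.concatenate([hrva, ctrl])
case = np.array([True] * 5 + [False] * 5)
j, thr, sens, spec = youden_cutpoint(vals, case)
print(thr, sens, spec)  # → 15.9 1.0 1.0
```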

Artificial Intelligence Augmented Cerebral Nuclear Imaging.

Currie GM, Hawk KE

pubmed logopapersMay 28 2025
Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has significant potential to advance the capabilities of nuclear neuroimaging. The current and emerging applications of ML and DL in the processing, analysis, enhancement and interpretation of SPECT and PET imaging are explored for brain imaging. Key developments include automated image segmentation, disease classification, and radiomic feature extraction, including lower dimensionality first and second order radiomics, higher dimensionality third order radiomics and more abstract fourth order deep radiomics. DL-based reconstruction, attenuation correction using pseudo-CT generation, and denoising of low-count studies have a role in enhancing image quality. AI has a role in sustainability through applications in radioligand design and preclinical imaging while federated learning addresses data security challenges to improve research and development in nuclear cerebral imaging. There is also potential for generative AI to transform the nuclear cerebral imaging space through solutions to data limitations, image enhancement, patient-centered care, workflow efficiencies and trainee education. Innovations in ML and DL are re-engineering the nuclear neuroimaging ecosystem and reimagining tomorrow's precision medicine landscape.