
Ghasemi R, Islam N, Bayat S, Shabir M, Rahman S, Amin F, de la Torre I, Castilla ÁK, García DLRV

PubMed · Oct 8, 2025
Accurate diagnosis of brain tumors is critical for understanding prognosis in terms of tumor type, growth rate, location, removal strategy, and the overall well-being of the patient. Among the modalities used to detect and classify brain tumors, a computed tomography (CT) scan is often performed as an early-stage procedure for minor symptoms such as headaches. Automated procedures based on artificial intelligence (AI) and machine learning (ML) are used to detect and classify brain tumors in CT scan images, but the key challenges in achieving the desired outcome are model complexity and generalization. To address these issues, we propose a hybrid model that combines deep and hand-crafted features extracted from CT images. Although MRI is a common modality for brain tumor diagnosis, its high cost and longer acquisition time make CT scans a more practical choice for early-stage screening and widespread clinical use. The proposed framework comprises several stages: image acquisition, pre-processing, feature extraction, feature selection, and classification. The hybrid architecture fuses features from ResNet50, AlexNet, LBP, HOG, and median intensity, which are classified using a multilayer perceptron (MLP). Relevant features are selected with the SelectKBest algorithm and a scoring function, optimizing model performance, and data augmentation is incorporated to handle imbalanced datasets. Unlike most existing hybrid approaches, which primarily target MRI-based brain tumor classification, our method is specifically designed for CT scan images, addressing their distinct noise patterns and lower soft-tissue contrast. To the best of our knowledge, this is the first work to integrate LBP, HOG, median intensity, and deep features from both ResNet50 and AlexNet in a structured fusion pipeline for CT brain tumor classification. Tested on data from multiple sources, the proposed hybrid model achieved an accuracy of 94.82%, precision of 94.52%, specificity of 98.35%, and sensitivity of 94.76%, competitive with state-of-the-art models. While MRI-based models often report higher accuracies, the proposed model's 94.82% accuracy on CT scans is within 3-4% of leading MRI-based approaches, demonstrating strong generalization despite the modality difference. By combining hand-crafted and deep learning features, the model effectively improves brain tumor detection and classification accuracy in CT scans and has potential for clinical application, aiding early and accurate diagnosis. Unlike MRI, which is often time-intensive and costly, CT scans are more accessible and faster to acquire, making them suitable for early-stage screening and emergency diagnostics and reinforcing the practical value of the proposed model in real-world healthcare settings.
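
A minimal sketch of the fusion, selection, and classification stages described above, assuming scikit-learn; random arrays stand in for the extracted ResNet50, AlexNet, LBP, HOG, and median-intensity descriptors, and the feature dimensions, scoring function, and k value are illustrative assumptions rather than the authors' settings.

```python
# Sketch of the fusion -> SelectKBest -> MLP pipeline; feature extraction is
# stubbed with random arrays standing in for the real descriptors.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400  # number of CT images

# Hypothetical per-image feature blocks (dimensions are assumptions).
resnet_feats = rng.normal(size=(n, 2048))   # ResNet50 pooled embeddings
alexnet_feats = rng.normal(size=(n, 4096))  # AlexNet fc7 embeddings
lbp_feats = rng.normal(size=(n, 59))        # uniform LBP histogram
hog_feats = rng.normal(size=(n, 324))       # HOG descriptor
median_intensity = rng.normal(size=(n, 1))  # median pixel intensity
y = rng.integers(0, 2, size=n)              # tumor vs. no-tumor labels

# Structured fusion: concatenate hand-crafted and deep features.
X = np.hstack([resnet_feats, alexnet_feats, lbp_feats, hog_feats, median_intensity])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# SelectKBest with an ANOVA F-score, then an MLP classifier head.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=500),  # k is an assumed value
    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0),
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```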

Özdemir EY, Koç C, Özyurt F

PubMed · Oct 8, 2025
Alzheimer's disease is a progressive neurodegenerative disorder that is challenging to diagnose at an early stage. Affecting over 55 million people worldwide, its prevalence is expected to rise sharply by 2030. The use of artificial intelligence (AI) techniques has become increasingly important to improve the speed and accuracy of diagnosis. In this study, we propose the NCA-Enhanced Voting Algorithm for Alzheimer's Classification (NCA-EVA) to support computer-aided diagnosis. A total of 66 models were trained for four-class data and six models for two-class data. The proposed method successfully classified all four stages of Alzheimer's disease, achieving 98.97% accuracy in four-class classification and 99.87% accuracy in binary classification. Moreover, with a processing time of just 1.26 s, NCA-EVA is approximately 1200 times faster than a comparable study using NCA-based feature selection. These findings demonstrate that Alzheimer's diagnosis can be performed both quickly and with high accuracy, and the proposed approach has potential applications in other healthcare data analysis tasks.
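
The abstract does not detail the base learners behind the voting step, so the following scikit-learn sketch is only one plausible reading of an NCA-then-voting pipeline: a supervised Neighborhood Components Analysis transform feeding a soft-voting ensemble of assumed classifiers (kNN, SVM, random forest).

```python
# Hedged sketch in the spirit of NCA-EVA; the base learners and NCA
# dimensionality below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for extracted MRI features across four Alzheimer's stages.
X, y = make_classification(n_samples=600, n_features=64, n_informative=20,
                           n_classes=4, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm", SVC(probability=True, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities
)

# NCA learns a supervised linear transform that sharpens class
# neighborhoods before the ensemble votes.
model = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=16, random_state=0),
    voter,
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```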

Zhu F, Zhang Y, Liang G, Nan J, Li Y, Han C, Sun D, Wang Z, Zhao C, Zhou W, He J, Xu Y, Cheang I, Zhu X, Zhou Y, Zhou W

PubMed · Oct 8, 2025
Early and accurate diagnosis of pulmonary hypertension (PH), including differentiating pre-capillary from post-capillary PH, is crucial for guiding effective clinical management. This study developed and validated a deep learning-based diagnostic model to classify patients into non-PH, pre-capillary PH, or post-capillary PH categories. A retrospective dataset from 204 patients (112 pre-capillary PH, 32 post-capillary PH, and 60 non-PH controls) was collected at the First Affiliated Hospital of Nanjing Medical University, with diagnoses confirmed by right heart catheterization (RHC). Patients were randomly divided into training (186 patients, 90%) and testing sets (18 patients, 10%) stratified by diagnostic category. We trained and evaluated the model using 35 repeated random splits. The proposed deep learning model combined graph convolutional networks (GCN), convolutional neural networks (CNN), and Transformers to analyze multimodal data, including cine short-axis (SAX) sequences, four-chamber (4CH) sequences, and clinical parameters. Across test splits, the model achieved an overall area under the receiver operating characteristic curve (AUC) of 0.814 ± 0.06 and accuracy (ACC) of 0.734 ± 0.06 (mean ± SD). Class-specific AUCs were 0.745 ± 0.11 for non-PH, 0.863 ± 0.06 for pre-capillary PH, and 0.834 ± 0.10 for post-capillary PH, indicating good discriminative ability. This study demonstrated three-class PH classification using multimodal inputs. By fusing imaging and clinical data, the model may support accurate and timely clinical decision-making in PH.
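
A minimal PyTorch sketch of the multimodal fusion idea: one encoder per input stream (SAX cine, 4CH cine, clinical parameters) with the embeddings concatenated for three-class prediction. Layer sizes, frame counts, and Transformer settings are assumptions, and the paper's GCN component is omitted for brevity; this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class CineEncoder(nn.Module):
    """CNN per frame + Transformer over the frame sequence."""
    def __init__(self, emb=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb),
        )
        layer = nn.TransformerEncoderLayer(d_model=emb, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):  # x: (batch, frames, 1, H, W)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame embeddings
        return self.transformer(f).mean(dim=1)        # pooled sequence embedding

class FusionModel(nn.Module):
    def __init__(self, n_clinical=10, emb=64, n_classes=3):
        super().__init__()
        self.sax = CineEncoder(emb)
        self.ch4 = CineEncoder(emb)
        self.clin = nn.Sequential(nn.Linear(n_clinical, emb), nn.ReLU())
        self.head = nn.Linear(3 * emb, n_classes)  # non-PH / pre- / post-capillary

    def forward(self, sax, ch4, clinical):
        z = torch.cat([self.sax(sax), self.ch4(ch4), self.clin(clinical)], dim=1)
        return self.head(z)

model = FusionModel()
logits = model(torch.randn(2, 25, 1, 64, 64),  # SAX cine: 25 frames (assumed)
               torch.randn(2, 25, 1, 64, 64),  # 4CH cine
               torch.randn(2, 10))             # clinical parameters
print(logits.shape)  # torch.Size([2, 3])
```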

Buzdugan S, Mazher M, Puig D

PubMed · Oct 8, 2025
Glioblastoma (GBM) remains one of the most formidable brain malignancies, characterized by a heterogeneous genetic profile that significantly influences patient prognosis. Per the 2021 WHO central nervous system classification, GBM is defined as an isocitrate dehydrogenase (IDH) wild-type diffuse astrocytic tumor. We analyzed two multi-institutional cohorts, UPENN-GBM (644 patients) and UCSF-PDGM (420 patients); after excluding the 116 and 42 IDH-mutant records, 528 and 378 wild-type cases remained for modelling. MGMT promoter methylation, present in 43% of GBM cases, correlates with enhanced survival outcomes, demonstrating a median survival of 504 days versus 329 days in unmethylated cases. In this study, we present a novel integration of imaging phenotypes, clinical characteristics, and molecular markers through the application of advanced machine learning methodologies, including Random Forest, XGBoost, LightGBM, and an optimized dense neural network (Dense NN). This integrative approach aims to refine survival prediction in GBM patients. MRI data were meticulously processed using the MRIPreprocessor tool and the radiomics Python library, facilitating the extraction of high-dimensional radiomic features. Our findings reveal that the proposed custom Dense NN model outperformed traditional tree-based algorithms, with the Dense NN achieving a concordance index (CI) of 0.86 on the UPENN-GBM dataset and 0.83 on the UCSF-PDGM dataset. The optimized Dense NN architecture features three hidden layers with 256, 128, and 64 units respectively, employing ReLU activation, L1/L2 regularization to mitigate overfitting, batch normalization to stabilize training, and dropout for improved generalization. This specific configuration was determined through hyperparameter tuning using techniques like RandomizedSearchCV. This integrative, non-invasive methodology provides a more nuanced assessment of tumor biology, thereby advancing the development of personalized therapeutic strategies. Our results underscore the transformative potential of artificial intelligence in delineating disease trajectories and optimizing treatment paradigms. Moreover, this research establishes a robust framework for future investigations in glioblastoma survival prediction, illustrating the efficacy of combining clinical, genetic, and imaging data to enhance prognostic accuracy within precision medicine paradigms for GBM patients.
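
The abstract is unusually specific about the Dense NN configuration (three hidden layers of 256, 128, and 64 units with ReLU, L1/L2 regularization, batch normalization, and dropout), so it can be sketched directly in Keras; the regularization strengths, dropout rate, and the simplified single-output MSE objective below are assumptions, since the paper's tuned hyperparameter values and survival loss are not given here.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_dense_nn(n_features):
    reg = regularizers.l1_l2(l1=1e-5, l2=1e-4)  # assumed strengths
    model = keras.Sequential([keras.Input(shape=(n_features,))])
    # Three hidden layers as described: 256 -> 128 -> 64 units.
    for units in (256, 128, 64):
        model.add(layers.Dense(units, activation="relu", kernel_regularizer=reg))
        model.add(layers.BatchNormalization())  # stabilizes training
        model.add(layers.Dropout(0.3))          # assumed rate
    model.add(layers.Dense(1))  # risk score / survival estimate (simplification)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_dense_nn(n_features=200)  # e.g., radiomic + clinical features
model.summary()
```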

Yadav NL, Singh S, Kumar R, Nishad DK

PubMed · Oct 8, 2025
Accurate and efficient classification of lung diseases from medical images remains a significant challenge in computer-aided diagnosis systems. This research presents a novel approach integrating transfer learning techniques with fuzzy decision support systems for multi-class lung disease classification. We compare the performance of three pre-trained CNN architectures-VGG16, VGG19, and ResNet50-enhanced with a fuzzy logic decision layer. The proposed methodology employs transfer learning to leverage knowledge from large-scale datasets while adapting to the specific characteristics of lung disease images. A k-symbol Lerch transcendent function is implemented for image enhancement during preprocessing, significantly improving feature extraction capabilities by 23.4% in contrast enhancement and 18.7% in feature visibility. The fuzzy decision support system addresses inherent uncertainties in medical image classification through membership functions and rule-based inference mechanisms specifically designed for lung pathology features. Experimental evaluation was conducted on a comprehensive dataset of 8,409 chest X-ray images across six disease classes: COVID-19, Pneumonia, Tuberculosis, Lung Opacity, Cardiomegaly, and Normal cases. Results demonstrate that the ResNet50-based model with fuzzy integration achieves superior classification accuracy of 98.7%, sensitivity of 98.4%, and specificity of 98.8%, outperforming standard implementations of VGG16 (97.8% accuracy) and VGG19 (98.2% accuracy). The proposed approach shows particular strength in handling borderline cases where traditional CNN confidence falls below 75%, achieving 8.4% improvement in uncertain case classification. Statistical significance testing confirms meaningful performance gains (p < 0.05) across all architectures, with ResNet50 showing the most substantial enhancement (p = 0.0018). The fuzzy inference system activates an average of 8.4 rules per classification decision, providing transparent reasoning pathways that enhance clinical interpretability while maintaining real-time processing capability (0.23 s per image). This research contributes to advancing automated lung disease diagnosis systems with improved accuracy, uncertainty handling, and clinical interpretability for computer-aided diagnostic applications.
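
As a toy illustration of how a fuzzy decision layer can handle the borderline cases mentioned above (CNN confidence below 75%), the sketch below applies a triangular membership function and a single hand-written rule to CNN class probabilities; the paper's actual rule base (averaging 8.4 activated rules per decision) is far richer, and everything here is invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuzzy_adjust(probs):
    """IF top-class confidence is 'low' THEN hedge between the top two classes."""
    low_conf = tri(probs.max(), 0.0, 0.5, 0.75)  # membership in 'low confidence'
    order = np.argsort(probs)[::-1]
    adjusted = probs.copy()
    # Shift part of the probability mass toward the runner-up class.
    transfer = 0.5 * low_conf * (probs[order[0]] - probs[order[1]])
    adjusted[order[0]] -= transfer
    adjusted[order[1]] += transfer
    return adjusted / adjusted.sum()

# Borderline six-class case (COVID-19, Pneumonia, TB, Opacity, Cardiomegaly, Normal).
cnn_probs = np.array([0.40, 0.35, 0.10, 0.08, 0.04, 0.03])
print(fuzzy_adjust(cnn_probs))
```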

Fujita S, Polak D, Nickel D, Splitthoff DN, Huang Y, Gil N, Buathong S, Chiang CH, Lo WC, Clifford B, Cauley SF, Conklin J, Huang SY

PubMed · Oct 8, 2025
Motion artifacts remain a key limitation in brain MRI, particularly during 3D acquisitions in cognitively impaired patients. Most deep learning (DL) reconstruction techniques improve signal-to-noise ratio but lack explicit mechanisms to correct for motion. This study aims to validate a DL reconstruction method that integrates retrospective motion correction into the reconstruction pipeline for 3D T1-weighted brain MRI. This prospective, intra-individual comparison study included a controlled-motion cohort of healthy volunteers and a clinical cohort of patients undergoing evaluation for memory loss. Each cohort was scanned at distinct imaging sites between October 2022 and August 2023 in staggered periods. All participants underwent 4-fold under-sampled 3D magnetization-prepared rapid gradient-echo imaging with integrated Scout Accelerated Motion Estimation and Reduction (SAMER) acquisition. Image volumes were reconstructed using standard-of-care methods and the proposed DL approach. Quantitative morphometric accuracy was assessed by comparing brain segmentation results of instructed-motion scans to motion-free reference scans in the healthy volunteers. Image quality was rated by two board-certified neuroradiologists using a five-point Likert scale. Statistical analysis included Wilcoxon tests and intraclass correlation coefficients. A total of 41 participants (15 women [37%]; mean age, 58 years) and 154 image volumes were evaluated. The DL-based method with integrated motion correction significantly reduced segmentation error under moderate and severe motion (12.4% to 3.5% and 44.2% to 12.5%, respectively; P < .001). Visual ratings showed improved scores across all criteria compared with standard reconstructions (overall image quality, 4.26 ± 0.72 vs. 3.59 ± 0.82; P < .001). In 47% of cases, motion artifact severity was improved following DL-based processing. Inter-reader agreement ranged from moderate to substantial. Motion-informed DL reconstruction improved both morphometric accuracy and perceived image quality in 3D T1-weighted brain MRI. This technique may enhance diagnostic utility and reduce scan failure rates in motion-prone patients with cognitive impairment. AD = Alzheimer's disease; DL = deep learning; ICC = intra-class correlation coefficient; SAMER = scout accelerated motion estimation and reduction.

Safari M, Wang S, Li Q, Eidex Z, Qiu RLJ, Chang CW, Mao H, Yang X

PubMed · Oct 8, 2025
Motion artifacts in brain MRI, mainly from rigid head motion, degrade image quality and hinder downstream applications. Conventional methods to mitigate these artifacts, including repeated acquisitions or motion tracking, impose workflow burdens. This study introduces Res-MoCoDiff, an efficient denoising diffusion probabilistic model specifically designed for MRI motion artifact correction. Res-MoCoDiff exploits a novel residual error shifting mechanism during the forward diffusion process to incorporate information from motion-corrupted images. This mechanism allows the model to simulate the evolution of noise with a probability distribution closely matching that of the corrupted data, enabling a reverse diffusion process that requires only four steps. The model employs a U-net backbone, with attention layers replaced by Swin Transformer blocks, to enhance robustness across resolutions. Furthermore, the training process integrates a combined l1+l2 loss function, which promotes image sharpness and reduces pixel-level errors. Res-MoCoDiff was evaluated on both an in-silico dataset generated using a realistic motion simulation framework and an in-vivo MR-ART dataset. Comparative analyses were conducted against established methods, including CycleGAN, Pix2pix, and a diffusion model with a vision transformer backbone (MT-DDPM), using quantitative metrics such as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean squared error (NMSE). The proposed method demonstrated superior performance in removing motion artifacts across minor, moderate, and heavy distortion levels. Res-MoCoDiff consistently achieved the highest SSIM and the lowest NMSE values, with a PSNR of up to 41.91±2.94 dB for minor distortions. Notably, the average sampling time was reduced to 0.37 seconds per batch of two image slices, compared with 101.74 seconds for conventional approaches. Res-MoCoDiff offers a robust and efficient solution for correcting MRI motion artifacts, preserving fine structural details while significantly reducing computational overhead. Its speed and restoration fidelity underscore its potential for integration into clinical workflows, enhancing diagnostic accuracy and patient care.
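
The combined l1+l2 training objective mentioned above is simple enough to write out; a short PyTorch sketch follows, with the equal weighting of the two terms being an assumption.

```python
import torch
import torch.nn.functional as F

def combined_l1_l2_loss(pred, target, w1=1.0, w2=1.0):
    # l1 term promotes sharpness; l2 term penalizes large pixel-level errors.
    return w1 * F.l1_loss(pred, target) + w2 * F.mse_loss(pred, target)

pred = torch.randn(2, 1, 64, 64)    # motion-corrected slices
target = torch.randn(2, 1, 64, 64)  # motion-free references
print(combined_l1_l2_loss(pred, target))
```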

Schott B, Klanecek Z, Santoro-Fernandes V, Tie X, Salgado-Maldonado SI, Deatsch A, Jeraj R

PubMed · Oct 8, 2025
Deep learning models are prone to failure when inferring upon out-of-distribution (OOD) data, i.e., data whose features fundamentally differ from those in the training set. Existing OOD measures often lack sensitivity to the subtle image variations encountered within clinical settings. In this work, we investigate a post hoc, information-based approach to OOD detection, termed InfoOOD, which iteratively quantifies the amount of embedded feature information that can be shared between the training data and test data without degrading the model output. Approach. Abdominal CT images from patients with metastatic liver lesions were used. A 3D U-Net was trained to segment liver organs and lesions using N=157 images. Physics-based artifacts (low dose, sparse view angles, and ring artifacts) were simulated on a separate set of N=40 test images at three intensity magnitudes. Segmentation performance and the ability of the InfoOOD measure to detect the artifact-induced OOD data were evaluated. An additional N=131 test images were used to assess the correlation between the InfoOOD measure and segmentation model performance metrics. In all evaluations, InfoOOD was compared with established embedded feature-based and reconstruction-based OOD detection methods. Results. Artifact simulation significantly degraded segmentation model performance across all artifact types and magnitudes (p<0.001), with model performance worsening as artifact magnitude increased. The InfoOOD measure consistently outperformed the embedded feature-based measures in detecting OOD data (e.g., AUC=0.93 vs. AUC=0.57 for the strong ring artifact) and surpassed the reconstruction-based measure across weak magnitude artifacts (e.g., AUC=0.75 vs. AUC=0.61 for the weak sparse view artifact). The InfoOOD measure also achieved stronger negative correlations with segmentation performance metrics (e.g., ρ=-0.52 vs. ρ≥-0.11 for the lesion sensitivity metric). In both assessments, InfoOOD measure performance increased considerably with information bottleneck optimization iterations. Significance. This work introduces and validates a novel, highly sensitive, and clinically relevant information-theoretic approach for medical image OOD detection, supporting the safe deployment of deep learning models in clinical settings.
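
For context, a sketch of the kind of embedded feature-based OOD baseline that InfoOOD is compared against: score each test case by the Mahalanobis distance of a pooled network embedding from the training-feature distribution, then evaluate detection with AUC. Random vectors stand in for real U-Net features and the embedding dimensionality is an assumption; this illustrates the baseline, not InfoOOD itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d = 64                                          # assumed embedding size
train_emb = rng.normal(size=(157, d))           # in-distribution embeddings
id_test = rng.normal(size=(40, d))              # clean test images
ood_test = rng.normal(loc=0.8, size=(40, d))    # artifact-corrupted images

# Fit a Gaussian to the training embeddings.
mu = train_emb.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(train_emb, rowvar=False))

def mahalanobis(x):
    delta = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", delta, cov_inv, delta))

scores = np.concatenate([mahalanobis(id_test), mahalanobis(ood_test)])
labels = np.concatenate([np.zeros(40), np.ones(40)])  # 1 = OOD
print("OOD detection AUC:", roc_auc_score(labels, scores))
```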

Lu J, Shen L, Zhou C, Bi Z, Ye X, Zhao Z, Zeng M, Wang M

PubMed · Oct 8, 2025
To evaluate the image quality of deep learning reconstruction (DLR)-based ultra-low-dose (ULD) CT pulmonary angiography (CTPA) images and determine whether artificial intelligence (AI) software can improve radiologists' diagnostic performance for detecting pulmonary embolism (PE) on ULD images. This prospective two-center study enrolled 144 patients with suspected PE who underwent CTPA from July to October 2024. Patients were randomized equally into two groups. Images in the routine-dose (RD) group were reconstructed using hybrid iterative reconstruction (HIR), while ULD images were reconstructed using both HIR and DLR. A subset of 56 participants (1:1 PE to non-PE ratio) in the ULD group was randomly selected and evaluated by three radiologists with and without AI software. The reference standard was established by expert consensus. Interrater reliability was determined by the intraclass correlation coefficient (ICC), and diagnostic results and interpretation times were recorded. There were no significant differences in demographics between the two groups. ULD-DLR images exhibited significantly higher objective and subjective image quality than both RD-HIR and ULD-HIR images. Interobserver agreement was moderate for RD-HIR (ICC = 0.77) and excellent for ULD-DLR images (ICC = 0.84). With AI assistance, radiologists' PE detection approached near-perfect accuracy in both ULD cohorts, outperforming unassisted readings (unassisted vs. AI-assisted: sensitivity 79.8% vs. 91.7% and specificity 95.5% vs. 99.2% for ULD-HIR; sensitivity 90.5% vs. 96.4% and specificity 95.8% vs. 100.0% for ULD-DLR). AI assistance reduced interpretation time by 19.7% for ULD-HIR and 15.6% for ULD-DLR scans. The effective dose in the ULD group was 74% lower than in the RD group. DLR maintains CTPA image quality even at an ultra-low dose level, ensuring the accuracy and efficiency of AI-assisted PE diagnosis while improving radiation safety.

Nabeta T, Bär S, Maaniitty T, Kärpijoki H, Bax JJ, Saraste A, Knuuti J

PubMed · Oct 8, 2025
A novel artificial intelligence-guided quantitative computed tomography ischemia algorithm (AI-QCT-ischemia) comprises a machine-learned method using atherosclerosis and vascular morphology features from coronary computed tomography angiography (CCTA) images to predict myocardial ischemia. This study evaluates the diagnostic performance of AI-QCT-ischemia compared to standard CCTA interpretation in detecting myocardial ischemia. Patients with suspected coronary artery disease (CAD) undergoing CCTA were analyzed, with ischemia detected by stress [15O]H2O positron emission tomography (PET) as the reference. AI-QCT-ischemia analysis was successfully completed in 84% of patients undergoing CCTA. A total of 1746 patients (mean age 62 ± 10 years, 44% male) were included. In visual CCTA reading, 518 (30%) patients had obstructive CAD, defined as diameter stenosis of ≥50%. Myocardial ischemia on PET was detected in 325 (19%) patients, whereas AI-QCT-ischemia was positive in 430 (25%) patients. The diagnostic accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of AI-QCT-ischemia for the assessment of myocardial ischemia were 87%, 81%, 88%, 61%, and 95%, respectively, compared to 86%, 93%, 85%, 58%, and 98% for visual CCTA reading. AI-QCT-ischemia demonstrated higher diagnostic accuracy, specificity, and positive predictive value, but lower sensitivity and negative predictive value than visual CCTA reading (p < 0.001). Combining AI-QCT-ischemia with visual CCTA reading improved ischemia discrimination compared with visual CCTA reading alone (area under the receiver operating characteristic curve 0.899 vs. 0.868, p < 0.001). Among patients with suspected CAD, the AI-guided, CCTA-derived ischemia algorithm demonstrated improved specificity compared with visual CCTA reading, but at the cost of decreased sensitivity, resulting in a slight improvement in diagnostic accuracy for predicting PET-defined myocardial ischemia. These findings suggest that AI-QCT-ischemia may support clinicians in refining diagnostic decision-making and streamlining patient selection for further testing.
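
The five reported metrics all reduce to a 2x2 confusion matrix against the PET reference; the sketch below computes them, with counts back-calculated only approximately from the reported prevalence (325/1746 ischemic, 430 AI-positive) for illustration.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from 2x2 confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # ischemia correctly detected
        "specificity": tn / (tn + fp),  # non-ischemia correctly ruled out
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Approximate counts implied by the reported rates (illustrative, not exact).
print(diagnostic_metrics(tp=263, fp=167, fn=62, tn=1254))
```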