
Peng X, Zhu B, Shao J

PubMed · Oct 13, 2025
Magnetic resonance imaging (MRI) plays an important role in the diagnosis and treatment of hippocampal sclerosis. However, the exam is challenging in pediatric patients because of long scan times and variable image quality. This study aims to compare conventionally reconstructed MRI and accelerated sequences with and without deep learning-based reconstruction (DLR) with regard to image quality and diagnostic performance in pediatric patients with hippocampal sclerosis. A total of 68 pediatric patients with proven or suspected temporal lobe epilepsy with hippocampal sclerosis who underwent the recommended structural epilepsy MRI protocol were included in this study. The MRI examination comprised standard sequences and accelerated sequences. Standard sequences were reconstructed using the conventional pipeline, while accelerated sequences were reconstructed using both the conventional pipeline and the DLR pipeline. Two experienced pediatric radiologists independently rated the three reconstructed image sets on a 5-point scale for image quality, anatomic structure visibility, motion artifact, truncation artifact, image noise, and detectability of hippocampal abnormalities. Signal-to-noise ratio (SNR) measurements of the hippocampus were performed in all sequences and compared between the three sets of images. Inter-reader agreement and agreement between image sets for detecting hippocampal abnormalities were assessed using Cohen's kappa. Images reconstructed with DLR received significantly higher scores for overall image quality, lesion detectability, and image noise than conventional reconstructions or accelerated reconstructions without DLR (all P < 0.05), while artifact scores did not differ significantly between the three groups (all P > 0.05). The SNR for all sequences with DLR was significantly higher than for reconstructions without DLR (all P < 0.001). Inter-reader agreement was almost perfect (κ = 0.803-0.963) for the imaging manifestations, and agreement between image sets ranged from substantial to almost perfect (κ = 0.778-0.965). Accelerated sequences with DLR provide a 44% scan-time reduction with subjective image quality, artifacts, and diagnostic performance similar to those of conventionally reconstructed sequences.
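For readers who want to reproduce the agreement and noise analyses, the sketch below shows one way to compute Cohen's kappa between two raters and an ROI-based SNR in Python. The ratings are made up, and the SNR definition (mean signal over background noise standard deviation) is one common convention rather than necessarily the study's exact measurement protocol.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical 5-point quality ratings from the two pediatric radiologists.
reader1 = np.array([5, 4, 5, 3, 4, 5, 4, 5])
reader2 = np.array([5, 4, 4, 3, 4, 5, 4, 5])
print(f"inter-reader kappa = {cohen_kappa_score(reader1, reader2):.3f}")

def roi_snr(image: np.ndarray, signal_mask: np.ndarray, noise_mask: np.ndarray) -> float:
    """SNR as mean hippocampal signal over the standard deviation of
    background noise; one common definition, assumed here."""
    return float(image[signal_mask].mean() / image[noise_mask].std())
```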

Ardila CM, Vivares-Builes AM, Pineda-Vélez E

PubMed · Oct 13, 2025
This systematic review and meta-analysis aimed to synthesize diagnostic and prognostic performance metrics of machine learning (ML)-based biomarker models in oral squamous cell carcinoma (OSCC) and to integrate biological insights through a functional metasynthesis. Following PRISMA 2020 guidelines, a comprehensive search was conducted up to July 2025. Eligible studies applied ML algorithms to molecular or imaging biomarkers from OSCC patients. Data synthesis incorporated meta-analysis when endpoints and designs were sufficiently comparable; otherwise, study-level results were summarized narratively. Twenty-five studies encompassing 4408 patients were included. Diagnostic performance was strongest for salivary DNA methylation (AUC up to 1.00), metabolomics (AUC ≈ 0.92), and FTIR imaging (AUC ≈ 0.91), while autoantibody and microbiome models showed more variable accuracy. Prognostic models based on immune-feature signatures outperformed conventional scores, while multimodal approaches integrating imaging and metabolomics retained strong performance under external validation. Models based on pathomics and MRI radiomics also achieved clinically meaningful accuracy across independent cohorts. Functional metasynthesis revealed convergent biological processes (metabolic reprogramming, immune-inflammatory remodeling, microbiome dysbiosis, and epithelial/extracellular matrix disruption) that underpin predictive accuracy. ML models leveraging molecular and imaging biomarkers show strong potential to improve OSCC diagnosis, risk stratification, and prognosis, particularly through multimodal integration.
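Where study-level estimates were comparable enough to pool, a random-effects model is the standard machinery. As a generic illustration (the review's actual effect metric and software are not specified here), the sketch below implements DerSimonian-Laird pooling of placeholder effect sizes and variances.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling via DerSimonian-Laird; inputs could be,
    e.g., logit-transformed AUCs with their variances (placeholders here)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()            # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = (w_re * y).sum() / w_re.sum()
    return pooled, w_re.sum() ** -0.5, tau2       # estimate, SE, tau^2

pooled, se, tau2 = dersimonian_laird([1.8, 2.3, 2.0], [0.04, 0.09, 0.06])
print(f"pooled = {pooled:.2f} +/- {1.96 * se:.2f} (tau^2 = {tau2:.3f})")
```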

Kai Han, Siqi Ma, Chengxuan Qian, Jun Chen, Chongwen Lyu, Yuqing Song, Zhe Liu

arXiv preprint · Oct 13, 2025
Accurate segmentation of tumors and adjacent normal tissues in medical images is essential for surgical planning and tumor staging. Although foundation models generally perform well in segmentation tasks, they often struggle to focus on foreground areas in complex, low-contrast backgrounds, where some malignant tumors closely resemble normal organs, complicating contextual differentiation. To address these challenges, we propose the Foreground-Aware Spectrum Segmentation (FASS) framework. First, we introduce a foreground-aware module that amplifies the distinction between foreground targets and the surrounding background, allowing the model to concentrate more effectively on target areas. Next, a feature-level frequency enhancement module, based on the wavelet transform, extracts discriminative high-frequency features to improve boundary recognition and detail perception. Finally, we introduce an edge constraint module to preserve geometric continuity in segmentation boundaries. Extensive experiments on multiple medical datasets demonstrate superior performance across all metrics, validating the effectiveness of our framework, particularly its robustness under complex conditions and its recognition of fine structures. Our framework significantly enhances segmentation of low-contrast images, paving the way for applications in more diverse and complex medical imaging scenarios.
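As a rough sketch of what a wavelet-based, feature-level frequency enhancement module can look like, the PyTorch snippet below performs a single-level Haar decomposition with depthwise convolutions, applies a learnable gain to the three high-frequency subbands, and inverts the transform. The actual FASS module is more elaborate; this only illustrates the underlying idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarHFEnhance(nn.Module):
    """Single-level Haar DWT, learnable gain on the LH/HL/HH subbands,
    then the exact inverse DWT (Haar is orthonormal, so the transposed
    convolution with the same filter bank reconstructs the input)."""

    def __init__(self):
        super().__init__()
        lo = torch.tensor([1.0, 1.0]) / 2 ** 0.5
        hi = torch.tensor([1.0, -1.0]) / 2 ** 0.5
        bank = torch.stack([torch.outer(a, b) for a in (lo, hi) for b in (lo, hi)])
        self.register_buffer("bank", bank.unsqueeze(1))  # (4, 1, 2, 2): LL, LH, HL, HH
        self.gain = nn.Parameter(torch.ones(3))          # one gain per HF subband

    def forward(self, x):                                # x: (B, C, H, W), H and W even
        b, c, h, w = x.shape
        k = self.bank.repeat(c, 1, 1, 1)                 # depthwise filter bank
        sub = F.conv2d(x, k, stride=2, groups=c)         # (B, 4C, H/2, W/2)
        sub = sub.view(b, c, 4, h // 2, w // 2)
        g = torch.cat([torch.ones(1, device=x.device), self.gain])
        sub = (sub * g.view(1, 1, 4, 1, 1)).view(b, 4 * c, h // 2, w // 2)
        return F.conv_transpose2d(sub, k, stride=2, groups=c)

feat = torch.randn(2, 8, 32, 32)                         # a dummy feature map
print(HaarHFEnhance()(feat).shape)                       # torch.Size([2, 8, 32, 32])
```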

Su Z, Cai B, Li L, Huang Z, Fu Y

PubMed · Oct 13, 2025
This investigation focused on developing a predictive clinical tool that combines biparametric MRI-derived PI-RADS v2.1 assessments with patient-specific biomarkers. The model was designed to optimize prostate cancer detection reliability in individuals with prostate-specific antigen (PSA) concentrations below 20 ng/mL, particularly targeting the diagnostic challenges presented by this intermediate PSA range. By systematically integrating imaging characteristics with laboratory parameters, the research sought to establish a practical decision-making framework for clinicians managing suspected prostate malignancies. A total of 218 patients with confirmed pathological diagnoses between January 2020 and December 2023 underwent retrospective review. The cohort was divided into a training cohort of 153 cases and a validation cohort of 65 cases. For nomogram predictor selection, statistical modeling incorporated machine learning approaches, including LASSO regression with ten-fold cross-validation, supplemented by univariate and multivariate logistic regression analyses to identify independent predictive factors. The nomogram's predictive performance was evaluated by determining the area under the receiver operating characteristic curve (AUC), developing calibration plots, and implementing decision curve analysis (DCA). Among patients with PSA concentrations ≤ 20 ng/mL, four parameters (PI-RADS v2.1 classification, free PSA ratio (%fPSA), diffusion-weighted imaging-derived apparent diffusion coefficient (ADC) values, and serum hemoglobin concentration) emerged as independent predictive factors for prostate carcinoma detection. The composite model demonstrated superior diagnostic performance compared with individual parameters, achieving an AUC of 0.922, whereas the PI-RADS v2.1 score alone showed an AUC of 0.848 (P < 0.05) in this cohort. The AUC for %fPSA reached 0.760 (P < 0.001), ADC values showed better discriminative ability with an AUC of 0.825 (P < 0.001), and hemoglobin levels exhibited moderate predictive value (AUC = 0.622, P = 0.006). The model achieved AUCs of 0.922 in the training dataset and 0.898 in the validation cohort, with good calibration. Integrating PI-RADS v2.1 scores with clinical parameters enhanced diagnostic performance, yielding 81.2% sensitivity and 89.3% specificity in lesion characterization, compared with 73.2% and 86.8%, respectively, for PI-RADS v2.1 alone. The PI-RADS v2.1 assessment derived from biparametric MRI has standalone value for detecting prostate malignancies in patients with serum PSA concentrations below 20 ng/mL, and when integrated with additional clinical parameters it significantly enhances diagnostic reliability. The methodology provides clinicians with a non-invasive evaluation tool featuring intuitive visualization, potentially reducing the need for invasive biopsy procedures while maintaining diagnostic precision. This integrated methodology demonstrates considerable promise as a framework for improving diagnostic accuracy in prostate cancer (PCa) identification and supporting therapeutic choices in clinical practice.
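A minimal sketch of the predictor-selection step: L1-penalised (LASSO) logistic regression with ten-fold cross-validation in scikit-learn, run here on synthetic data. The feature columns stand in for candidates such as PI-RADS v2.1 score, %fPSA, ADC, and hemoglobin; non-zero coefficients would be carried forward into the nomogram.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)                  # synthetic stand-in data
X = rng.normal(size=(153, 8))                   # 153 training cases, 8 candidate predictors
y = rng.integers(0, 2, size=153)                # biopsy outcome (placeholder labels)

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=20, cv=10, penalty="l1", solver="saga",
                         scoring="roc_auc", max_iter=5000),
)
model.fit(X, y)
print("apparent AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
print("coefficients:", model[-1].coef_.ravel())  # zeros = dropped predictors
```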

Deng Y, Zheng L, Zhang M, Xu L, Li Q, Zhou L, Wang Q, Gong Y, Li S

PubMed · Oct 13, 2025
The preoperative identification of cervical lymph node metastasis in papillary thyroid carcinoma is essential for tailoring surgical treatment. We aimed to develop an ultrasound-based handcrafted radiomics model, a deep learning radiomics model, and a combined model for better prediction of cervical lymph node metastasis in papillary thyroid carcinoma patients. A retrospective cohort of 441 patients was included (308 in the training set, 133 in the testing set). Handcrafted radiomics features, manually selected by physicians, were extracted using PyRadiomics software, whereas deep learning radiomics features were extracted from a pretrained DenseNet121 network, a fully automatic process that eliminates the need for manual selection. A combined model integrating the radiomics signatures from the above models was developed. ROC analysis was used to evaluate the performance of the three models, and DeLong's tests were conducted to compare their AUC values in the training and testing sets. In the training set, the AUC of the combined model (0.790) was significantly higher than that of the handcrafted radiomics (0.743, p = 0.021) and deep learning radiomics (0.730, p = 0.003) models. In the testing set, although the AUC of the combined model (0.761) was higher than that of the handcrafted radiomics model (0.734, p = 0.368) and the deep learning radiomics model (0.719, p = 0.228), the differences did not reach statistical significance. The handcrafted radiomics model exhibited high accuracy in both the training and testing sets (0.714 and 0.707), while the deep learning radiomics model showed accuracy below 0.7 in both (0.698 and 0.662). The combined model based on conventional ultrasound images enhances predictive performance compared with either radiomics model alone.
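The deep half of the feature pipeline can be sketched as below: a frozen, ImageNet-pretrained DenseNet121 from torchvision turned into a fixed 1024-dimensional feature extractor via global average pooling. The exact preprocessing, pooling, and downstream classifier used in the study are not specified here, so treat those details as assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
backbone.eval()  # frozen feature extractor; no fine-tuning in this sketch

@torch.no_grad()
def deep_features(img: torch.Tensor) -> torch.Tensor:
    """img: (B, 3, 224, 224) normalised batch -> (B, 1024) feature vectors."""
    fmap = F.relu(backbone.features(img))        # (B, 1024, 7, 7)
    return torch.flatten(F.adaptive_avg_pool2d(fmap, 1), 1)

print(deep_features(torch.randn(4, 3, 224, 224)).shape)  # torch.Size([4, 1024])
```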

Nikolay Nechaev, Evgenia Przhezdzetskaya, Viktor Gombolevskiy, Dmitry Umerenkov, Dmitry Dylov

arXiv preprint · Oct 13, 2025
Chest X-ray classification is vital yet resource-intensive, typically demanding extensive annotated data for accurate diagnosis. Foundation models mitigate this reliance, but how many labeled samples are required remains unclear. We systematically evaluate the use of power-law fits to predict the training size necessary for specific ROC-AUC thresholds. Testing multiple pathologies and foundation models, we find XrayCLIP and XraySigLIP achieve strong performance with significantly fewer labeled examples than a ResNet-50 baseline. Importantly, learning curve slopes from just 50 labeled cases accurately forecast final performance plateaus. Our results enable practitioners to minimize annotation costs by labeling only the essential samples for targeted performance.
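The extrapolation itself is a small curve fit. The sketch below fits a three-parameter saturating power law, AUC(n) = c - a * n**(-b), to a handful of (training-set size, AUC) points and inverts it to predict the size needed for a target AUC; the data points and the exact parameterisation are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Saturating learning curve: AUC rises toward the plateau c."""
    return c - a * n ** (-b)

# Hypothetical learning-curve measurements (training-set size, ROC-AUC).
n = np.array([50.0, 100, 200, 400, 800])
auc = np.array([0.71, 0.76, 0.80, 0.83, 0.85])

(a, b, c), _ = curve_fit(power_law, n, auc, p0=(1.0, 0.5, 0.9))

target = 0.84                                    # must lie below the plateau c
n_needed = (a / (c - target)) ** (1.0 / b)       # invert the fitted curve
print(f"plateau ~ {c:.3f}; predicted n for AUC {target}: {n_needed:.0f}")
```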

Neilansh Chauhan, Piyush Kumar Gupta, Faraz Doja

arXiv preprint · Oct 13, 2025
Effective pneumonia diagnosis is often challenged by the difficulty of deploying large, computationally expensive deep learning models in resource-limited settings. This study introduces LightPneumoNet, an efficient, lightweight convolutional neural network (CNN) built from scratch to provide an accessible and accurate diagnostic solution for pneumonia detection from chest X-rays. Our model was trained on a public dataset of 5,856 chest X-ray images. Preprocessing included image resizing to 224×224, grayscale conversion, and pixel normalization, with data augmentation (rotation, zoom, shear) to prevent overfitting. The custom architecture features four blocks of stacked convolutional layers and contains only 388,082 trainable parameters, resulting in a minimal 1.48 MB memory footprint. On the independent test set, our model delivered exceptional performance, achieving an overall accuracy of 0.942, precision of 0.92, and an F1-score of 0.96. Critically, it obtained a sensitivity (recall) of 0.99, demonstrating a near-perfect ability to identify true pneumonia cases and minimize clinically significant false negatives. Notably, LightPneumoNet achieves this high recall on the same dataset where existing approaches typically require significantly heavier architectures or fail to reach comparable sensitivity levels. The model's efficiency enables deployment on low-cost hardware, making advanced computer-aided diagnosis accessible in underserved clinics and serving as a reliable second-opinion tool to improve patient outcomes.
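For orientation, a four-block CNN in a similar weight class can be written in a few lines; the channel widths and classification head below are assumptions chosen to land near the reported parameter budget, not the published LightPneumoNet configuration.

```python
import torch
import torch.nn as nn

def block(cin: int, cout: int) -> nn.Sequential:
    """One stacked convolutional block: conv -> batch norm -> ReLU -> pool."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout), nn.ReLU(), nn.MaxPool2d(2),
    )

model = nn.Sequential(                       # input: (B, 1, 224, 224) grayscale
    block(1, 16), block(16, 32), block(32, 64), block(64, 128),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, 1),                        # logit; pair with BCEWithLogitsLoss
)
print(sum(p.numel() for p in model.parameters()))  # ~1e5 here; the paper reports 388,082
```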

Nivea Roy, Son Tran, Atul Sajjanhar, K. Devaraja, Prakashini Koteshwara, Yong Xiang, Divya Rao

arXiv preprint · Oct 13, 2025
Laryngeal cancer imaging research lacks standardised datasets to enable reproducible deep learning (DL) model development. We present LaryngealCT, a curated benchmark of 1,029 computed tomography (CT) scans aggregated from six collections in The Cancer Imaging Archive (TCIA). Uniform 1 mm isotropic volumes of interest encompassing the larynx were extracted using a weakly supervised parameter search framework validated by clinical experts. 3D DL architectures (3D CNN, ResNet18/50/101, DenseNet121) were benchmarked on (i) early (Tis, T1, T2) vs. advanced (T3, T4) and (ii) T4 vs. non-T4 classification tasks. The 3D CNN (AUC = 0.881, macro-F1 = 0.821) and ResNet18 (AUC = 0.892, macro-F1 = 0.646) outperformed the other models on the two tasks, respectively. Model explainability, assessed using 3D Grad-CAMs with thyroid cartilage overlays, revealed greater peri-cartilage attention in non-T4 cases and focal activations in T4 predictions. Through open-source data, pretrained models, and integrated explainability tools, LaryngealCT offers a reproducible foundation for AI-driven research to support clinical decisions in laryngeal oncology.
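The explainability step can be approximated with a generic 3D Grad-CAM; the hook-based sketch below works for an arbitrary PyTorch volume classifier and is an assumption about the mechanics, not the project's released code.

```python
import torch

def gradcam_3d(model, volume, target_layer, class_idx):
    """Grad-CAM for 3D inputs: volume is (1, 1, D, H, W); target_layer is
    a conv layer inside `model` whose activations we want to explain."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    logits = model(volume)
    model.zero_grad()
    logits[0, class_idx].backward()                     # gradient of the class score
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3, 4), keepdim=True)    # pool gradients over D, H, W
    cam = torch.relu((w * acts["a"].detach()).sum(dim=1))
    return cam / (cam.max() + 1e-8)                     # coarse attention map in [0, 1]
```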

Sina EM, Limage K, Anisman E, Pudik N, Tam E, Kahn C, Daggumati S, Evans JJ, Rabinowitz MR, Rosen MR, Nyquist G

PubMed · Oct 13, 2025
Automated machine learning (AutoML) is an artificial intelligence tool that facilitates image recognition model development. This study evaluates the diagnostic performance of AutoML in differentiating pituitary macroadenomas (PA) and parasellar meningiomas (PSM) using preoperative MRI. Study design: model development and retrospective analysis. Setting: single academic institution with external validation from a public dataset. 1628 contrast-enhanced T1-weighted MRI sequences from 116 patients (997 PA, 631 PSM) were uploaded to Google Cloud Vertex AI AutoML. A single-label classification model was developed using an 80%-10%-10% training-validation-testing split. External validation included 930 PA and 29 PSM images. A subanalysis evaluated the classification of anatomical PSM subtypes (planum sphenoidale [PS] versus tuberculum sellae [TS]). Performance metrics were calculated at 0.25, 0.5, and 0.75 confidence thresholds. At a 0.5 confidence threshold, the AutoML model achieved an aggregate AUPRC of 0.997, with F1 score, sensitivity, specificity, PPV, and NPV all equal to 97.55%. The model achieved strong performance in classifying PA (F1 = 97.98%; sensitivity = 97.00%; specificity = 98.96%) and PSM (F1 = 96.88%; sensitivity = 98.41%; specificity = 95.53%). External validation demonstrated high accuracy (AUPRC = 0.999 for PA; 1.000 for PSM). The PSM subanalysis yielded an aggregate F1 score of 97.30%, with PS and TS classified at 97.44% and 97.14%, respectively. Our customized AutoML model accurately differentiates PAs from PSMs on preoperative MRI and outperforms traditional ML approaches. It is the first AutoML model specifically trained for parasellar tumor classification. Its highly automated, user-friendly design may facilitate scalable integration into clinical practice.
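Evaluating at fixed confidence thresholds amounts to binarising the model's per-image probabilities before computing the usual metrics. The sketch below shows the mechanics with made-up probabilities for the PA-vs-PSM task; it is not the Vertex AI evaluation pipeline itself.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def metrics_at_threshold(y_true, p_pa, threshold):
    """Binary metrics after thresholding P(PA) at a chosen confidence."""
    y_pred = (np.asarray(p_pa) >= threshold).astype(int)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0)
    return {"precision": p, "recall": r, "f1": f1}

y_true = [1, 1, 0, 1, 0, 0, 1, 0]                       # 1 = PA, 0 = PSM (made up)
p_pa = [0.97, 0.61, 0.08, 0.83, 0.31, 0.72, 0.90, 0.12]
for t in (0.25, 0.5, 0.75):
    print(t, metrics_at_threshold(y_true, p_pa, t))
```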

Ergün U, Çoban T, Kayadibi İ

PubMed · Oct 13, 2025
Breast cancer remains one of the leading causes of cancer-related deaths globally, affecting both women and men. This study aims to develop a novel deep learning (DL)-based architecture, the Breast Cancer Ensemble Convolutional Neural Network (BCECNN), to enhance the diagnostic accuracy and interpretability of breast cancer detection systems. The BCECNN architecture incorporates two ensemble learning (EL) structures: the Triple Ensemble CNN (TECNN) and the Quintuple Ensemble CNN (QECNN). These ensembles integrate the predictions of multiple CNN architectures (AlexNet, VGG16, ResNet-18, EfficientNetB0, and XceptionNet) using a majority voting mechanism. The member models were trained using transfer learning (TL) and evaluated on five distinct sub-datasets generated from the Artificial Intelligence Smart Solution Laboratory (AISSLab) dataset, which consists of 266 mammography images labeled and validated by radiologists. To improve transparency and interpretability, Explainable Artificial Intelligence (XAI) techniques, including Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME), were applied, and explainability was additionally assessed through clinical evaluation by an experienced radiologist. Experimental results demonstrated that the TECNN model (comprising AlexNet, VGG16, and EfficientNetB0) achieved the highest accuracy of 98.75% on the AISSLab-v2 dataset. The integration of XAI methods substantially enhanced the interpretability of the model, enabling clinicians to better understand and validate its decision-making process, and clinical evaluation confirmed that the XAI outputs aligned well with expert assessments, underscoring the model's practical utility in a diagnostic setting. The BCECNN model presents a promising solution for improving both the accuracy and interpretability of breast cancer diagnostic systems. Unlike many previous studies that rely on single architectures or large datasets, BCECNN leverages the strengths of an ensemble of CNN models and performs robustly even with limited data. It integrates advanced XAI techniques, such as Grad-CAM and LIME, to provide visual justifications for model decisions, enhancing clinical interpretability. Moreover, the model was validated using the AISSLab dataset, designed to reflect real-world diagnostic challenges. This combination of EL, interpretability, and robust performance on small yet clinically relevant data positions BCECNN as a novel and reliable decision support tool for AI-assisted breast cancer diagnostics.
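The ensembling step itself is plain hard voting over the member CNNs' class predictions. The sketch below shows the mechanism with made-up binary predictions from the three TECNN members; the paper's tie-breaking rule is not stated here, and this version breaks ties toward class 0.

```python
import numpy as np

def majority_vote(pred_sets):
    """Hard majority voting: pred_sets is a list of (n_samples,) integer
    class-prediction arrays, one per ensemble member."""
    votes = np.stack(pred_sets)                          # (n_models, n_samples)
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=2)
    return counts.argmax(axis=0)                         # ties go to class 0

# Hypothetical binary predictions from the three TECNN members.
alexnet  = np.array([1, 0, 1, 1, 0])
vgg16    = np.array([1, 0, 0, 1, 0])
effnetb0 = np.array([1, 1, 1, 0, 0])
print(majority_vote([alexnet, vgg16, effnetb0]))         # [1 0 1 1 0]
```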