
Srivastava S, Ghosh E, Kumar A, Chahar P, Utkarsh A, Mishra R

PubMed · Oct 8 2025
Advancements in medical imaging and deep learning have enabled the development of intelligent systems that assist clinicians in diagnosing complex pulmonary diseases. This study addresses the growing concern over lung abnormalities caused by diseases such as COVID-19, tuberculosis (TB), and pneumonia. We propose a convolutional neural network (CNN)-based multi-class classification framework that uses chest X-ray images to automatically detect COVID-19, TB, pneumonia, and normal conditions. The original publicly available dataset exhibited class imbalance, with significantly fewer COVID-19 cases than the other categories. To address this, the Synthetic Minority Oversampling Technique (SMOTE) was applied at the feature level, generating a balanced dataset of 6,000 chest X-ray images equally distributed across the four classes. Preprocessing techniques, including image normalization, augmentation, and resizing, were used to enhance model generalisation. We evaluated multiple deep learning architectures, including ResNet-50, EfficientNet, DenseNet, and VGG-19. Among these, VGG-19 achieved the highest test accuracy of 97.5%, with precision, recall, and F1-score all exceeding 96% across classes. This unified deep learning pipeline integrates data preprocessing, feature extraction, and classification. The proposed model is intended as a research framework and is currently non-clinical; however, it demonstrates promising potential and could be further explored for assisting radiologists in diagnostic decision-making.
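Since the abstract applies SMOTE at the feature level rather than on raw images, a minimal sketch of that balancing step may help. It assumes features have already been extracted by a CNN backbone and uses a simple linear classifier as a stand-in for the paper's VGG-19 head; all shapes, class counts, and names here are illustrative, not the study's data.

```python
# Minimal sketch of feature-level SMOTE balancing, as described in the abstract.
# Assumes CNN features have already been extracted; shapes and labels are illustrative.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical CNN feature vectors (e.g., from a penultimate layer) and labels:
# 0 = normal, 1 = pneumonia, 2 = TB, 3 = COVID-19 (minority class).
X = rng.normal(size=(2000, 512))
y = np.concatenate([np.full(700, 0), np.full(700, 1), np.full(500, 2), np.full(100, 3)])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# SMOTE synthesizes minority-class samples in feature space until classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)
print(np.bincount(y_bal))  # equal counts per class after oversampling

# Downstream classifier trained on the balanced features (placeholder for the CNN head).
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("test accuracy:", clf.score(X_test, y_test))
```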

Iacob AM, Verdecchia A, García-Mesa Y, Spinas E, Cobo T

PubMed · Oct 8 2025
The automated segmentation of maxillary and mandibular bones in cone-beam computed tomography (CBCT) using artificial intelligence (AI) is redefining the standards of digital dentistry and orthodontics, with applications in mini-implant placement, dental implantology, orthognathic surgery, and bone graft planning. The aim was to systematically assess the performance of AI models, particularly U-Net-based convolutional neural networks (CNNs), for automated segmentation of maxillary bone structures in CBCT, following the PICOS model (Population - CBCT scans of human maxillae; Intervention - AI-based segmentation; Comparator - manual segmentation; Outcome - accuracy; Study design - diagnostic accuracy studies). This systematic review adhered to PRISMA 2020 guidelines and was registered in PROSPERO (CRD42024592182). Eligibility criteria included studies applying AI to maxillary bone segmentation in CBCT and reporting quantitative accuracy metrics. Risk of bias was evaluated using the QUADAS-2 tool. The GRADE tool for formulating and grading recommendations in clinical practice was also employed. Data collected comprised the number of CBCT scans, AI model architecture, evaluation metrics, and reported clinical applications. Thirty-one studies, analysing 11,432 CBCT scans, met the inclusion criteria. AI models consistently achieved high segmentation accuracy, with Dice similarity coefficients frequently exceeding 0.98, while substantially reducing processing time compared to manual segmentation. Applications ranged from implant planning and orthognathic surgery to digital orthodontics. Persistent challenges included anatomical variability, imaging artifacts, and the limited availability of high-quality annotated datasets. AI-based segmentation of maxillary and mandibular bones in CBCT demonstrates promising accuracy and efficiency compared with manual techniques. Nevertheless, the certainty of evidence is limited by retrospective designs and small, heterogeneous samples. Large-scale, prospective multicentre studies with standardized evaluation are needed before these methods can be reliably adopted in routine clinical practice.
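The review's primary accuracy metric is the Dice similarity coefficient; for reference, a minimal sketch of how it is typically computed on binary segmentation masks is shown below (the masks are synthetic stand-ins, not CBCT data).

```python
# Minimal sketch of the Dice similarity coefficient on binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Illustrative 3D masks (e.g., AI vs. manual maxillary segmentation on one CBCT volume).
ai_mask = np.zeros((64, 64, 64), dtype=bool); ai_mask[20:40, 20:40, 20:40] = True
manual_mask = np.zeros_like(ai_mask); manual_mask[22:40, 20:40, 20:40] = True
print(f"Dice: {dice_coefficient(ai_mask, manual_mask):.3f}")
```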

Zhang M, Wang X, Li Z, Liu W, Li X, Jin X, Guo J, Wang K, Li Y, Ren J

PubMed · Oct 8 2025
The objective of this research was to develop and validate a machine learning-based prediction model integrating clinical data, apparent diffusion coefficient (ADC) values, and diffusion-weighted imaging (DWI)-based radiomic features, aimed at assessing microsatellite instability (MSI) status in endometrial cancer (EC) patients. In total, 292 EC patients who underwent pelvic MRI scans participated in this study and were allocated into three distinct groups: an external validation cohort (n = 70), a testing cohort (n = 68), and a training cohort (n = 154). Preoperative clinical indicators, ADC metrics, and radiomic parameters extracted from DWI images were comprehensively evaluated. Feature selection was conducted using least absolute shrinkage and selection operator (LASSO) regression combined with Mann-Whitney U testing. Following feature selection, three machine learning classifiers were employed to develop predictive models: support vector machine (SVM), random forest (RF), and logistic regression (LR). The performance and clinical utility of these models were subsequently examined through receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Among the evaluated methods, the RF model incorporating two clinical indicators, six radiomic parameters from DWI, and ADC values exhibited superior predictive ability. The areas under the ROC curve (AUC) reached 0.980 (95% CI: 0.944–0.996) in the training cohort, 0.852 (95% CI: 0.745–0.927) in the test cohort, and 0.938 (95% CI: 0.853–0.982) in the external validation cohort. These AUC values reflected better accuracy than separate predictive models employing only clinical factors, DWI radiomics, or ADC values (training cohort: AUC = 0.682, 0.925, and 0.851; test cohort: AUC = 0.663, 0.788, and 0.731; external validation cohort: AUC = 0.605, 0.872, and 0.828, respectively). Calibration curves indicated robust concordance, and DCA confirmed that the model had substantial clinical applicability. A prediction model combining clinical factors, DWI radiomics features, and ADC values with machine learning algorithms can noninvasively assess MSI status in EC.
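A minimal sketch of the selection-then-classification workflow described above (Mann-Whitney U filtering, LASSO, then a random forest scored by ROC AUC), assuming scikit-learn and SciPy. The feature matrix and MSI labels are synthetic placeholders, and the study's exact thresholds and tuning are not reproduced here.

```python
# Minimal sketch of the described workflow: Mann-Whitney U filtering, LASSO selection,
# then a random forest evaluated by ROC AUC. All data below are synthetic placeholders.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(292, 120))              # e.g., clinical + ADC + DWI radiomics features
y = rng.integers(0, 2, size=292)             # MSI status (illustrative labels)
X[:, :5] += y[:, None]                       # make a few features informative for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# 1) Univariate Mann-Whitney U filter (keep features with p < 0.05).
pvals = np.array([mannwhitneyu(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue
                  for j in range(X_tr.shape[1])])
keep = np.flatnonzero(pvals < 0.05)

# 2) LASSO retains features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=1).fit(X_tr[:, keep], y_tr)
selected = keep[np.abs(lasso.coef_) > 1e-8]
if selected.size == 0:                       # guard for the synthetic demo data
    selected = keep

# 3) Random forest on the selected features, evaluated by ROC AUC.
rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te[:, selected])[:, 1])
print(f"selected {selected.size} features, test AUC = {auc:.3f}")
```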

Li C, Qin Z, Tang Z, Wang Y, Zhang B, Tian J, Wang Z

PubMed · Oct 8 2025
Assessing coronary artery calcification (CAC) is crucial in evaluating the progression of atherosclerosis and planning percutaneous coronary intervention (PCI). Intravascular Optical Coherence Tomography (OCT) is a commonly used imaging tool for evaluating CAC at the micrometer scale and in three dimensions for optimizing PCI. While existing deep learning methods have proven effective in OCT image analysis, they are hindered by the lack of large-scale, high-quality labels needed to train deep neural networks that can reach human-level performance in practice. In this work, we propose an annotation-efficient approach for segmenting CAC in intravascular OCT images, leveraging self-supervised learning and consistency regularization. We employ a transformer encoder paired with a simple linear projection layer for self-supervised pre-training on unlabeled OCT data. Subsequently, a transformer-based segmentation model is fine-tuned on sparsely annotated OCT pullbacks with a contrast loss, using a combination of unlabeled and labeled data. We collected 2,549,073 unlabeled OCT images from 7,108 OCT pullbacks for pre-training, and 1,106,347 sparsely annotated OCT images from 3,025 OCT pullbacks for model training and testing. The proposed approach consistently outperformed existing sparsely supervised methods on both internal and external datasets. In addition, extensive comparisons under full, partial, and sparse annotation schemes substantiated its high annotation efficiency. With an 80% reduction in image labeling effort, our method has the potential to expedite the development of deep learning models for processing large-scale medical image data.
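As one concrete way to picture sparse-annotation training, here is a minimal PyTorch sketch that combines a supervised loss restricted to annotated pixels with a simple consistency term on unlabeled frames. The tiny CNN, the noise perturbation, and the MSE consistency term are stand-ins: the paper uses a transformer-based model and a contrast loss, whose details are not reproduced here.

```python
# Minimal sketch of sparse-annotation training: supervised loss only on annotated pixels
# plus a consistency term on unlabeled images. Network and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 2, 1))          # 2 classes: background / calcification
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(labeled_img, sparse_label, label_mask, unlabeled_img, lam=0.1):
    """sparse_label: per-pixel class ids; label_mask: 1 where an annotation exists."""
    logits = model(labeled_img)
    ce = F.cross_entropy(logits, sparse_label, reduction="none")
    sup_loss = (ce * label_mask).sum() / label_mask.sum().clamp(min=1)

    # Consistency regularization: predictions should agree across a simple perturbation.
    noisy = unlabeled_img + 0.05 * torch.randn_like(unlabeled_img)
    p_clean = F.softmax(model(unlabeled_img), dim=1)
    p_noisy = F.softmax(model(noisy), dim=1)
    cons_loss = F.mse_loss(p_clean, p_noisy)

    loss = sup_loss + lam * cons_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Illustrative tensors (batch of 2, single-channel 64x64 OCT frames).
img = torch.randn(2, 1, 64, 64)
lbl = torch.randint(0, 2, (2, 64, 64))
msk = (torch.rand(2, 64, 64) < 0.1).float()          # only ~10% of pixels annotated
unl = torch.randn(2, 1, 64, 64)
print(training_step(img, lbl, msk, unl))
```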

Sashank Makanaboyina

arXiv preprint · Oct 8 2025
Accurate detection and segmentation of brain tumors in magnetic resonance imaging (MRI) are critical for effective diagnosis and treatment planning. Despite advances in convolutional neural networks (CNNs) such as U-Net, existing models often struggle with generalization, boundary precision, and limited data diversity. To address these challenges, we propose NNDM (NN-UNet Diffusion Model), a hybrid framework that integrates the robust feature extraction of NN-UNet with the generative capabilities of diffusion probabilistic models. In our approach, the diffusion model progressively refines the segmentation masks generated by NN-UNet by learning the residual error distribution between predicted and ground-truth masks. This iterative denoising process enables the model to correct fine structural inconsistencies and enhance tumor boundary delineation. Experiments conducted on the BraTS 2021 dataset demonstrate that NNDM achieves superior performance compared to conventional U-Net and transformer-based baselines, yielding improvements in Dice coefficient and Hausdorff distance metrics. Moreover, the diffusion-guided refinement enhances robustness across modalities and tumor subregions. The proposed NNDM establishes a new direction for combining deterministic segmentation networks with stochastic diffusion models, advancing the state of the art in automated brain tumor analysis.
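For intuition only, the sketch below shows an iterative refinement loop in which a small learned network repeatedly corrects a coarse mask conditioned on the image. In NNDM this role is played by a diffusion model with a proper noise schedule and residual-error training, which the placeholder network here does not implement.

```python
# Illustrative iterative refinement of a coarse segmentation mask, conditioned on the image.
# The network, update rule, and step count are placeholders, not the paper's NNDM sampler.
import torch
import torch.nn as nn

class ResidualCorrector(nn.Module):
    """Predicts a correction to the current mask logits given (image, mask)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, image, mask_logits):
        return self.net(torch.cat([image, mask_logits], dim=1))

@torch.no_grad()
def refine_mask(corrector, image, initial_logits, steps=10):
    """Apply small learned corrections step by step, then convert to probabilities."""
    logits = initial_logits.clone()
    for _ in range(steps):
        logits = logits + corrector(image, logits) / steps
    return torch.sigmoid(logits)

corrector = ResidualCorrector()
image = torch.randn(1, 1, 128, 128)                 # e.g., one MRI slice
coarse_logits = torch.randn(1, 1, 128, 128)         # e.g., an initial U-Net output
refined = refine_mask(corrector, image, coarse_logits)
print(refined.shape)
```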

Shayan Mohajer Hamidi, En-Hui Yang, Ben Liang

arXiv preprint · Oct 8 2025
Inverse problems, where the goal is to recover an unknown signal from noisy or incomplete measurements, are central to applications in medical imaging, remote sensing, and computational biology. Diffusion models have recently emerged as powerful priors for solving such problems. However, existing methods either rely on projection-based techniques that enforce measurement consistency through heuristic updates, or they approximate the likelihood p(y | x), often resulting in artifacts and instability under complex or high-noise conditions. To address these limitations, we propose a novel framework called coupled data and measurement space diffusion posterior sampling (C-DPS), which eliminates the need for constraint tuning or likelihood approximation. C-DPS introduces a forward stochastic process in the measurement space {y_t}, evolving in parallel with the data-space diffusion {x_t}, which enables the derivation of a closed-form posterior p(x_{t-1} | x_t, y_{t-1}). This coupling allows for accurate and recursive sampling based on a well-defined posterior distribution. Empirical results demonstrate that C-DPS consistently outperforms existing baselines, both qualitatively and quantitatively, across multiple inverse problem benchmarks.
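For readers unfamiliar with the setup, one common way to define such parallel forward processes is the standard DDPM-style noising applied to both the data and the measurement, sketched below; the paper's actual schedule, coupling, and closed-form posterior are derived in the preprint and may differ.

```latex
% Generic DDPM-style forward noising applied in parallel to data and measurements
% (an illustrative construction; C-DPS's exact coupling is defined in the paper).
q(x_t \mid x_0) = \mathcal{N}\big(\sqrt{\bar{\alpha}_t}\, x_0,\ (1 - \bar{\alpha}_t) I\big), \qquad
q(y_t \mid y_0) = \mathcal{N}\big(\sqrt{\bar{\alpha}_t}\, y_0,\ (1 - \bar{\alpha}_t) I\big), \qquad
\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s .
```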

Sanchez, A. V., Cardall, A., Siddiqui, F., Nashawaty, M., Rigau, D., Kwon, Y., Yousef, M., Patel, S., Kieturakis, A., Kim, E., Heacock, L., Reig, B., Shen, Y.

medRxiv preprint · Oct 8 2025
Objective: Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills. Increasing clinical workload often limits attendings' ability to provide guidance. This study evaluates a HIPAA-compliant GPT-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings. Methods: We analyzed 5,000 resident-attending report pairs from routine practice at a multi-site U.S. health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study using 100 report pairs was conducted. Four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT and readers was assessed using percent match. Inter-reader reliability was measured with Krippendorff's alpha. Educational value was measured as the proportion of cases rated helpful. Results: Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) final assessment inconsistent with findings. GPT-4o showed strong agreement with attending consensus: 90.5%, 78.3%, and 90.4% across error types. Inter-reader reliability showed moderate variability (α = 0.767, 0.595, 0.567), and replacing a human reader with GPT-4o did not significantly affect agreement (Δ = -0.004 to 0.002). GPT's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0%. Discussion: ChatGPT-4o can reliably identify key educational errors. It may serve as a scalable tool to support radiology education.
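A minimal sketch of the two simplest reader-study metrics mentioned, percent match against attending consensus and the proportion of feedback rated helpful; the arrays are random placeholders standing in for the actual annotations, and the Krippendorff's alpha computation is not reproduced here.

```python
# Minimal sketch of percent match vs. attending consensus and the helpful-feedback rate.
# Arrays are illustrative stand-ins for the reader-study annotations.
import numpy as np

rng = np.random.default_rng(2)
n_cases = 100
gpt_flags = rng.integers(0, 2, n_cases)                 # GPT-4o: error present? (0/1)
attending_votes = rng.integers(0, 2, (4, n_cases))      # 4 attendings, same question
consensus = (attending_votes.mean(axis=0) >= 0.5).astype(int)

percent_match = (gpt_flags == consensus).mean() * 100
print(f"GPT vs. attending consensus: {percent_match:.1f}% match")

helpful_ratings = rng.integers(0, 2, (8, n_cases))      # 8 readers rate feedback helpful (0/1)
print(f"rated helpful in {helpful_ratings.mean() * 100:.1f}% of ratings")
```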

Bjorkeli, E. B., Karlberg, A. K. M., Vindstad, B. E., Pedersen, L. K., Solheim, O. S., Geitung, J. T., Esmaeili, M., Eikenes, L.

medRxiv preprint · Oct 8 2025
Medical imaging is crucial for glioma management. Combined with MRI, amino acid PET may improve glioma diagnosis, biopsy targeting, and tumor delineation compared to structural MRI alone. Magnetic resonance spectroscopic imaging (MRSI) complements both structural MRI and PET by detecting metabolites such as N-acetylaspartate (NAA), creatine (Cr), and choline (Cho), which are markers for brain health and tumor malignancy, but is challenged by low spatial resolution. This study evaluates whether high-resolution MRSI enhanced by deep learning can improve diagnostic accuracy and serve as a complement or alternative to PET for glioma classification. Ten glioma patients (CNS WHO grades 2-4, ages 24-80) were included. Presurgical [18F]-FACBC PET/MRI, including proton 2D MRSI, was acquired for all patients. Thirty image-guided biopsies were sampled from these patients during surgery and classified as glioma tissue or non-tumor tissue, and according to IDH1 status. For each biopsy location, tumor-to-background ratio (TBR) and standardized uptake value (SUV) from PET, and tCho/NAA and tCho/tCr ratios from MRSI were calculated. ROC analysis was used to assess the accuracy of [18F]-FACBC PET, high-resolution MRSI, and their combinations in classifying glioma vs. non-tumor tissue and IDH1 status. The tCho/NAA ratio from the deep learning-based model demonstrated excellent diagnostic accuracy in classifying glioma vs. non-tumor tissue (AUC = 0.87, 95% CI: 0.66-1.0), outperforming SUV (AUC = 0.71, 95% CI: 0.49-0.90), TBR (AUC = 0.68, 95% CI: 0.48-0.86), and tCho/tCr (AUC = 0.81, 95% CI: 0.54-1.00). Combining TBR with tCho/NAA and/or tCho/tCr improved tissue classification compared to either modality alone, where TBR + tCho/NAA + tCr/NAA showed the best results (AUC = 0.91, 95% CI: 0.71-1.0). MRSI was a poor predictor of IDH1 status (tCho/NAA: AUC = 0.67, 95% CI: 0.44-0.88 and tCho/tCr: AUC = 0.38, 95% CI: 0.17-0.60), while PET was an excellent predictor (SUV: AUC = 0.83, 95% CI: 0.66-0.85 and TBR: AUC = 0.82, 95% CI: 0.65-0.94), and the combination of SUV and tCho/tCr was an outstanding predictor (AUC = 0.96, 95% CI: 0.88-1.0). Incorporating high-resolution MRSI in combination with [18F]-FACBC PET improved the diagnostic accuracy in differentiating glioma tissue from non-tumor tissue. Significance Statement: Our study highlights the importance of combining imaging methods for brain tumor characterization. MRI remains central in brain imaging but is limited, making PET a valuable molecular complement. MRSI provides insight into neurometabolic alterations associated with tumor growth, yet its clinical utility has been limited by low spatial resolution. By applying deep learning, we enhanced the resolution of MRSI and compared its performance with PET. Our findings demonstrate that high-resolution MRSI adds diagnostic value and, with PET, may enhance glioma classification and inform future clinical decision-making.
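With only 30 biopsies per comparison, the AUCs above carry wide confidence intervals; a minimal sketch of a per-biopsy ROC analysis with a bootstrap 95% CI is shown below. The labels and metabolite ratios are synthetic placeholders, and the study's exact CI method is not stated in the abstract.

```python
# Minimal sketch: AUC with a bootstrap 95% CI for one predictor (e.g., tCho/NAA)
# against a glioma vs. non-tumor label. The 30 values are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
labels = np.array([1] * 20 + [0] * 10)                  # 30 biopsies, illustrative split
tcho_naa = np.where(labels == 1, rng.normal(2.0, 0.8, 30), rng.normal(1.0, 0.5, 30))

auc = roc_auc_score(labels, tcho_naa)

# Bootstrap over biopsies for a 95% confidence interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(labels), len(labels))
    if len(np.unique(labels[idx])) < 2:                 # need both classes in a resample
        continue
    boot.append(roc_auc_score(labels[idx], tcho_naa[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI: {ci_lo:.2f}-{ci_hi:.2f})")
```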

Wang H, Hu Q, Tong Y, Zhu H, He L, Cai J

PubMed · Oct 7 2025
To evaluate the role of chest CT radiomics in classifying mediastinal lymphadenopathy caused by hematologic malignancies and abdominopelvic solid cancers. A total of 231 patients with mediastinal lymphadenopathy were selected from the Mediastinal-Lymph-Node-SEG collection in The Cancer Imaging Archive, including 145 patients with hematologic malignancies (74 with chronic lymphocytic leukemia and 71 with lymphoma) and 86 with abdominopelvic solid cancers. Patients were randomly stratified into training and test sets in a 7:3 ratio. Radiomics features were extracted from enhanced CT images of mediastinal lymph nodes, followed by feature selection using univariate analysis and least absolute shrinkage and selection operator regression. A support vector machine algorithm was used to develop classification models, with performance evaluated using the area under the receiver operating characteristic curve (AUC-ROC), accuracy, and 95% CI. For differentiating mediastinal lymphadenopathy caused by hematologic malignancies from that caused by abdominopelvic solid cancers, the model incorporated 23 features and achieved an AUC-ROC of 0.931 (95% CI: 0.891-0.971) and an accuracy of 0.866 in the training set, and an AUC-ROC of 0.830 (95% CI: 0.730-0.929) and an accuracy of 0.759 in the test set. For distinguishing chronic lymphocytic leukemia from lymphoma, the model utilized 4 features, achieving an AUC-ROC of 0.880 (95% CI: 0.813-0.947) and an accuracy of 0.752 in the training set, and an AUC-ROC of 0.872 (95% CI: 0.763-0.982) and an accuracy of 0.836 in the test set. Chest CT radiomics shows promise for classifying mediastinal lymphadenopathy in patients with hematologic malignancies and abdominopelvic solid cancers.
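A minimal sketch of the classification step described, a support vector machine on selected radiomics features evaluated by AUC-ROC and accuracy on a stratified 7:3 split; the feature matrix and labels are synthetic placeholders rather than the TCIA cohort, and the study's preprocessing and tuning are not reproduced.

```python
# Minimal sketch of the classification step: SVM on radiomics features, stratified 7:3 split,
# evaluated by ROC AUC and accuracy. Data are synthetic placeholders, not the TCIA cohort.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(231, 23))               # e.g., 23 selected radiomics features
y = np.array([1] * 145 + [0] * 86)           # 1 = hematologic malignancy, 0 = solid cancer
X[:, :4] += y[:, None] * 0.8                 # make the demo data separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=5)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=5))
svm.fit(X_tr, y_tr)

prob = svm.predict_proba(X_te)[:, 1]
print(f"test AUC-ROC: {roc_auc_score(y_te, prob):.3f}")
print(f"test accuracy: {accuracy_score(y_te, svm.predict(X_te)):.3f}")
```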

Deng L, Zhang R, Lv H, Li F, Li L, Qin X, Yang J, Ai T, Huang C, Chen X, Xing H, Wu F

PubMed · Oct 7 2025
To preoperatively predict lymphovascular space invasion (LVSI) in early-stage cervical cancer (CC) using multi-parametric MRI (mpMRI) radiomics models. This dual-center study included 196 early-stage CC patients (Center A: 142, Dec 2020-Apr 2023; Center B: 54, May-Oct 2023). Center A was partitioned into training (n = 99) and internal validation (n = 43) cohorts; Center B served as external validation. Radiomics features were extracted from T2WI, DWI, and CE-MRI sequences. Feature stability was assessed via intra-class correlation and Dice coefficient, with selection through linear correlation and F-tests. Seven radiomics models (single/combined sequences) were built using the top-performing algorithm among eleven machine learning methods. A combination model (CMIC) integrated the optimal mpMRI model's rad-score with clinical factors. Performance was evaluated by ROC, calibration curves, and DCA across all cohorts. The AdaBoost-based mpMRI model (CE-MRI+DWI+T2WI) utilized 12 selected features. It achieved AUCs of 0.953 (95% CI: 0.916-0.989) in training, 0.868 (0.755-0.981) in internal validation, and 0.797 (0.677-0.916) in external validation. The CMIC model showed comparable performance (training: 0.957; validation: 0.864; external: 0.847), with no significant differences versus the mpMRI model (p > 0.05 in all cohorts). The AdaBoost-driven mpMRI radiomics model effectively predicts LVSI in early-stage CC. Both the mpMRI and CMIC models demonstrate robust preoperative predictive capability. This mpMRI radiomics approach using AdaBoost outperforms single-sequence models for LVSI prediction, enabling personalized treatment strategies for early-stage CC.
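A minimal sketch of the rad-score plus clinical combination described: an AdaBoost model on radiomics features produces a rad-score that is then combined with clinical factors, as in the CMIC model. All arrays are synthetic placeholders; in practice the rad-scores used for training the combined model would be derived with cross-validation to avoid optimism.

```python
# Minimal sketch: AdaBoost on radiomics features yields a rad-score, which is combined with
# clinical factors in a logistic regression. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 196
radiomics = rng.normal(size=(n, 12))          # e.g., 12 selected mpMRI features
clinical = rng.normal(size=(n, 2))            # e.g., 2 clinical factors
y = rng.integers(0, 2, n)                     # LVSI status, illustrative
radiomics[:, :3] += y[:, None]                # make a few features informative for the demo

idx_tr, idx_te = train_test_split(np.arange(n), stratify=y, test_size=0.3, random_state=4)

ada = AdaBoostClassifier(n_estimators=200, random_state=4).fit(radiomics[idx_tr], y[idx_tr])
rad_score = ada.predict_proba(radiomics)[:, 1]            # per-patient rad-score
# Note: rad-scores for training patients are in-sample here; cross-validated scores are
# preferable in a real analysis.

combo = np.column_stack([rad_score, clinical])
cmic = LogisticRegression().fit(combo[idx_tr], y[idx_tr])

print("radiomics-only AUC:", round(roc_auc_score(y[idx_te], rad_score[idx_te]), 3))
print("combined AUC:", round(roc_auc_score(y[idx_te], cmic.predict_proba(combo[idx_te])[:, 1]), 3))
```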