
Pathomics-based machine learning models for optimizing LungPro navigational bronchoscopy in peripheral lung lesion diagnosis: a retrospective study.

Ying F, Bao Y, Ma X, Tan Y, Li S

pubmed · Sep 26, 2025
To construct a pathomics-based machine learning model to enhance the diagnostic efficacy of LungPro navigational bronchoscopy for peripheral pulmonary lesions and to optimize the management strategy for LungPro-diagnosed negative lesions. Clinical data and hematoxylin and eosin (H&E)-stained whole slide images (WSIs) were collected from 144 consecutive patients undergoing LungPro virtual bronchoscopy at a single institution between January 2022 and December 2023. Patients were stratified into diagnosis-positive and diagnosis-negative cohorts based on histopathological or etiological confirmation. An artificial intelligence (AI) model was developed and validated using 94 diagnosis-positive cases. Logistic regression (LR) identified clinical and imaging characteristics associated with malignant pulmonary lesions. We implemented a convolutional neural network (CNN) with weakly supervised learning to extract image-level features, followed by multiple instance learning (MIL) for patient-level feature aggregation. Multiple machine learning (ML) algorithms were applied to model the extracted features. A multimodal diagnostic framework integrating clinical, imaging, and pathomics data was subsequently developed and evaluated on 50 LungPro-negative patients to assess the framework's diagnostic performance and predictive validity. Univariable and multivariable logistic regression analyses identified age, lesion boundary, and mean computed tomography (CT) attenuation as independent risk factors for malignant peripheral pulmonary lesions (P < 0.05). A histopathological model using a MIL fusion strategy showed strong diagnostic performance for lung cancer, with area under the curve (AUC) values of 0.792 (95% CI 0.680-0.903) in the training cohort and 0.777 (95% CI 0.531-1.000) in the test cohort. Combining predictive clinical features with pathological characteristics enhanced the diagnostic yield for peripheral pulmonary lesions to 0.848 (95% CI 0.695-1.000). In patients with initially negative LungPro biopsy results, the model identified 20 of 28 malignant lesions (sensitivity: 71.43%) and 15 of 22 benign lesions (specificity: 68.18%). Class activation mapping (CAM) validated the model by highlighting key malignant features, including conspicuous nucleoli and nuclear atypia. The fusion diagnostic model that incorporates clinical and pathomics features markedly enhances the diagnostic accuracy of LungPro in this retrospective cohort. This model aids in the detection of subtle malignant characteristics, thereby offering evidence to support precise and targeted therapeutic interventions for lesions that LungPro classifies as negative in clinical settings.
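
The patient-level aggregation step can be pictured with a small attention-based MIL pooling module. The sketch below is illustrative only, since the abstract does not publish the architecture; the feature dimension, patch count, and single-logit head are assumptions:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Pool a bag of patch-level CNN features into one patient-level logit."""
    def __init__(self, feat_dim=512, hidden_dim=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, 1)  # malignant vs. benign

    def forward(self, patch_feats):                # (n_patches, feat_dim)
        scores = self.attention(patch_feats)       # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)     # attention over the bag
        bag_feat = (weights * patch_feats).sum(0)  # weighted patient feature
        return self.classifier(bag_feat)           # patient-level logit

bag = torch.randn(200, 512)  # e.g., 200 WSI patch embeddings from the CNN
logit = AttentionMIL()(bag)
```

The softmax attention lets the patient-level classifier upweight the few patches that carry malignant evidence, which is the point of MIL under weak, slide-level labels.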

MedIENet: medical image enhancement network based on conditional latent diffusion model.

Yuan W, Feng Y, Wen T, Luo G, Liang J, Sun Q, Liang S

pubmed · Sep 26, 2025
Deep learning necessitates a substantial amount of data, yet obtaining sufficient medical images is difficult due to concerns about patient privacy and high collection costs. To address this issue, we propose a conditional latent diffusion model-based medical image enhancement network, referred to as the Medical Image Enhancement Network (MedIENet). To meet the rigorous standards required for image generation in the medical imaging field, a multi-attention module is incorporated into the encoder of the denoising U-Net backbone. Additionally, Rotary Position Embedding (RoPE) is integrated into the self-attention module to effectively capture positional information, while cross-attention is utilised to integrate class information into the diffusion process. MedIENet is evaluated on three datasets: Chest CT-Scan images, Chest X-Ray Images (Pneumonia), and a Tongue dataset. Compared to existing methods, MedIENet demonstrates superior performance in both fidelity and diversity of the generated images. Experimental results indicate that for downstream classification tasks using ResNet50, the Area Under the Receiver Operating Characteristic curve (AUROC) achieved with real data alone is 0.76 for the Chest CT-Scan images dataset, 0.87 for the Chest X-Ray Images (Pneumonia) dataset, and 0.78 for the Tongue dataset. When using mixed data consisting of real and generated data, the AUROC improves to 0.82, 0.94, and 0.82, respectively, reflecting increases of approximately 6%, 7%, and 4%. These findings indicate that the images generated by MedIENet can enhance the performance of downstream classification tasks, providing an effective solution to the scarcity of medical image training data.
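
As a rough picture of the RoPE mechanism mentioned above, the sketch below rotates query and key vectors by position-dependent angles before attention. The half-split layout, dimensions, and base frequency of 10000 are conventional assumptions, not details taken from the paper:

```python
import torch

def rope(x):
    """Apply rotary position embedding to a (seq_len, dim) tensor, dim even."""
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = 1.0 / (10000 ** (torch.arange(half) / half))      # pair frequencies
    angles = torch.arange(seq_len)[:, None] * inv_freq[None, :]  # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # rotate each (x1, x2) coordinate pair by its position-dependent angle
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q, k = rope(torch.randn(64, 128)), rope(torch.randn(64, 128))
attn = torch.softmax(q @ k.T / 128 ** 0.5, dim=-1)  # positions now enter via q·k
```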

Enhanced CoAtNet based hybrid deep learning architecture for automated tuberculosis detection in human chest X-rays.

Siddharth G, Ambekar A, Jayakumar N

pubmed · Sep 26, 2025
Tuberculosis (TB) is a serious infectious disease that remains a global health challenge. While chest X-rays (CXRs) are widely used for TB detection, manual interpretation can be subjective and time-consuming. Automated classification of CXRs into TB and non-TB cases can significantly support healthcare professionals in timely and accurate diagnosis. This paper introduces a hybrid deep learning approach for classifying CXR images. The solution is based on the CoAtNet framework, which combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). The model is pre-trained on the large-scale ImageNet dataset to ensure robust generalization across diverse images. The evaluation is conducted on the IN-CXR tuberculosis dataset from ICMR-NIRT, which contains a comprehensive collection of both normal and abnormal CXR images. The hybrid model achieves a binary classification accuracy of 86.39% and an ROC-AUC score of 93.79%, outperforming baseline models trained on this dataset that rely exclusively on either CNNs or ViTs. Furthermore, the integration of Local Interpretable Model-agnostic Explanations (LIME) enhances the interpretability of the model's predictions. This combination of reliable performance and transparent, interpretable results strengthens the model's role in AI-driven medical imaging research. Code will be made available upon request.
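
A fine-tuning setup of the kind described might look like the following sketch using timm; the CoAtNet identifier varies across timm releases, the two-class head matches the TB/non-TB task, and the training loop is omitted:

```python
import timm
import torch

# CoAtNet interleaves convolutional and transformer stages; timm ships
# ImageNet-pretrained variants (exact name depends on the timm version).
model = timm.create_model("coatnet_0_rw_224", pretrained=True, num_classes=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(4, 3, 224, 224)  # a batch of preprocessed CXR images
logits = model(x)                # (4, 2) TB vs. non-TB logits
```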

A Framework for Guiding DDPM-Based Reconstruction of Damaged CT Projections Using Traditional Methods.

Zhang Z, Yang Y, Yang M, Guo H, Yang J, Shen X, Wang J

pubmed · Sep 26, 2025
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a promising generative framework for sample synthesis, yet their limitations in detail preservation hinder practical applications in computed tomography (CT) image reconstruction. To address these constraints and enhance reconstruction quality from compromised CT projection data, this study proposes the Projection Hybrid Inverse Reconstruction Framework (PHIRF), a novel paradigm integrating conventional reconstruction methodologies with the DDPM architecture. The framework implements a dual-phase approach: initially, conventional CT reconstruction algorithms (e.g., filtered back projection (FBP), the algebraic reconstruction technique (ART), or maximum-likelihood expectation maximization (ML-EM)) are employed to generate preliminary reconstructions from incomplete projections, establishing low-dimensional feature representations. These features are subsequently parameterized and embedded as conditional constraints in the reverse diffusion process of the DDPM, thereby guiding the generative model to synthesize enhanced tomographic images with improved structural fidelity. Comprehensive evaluations were conducted on three representative ill-posed projection scenarios: limited-angle projections, sparse-view acquisitions, and low-dose measurements. Experimental results demonstrate that PHIRF achieves state-of-the-art performance across all compromised data conditions, particularly in preserving fine anatomical details and suppressing reconstruction artifacts. Quantitative metrics and visual assessments confirm the framework's consistent superiority over existing deep learning-based reconstruction approaches, substantiating its adaptability to diverse projection degradation patterns. This hybrid architecture establishes a new paradigm for combining physical prior knowledge with data-driven generative models in medical image reconstruction tasks.
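
One common way to realize this kind of conditioning, sketched below, is to feed the conventional reconstruction to the denoiser as an extra input channel and run standard DDPM ancestral sampling. The tiny stand-in network and hand-set schedule values are illustrative, not the PHIRF implementation:

```python
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy epsilon-predictor conditioned on a conventional reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                  # stand-in for the U-Net
            nn.Conv2d(2, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_t, cond):
        # cond: FBP/ART/ML-EM image appended as a conditioning channel
        return self.net(torch.cat([x_t, cond], dim=1))

def reverse_step(model, x_t, cond, alpha_t, alpha_bar_t, sigma_t):
    """One guided DDPM ancestral sampling step, x_t -> x_{t-1}."""
    eps = model(x_t, cond)
    mean = (x_t - (1 - alpha_t) / (1 - alpha_bar_t) ** 0.5 * eps) / alpha_t ** 0.5
    return mean + sigma_t * torch.randn_like(x_t)

x = torch.randn(1, 1, 64, 64)     # current noisy sample x_t
cond = torch.randn(1, 1, 64, 64)  # e.g., an FBP reconstruction
x_prev = reverse_step(CondDenoiser(), x, cond, 0.99, 0.5, 0.05)
```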

Intratumoral heterogeneity score enhances invasiveness prediction in pulmonary ground-glass nodules via stacking ensemble machine learning.

Zuo Z, Zeng Y, Deng J, Lin S, Qi W, Fan X, Feng Y

pubmed · Sep 26, 2025
The preoperative differentiation of adenocarcinoma in situ, minimally invasive adenocarcinoma, and invasive adenocarcinoma using computed tomography (CT) is crucial for guiding clinical management decisions. However, accurately classifying ground-glass nodules poses a significant challenge. Incorporating quantitative intratumoral heterogeneity scores may improve the accuracy of this ternary classification. In this multicenter retrospective study, we developed ternary classification models by leveraging insights from both base and stacking ensemble machine learning models, incorporating intratumoral heterogeneity scores along with clinical-radiological features to distinguish adenocarcinoma in situ, minimally invasive adenocarcinoma, and invasive adenocarcinoma. The machine learning models were trained, and final model selection depended on maximizing the macro-average area under the curve (macro-AUC) in both the internal and external validation sets. Data from 802 patients from three centers were divided in a 7:3 ratio into a training set (n = 477) and an internal test set (n = 205), with an additional external validation set comprising 120 patients. The stacking classifier exhibited superior performance relative to the other models, achieving macro-AUC values of 0.7850 and 0.7717 for the internal and external validation sets, respectively. Moreover, an interpretability analysis utilizing Shapley Additive Explanations (SHAP) identified four key features for this ternary classification: intratumoral heterogeneity score, nodule size, nodule type, and age. The stacking classifier, recognized as the optimal algorithm for integrating the intratumoral heterogeneity score and clinical-radiological features, effectively served as a ternary classification model for assessing the invasiveness of lung adenocarcinoma on chest CT images. Our stacking classifier integrated intratumoral heterogeneity scores and clinical-radiological features to improve the ternary classification of lung adenocarcinoma invasiveness (adenocarcinoma in situ/minimally invasive adenocarcinoma/invasive adenocarcinoma), aiding precise diagnosis and clinical decision-making for ground-glass nodules. The intratumoral heterogeneity score effectively assessed the invasiveness of lung adenocarcinoma. The stacking classifier outperformed other methods for this ternary classification task. Intratumoral heterogeneity score, nodule size, nodule type, and age predict invasiveness.
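
The stacking construction itself is standard. A minimal scikit-learn sketch with synthetic stand-in data follows; the paper's actual base learners and features are not specified here, so the estimators below are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for ITH score + clinical-radiological features.
X, y = make_classification(n_samples=600, n_features=10, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Out-of-fold base-learner probabilities feed a logistic meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba", cv=5,
)
stack.fit(X_tr, y_tr)                  # classes stand in for AIS / MIA / IAC
proba = stack.predict_proba(X_te)
print(roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))
```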

Theranostics in nuclear medicine: the era of precision oncology.

Gandhi N, Alaseem AM, Deshmukh R, Patel A, Alsaidan OA, Fareed M, Alasiri G, Patel S, Prajapati B

pubmed · Sep 26, 2025
Theranostics represents a transformative advancement in nuclear medicine by integrating molecular imaging and targeted radionuclide therapy within the paradigm of personalized oncology. This review elucidates the historical evolution and contemporary clinical applications of theranostics, emphasizing its pivotal role in precision cancer management. The theranostic approach involves the coupling of diagnostic and therapeutic radionuclides that target identical molecular biomarkers, enabling simultaneous visualization and treatment of malignancies such as neuroendocrine tumors (NETs), prostate cancer, and differentiated thyroid carcinoma. Key theranostic radiopharmaceutical pairs, including Gallium-68-labeled DOTA-Tyr3-octreotate (Ga-68-DOTATATE) with Lutetium-177-labeled DOTA-Tyr3-octreotate (Lu-177-DOTATATE), and Gallium-68-labeled Prostate-Specific Membrane Antigen (Ga-68-PSMA) with Lutetium-177-labeled Prostate-Specific Membrane Antigen (Lu-177-PSMA), exemplify the "see-and-treat" principle central to this modality. This article further explores critical molecular targets such as somatostatin receptor subtype 2, prostate-specific membrane antigen, human epidermal growth factor receptor 2, CD20, and C-X-C chemokine receptor type 4, along with design principles for radiopharmaceuticals that optimize target specificity while minimizing off-target toxicity. Advances in imaging platforms, including positron emission tomography/computed tomography (PET/CT), single-photon emission computed tomography/CT (SPECT/CT), and hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), have been instrumental in accurate dosimetry, therapeutic response assessment, and adaptive treatment planning. Integration of artificial intelligence (AI) and radiomics holds promise for enhanced image segmentation, predictive modeling, and individualized dosimetric planning. The review also addresses regulatory, manufacturing, and economic considerations, including guidelines from the United States Food and Drug Administration (USFDA) and European Medicines Agency (EMA), Good Manufacturing Practice (GMP) standards, and reimbursement frameworks, which collectively influence global adoption of theranostics. In summary, theranostics is poised to become a cornerstone of next-generation oncology, catalyzing a paradigm shift toward biologically driven, real-time personalized cancer care that seamlessly links diagnosis and therapy.

Automatic Body Region Classification in CT Scans Using Deep Learning.

Golzan M, Lee H, Ngatched TMN, Zhang L, Michalak M, Chow V, Beg MF, Popuri K

pubmed · Sep 26, 2025
Accurate classification of anatomical regions in computed tomography (CT) scans is essential for optimizing downstream diagnostic and analytic workflows in medical imaging. We demonstrate the high performance that deep learning (DL) algorithms can achieve in classifying body regions in CT images acquired under various protocols. Our model was trained using a dataset consisting of 5485 anonymized Neuroimaging Informatics Technology Initiative (NIfTI) CT scans collected from 45 different health centers. The dataset was split into 3290 scans for training, 1097 scans for validation, and 1098 scans for testing. Each body CT scan was classified into six distinct classes covering the whole body: chest, abdomen, pelvis, chest and abdomen, abdomen and pelvis, and chest and abdomen and pelvis. The DL model achieved an accuracy, precision, recall, and F1-score of 97.53% (95% CI: 96.62%, 98.45%), 97.56% (95% CI: 96.6%, 98.4%), 97.6% (95% CI: 96.7%, 98.5%), and 97.56% (95% CI: 96.6%, 98.4%), respectively, in identifying different body parts. These findings demonstrate the strength of our approach in annotating CT images across wide variation in both acquisition protocols and patient demographics. This study underlines the potential that DL holds for medical imaging and, in particular, for the automation of body region classification in CT. Our findings confirm that these models could be implemented in clinical routines to improve diagnostic efficiency and consistency.
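
Point metrics with 95% CIs of the kind reported can be reproduced on any prediction set with a simple bootstrap. The sketch below assumes integer-coded labels for the six region classes; it is not the authors' evaluation code:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

REGIONS = ["chest", "abdomen", "pelvis",
           "chest+abdomen", "abdomen+pelvis", "chest+abdomen+pelvis"]

def report(y_true, y_pred, n_boot=2000, seed=0):
    """Macro metrics plus a bootstrap 95% CI for accuracy over test scans."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        accs.append(accuracy_score(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(accs, [2.5, 97.5])
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro",
                                                  zero_division=0)
    return {"accuracy_95ci": (lo, hi), "precision": p, "recall": r, "f1": f1}

# toy usage sized like the 1098-scan test split
y_true = np.random.default_rng(1).integers(0, len(REGIONS), 1098)
print(report(y_true, y_true))  # perfect predictions -> CI collapses at 1.0
```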

Leveraging multi-modal foundation model image encoders to enhance brain MRI-based headache classification.

Rafsani F, Sheth D, Che Y, Shah J, Siddiquee MMR, Chong CD, Nikolova S, Ross K, Dumkrieger G, Li B, Wu T, Schwedt TJ

pubmed · Sep 26, 2025
Headaches are a nearly universal human experience, traditionally diagnosed based solely on symptoms. Recent advances in imaging techniques and artificial intelligence (AI) have enabled the development of automated headache detection systems, which can enhance clinical diagnosis, especially when symptom-based evaluations are insufficient. Current AI models often require extensive data, limiting their clinical applicability where data availability is low. However, deep learning models, particularly pre-trained models fine-tuned on smaller, targeted datasets, can potentially overcome this limitation. By leveraging BioMedCLIP, a pre-trained foundation model combining a vision transformer (ViT) image encoder with a PubMedBERT text encoder, we fine-tuned the pre-trained ViT model for the specific purpose of classifying headaches and detecting biomarkers from brain MRI data. The dataset consisted of 721 individuals: 424 healthy controls (HC) from the IXI dataset and 297 local participants, including migraine sufferers (n = 96), individuals with acute post-traumatic headache (APTH, n = 48), persistent post-traumatic headache (PPTH, n = 49), and additional HC (n = 104). The model achieved high accuracy across multiple balanced test sets, including 89.96% accuracy for migraine versus HC, 88.13% for APTH versus HC, and 83.13% for PPTH versus HC, all validated through five-fold cross-validation for robustness. Brain regions identified by Gradient-weighted Class Activation Mapping (Grad-CAM) analysis as responsible for migraine classification included the postcentral cortex, supramarginal gyrus, superior temporal cortex, and precuneus cortex; for APTH, the rostral middle frontal and precentral cortices; and, for PPTH, the cerebellar cortex and precentral cortex. To our knowledge, this is the first study to leverage a multimodal biomedical foundation model in the context of headache classification and biomarker detection using structural MRI, offering complementary insights into the causes and brain changes associated with headache disorders.
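
Loading the BiomedCLIP image encoder for this kind of fine-tuning is typically done through open_clip. The hub identifier below follows the public BiomedCLIP model card; the 512-dimensional embedding and the binary head are assumptions for illustration:

```python
import torch
import torch.nn as nn
import open_clip

# BiomedCLIP checkpoint on the Hugging Face hub (name per its model card).
model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224")
encoder = model.visual                   # the ViT image encoder

head = nn.Linear(512, 2)                 # e.g., migraine vs. HC logits
x = torch.randn(4, 3, 224, 224)          # preprocessed MRI slices
with torch.no_grad():
    feats = encoder(x)                   # (4, 512) image embeddings
logits = head(feats)                     # fine-tune head (and/or encoder)
```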

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

pubmed · Sep 26, 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark the performance of several state-of-the-art object detection algorithms for localizing the site of injury and semantic segmentation models for labeling the anatomy, enabling comparison and the creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 model achieves the highest accuracy on unseen porcine anatomy, with a mean Dice score of 0.587, while SAMed achieves the highest mean Dice score when generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures for assessing anatomical markers in the spinal cord for methodology development and clinical applications.
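
Reproducing the two benchmark tracks is straightforward with off-the-shelf tooling. In the sketch below, the dataset YAML and the number of anatomical classes are placeholders rather than values from the paper:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50
from ultralytics import YOLO

# Injury localization: fine-tune YOLOv8 on B-mode frames with injury boxes.
det = YOLO("yolov8n.pt")
det.train(data="spinal_cord_us.yaml", epochs=100, imgsz=640)  # YAML assumed

# Anatomy labeling: DeepLabv3 with one output channel per anatomical class.
seg = deeplabv3_resnet50(weights=None, num_classes=4)     # class count assumed
mask_logits = seg(torch.randn(1, 3, 512, 512))["out"]     # (1, 4, 512, 512)
```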

NextGen lung disease diagnosis with explainable artificial intelligence.

Veeramani N, S A RS, S SP, S S, Jayaraman P

pubmed · Sep 26, 2025
The COVID-19 pandemic has been the most catastrophic global health emergency of the 21st century, resulting in hundreds of millions of reported cases and five million deaths. Chest X-ray (CXR) images are highly valuable for the early detection of lung diseases when monitoring and investigating pulmonary disorders such as COVID-19, pneumonia, and tuberculosis. These CXR images offer crucial features about the lung's health condition and can assist in making accurate diagnoses. Manual interpretation of CXR images is challenging even for expert radiologists due to overlapping radiological features. Artificial intelligence (AI)-based image processing has therefore taken on a central role in healthcare, yet the predictions of an AI model can be difficult to trust. This can be addressed by implementing explainable artificial intelligence (XAI) tools that transform a black-box AI into a glass-box model. In this research article, we propose a novel XAI-TRANS model with Inception-based transfer learning that addresses the challenge of overlapping features in multiclass classification of CXR images, together with an improved U-Net lung segmentation model dedicated to extracting the radiological features used for classification. The proposed approach achieved a maximum precision of 98% and accuracy of 97% in multiclass lung disease classification. Leveraging XAI techniques, specifically LIME and Grad-CAM, yielded an evident improvement of 4.75% and provides detailed and accurate explanations for the model's predictions.
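
Grad-CAM itself reduces to a pair of hooks. The sketch below uses a generic ImageNet ResNet-50 as a stand-in backbone (XAI-TRANS is Inception-based) to show how the class-activation heatmap is formed:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights="DEFAULT").eval()  # stand-in for the trained classifier
feats, grads = {}, {}
layer = model.layer4                        # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)             # a preprocessed CXR image
model(x)[0].max().backward()                # backprop the top-class logit

w = grads["a"].mean(dim=(2, 3), keepdim=True)   # per-channel importance
cam = F.relu((w * feats["a"]).sum(dim=1))       # (1, 7, 7) raw heatmap
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0]
```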