
Enhanced CoAtNet based hybrid deep learning architecture for automated tuberculosis detection in human chest X-rays.

Siddharth G, Ambekar A, Jayakumar N

PubMed | Sep 26, 2025
Tuberculosis (TB) is a serious infectious disease that remains a global health challenge. While chest X-rays (CXRs) are widely used for TB detection, manual interpretation can be subjective and time-consuming. Automated classification of CXRs into TB and non-TB cases can significantly support healthcare professionals in timely and accurate diagnosis. This paper introduces a hybrid deep learning approach for classifying CXR images. The solution is based on the CoAtNet framework, which combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). The model is pre-trained on the large-scale ImageNet dataset to ensure robust generalization across diverse images. The evaluation is conducted on the IN-CXR tuberculosis dataset from ICMR-NIRT, which contains a comprehensive collection of CXR images of both normal and abnormal categories. The hybrid model achieves a binary classification accuracy of 86.39% and an ROC-AUC score of 93.79%, outperforming tested baseline models that rely exclusively on either CNNs or ViTs when trained on this dataset. Furthermore, the integration of Local Interpretable Model-agnostic Explanations (LIME) enhances the interpretability of the model's predictions. This combination of reliable performance and transparent, interpretable results strengthens the model's role in AI-driven medical imaging research. Code will be made available upon request.
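
A minimal sketch of the fine-tuning setup described above, assuming timm's CoAtNet implementation; the variant name, optimizer, and training loop are illustrative choices, not the authors' exact configuration:

```python
import timm
import torch
import torch.nn as nn

# Hypothetical CoAtNet variant; timm ships several ImageNet-pretrained ones.
model = timm.create_model("coatnet_0_rw_224", pretrained=True, num_classes=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised fine-tuning step on a batch of CXR images (B, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)  # TB vs. non-TB logits
    loss.backward()
    optimizer.step()
    return loss.item()
```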

A Framework for Guiding DDPM-Based Reconstruction of Damaged CT Projections Using Traditional Methods.

Zhang Z, Yang Y, Yang M, Guo H, Yang J, Shen X, Wang J

PubMed | Sep 26, 2025
Denoising Diffusion Probabilistic Models (DDPM) have emerged as a promising generative framework for sample synthesis, yet their limitations in detail preservation hinder practical applications in computed tomography (CT) image reconstruction. To address these constraints and enhance reconstruction quality from compromised CT projection data, this study proposes the Projection Hybrid Inverse Reconstruction Framework (PHIRF), a novel paradigm integrating conventional reconstruction methodologies with the DDPM architecture. The framework implements a dual-phase approach: initially, conventional CT reconstruction algorithms (e.g., filtered back projection (FBP), the algebraic reconstruction technique (ART), and maximum-likelihood expectation maximization (ML-EM)) are employed to generate preliminary reconstructions from incomplete projections, establishing low-dimensional feature representations. These features are subsequently parameterized and embedded as conditional constraints in the reverse diffusion process of the DDPM, thereby guiding the generative model to synthesize enhanced tomographic images with improved structural fidelity. Comprehensive evaluations were conducted on three representative ill-posed projection scenarios: limited-angle projections, sparse-view acquisitions, and low-dose measurements. Experimental results demonstrate that PHIRF achieves state-of-the-art performance across all compromised data conditions, particularly in preserving fine anatomical details and suppressing reconstruction artifacts. Quantitative metrics and visual assessments confirm the framework's consistent superiority over existing deep learning-based reconstruction approaches, substantiating its adaptability to diverse projection degradation patterns. This hybrid architecture establishes a new paradigm for combining physical prior knowledge with data-driven generative models in medical image reconstruction tasks.
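
The core conditioning idea can be sketched as a single conditional reverse-diffusion step in which a conventional reconstruction (here an FBP image) is concatenated to the denoiser input; `denoiser`, the noise schedules, and the concatenation scheme are assumptions for illustration, not PHIRF's published implementation:

```python
import torch

def reverse_step(x_t, t, fbp_prior, denoiser, alphas, alphas_bar, betas):
    """One conditional DDPM reverse step x_t -> x_{t-1}.

    x_t and fbp_prior have shape (B, 1, H, W); t is an integer timestep;
    alphas, alphas_bar, and betas are the usual DDPM schedule tensors.
    """
    # Condition the noise predictor on the conventional reconstruction
    # by channel-wise concatenation (one common conditioning scheme).
    eps = denoiser(torch.cat([x_t, fbp_prior], dim=1), t)

    # Standard DDPM posterior mean with sigma_t^2 = beta_t.
    coef = betas[t] / torch.sqrt(1.0 - alphas_bar[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])

    if t == 0:
        return mean  # final step is noise-free
    return mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
```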

Intratumoral heterogeneity score enhances invasiveness prediction in pulmonary ground-glass nodules via stacking ensemble machine learning.

Zuo Z, Zeng Y, Deng J, Lin S, Qi W, Fan X, Feng Y

PubMed | Sep 26, 2025
The preoperative differentiation of adenocarcinoma in situ, minimally invasive adenocarcinoma, and invasive adenocarcinoma using computed tomography (CT) is crucial for guiding clinical management decisions. However, accurately classifying ground-glass nodules poses a significant challenge. Incorporating quantitative intratumoral heterogeneity scores may improve the accuracy of this ternary classification. In this multicenter retrospective study, we developed ternary classification models by leveraging insights from both base and stacking ensemble machine learning models, incorporating intratumoral heterogeneity scores along with clinical-radiological features to distinguish adenocarcinoma in situ, minimally invasive adenocarcinoma, and invasive adenocarcinoma. The machine learning models were trained, and the final model was selected by maximizing the macro-average area under the curve (macro-AUC) in both the internal and external validation sets. Data from 802 patients from three centers were divided into a training set (n = 477) and an internal test set (n = 205) in a 7:3 ratio, with an additional external validation set comprising 120 patients. The stacking classifier exhibited superior performance relative to the other models, achieving macro-AUC values of 0.7850 and 0.7717 for the internal and external validation sets, respectively. Moreover, an interpretability analysis utilizing Shapley Additive Explanations (SHAP) identified four key features for this ternary classification: the intratumoral heterogeneity score, nodule size, nodule type, and age. The stacking classifier, recognized as the optimal algorithm for integrating the intratumoral heterogeneity score and clinical-radiological features, effectively served as a ternary classification model for assessing the invasiveness of lung adenocarcinoma on chest CT images. Our stacking classifier integrated intratumoral heterogeneity scores and clinical-radiological features to improve the ternary classification of lung adenocarcinoma invasiveness (adenocarcinoma in situ/minimally invasive adenocarcinoma/invasive adenocarcinoma), aiding precise diagnosis and clinical decision-making for ground-glass nodules. The intratumoral heterogeneity score effectively assessed the invasiveness of lung adenocarcinoma. The stacking classifier outperformed other methods for this ternary classification task. Intratumoral heterogeneity score, nodule size, nodule type, and age predict invasiveness.
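
The stacking design can be illustrated with scikit-learn's StackingClassifier; the base learners shown are placeholders, since the paper's exact model zoo and tuning are not reproduced here:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Hypothetical feature columns: intratumoral heterogeneity score,
# nodule size, nodule type (encoded), and age, per the SHAP analysis.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",  # meta-learner sees class probabilities
    cv=5,
)

# After stack.fit(X_train, y_train), the macro-AUC can be computed with
# sklearn.metrics.roc_auc_score(y_val, stack.predict_proba(X_val),
#                               multi_class="ovr", average="macro")
```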

Theranostics in nuclear medicine: the era of precision oncology.

Gandhi N, Alaseem AM, Deshmukh R, Patel A, Alsaidan OA, Fareed M, Alasiri G, Patel S, Prajapati B

PubMed | Sep 26, 2025
Theranostics represents a transformative advancement in nuclear medicine by integrating molecular imaging and targeted radionuclide therapy within the paradigm of personalized oncology. This review elucidates the historical evolution and contemporary clinical applications of theranostics, emphasizing its pivotal role in precision cancer management. The theranostic approach involves the coupling of diagnostic and therapeutic radionuclides that target identical molecular biomarkers, enabling simultaneous visualization and treatment of malignancies such as neuroendocrine tumors (NETs), prostate cancer, and differentiated thyroid carcinoma. Key theranostic radiopharmaceutical pairs, including Gallium-68-labeled DOTA-Tyr3-octreotate (Ga-68-DOTATATE) with Lutetium-177-labeled DOTA-Tyr3-octreotate (Lu-177-DOTATATE), and Gallium-68-labeled Prostate-Specific Membrane Antigen (Ga-68-PSMA) with Lutetium-177-labeled Prostate-Specific Membrane Antigen (Lu-177-PSMA), exemplify the "see-and-treat" principle central to this modality. This article further explores critical molecular targets such as somatostatin receptor subtype 2, prostate-specific membrane antigen, human epidermal growth factor receptor 2, CD20, and C-X-C chemokine receptor type 4, along with design principles for radiopharmaceuticals that optimize target specificity while minimizing off-target toxicity. Advances in imaging platforms, including positron emission tomography/computed tomography (PET/CT), single-photon emission computed tomography/CT (SPECT/CT), and hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), have been instrumental in accurate dosimetry, therapeutic response assessment, and adaptive treatment planning. Integration of artificial intelligence (AI) and radiomics holds promise for enhanced image segmentation, predictive modeling, and individualized dosimetric planning. The review also addresses regulatory, manufacturing, and economic considerations, including guidelines from the United States Food and Drug Administration (USFDA) and European Medicines Agency (EMA), Good Manufacturing Practice (GMP) standards, and reimbursement frameworks, which collectively influence global adoption of theranostics. In summary, theranostics is poised to become a cornerstone of next-generation oncology, catalyzing a paradigm shift toward biologically driven, real-time personalized cancer care that seamlessly links diagnosis and therapy.

Automatic Body Region Classification in CT Scans Using Deep Learning.

Golzan M, Lee H, Ngatched TMN, Zhang L, Michalak M, Chow V, Beg MF, Popuri K

PubMed | Sep 26, 2025
Accurate classification of anatomical regions in computed tomography (CT) scans is essential for optimizing downstream diagnostic and analytic workflows in medical imaging. We demonstrate the high performance that deep learning (DL) algorithms can achieve in classifying body regions in CT images acquired under various protocols. Our model was trained using a dataset of 5485 anonymized Neuroimaging Informatics Technology Initiative (NIfTI) CT scans collected from 45 different health centers. The dataset was split into 3290 scans for training, 1097 scans for validation, and 1098 scans for testing. Each body CT scan was classified into one of six classes covering the whole body: chest, abdomen, pelvis, chest and abdomen, abdomen and pelvis, and chest and abdomen and pelvis. The DL model achieved an accuracy, precision, recall, and F1-score of 97.53% (95% CI: 96.62%, 98.45%), 97.56% (95% CI: 96.6%, 98.4%), 97.6% (95% CI: 96.7%, 98.5%), and 97.56% (95% CI: 96.6%, 98.4%), respectively, in identifying the different body regions. These findings demonstrate the robustness of our approach in annotating CT images across wide variation in both acquisition protocols and patient demographics. This study underlines the potential that DL holds for medical imaging and, in particular, for automating body region classification in CT. Our findings confirm that these models could be implemented in clinical routines to improve diagnostic efficiency and consistency.
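
A sketch of the inference path under stated assumptions; the intensity windowing, input shape, and network are placeholders, not the authors' pipeline:

```python
import nibabel as nib
import numpy as np
import torch

CLASSES = ["chest", "abdomen", "pelvis",
           "chest+abdomen", "abdomen+pelvis", "chest+abdomen+pelvis"]

def classify_region(nifti_path: str, model: torch.nn.Module) -> str:
    """Load a NIfTI CT volume and assign one of the six region classes."""
    volume = nib.load(nifti_path).get_fdata().astype(np.float32)
    # Clip to a +/- 1000 HU window and normalize (illustrative choice).
    volume = np.clip(volume, -1000, 1000) / 1000.0
    x = torch.from_numpy(volume)[None, None]  # (1, 1, D, H, W) for a 3D net
    with torch.no_grad():
        logits = model(x)
    return CLASSES[int(logits.argmax(dim=1))]
```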

Leveraging multi-modal foundation model image encoders to enhance brain MRI-based headache classification.

Rafsani F, Sheth D, Che Y, Shah J, Siddiquee MMR, Chong CD, Nikolova S, Ross K, Dumkrieger G, Li B, Wu T, Schwedt TJ

PubMed | Sep 26, 2025
Headaches are a nearly universal human experience, traditionally diagnosed based solely on symptoms. Recent advances in imaging techniques and artificial intelligence (AI) have enabled the development of automated headache detection systems, which can enhance clinical diagnosis, especially when symptom-based evaluations are insufficient. Current AI models often require extensive data, limiting their clinical applicability where data availability is low. However, pre-trained deep learning models fine-tuned with smaller, targeted datasets can potentially overcome this limitation. By leveraging BioMedCLIP, a pre-trained foundation model combining a vision transformer (ViT) image encoder with a PubMedBERT text encoder, we fine-tuned the pre-trained ViT model for the specific purpose of classifying headaches and detecting biomarkers from brain MRI data. The dataset consisted of 721 individuals: 424 healthy controls (HC) from the IXI dataset and 297 local participants, including migraine sufferers (n = 96), individuals with acute post-traumatic headache (APTH, n = 48), persistent post-traumatic headache (PPTH, n = 49), and additional HC (n = 104). The model achieved high accuracy across multiple balanced test sets, including 89.96% accuracy for migraine versus HC, 88.13% for APTH versus HC, and 83.13% for PPTH versus HC, all validated through five-fold cross-validation for robustness. Brain regions identified by Gradient-weighted Class Activation Mapping analysis as responsible for migraine classification included the postcentral cortex, supramarginal gyrus, superior temporal cortex, and precuneus cortex; for APTH, the rostral middle frontal and precentral cortices; and for PPTH, the cerebellar cortex and precentral cortex. To our knowledge, this is the first study to leverage a multimodal biomedical foundation model in the context of headache classification and biomarker detection using structural MRI, offering complementary insights into the causes and brain changes associated with headache disorders.
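
The fine-tuning recipe can be sketched with open_clip, using the hub ID of Microsoft's released BioMedCLIP checkpoint; the classification head, embedding dimension, and training regime are assumptions:

```python
import open_clip
import torch
import torch.nn as nn

# Released BioMedCLIP checkpoint (ViT-B/16 image tower + PubMedBERT).
model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
)

class HeadacheClassifier(nn.Module):
    """Reuse the BioMedCLIP image encoder with a small linear head."""

    def __init__(self, encoder, embed_dim=512, n_classes=2):
        super().__init__()
        self.encoder = encoder  # CLIP projection dim assumed to be 512
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, images):
        feats = self.encoder.encode_image(images)
        return self.head(feats)

clf = HeadacheClassifier(model)  # fine-tune end-to-end or head-only
```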

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

PubMed | Sep 26, 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark several state-of-the-art object detection algorithms for localizing the site of injury and semantic segmentation models for labeling the anatomy, enabling comparison and the creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human spinal cord ultrasound images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 model achieves the highest accuracy on unseen porcine anatomy, with a mean Dice score of 0.587, while SAMed achieves the highest mean Dice score when generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures for assessing anatomical markers in the spinal cord for methodology development and clinical applications.
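
For reference, the Dice overlap used to compare the segmentation models is standard; a minimal binary-mask implementation might look like this:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```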

NextGen lung disease diagnosis with explainable artificial intelligence.

Veeramani N, S A RS, S SP, S S, Jayaraman P

PubMed | Sep 26, 2025
The COVID-19 pandemic has been the most catastrophic global health emergency of the 21st century, resulting in hundreds of millions of reported cases and five million deaths. Chest X-ray (CXR) images are highly valuable for the early detection of lung diseases when monitoring and investigating pulmonary disorders such as COVID-19, pneumonia, and tuberculosis. These CXR images offer crucial features about the lung's health condition and can assist in making accurate diagnoses. Manual interpretation of CXR images is challenging even for expert radiologists due to overlapping radiological features, so artificial intelligence (AI) based image processing has taken on a central role in healthcare. However, the predictions of an AI model can be difficult to trust; this can be resolved by implementing explainable artificial intelligence (XAI) tools that transform a black-box AI into a glass-box model. In this research article, we propose a novel XAI-TRANS model with Inception-based transfer learning that addresses the challenge of overlapping features in multiclass classification of CXR images, along with an improved U-Net lung segmentation model dedicated to extracting the radiological features used for classification. The proposed approach achieved a maximum precision of 98% and accuracy of 97% in multiclass lung disease classification. Leveraging XAI techniques, specifically LIME and Grad-CAM, provides detailed and accurate explanations for the model's predictions, with an evident improvement of 4.75%.
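
A compact Grad-CAM sketch of the kind used to explain such CXR predictions; the hook-based implementation and choice of target layer are illustrative, not the paper's code:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    """Return a (1, 1, H, W) heatmap for one image x of shape (1, C, H, W)."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))
    logits = model(x)
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    # Weight each feature map by its globally averaged gradient, then ReLU.
    w = grads["a"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)
```

In practice the target layer is usually the last convolutional block of the backbone, where spatial detail and semantic abstraction are best balanced.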

Secure and fault tolerant cloud based framework for medical image storage and retrieval in a distributed environment.

Amaithi Rajan A, V V, M A, R PK

PubMed | Sep 26, 2025
In the evolving field of healthcare, centralized cloud-based medical image retrieval faces challenges related to security, availability, and adversarial threats. Existing deep learning-based solutions improve retrieval but remain vulnerable to adversarial attacks and quantum threats, necessitating a shift to more secure distributed cloud solutions. This article proposes SFMedIR, a secure and fault-tolerant medical image retrieval framework built on adversarial-attack-resistant federated learning for hash code generation, using a ConvNeXt-based model to improve accuracy and generalizability. The framework integrates quantum-chaos-based encryption for security, dynamic threshold-based shadow storage for fault tolerance, and a distributed cloud architecture to mitigate single points of failure. Unlike conventional methods, this approach significantly improves the security and availability of cloud-based medical image retrieval systems, providing a resilient and efficient solution for healthcare applications. The framework is validated on brain MRI and kidney CT datasets, achieving a 60-70% improvement in retrieval accuracy for adversarial queries and an overall 90% retrieval accuracy, outperforming existing models by 5-10%. The results demonstrate superior performance in terms of both security and retrieval efficiency, making this framework a valuable contribution to the future of secure medical image management.
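
Setting aside the federated training and encryption layers, retrieval over learned binary hash codes typically reduces to Hamming-distance ranking; a minimal sketch under that assumption:

```python
import numpy as np

def hamming_retrieve(query_code: np.ndarray,
                     db_codes: np.ndarray,
                     top_k: int = 10) -> np.ndarray:
    """query_code: (L,) bits; db_codes: (N, L) bits. Returns top-k indices.

    Ranks stored images by the number of differing bits against the query.
    """
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)[:top_k]
```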

A novel deep neural architecture for efficient and scalable multidomain image classification.

Nobel SMN, Tasir MAM, Noor H, Monowar MM, Hamid MA, Sayeed MS, Islam MR, Mridha MF, Dey N

PubMed | Sep 26, 2025
Deep learning has significantly advanced the field of computer vision; however, developing models that generalize effectively across diverse image domains remains a major research challenge. In this study, we introduce DeepFreqNet, a novel deep neural architecture specifically designed for high-performance multi-domain image classification. The innovative aspect of DeepFreqNet lies in its combination of three powerful components: multi-scale feature extraction for capturing patterns at different resolutions, depthwise separable convolutions for enhanced computational efficiency, and residual connections to maintain gradient flow and accelerate convergence. This hybrid design improves the architecture's ability to learn discriminative features and ensures scalability across domains with varying data complexities. Unlike traditional transfer learning models, DeepFreqNet adapts seamlessly to diverse datasets without requiring extensive reconfiguration. Experimental results from nine benchmark datasets, including MRI tumor classification, blood cell classification, and sign language recognition, demonstrate superior performance, achieving classification accuracies between 98.96% and 99.97%. These results highlight the effectiveness and versatility of DeepFreqNet, showcasing a significant improvement over existing state-of-the-art methods and establishing it as a robust solution for real-world image classification challenges.
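
An illustrative block combining the three named ingredients (multi-scale branches, depthwise separable convolutions, and a residual connection); channel counts and kernel sizes are assumptions, not DeepFreqNet's published configuration:

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution."""

    def __init__(self, ch, k):
        super().__init__()
        self.depthwise = nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch)
        self.pointwise = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class MultiScaleResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Parallel branches at different receptive fields (multi-scale).
        self.branches = nn.ModuleList(
            [SeparableConv(ch, k) for k in (3, 5, 7)])
        self.fuse = nn.Conv2d(3 * ch, ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.fuse(y))  # residual connection
```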