Page 8 of 99982 results

Theranostics in nuclear medicine: the era of precision oncology.

Gandhi N, Alaseem AM, Deshmukh R, Patel A, Alsaidan OA, Fareed M, Alasiri G, Patel S, Prajapati B

PubMed · Sep 26 2025
Theranostics represents a transformative advancement in nuclear medicine by integrating molecular imaging and targeted radionuclide therapy within the paradigm of personalized oncology. This review elucidates the historical evolution and contemporary clinical applications of theranostics, emphasizing its pivotal role in precision cancer management. The theranostic approach involves the coupling of diagnostic and therapeutic radionuclides that target identical molecular biomarkers, enabling simultaneous visualization and treatment of malignancies such as neuroendocrine tumors (NETs), prostate cancer, and differentiated thyroid carcinoma. Key theranostic radiopharmaceutical pairs, including Gallium-68-labeled DOTA-Tyr3-octreotate (Ga-68-DOTATATE) with Lutetium-177-labeled DOTA-Tyr3-octreotate (Lu-177-DOTATATE), and Gallium-68-labeled Prostate-Specific Membrane Antigen (Ga-68-PSMA) with Lutetium-177-labeled Prostate-Specific Membrane Antigen (Lu-177-PSMA), exemplify the "see-and-treat" principle central to this modality. This article further explores critical molecular targets such as somatostatin receptor subtype 2, prostate-specific membrane antigen, human epidermal growth factor receptor 2, CD20, and C-X-C chemokine receptor type 4, along with design principles for radiopharmaceuticals that optimize target specificity while minimizing off-target toxicity. Advances in imaging platforms, including positron emission tomography/computed tomography (PET/CT), single-photon emission computed tomography/CT (SPECT/CT), and hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), have been instrumental in accurate dosimetry, therapeutic response assessment, and adaptive treatment planning. Integration of artificial intelligence (AI) and radiomics holds promise for enhanced image segmentation, predictive modeling, and individualized dosimetric planning. 
The review also addresses regulatory, manufacturing, and economic considerations, including guidelines from the United States Food and Drug Administration (USFDA) and European Medicines Agency (EMA), Good Manufacturing Practice (GMP) standards, and reimbursement frameworks, which collectively influence global adoption of theranostics. In summary, theranostics is poised to become a cornerstone of next-generation oncology, catalyzing a paradigm shift toward biologically driven, real-time personalized cancer care that seamlessly links diagnosis and therapy.

MedIENet: medical image enhancement network based on conditional latent diffusion model.

Yuan W, Feng Y, Wen T, Luo G, Liang J, Sun Q, Liang S

PubMed · Sep 26 2025
Deep learning necessitates a substantial amount of data, yet obtaining sufficient medical images is difficult due to concerns about patient privacy and high collection costs. To address this issue, we propose a conditional latent diffusion model-based medical image enhancement network, referred to as the Medical Image Enhancement Network (MedIENet). To meet the rigorous standards required for image generation in the medical imaging field, a multi-attention module is incorporated into the encoder of the denoising U-Net backbone. Additionally, Rotary Position Embedding (RoPE) is integrated into the self-attention module to effectively capture positional information, while cross-attention is utilised to integrate class information into the diffusion process. MedIENet is evaluated on three datasets: Chest CT-Scan images, Chest X-Ray Images (Pneumonia), and a Tongue dataset. Compared to existing methods, MedIENet demonstrates superior performance in both fidelity and diversity of the generated images. Experimental results indicate that for downstream classification tasks using ResNet50, the Area Under the Receiver Operating Characteristic curve (AUROC) achieved with real data alone is 0.76 for the Chest CT-Scan images dataset, 0.87 for the Chest X-Ray Images (Pneumonia) dataset, and 0.78 for the Tongue dataset. When using mixed data consisting of real data and generated data, the AUROC improves to 0.82, 0.94, and 0.82, respectively, reflecting increases of approximately 6%, 7%, and 4%. These findings indicate that the images generated by MedIENet can enhance the performance of downstream classification tasks, providing an effective solution to the scarcity of medical image training data.
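The abstract folds Rotary Position Embedding (RoPE) into the U-Net's self-attention to encode position. As a generic illustration of what RoPE does to a sequence of token embeddings (a minimal NumPy sketch, not the authors' implementation):

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    """Apply Rotary Position Embedding (RoPE) to a (seq_len, dim) array.

    Channel pairs are rotated by a position-dependent angle, so dot
    products between rotated queries and keys depend only on the
    relative offset between positions.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "RoPE needs an even embedding dimension"
    half = dim // 2
    # One frequency per channel pair, positions 0..seq_len-1.
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = np.outer(np.arange(seq_len), freqs)       # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # 2-D rotation of each (x1_i, x2_i) pair.
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=1)
```

Because each pair is only rotated, token norms are preserved and position 0 is left unchanged, which makes the transform easy to sanity-check.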

Secure and fault tolerant cloud based framework for medical image storage and retrieval in a distributed environment.

Amaithi Rajan A, V V, M A, R PK

PubMed · Sep 26 2025
In the evolving field of healthcare, centralized cloud-based medical image retrieval faces challenges related to security, availability, and adversarial threats. Existing deep learning-based solutions improve retrieval but remain vulnerable to adversarial attacks and quantum threats, necessitating a shift to more secure distributed cloud solutions. This article proposes SFMedIR, a secure and fault-tolerant medical image retrieval framework that incorporates adversarial-attack-resistant federated learning for hash code generation, utilizing a ConvNeXt-based model to improve accuracy and generalizability. The framework integrates quantum-chaos-based encryption for security, dynamic threshold-based shadow storage for fault tolerance, and a distributed cloud architecture to mitigate single points of failure. Unlike conventional methods, this approach significantly improves security and availability in cloud-based medical image retrieval systems, providing a resilient and efficient solution for healthcare applications. The framework is validated on Brain MRI and Kidney CT datasets, achieving a 60-70% improvement in retrieval accuracy for adversarial queries and an overall 90% retrieval accuracy, outperforming existing models by 5-10%. The results demonstrate superior performance in terms of both security and retrieval efficiency, making this framework a valuable contribution to the future of secure medical image management.
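The paper's quantum-chaos-based encryption scheme is not detailed in the abstract. As a loosely related, purely schematic illustration of the general chaos-cipher idea, here is a classical logistic-map keystream XOR (an assumption for illustration only, not SFMedIR's cipher, and not secure for real use):

```python
import numpy as np

def chaos_keystream(length, x0=0.7, r=3.99):
    """Derive a byte keystream from the logistic map x -> r*x*(1-x).

    The initial condition x0 acts as the key: tiny changes in x0
    produce a completely different stream (sensitive dependence).
    """
    x, out = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return np.array(out, dtype=np.uint8)

def chaos_xor(image_bytes, x0=0.7):
    """XOR image bytes with the chaotic keystream (self-inverse)."""
    ks = chaos_keystream(len(image_bytes), x0=x0)
    return np.bitwise_xor(image_bytes, ks)
```

Applying `chaos_xor` twice with the same key recovers the original bytes; real chaos-based image ciphers add permutation stages and couple several maps.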

Performance of artificial intelligence in automated measurement of patellofemoral joint parameters: a systematic review.

Zhan H, Zhao Z, Liang Q, Zheng J, Zhang L

PubMed · Sep 26 2025
The evaluation of patellofemoral joint parameters is essential for diagnosing patellar dislocation, yet manual measurements exhibit poor reproducibility and demonstrate significant variability dependent on clinician expertise. This systematic review aimed to evaluate the performance of artificial intelligence (AI) models in automatically measuring patellofemoral joint parameters. A comprehensive literature search of PubMed, Web of Science, Cochrane Library, and Embase databases was conducted from database inception through June 15, 2025. Two investigators independently performed study screening and data extraction, with methodological quality assessment based on the modified MINORS checklist. This systematic review is registered with PROSPERO. A narrative review was conducted to summarize the findings of the included studies. A total of 19 studies comprising 10,490 patients met the inclusion and exclusion criteria, with a mean age of 51.3 years and a mean female proportion of 56.8%. Among these, six studies developed AI models based on radiographic series, nine on CT imaging, and four on MRI. The results demonstrated excellent reliability, with intraclass correlation coefficients (ICCs) ranging from 0.900 to 0.940 for femoral anteversion angle, 0.910-0.920 for trochlear groove depth and 0.930-0.950 for tibial tuberosity-trochlear groove distance. Additionally, good reliability was observed for patellar height (ICCs: 0.880-0.985), sulcus angle (ICCs: 0.878-0.980), and patellar tilt angle (ICCs: 0.790-0.990). Notably, the AI system successfully detected trochlear dysplasia, achieving 88% accuracy, 79% sensitivity, 96% specificity, and an AUC of 0.88. AI-based measurement of patellofemoral joint parameters demonstrates methodological robustness and operational efficiency, showing strong agreement with expert manual measurements. To further establish clinical utility, multicenter prospective studies incorporating rigorous external validation protocols are needed. 
Such validation would strengthen the model's generalizability and facilitate its integration into clinical decision support systems. This systematic review was registered in PROSPERO (CRD420251075068).
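The reliability figures above are intraclass correlation coefficients. One common choice for comparing AI measurements against expert raters is ICC(2,1) (two-way random effects, absolute agreement, single rater); a generic sketch follows (the reviewed studies may use other ICC forms):

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, k_raters) array, e.g. the same
    patellofemoral parameter measured by an AI model and by experts.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                          # subject mean square
    msc = ss_cols / (k - 1)                          # rater mean square
    mse = ss_err / ((n - 1) * (k - 1))               # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement yields 1.0, and a systematic offset between raters pulls the coefficient below 1 because ICC(2,1) penalizes absolute disagreement, not just inconsistency.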

Robust Disease Prognosis via Diagnostic Knowledge Preservation: A Sequential Learning Approach

Rajamohan, H. R., Xu, Y., Zhu, W., Kijowski, R., Cho, K., Geras, K., Razavian, N., Deniz, C. M.

medRxiv preprint · Sep 25 2025
Accurate disease prognosis is essential for patient care but is often hindered by the lack of long-term data. This study explores deep learning training strategies that utilize large, accessible diagnostic datasets to pretrain models aimed at predicting future disease progression in knee osteoarthritis (OA), Alzheimer's disease (AD), and breast cancer (BC). While diagnostic pretraining improves prognostic task performance, naive fine-tuning for prognosis can cause catastrophic forgetting, where the model's original diagnostic accuracy degrades, a significant patient safety concern in real-world settings. To address this, we propose a sequential learning strategy with experience replay. We used cohorts with knee radiographs, brain MRIs, and digital mammograms to predict 4-year structural worsening in OA, 2-year cognitive decline in AD, and 5-year cancer diagnosis in BC. Our results showed that diagnostic pretraining on larger datasets improved prognosis model performance compared to standard baselines, boosting both the Area Under the Receiver Operating Characteristic curve (AUROC) (e.g., Knee OA external: 0.77 vs 0.747; Breast Cancer: 0.874 vs 0.848) and the Area Under the Precision-Recall Curve (AUPRC) (e.g., Alzheimer's disease: 0.752 vs 0.683). Additionally, a sequential learning approach with experience replay achieved prognostic performance comparable to dedicated single-task models (e.g., Breast Cancer AUROC 0.876 vs 0.874) while also preserving diagnostic ability. This method maintained high diagnostic accuracy (e.g., Breast Cancer Balanced Accuracy 50.4% vs 50.9% for a dedicated diagnostic model), unlike simpler multitask methods prone to catastrophic forgetting (e.g., 37.7%). Our findings show that leveraging large diagnostic datasets is a reliable and data-efficient way to enhance prognostic models while maintaining essential diagnostic skills.
Author Summary: In our research, we addressed a common problem in medical AI: how to accurately predict the future course of a disease when long-term patient data is rare. We focused on knee osteoarthritis, Alzheimer's disease, and breast cancer. We found that we could significantly improve a model's ability to predict disease progression by first training it on a much larger, more common type of data: diagnostic images used to assess a patient's current disease state. We then developed a specialized training method that allows a single AI model to perform both diagnosis and prognosis tasks effectively. A key challenge is that models often "forget" their original diagnostic skills when they learn a new prognostic task. In a clinical setting, this poses a safety risk, as it could lead to missed diagnoses. We utilize experience replay to overcome this by continually refreshing the model's diagnostic knowledge. This creates a more robust and efficient model that mirrors a clinician's workflow, offering the potential to improve patient care with a limited amount of hard-to-get longitudinal data.
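The experience-replay idea described above amounts to mixing a few stored diagnostic samples into every prognosis fine-tuning batch so the old task keeps being rehearsed. A schematic batch sampler (illustrative only; the paper's actual ratios and buffer construction may differ):

```python
import random

def replay_batches(prognosis_data, diagnostic_buffer,
                   batch_size=8, replay_frac=0.25, seed=0):
    """Yield fine-tuning batches that mix new prognosis samples with
    replayed diagnostic samples, mitigating catastrophic forgetting."""
    rng = random.Random(seed)
    n_replay = max(1, int(batch_size * replay_frac))
    n_new = batch_size - n_replay
    for start in range(0, len(prognosis_data), n_new):
        batch = prognosis_data[start:start + n_new]
        # Rehearse the diagnostic task with a few buffered samples.
        batch += rng.sample(diagnostic_buffer,
                            min(n_replay, len(diagnostic_buffer)))
        rng.shuffle(batch)
        yield batch
```

Every batch then carries a gradient signal for both tasks, which is the mechanism that preserved diagnostic balanced accuracy in the reported experiments.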

Acute myeloid leukemia classification using ReLViT and detection with YOLO enhanced by adversarial networks on bone marrow images.

Hameed M, Raja MAZ, Zameer A, Dar HS, Alluhaidan AS, Aziz R

PubMed · Sep 25 2025
Acute myeloid leukemia (AML) is recognized as a highly aggressive cancer that affects the bone marrow and blood, making it the most lethal type of leukemia. The detection of AML through medical imaging is challenging due to the complex structural and textural variations inherent in bone marrow images. These challenges are further intensified by the overlapping intensity between leukemia and non-leukemia regions, which reduces the effectiveness of traditional predictive models. This study presents a novel artificial intelligence framework that utilizes residual block merging vision transformers, convolutions, and advanced object detection techniques to address the complexities of bone marrow images and enhance the accuracy of AML detection. The framework integrates residual learning-based vision transformer (ReLViT) blocks within a bottleneck architecture, harnessing the combined strengths of residual learning and transformer mechanisms to improve feature representation and computational efficiency. Tailored data pre-processing strategies are employed to manage the textural and structural complexities associated with low-quality images and tumor shapes. The framework's performance is further optimized through a strategic weight-sharing technique to minimize computational overhead. Additionally, a generative adversarial network (GAN) is employed to enhance image quality across all AML imaging modalities, and when combined with a You Only Look Once (YOLO) object detector, it accurately localizes tumor formations in bone marrow images. Extensive and comparative evaluations have demonstrated the superiority of the proposed framework over existing deep convolutional neural networks (CNN) and object detection methods. The model achieves an F1-score of 99.15%, precision of 99.02%, and recall of 99.16%, marking a significant advancement in the field of medical imaging.
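The reported F1-score, precision, and recall follow the standard definitions from detection counts (e.g. YOLO boxes matched to annotated AML regions at some IoU threshold; the threshold is not stated in the abstract):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and
    false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For instance, 90 correct detections with 10 false alarms and 10 misses give precision, recall, and F1 of 0.9 each.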

Knowledge distillation and teacher-student learning in medical imaging: Comprehensive overview, pivotal role, and future directions.

Li X, Li L, Li M, Yan P, Feng T, Luo H, Zhao Y, Yin S

PubMed · Sep 25 2025
Knowledge Distillation (KD) is a technique to transfer the knowledge from a complex model to a simplified model. It has been widely used in natural language processing and computer vision and has achieved state-of-the-art results. Recently, research on KD in medical image analysis has grown rapidly. The definition of knowledge has been further expanded through its combination with the medical field, and its role is no longer limited to simplifying the model. This paper attempts to comprehensively review the development and application of KD in the medical imaging field. Specifically, we first introduce the basic principles, explain the definition of knowledge, and describe the classical teacher-student network framework. Then, the research progress in medical image classification, segmentation, detection, reconstruction, registration, radiology report generation, privacy protection, and other application scenarios is presented. In particular, the introduction of application scenarios is based on the role of KD. We summarize eight main roles of KD techniques in medical image analysis, including model compression, semi-supervised learning, weakly supervised learning, and class balancing. The performance of these roles in all application scenarios is analyzed. Finally, we discuss the challenges in this field and propose potential solutions. KD is still in a rapid development stage in the medical imaging field; we give five potential development directions and research hotspots. A comprehensive literature list of this survey is available at https://github.com/XiangQA-Q/KD-in-MIA.
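The classical teacher-student framework the survey starts from trains the student against temperature-softened teacher outputs. A minimal NumPy version of the Hinton-style distillation term (one of many KD losses the survey covers):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 so its gradient magnitude matches the hard-label term."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when student and teacher agree exactly and grows as their softened distributions diverge; in practice it is combined with a cross-entropy term on ground-truth labels.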

Conditional Virtual Imaging for Few-Shot Vascular Image Segmentation.

He Y, Ge R, Tang H, Liu Y, Su M, Coatrieux JL, Shu H, Chen Y, He Y

PubMed · Sep 25 2025
In the field of medical image processing, vascular image segmentation plays a crucial role in clinical diagnosis, treatment planning, prognosis, and medical decision-making. Accurate and automated segmentation of vascular images can assist clinicians in understanding the vascular network structure, leading to more informed medical decisions. However, manual annotation of vascular images is time-consuming and challenging due to the fine and low-contrast vascular branches, especially in the medical imaging domain where annotation requires specialized knowledge and clinical expertise. Data-driven deep learning models struggle to achieve good performance when only a small number of annotated vascular images are available. To address this issue, this paper proposes a novel Conditional Virtual Imaging (CVI) framework for few-shot vascular image segmentation learning. The framework combines limited annotated data with extensive unlabeled data to generate high-quality images, effectively improving the accuracy and robustness of segmentation learning. Our approach primarily includes two innovations: First, aligned image-mask pair generation, which leverages the powerful image generation capabilities of large pre-trained models to produce high-quality vascular images with complex structures using only a few training images; Second, the Dual-Consistency Learning (DCL) strategy, which simultaneously trains the generator and segmentation model, allowing them to learn from each other and maximize the utilization of limited data. Experimental results demonstrate that our CVI framework can generate high-quality medical images and effectively enhance the performance of segmentation models in few-shot scenarios. Our code will be made publicly available online.
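Segmentation quality in this setting is conventionally scored with overlap metrics; the abstract does not name its metric, but the Dice coefficient is the usual choice for vascular masks and serves as a concrete reference point:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between a predicted and a reference binary mask.

    eps keeps the ratio defined when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score ~1.0 and disjoint masks ~0.0; thin, low-contrast vessel branches are exactly where a few mislabeled pixels move this score the most, which is why few-shot annotation quality matters.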

Clinically Explainable Disease Diagnosis based on Biomarker Activation Map.

Zang P, Wang C, Hormel TT, Bailey ST, Hwang TS, Jia Y

PubMed · Sep 25 2025
Artificial intelligence (AI)-based disease classifiers have achieved specialist-level performances in several diagnostic tasks. However, real-world adoption of these classifiers remains challenging due to the black box issue. Here, we report a novel biomarker activation map (BAM) generation framework that can provide clinically meaningful explainability to current AI-based disease classifiers. We designed the framework based on the concept of residual counterfactual explanation by generating counterfactual outputs that could reverse the decision-making of the disease classifier. The BAM was generated as the difference map between the counterfactual output and original input with postprocessing. We evaluated the BAM on four different disease classifiers, including an age-related macular degeneration classifier based on fundus photography, a diabetic retinopathy classifier based on optical coherence tomography angiography, a brain tumor classifier based on magnetic resonance imaging (MRI), and a breast cancer classifier based on computed tomography (CT) scans. The highlighted regions in the BAM correlated highly with manually demarcated biomarkers of each disease. The BAM can improve the clinical applicability of an AI-based disease classifier by providing intuitive output clinicians can use to understand and verify the diagnostic decision.
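The BAM is described as a postprocessed difference map between the counterfactual output and the original input. A minimal sketch of that difference-map step (the percentile thresholding and normalization here are assumptions for illustration; the paper's postprocessing pipeline is more involved):

```python
import numpy as np

def biomarker_activation_map(original, counterfactual, percentile=95):
    """Absolute difference between a decision-flipping counterfactual
    image and the original input, keeping only the strongest changes."""
    diff = np.abs(np.asarray(counterfactual, float)
                  - np.asarray(original, float))
    thresh = np.percentile(diff, percentile)
    bam = np.where(diff >= thresh, diff, 0.0)   # suppress weak residue
    return bam / (bam.max() + 1e-8)             # normalize to [0, 1]
```

Regions the generator had to alter most to flip the classifier's decision light up in the map, which is what makes the explanation biomarker-like rather than a generic saliency heatmap.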

PHASE: Personalized Head-based Automatic Simulation for Electromagnetic Properties in 7T MRI.

Lu Z, Liang H, Lu M, Martin D, Hardy BM, Dawant BM, Wang X, Yan X, Huo Y

PubMed · Sep 25 2025
Accurate and individualized human head models are becoming increasingly important for electromagnetic (EM) simulations. These simulations depend on precise anatomical representations to realistically model electric and magnetic field distributions, particularly when evaluating Specific Absorption Rate (SAR) within safety guidelines. State-of-the-art simulations use the Virtual Population due to limited public resources and the impracticality of manually annotating patient data at scale. This paper introduces Personalized Head-based Automatic Simulation for EM properties (PHASE), an automated open-source toolbox that generates high-resolution, patient-specific head models for EM simulations using paired T1-weighted (T1w) magnetic resonance imaging (MRI) and computed tomography (CT) scans with 14 tissue labels. To evaluate the performance of PHASE models, we conduct semi-automated segmentation and EM simulations on 15 real human patients, serving as the gold standard reference. The PHASE model achieved comparable global SAR and localized SAR averaged over 10 grams of tissue (SAR-10g), demonstrating its potential as a promising tool for generating large-scale human model datasets in the future. The code and models of the PHASE toolbox have been made publicly available: https://github.com/hrlblab/PHASE.
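The SAR quantities PHASE reports derive from the simulated fields and tissue properties as SAR = σ|E|²/ρ. A minimal sketch of the pointwise value and the mass-weighted averaging that underlies SAR-10g (the regulatory search for the worst-case 10 g cube is omitted here):

```python
import numpy as np

def pointwise_sar(E_rms, sigma, rho):
    """Pointwise SAR in W/kg: sigma * |E|^2 / rho, with E_rms the RMS
    electric-field magnitude (V/m), sigma the tissue conductivity (S/m),
    and rho the tissue density (kg/m^3)."""
    return sigma * np.asarray(E_rms, float) ** 2 / np.asarray(rho, float)

def mass_averaged_sar(sar, rho, voxel_volume):
    """Mass-weighted mean SAR over a voxel set: the building block of
    SAR-10g, which averages over a contiguous ~10 g tissue region."""
    mass = np.asarray(rho, float) * voxel_volume
    return float((np.asarray(sar, float) * mass).sum() / mass.sum())
```

For example, muscle-like tissue (σ ≈ 0.5 S/m, ρ ≈ 1000 kg/m³) in a 2 V/m RMS field absorbs 0.002 W/kg at that point.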