Page 27 of 99986 results

UNICON: UNIfied CONtinual Learning for Medical Foundational Models

Mohammad Areeb Qazi, Munachiso S Nwadike, Ibrahim Almakky, Mohammad Yaqub, Numan Saeed

arxiv logopreprintAug 19 2025
Foundational models are trained on extensive datasets to capture the general trends of a domain. However, in medical imaging, the scarcity of data makes pre-training for every domain, modality, or task challenging. Continual learning offers a solution by fine-tuning a model sequentially on different domains or tasks, enabling it to integrate new knowledge without requiring large datasets for each training phase. In this paper, we propose UNIfied CONtinual Learning for Medical Foundational Models (UNICON), a framework that enables the seamless adaptation of foundation models to diverse domains, tasks, and modalities. Unlike conventional adaptation methods that treat these changes in isolation, UNICON provides a unified, perpetually expandable framework. Through careful integration, we show that foundation models can dynamically expand across imaging modalities, anatomical regions, and clinical objectives without catastrophic forgetting or task interference. Empirically, we validate our approach by adapting a chest CT foundation model initially trained for classification to prognosis and segmentation tasks. Our results show improved performance across both additional tasks. Furthermore, we continually incorporated PET scans and achieved a 5% improvement in Dice score compared to the respective baselines. These findings establish that foundation models are not inherently constrained to their initial training scope but can evolve, paving the way toward generalist AI models for medical imaging.
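The task-incremental expansion the abstract describes, where a shared backbone gains new task heads without disturbing old ones, can be sketched schematically. The class and functions below are invented for illustration and are not UNICON's actual architecture.

```python
# Toy sketch of task-incremental expansion: a shared backbone is reused
# while a new head is attached per task, so earlier tasks remain
# answerable after new ones are added. All names here are illustrative.

class ExpandableModel:
    def __init__(self, backbone):
        self.backbone = backbone          # shared feature extractor
        self.heads = {}                   # task name -> head function

    def add_task(self, name, head):
        """Attach a new task head without touching existing ones."""
        self.heads[name] = head

    def predict(self, task, x):
        features = self.backbone(x)
        return self.heads[task](features)

# A trivial "backbone" and two task heads for demonstration.
backbone = lambda x: [v * 0.5 for v in x]
model = ExpandableModel(backbone)
model.add_task("classification", lambda f: int(sum(f) > 1.0))
model.add_task("segmentation", lambda f: [v > 0.4 for v in f])

x = [1.0, 0.6, 2.0]
print(model.predict("classification", x))  # 1
print(model.predict("segmentation", x))    # [True, False, True]
```

Avoiding catastrophic forgetting in the shared backbone is the hard part the paper addresses; this sketch only shows the interface-level expansion.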

Artificial Intelligence Approaches for Early Prediction of Parkinson's Disease.

Gond A, Kumar A, Kumar A, Kushwaha SKS

pubmed logopapersAug 18 2025
Parkinson's disease (PD) is a progressive neurodegenerative disorder that affects both motor and non-motor functions, primarily due to the gradual loss of dopaminergic neurons in the substantia nigra. Traditional diagnostic methods largely depend on clinical symptom evaluation, which often leads to delays in detection and treatment. However, in recent years, artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has emerged as a groundbreaking set of techniques for the diagnosis and management of PD. This review explores the emergent role of AI-driven techniques in early disease detection, continuous monitoring, and the development of personalized treatment strategies. Advanced AI applications, including medical imaging analysis, speech pattern recognition, gait assessment, and the identification of digital biomarkers, have shown remarkable potential in improving diagnostic accuracy and patient care. Additionally, AI-driven telemedicine solutions enable remote and real-time disease monitoring, addressing challenges related to accessibility and early intervention. Despite these promising advancements, several hurdles remain, such as concerns over data privacy, the interpretability of AI models, and the need for rigorous validation before clinical implementation. With PD cases expected to rise significantly by 2030, further research and interdisciplinary collaboration are crucial to refining AI technologies and ensuring their reliability in medical practice. By bridging the gap between technology and neurology, AI has the potential to revolutionize PD management, paving the way for precision medicine and better patient outcomes.

Multiphysics modelling enhanced by imaging and artificial intelligence for personalised cancer nanomedicine: Foundations for clinical digital twins.

Kashkooli FM, Bhandari A, Gu B, Kolios MC, Kohandel M, Zhan W

pubmed logopapersAug 18 2025
Nano-sized drug delivery systems have emerged as a more effective, versatile means for improving cancer treatment. However, the complexity of drug delivery to cancer involves intricate interactions between physiological and physicochemical processes across various temporal and spatial scales. Relying solely on experimental methods for developing and clinically translating nano-sized drug delivery systems is economically unfeasible. Multiphysics models, acting as open systems, offer a viable approach by allowing control over the individual and combined effects of various influencing factors on drug delivery outcomes. This provides an effective pathway for developing, optimising, and applying nano-sized drug delivery systems. These models are specifically designed to uncover the underlying mechanisms of drug delivery and to optimise effective delivery strategies. This review outlines the diverse applications of multiphysics simulations in advancing nano-sized drug delivery systems for cancer treatment. The methods to develop these models and the integration of emerging technologies (i.e., medical imaging and artificial intelligence) are also addressed towards digital twins for personalised clinical translation of cancer nanomedicine. Multiphysics modelling tools are expected to become a powerful technology, expanding the scope of nano-sized drug delivery systems, thereby greatly enhancing cancer treatment outcomes and offering promising prospects for more effective patient care.
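One of the transport building blocks inside such multiphysics models is drug diffusion through tissue. A minimal explicit finite-difference sketch of 1D diffusion is shown below; the parameter values and boundary treatment are illustrative, not taken from the review.

```python
# Explicit finite-difference sketch of 1D drug diffusion,
# dC/dt = D * d2C/dx2, with approximate zero-flux boundaries.
# Parameters are illustrative only.

def diffuse_1d(conc, D, dx, dt, steps):
    """Advance the diffusion equation with an explicit Euler scheme."""
    c = list(conc)
    r = D * dt / dx**2          # stability requires r <= 0.5
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[0], new[-1] = new[1], new[-2]   # mirror ends: zero flux
        c = new
    return c

# A drug bolus in the middle of the domain spreads out over time.
c0 = [0.0] * 5
c0[2] = 1.0
c = diffuse_1d(c0, D=1.0, dx=1.0, dt=0.25, steps=10)
print(c)  # peak lowered, mass spread symmetrically
```

Real models couple many such equations (interstitial flow, binding kinetics, vascular extravasation) across scales, which is what makes imaging- and AI-informed calibration necessary.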

Multimodal large language models for medical image diagnosis: Challenges and opportunities.

Zhang A, Zhao E, Wang R, Zhang X, Wang J, Chen E

pubmed logopapersAug 18 2025
The integration of artificial intelligence (AI) into radiology has significantly improved diagnostic accuracy and workflow efficiency. Multimodal large language models (MLLMs), which combine natural language processing (NLP) and computer vision techniques, hold the potential to further revolutionize medical image analysis. Despite these advances, the widespread clinical adoption of MLLMs remains limited by challenges such as data quality, interpretability, ethical and regulatory compliance (including adherence to frameworks such as the General Data Protection Regulation, GDPR), computational demands, and generalizability across diverse patient populations. Addressing these interconnected challenges presents opportunities to enhance MLLM performance and reliability. Priorities for future research include improving model transparency, safeguarding data privacy through federated learning, optimizing multimodal fusion strategies, and establishing standardized evaluation frameworks. By overcoming these barriers, MLLMs can become essential tools in radiology, supporting clinical decision-making and improving patient outcomes.
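The federated learning mentioned as a privacy safeguard is usually realized with federated averaging (FedAvg): sites exchange model weights, never patient images. A toy sketch, with plain lists standing in for weight tensors:

```python
# FedAvg sketch: each client trains locally, then a server merges the
# parameters weighted by local dataset size. Lists stand in for tensors.

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * size / total
    return merged

# Two hospitals with different amounts of local data.
global_weights = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
print(global_weights)  # [2.5, 3.5]
```

The larger site pulls the average toward its parameters; only the weight vectors cross institutional boundaries, which is the privacy property the review points to.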

Overview of Multimodal Radiomics and Deep Learning in the Prediction of Axillary Lymph Node Status in Breast Cancer.

Zhao X, Wang M, Wei Y, Lu Z, Peng Y, Cheng X, Song J

pubmed logopapersAug 18 2025
Breast cancer is the most prevalent malignancy in women, with the status of axillary lymph nodes being a pivotal factor in treatment decision-making and prognostic evaluation. With the integration of deep learning algorithms, radiomics has become a transformative tool with increasingly extensive applications across multiple modalities, particularly in oncological imaging. Recent studies of radiomics and deep learning have demonstrated considerable potential for noninvasive diagnosis and prediction in breast cancer across multiple modalities (mammography, ultrasonography, MRI, and PET/CT), specifically for predicting axillary lymph node status. Although significant progress has been achieved in radiomics-based prediction of axillary lymph node metastasis in breast cancer, several methodological and technical challenges remain to be addressed. This comprehensive review incorporates a detailed analysis of the radiomics workflow and model construction strategies. The objective of this review is to synthesize and evaluate current research findings, thereby providing valuable references for the precision diagnosis and assessment of axillary lymph node metastasis in breast cancer, while promoting development and advancement in this evolving field.
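The first computational step in the radiomics workflow discussed here is extracting handcrafted features from a segmented region of interest. A hedged sketch of a few first-order intensity features follows; the feature set is illustrative (production pipelines typically use a library such as PyRadiomics).

```python
# First-order radiomics features computed from the voxel intensities
# inside a region of interest (ROI). Feature choice is illustrative.
import statistics

def first_order_features(intensities):
    """Basic first-order intensity statistics of an ROI."""
    return {
        "mean": statistics.fmean(intensities),
        "variance": statistics.pvariance(intensities),
        "min": min(intensities),
        "max": max(intensities),
        "energy": sum(v * v for v in intensities),
    }

roi = [10, 12, 11, 30, 28]       # invented voxel values inside a node mask
feats = first_order_features(roi)
print(feats["mean"], feats["max"])
```

These vectors then feed the model-construction stage (feature selection plus a classifier, or end-to-end deep features) that the review analyzes.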

Breaking Reward Collapse: Adaptive Reinforcement for Open-ended Medical Reasoning with Enhanced Semantic Discrimination

Yizhou Liu, Jingwei Wei, Zizhi Chen, Minghao Han, Xukun Zhang, Keliang Liu, Lihua Zhang

arxiv logopreprintAug 18 2025
Reinforcement learning (RL) with rule-based rewards has demonstrated strong potential in enhancing the reasoning and generalization capabilities of vision-language models (VLMs) and large language models (LLMs), while reducing computational overhead. However, its application in medical imaging remains underexplored. Existing reinforcement fine-tuning (RFT) approaches in this domain primarily target closed-ended visual question answering (VQA), limiting their applicability to real-world clinical reasoning. In contrast, open-ended medical VQA better reflects clinical practice but has received limited attention. While some efforts have sought to unify both formats via semantically guided RL, we observe that model-based semantic rewards often suffer from reward collapse, where responses with significant semantic differences receive similar scores. To address this, we propose ARMed (Adaptive Reinforcement for Medical Reasoning), a novel RL framework for open-ended medical VQA. ARMed first incorporates domain knowledge through supervised fine-tuning (SFT) on chain-of-thought data, then applies reinforcement learning with textual correctness and adaptive semantic rewards to enhance reasoning quality. We evaluate ARMed on six challenging medical VQA benchmarks. Results show that ARMed consistently boosts both accuracy and generalization, achieving a 32.64% improvement on in-domain tasks and an 11.65% gain on out-of-domain benchmarks. These results highlight the critical role of reward discriminability in medical RL and the promise of semantically guided rewards for enabling robust and clinically meaningful multimodal reasoning.
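The reward-collapse problem is that semantically different answers receive near-identical scores, leaving the policy gradient with no signal. One simple way to restore discriminability, in the spirit of (but not identical to) ARMed's adaptive semantic reward, is to re-spread raw similarity scores within each group of candidate responses; the rescaling below is illustrative only.

```python
# Per-group rescaling of raw semantic similarity scores so that the best
# and worst candidates are pushed apart. Illustrative, not ARMed's exact rule.

def adaptive_reward(raw_scores, floor=0.0, ceil=1.0):
    """Min-max rescale a group's semantic scores to restore discriminability."""
    lo, hi = min(raw_scores), max(raw_scores)
    if hi == lo:                      # fully collapsed group: no signal
        return [0.0] * len(raw_scores)
    span = ceil - floor
    return [floor + span * (s - lo) / (hi - lo) for s in raw_scores]

# Collapsed raw scores (all close together) become clearly separated rewards.
print(adaptive_reward([0.81, 0.80, 0.84]))  # ~[0.25, 0.0, 1.0]
```

The key property is that the reward's ranking, not its absolute scale, drives the policy update, so widening tiny score gaps within a sampled group directly strengthens the learning signal.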

Toward ICE-XRF fusion: real-time pose estimation of the intracardiac echo probe in 2D X-ray using deep learning.

Severens A, Meijs M, Pai Raikar V, Lopata R

pubmed logopapersAug 18 2025
Valvular heart disease affects 2.5% of the general population and 10% of people aged over 75, with many patients untreated due to high surgical risks. Transcatheter valve therapies offer a safer, less invasive alternative but rely on ultrasound and X-ray image guidance. The current ultrasound technique for valve interventions, transesophageal echocardiography (TEE), requires general anesthesia and has poor visibility of the right side of the heart. Intracardiac echocardiography (ICE) provides improved 3D imaging without the need for general anesthesia but faces challenges in adoption due to device handling and operator training. To facilitate the use of ICE in the clinic, the fusion of ultrasound and X-ray is proposed. This study introduces a two-stage detection algorithm using deep learning to support ICE-XRF fusion. Initially, the ICE probe is coarsely detected using an object detection network. This is followed by 5-degree-of-freedom (DoF) pose estimation of the ICE probe using a regression network. Model validation using synthetic data and seven clinical cases showed that the framework provides accurate probe detection and 5-DoF pose estimation. For the object detection, an F1 score of 1.00 was achieved on synthetic data and high precision (0.97) and recall (0.83) for clinical cases. For the 5-DoF pose estimation, median position errors were found under 0.5 mm and median rotation errors below 7.2°. This real-time detection method supports image fusion of ICE and XRF during clinical procedures and facilitates the use of ICE in valve therapy.
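Evaluating pose estimates like these requires a wrap-aware angular error, since 359° and 1° are only 2° apart. A minimal sketch of the two error metrics quoted in the abstract (the paper's exact formulations are not given here, so these definitions are assumptions):

```python
# Wrap-aware rotation error and Euclidean position error for comparing
# a predicted probe pose against ground truth. Definitions are assumed.
import math

def rotation_error_deg(pred_deg, true_deg):
    """Smallest absolute angular difference, in degrees."""
    diff = (pred_deg - true_deg) % 360.0
    return min(diff, 360.0 - diff)

def position_error_mm(pred, true):
    """Euclidean distance between predicted and true probe positions (mm)."""
    return math.dist(pred, true)

print(rotation_error_deg(359.0, 1.0))             # 2.0, not 358.0
print(position_error_mm((0.0, 0.0), (0.3, 0.4)))  # 0.5
```

Reporting medians of these per-case errors, as the study does, is robust to the occasional gross misdetection.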

Deep learning-based identification of necrosis and microvascular proliferation in adult diffuse gliomas from whole-slide images

Guo, Y., Huang, H., Liu, X., Zou, W., Qiu, F., Liu, Y., Chai, R., Jiang, T., Wang, J.

medrxiv logopreprintAug 16 2025
For adult diffuse gliomas (ADGs), most grading can be achieved through molecular subtyping, retaining only two key histopathological features for high-grade glioma (HGG): necrosis (NEC) and microvascular proliferation (MVP). We developed a deep learning (DL) framework to automatically identify and characterize these features. We trained patch-level models to detect and quantify NEC and MVP using a dataset built with active learning, incorporating patches from 621 whole-slide images (WSIs) from the Chinese Glioma Genome Atlas (CGGA). Using the trained patch-level models, we integrated the predicted outcomes and positions of individual patches within WSIs from The Cancer Genome Atlas (TCGA) cohort to form patient-level datasets. Subsequently, we introduced a patient-level model, named PLNet (Probability Localization Network), which was trained on these datasets to facilitate patient diagnosis. We also explored subtypes of NEC and MVP based on features extracted from the patch-level models, with a clustering process applied to all positive patches. The patient-level models demonstrated exceptional performance, achieving AUCs of 0.9968 and 0.9995 and AUPRCs of 0.9788 and 0.9860 for NEC and MVP, respectively. Compared to pathological reports, our patient-level models achieved accuracies of 88.05% for NEC and 90.20% for MVP, along with sensitivities of 73.68% and 77%, respectively. When sensitivity was set at 80%, accuracy reached 79.28% for NEC and 77.55% for MVP. DL models enable more efficient and accurate histopathological image analysis, which will aid traditional glioma diagnosis. Clustering-based analyses using features extracted from the patch-level models could further investigate the subtypes of NEC and MVP.
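The patient-level step aggregates many patch predictions from one WSI into a single diagnosis. PLNet's learned, position-aware aggregation is the paper's contribution; the top-k pooling below is only a hand-rolled baseline to make the idea concrete.

```python
# Baseline patch-to-patient aggregation: average the k most suspicious
# patch probabilities, then threshold. PLNet replaces this with a learned,
# position-aware model; this pooling is illustrative only.

def patient_score(patch_probs, top_k=3):
    """Mean of the top-k most suspicious patch probabilities."""
    top = sorted(patch_probs, reverse=True)[:top_k]
    return sum(top) / len(top)

def diagnose(patch_probs, threshold=0.5):
    return patient_score(patch_probs) >= threshold

# One WSI: mostly benign patches plus a small focus of positive ones.
probs = [0.05, 0.10, 0.92, 0.88, 0.07, 0.95]
print(round(patient_score(probs), 3))  # 0.917
print(diagnose(probs))                 # True
```

Top-k pooling already captures why a small necrotic focus should dominate the slide-level call; learning the aggregation lets the model also exploit where the positive patches sit.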

VariMix: A variety-guided data mixing framework for explainable medical image classifications.

Xiong X, Sun Y, Liu X, Ke W, Lam CT, Gao Q, Tong T, Li S, Tan T

pubmed logopapersAug 16 2025
Modern deep neural networks are highly over-parameterized, necessitating the use of data augmentation techniques to prevent overfitting and enhance generalization. Generative adversarial networks (GANs) are popular for synthesizing visually realistic images. However, these synthetic images often lack diversity and may have ambiguous class labels. Recent data mixing strategies address some of these issues by mixing image labels based on salient regions. Since the main diagnostic information is not always contained within the salient regions, we aim to address the resulting label mismatches in medical image classifications. We propose a variety-guided data mixing framework (VariMix), which exploits an absolute difference map (ADM) to address the label mismatch problems of mixed medical images. VariMix generates ADM using the image-to-image (I2I) GAN across multiple classes and allows for bidirectional mixing operations between the training samples. The proposed VariMix achieves the highest accuracy of 99.30% and 94.60% with a SwinT V2 classifier on a Chest X-ray (CXR) dataset and a Retinal dataset, respectively. It also achieves the highest accuracy of 87.73%, 99.28%, 95.13%, and 95.81% with a ConvNeXt classifier on a Breast Ultrasound (US) dataset, a CXR dataset, a Retinal dataset, and a Maternal-Fetal US dataset, respectively. Furthermore, the medical expert evaluation on generated images shows the great potential of our proposed I2I GAN in improving the accuracy of medical image classifications. Extensive experiments demonstrate the superiority of VariMix compared with the existing GAN- and Mixup-based methods on four public datasets using Swin Transformer V2 and ConvNeXt architectures. Furthermore, by projecting the source image to the hyperplanes of the classifiers, the proposed I2I GAN can generate hyperplane difference maps between the source image and the hyperplane image, demonstrating its ability to interpret medical image classifications. 
The source code is available at https://github.com/yXiangXiong/VariMix.
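The core mixing idea can be made concrete with a schematic: a binary mask derived from an absolute difference map decides which pixels come from each image, and the mixed label follows the pasted area. In VariMix the ADM comes from the I2I GAN across classes; in this toy sketch it is simply |a - b| over flat "images", so every name and threshold here is illustrative.

```python
# Difference-map-guided mixing, schematically: pixels where the two
# images disagree strongly are pasted from the second image, and the
# label mixing coefficient is the pasted fraction. Illustrative only.

def adm_mix(img_a, img_b, thresh=0.5):
    """Mix pixels where |a - b| exceeds thresh; return mixed image and lambda."""
    mask = [abs(a - b) > thresh for a, b in zip(img_a, img_b)]
    mixed = [b if m else a for a, b, m in zip(img_a, img_b, mask)]
    lam = sum(mask) / len(mask)     # fraction of pixels taken from img_b
    return mixed, lam

a = [0.1, 0.2, 0.9, 0.8]
b = [0.1, 0.9, 0.1, 0.8]
mixed, lam = adm_mix(a, b)
print(mixed, lam)  # [0.1, 0.9, 0.1, 0.8] 0.5
# The mixed label would then be (1 - lam) * label_a + lam * label_b.
```

Tying the label coefficient to where the images actually differ is what addresses the label-mismatch problem of saliency-based mixing that the abstract describes.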

Developing biomarkers and methods of risk stratification: Consensus statements from the International Kidney Cancer Symposium North America 2024 Think Tank.

Shapiro DD, Abel EJ, Albiges L, Battle D, Berg SA, Campbell MT, Cella D, Coleman K, Garmezy B, Geynisman DM, Hall T, Henske EP, Jonasch E, Karam JA, La Rosa S, Leibovich BC, Maranchie JK, Master VA, Maughan BL, McGregor BA, Msaouel P, Pal SK, Perez J, Plimack ER, Psutka SP, Riaz IB, Rini BI, Shuch B, Simon MC, Singer EA, Smith A, Staehler M, Tang C, Tannir NM, Vaishampayan U, Voss MH, Zakharia Y, Zhang Q, Zhang T, Carlo MI

pubmed logopapersAug 16 2025
Accurate prognostication and personalized treatment selection remain major challenges in kidney cancer. This consensus initiative aimed to provide actionable expert guidance on the development and clinical integration of prognostic and predictive biomarkers and risk stratification tools to improve patient care and guide future research. A modified Delphi method was employed to develop consensus statements among a multidisciplinary panel of experts in urologic oncology, medical oncology, radiation oncology, pathology, molecular biology, radiology, outcomes research, biostatistics, industry, and patient advocacy. Over 3 rounds, including an in-person meeting, 20 initial statements were evaluated, refined, and voted on. Consensus was defined a priori as a median Likert score ≥8. Nineteen final consensus statements were endorsed. These span key domains including biomarker prioritization (favoring prognostic biomarkers), rigorous methodology for subgroup and predictive analyses, the development of multi-institutional prospective registries, incorporation of biomarkers in trial design, and improvements in data/biospecimen access. The panel also identified high-priority biomarker types (e.g., AI-based image analysis, ctDNA) for future research. This is the first consensus statement specifically focused on biomarker and risk model development for kidney cancer using a structured Delphi process. The recommendations emphasize the need for rigorous methodology, collaborative infrastructure, prospective data collection, and focus on clinically translatable biomarkers. The resulting framework is intended to guide researchers, cooperative groups, and stakeholders in advancing personalized care for patients with kidney cancer.
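The a priori consensus rule (median Likert score ≥8) is simple to apply programmatically; the panel scores below are invented for illustration only.

```python
# Apply the stated Delphi consensus rule: a statement is endorsed iff
# the panel's median Likert score is at least the cutoff. Scores invented.
import statistics

def reaches_consensus(likert_scores, cutoff=8):
    """Consensus iff the median score >= cutoff, per the a priori rule."""
    return statistics.median(likert_scores) >= cutoff

print(reaches_consensus([9, 8, 10, 7, 8]))  # True  (median 8)
print(reaches_consensus([6, 7, 9, 8, 5]))   # False (median 7)
```

Using the median rather than the mean keeps a single outlying vote from blocking or forcing endorsement, which is why Delphi panels typically define consensus this way.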