
Deep Learning for the Diagnosis and Treatment of Thyroid Cancer: A Review.

Gao R, Mai S, Wang S, Hu W, Chang Z, Wu G, Guan H

PubMed · Jul 30 2025
In recent years, the application of deep learning (DL) technology in the thyroid field has grown exponentially, greatly promoting innovation in thyroid disease research. Thyroid cancer is the most common malignant tumor of the endocrine system, and its precise diagnosis and treatment have been a key focus of clinical research. This article systematically reviews the latest progress in DL research for the diagnosis and treatment of thyroid malignancies, focusing on the breakthrough application of advanced models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs) in key areas such as ultrasound image analysis of thyroid nodules, automatic classification of pathological images, and assessment of extrathyroidal extension. Furthermore, the review highlights the great potential of DL techniques in the development of individualized treatment planning and prognosis prediction. In addition, it analyzes the technical bottlenecks and clinical challenges faced by current DL applications in thyroid cancer diagnosis and treatment and looks ahead to future directions for development. The aim of this review is to provide the latest research insights for clinical practitioners, promote further improvements in the precision diagnosis and treatment system for thyroid cancer, and ultimately achieve better diagnostic and therapeutic outcomes for thyroid cancer patients.
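As a concrete illustration of the CNN-based ultrasound analysis the review surveys, the following is a minimal PyTorch sketch of a small classifier for grayscale ultrasound frames. The architecture, input size, and benign/malignant class setup are illustrative assumptions, not drawn from any specific study in the review.

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN-based nodule classifier (hypothetical architecture).
class NoduleCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):  # x: (batch, 1, H, W) grayscale ultrasound frames
        return self.head(self.features(x))

model = NoduleCNN()
dummy = torch.randn(2, 1, 224, 224)   # two synthetic grayscale frames
print(model(dummy).shape)             # torch.Size([2, 2]) benign/malignant logits
```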

Continual learning in medical image analysis: A comprehensive review of recent advancements and future prospects.

Kumari P, Chauhan J, Bozorgpour A, Huang B, Azad R, Merhof D

PubMed · Jul 28 2025
Medical image analysis has witnessed remarkable advancements, even surpassing human-level performance in recent years, driven by the rapid development of advanced deep-learning algorithms. However, when the inference dataset differs even slightly from what the model has seen during one-time training, model performance is greatly compromised. This situation requires restarting the training process using both the old and the new data, which is computationally costly, does not align with the human learning process, and imposes storage constraints and raises privacy concerns. Alternatively, continual learning has emerged as a crucial approach for developing unified and sustainable deep models that can handle new classes, new tasks, and the drifting nature of data in non-stationary environments across various application areas. Continual learning techniques enable models to adapt and accumulate knowledge over time, which is essential for maintaining performance on evolving datasets and novel tasks. Owing to its popularity and promising performance, continual learning is an active and emerging research topic in the medical field and hence demands a survey and taxonomy to clarify the current research landscape in medical image analysis. This systematic review provides a comprehensive overview of the state of the art in continual learning techniques applied to medical image analysis. We present an extensive survey of existing research, covering topics including catastrophic forgetting, data drift, stability, and plasticity requirements. Further, we provide an in-depth discussion of the key components of a continual learning framework, such as continual learning scenarios, techniques, evaluation schemes, and metrics. Continual learning techniques encompass various categories, including rehearsal, regularization, architectural, and hybrid strategies. We assess the popularity and applicability of these categories in medical sub-fields such as radiology and histopathology. Our exploration considers unique challenges in the medical domain, including costly data annotation, temporal drift, and the crucial need for benchmarking datasets to ensure consistent model evaluation. The paper also addresses current challenges and looks ahead to potential future research directions.
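Of the continual learning categories the review names, rehearsal-based strategies are the simplest to illustrate: a small memory of past examples is replayed alongside new-task data to limit catastrophic forgetting. Below is a minimal Python sketch of a reservoir-sampling rehearsal buffer; the class name, capacity, and data are illustrative assumptions, not code from any reviewed method.

```python
import random

# Minimal sketch of a reservoir-sampling rehearsal buffer for replay-based
# continual learning (illustrative assumption, not from the review).
class RehearsalBuffer:
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.buffer = []      # stored (example, label) pairs from past tasks
        self.n_seen = 0

    def add(self, example, label):
        """Reservoir sampling keeps a uniform sample over everything seen."""
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((example, label))
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = (example, label)

    def sample(self, batch_size=32):
        """Draw a replay batch to mix with the current task's batch."""
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k)

# Usage sketch: interleave replayed examples with new-task data each step.
buf = RehearsalBuffer(capacity=100)
for i in range(1000):
    buf.add(example=f"img_{i}", label=i % 5)
replay_batch = buf.sample(16)
print(len(buf.buffer), len(replay_batch))  # 100 16
```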

The evolving role of multimodal imaging, artificial intelligence and radiomics in the radiologic assessment of immune related adverse events.

Das JP, Ma HY, DeJong D, Prendergast C, Baniasadi A, Braumuller B, Giarratana A, Khonji S, Paily J, Shobeiri P, Yeh R, Dercle L, Capaccione KM

PubMed · Jul 28 2025
Immunotherapy, in particular checkpoint blockade, has revolutionized the treatment of many advanced cancers. Imaging plays a critical role in assessing both treatment response and the development of immune toxicities. Both conventional imaging and molecular imaging techniques can be used to evaluate multisystemic immune related adverse events (irAEs), including thoracic, abdominal and neurologic irAEs. As artificial intelligence (AI) proliferates in medical imaging, radiologic assessment of irAEs will become more efficient, improving the diagnosis, prognosis, and management of patients affected by immune-related toxicities. This review addresses some of the advancements in medical imaging including the potential future role of radiomics in evaluating irAEs, which may facilitate clinical decision-making and improvements in patient care.

Towards trustworthy artificial intelligence in musculoskeletal medicine: A narrative review on uncertainty quantification.

Vahdani AM, Shariatnia M, Rajpurkar P, Pareek A

PubMed · Jul 28 2025
Deep learning (DL) models have achieved remarkable performance in musculoskeletal (MSK) medical imaging research, yet their clinical integration remains hindered by their black-box nature and the absence of reliable confidence measures. Uncertainty quantification (UQ) seeks to bridge this gap by providing each DL prediction with a calibrated estimate of uncertainty, thereby fostering clinician trust and safer deployment. We conducted a targeted narrative review, performing expert-driven searches in PubMed, Scopus, and arXiv and mining references from relevant publications in MSK imaging utilizing UQ; a thematic synthesis was used to derive a cohesive taxonomy of UQ methodologies. UQ approaches encompass multi-pass methods (e.g., test-time augmentation, Monte Carlo dropout, and model ensembling) that infer uncertainty from variability across repeated inferences; single-pass methods (e.g., conformal prediction and evidential deep learning) that augment each individual prediction with uncertainty metrics; and other techniques that leverage auxiliary information, such as inter-rater variability, hidden-layer activations, or generative reconstruction errors, to estimate confidence. Applications in MSK imaging include highlighting uncertain areas in cartilage segmentation and identifying uncertain predictions in joint implant design detection; downstream applications include enhanced clinical utility and more efficient data annotation pipelines. Embedding UQ into DL workflows is essential for translating high-performance models into clinical practice. Future research should prioritize robust out-of-distribution handling, computational efficiency, and standardized evaluation metrics to accelerate the adoption of trustworthy AI in MSK medicine.
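Among the multi-pass UQ methods listed above, Monte Carlo dropout is straightforward to sketch: dropout is kept active at inference, and the spread across repeated stochastic forward passes is read as uncertainty. The PyTorch example below is a minimal illustration under that assumption; the model, feature sizes, and number of passes are hypothetical.

```python
import torch
import torch.nn as nn

# Minimal Monte Carlo dropout sketch (hypothetical model and data).
class SmallClassifier(nn.Module):
    def __init__(self, in_features=128, n_classes=3, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),   # kept active at inference for MC dropout
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_passes=30):
    """Return the mean softmax probabilities over repeated stochastic passes
    and their per-class standard deviation as an uncertainty estimate."""
    model.train()  # keep dropout active; in practice, freeze batch-norm layers
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
x = torch.randn(4, 128)                      # a batch of 4 synthetic feature vectors
mean_prob, uncertainty = mc_dropout_predict(model, x)
print(mean_prob.shape, uncertainty.shape)    # torch.Size([4, 3]) each
```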

Contextual structured annotations on PACS: a futuristic vision for reporting routine oncologic imaging studies and its potential to transform clinical work and research.

Wong VK, Wang MX, Bethi E, Nagarakanti S, Morani AC, Marcal LP, Rauch GM, Brown JJ, Yedururi S

PubMed · Jul 26 2025
Radiologists currently have very limited and time-consuming options for annotating findings on images, being mostly restricted to arrows, calipers, and lines on most PACS systems. We propose a framework placing encoded, transferable, highly contextual structured text annotations directly on PACS images, indicating the type of lesion, level of suspicion, location, lesion measurement, and TNM status for malignant lesions, along with automated integration of this information into the radiology report. This approach offers a one-stop solution to generate radiology reports that are easily understood by other radiologists, patient care providers, patients, and machines, while reducing the effort needed to dictate a detailed radiology report and minimizing speech recognition errors. It also provides a framework for automated generation of large-volume, high-quality annotated data sets for machine learning algorithms from the daily work of radiologists. Enabling voice dictation of these contextual annotations directly into PACS, similar to voice-enabled Google search, will further enhance the user experience. Wider adoption of contextualized structured annotations in the future can facilitate studies of the temporal evolution of different tumor lesions across multiple lines of treatment and early detection of asynchronous response/areas of treatment failure. We present a futuristic vision and solution with the potential to transform clinical work and research in oncologic imaging.

Carotid and femoral bifurcation plaques detected by ultrasound as predictors of cardiovascular events.

Blinc A, Nicolaides AN, Poredoš P, Paraskevas KI, Heiss C, Müller O, Rammos C, Stanek A, Jug B

PubMed · Jul 25 2025
Risk factor-based algorithms give a good estimate of cardiovascular (CV) risk at the population level but are often inaccurate at the individual level. Ultrasound detection of preclinical atherosclerotic plaques in the carotid and common femoral arterial bifurcations is a simple, non-invasive way of identifying atherosclerosis in the individual and thus estimating his or her risk of future CV events more accurately. The presence of plaques in these bifurcations is independently associated with an increased risk of CV death and myocardial infarction, even after adjusting for traditional risk factors, while ultrasonographic characteristics of vulnerable plaque are mostly associated with an increased risk of ipsilateral ischaemic stroke. The predictive value of carotid and femoral plaques for CV events increases in proportion to plaque burden and, especially, with plaque progression over time. Assessing the burden of carotid and/or common femoral bifurcation plaques enables reclassification of a significant number of individuals rated as low risk by risk factor-based algorithms into intermediate or high CV risk, and of intermediate-risk individuals into low or high CV risk. Ongoing multimodality imaging studies, supplemented by clinical and genetic data and aided by machine learning/artificial intelligence analysis, are expected to advance our understanding of atherosclerosis progression from the asymptomatic into the symptomatic phase and to personalize prevention.

Agentic AI in radiology: Emerging Potential and Unresolved Challenges.

Dietrich N

PubMed · Jul 24 2025
This commentary introduces agentic artificial intelligence (AI) as an emerging paradigm in radiology, marking a shift from passive, user-triggered tools to systems capable of autonomous workflow management, task planning, and clinical decision support. Agentic AI models may dynamically prioritize imaging studies, tailor recommendations based on patient history and scan context, and automate administrative follow-up tasks, offering potential gains in efficiency, triage accuracy, and cognitive support. While not yet widely implemented, early pilot studies and proof-of-concept applications highlight promising utility across high-volume and high-acuity settings. Key barriers, including limited clinical validation, evolving regulatory frameworks, and integration challenges, must be addressed to ensure safe, scalable deployment. Agentic AI represents a forward-looking evolution in radiology that warrants careful development and clinician-guided implementation.

Fetal neurobehavior and consciousness: a systematic review of 4D ultrasound evidence and ethical challenges.

Pramono MBA, Andonotopo W, Bachnas MA, Dewantiningrum J, Sanjaya INH, Sulistyowati S, Stanojevic M, Kurjak A

PubMed · Jul 23 2025
Recent advancements in four-dimensional (4D) ultrasonography have enabled detailed observation of fetal behavior in utero, including facial movements, limb gestures, and stimulus responses. These developments have prompted renewed inquiry into whether such behaviors are merely reflexive or represent early signs of integrated neural function. However, the relationship between fetal movement patterns and conscious awareness remains scientifically uncertain and ethically contested. A systematic review was conducted in accordance with PRISMA 2020 guidelines. Four databases (PubMed, Scopus, Embase, Web of Science) were searched for English-language articles published from 2000 to 2025, using keywords including "fetal behavior," "4D ultrasound," "neurodevelopment," and "consciousness." Studies were included if they involved human fetuses, used 4D ultrasound or functional imaging modalities, and offered interpretation relevant to neurobehavioral or ethical analysis. A structured appraisal using AMSTAR-2 was applied to assess study quality. Data were synthesized narratively to map fetal behaviors onto developmental milestones and evaluate their interpretive limits. Seventy-four studies met inclusion criteria, with 23 rated as high-quality. Fetal behaviors such as yawning, hand-to-face movement, and startle responses increased in complexity between 24-34 weeks gestation. These patterns aligned with known neurodevelopmental events, including thalamocortical connectivity and cortical folding. However, no study provided definitive evidence linking observed behaviors to conscious experience. Emerging applications of artificial intelligence in ultrasound analysis were found to enhance pattern recognition but lack external validation. Fetal behavior observed via 4D ultrasound may reflect increasing neural integration but should not be equated with awareness. Interpretations must remain cautious, avoiding anthropomorphic assumptions. Ethical engagement requires attention to scientific limits, sociocultural diversity, and respect for maternal autonomy as imaging technologies continue to evolve.

Hi ChatGPT, I am a Radiologist, How can you help me?

Bellini D, Ferrari R, Vicini S, Rengo M, Saletti CL, Carbone I

PubMed · Jul 23 2025
This review paper explores the integration of ChatGPT, a generative AI model developed by OpenAI, into radiological practices, focusing on its potential to enhance the operational efficiency of radiologists. ChatGPT operates on the GPT architecture, utilizing advanced machine learning techniques, including unsupervised pre-training and reinforcement learning, to generate human-like text responses. While AI applications in radiology predominantly focus on imaging acquisition, reconstruction, and interpretation (commonly embedded directly within hardware), the accessibility and functional breadth of ChatGPT make it a unique tool. This interview-based review is not intended as a detailed evaluation of all ChatGPT features; instead, it aims to test its utility in everyday radiological tasks through real-world examples. ChatGPT demonstrated strong capabilities in structuring radiology reports according to international guidelines (e.g., PI-RADS, CT reporting for diverticulitis), designing a complete research protocol, and performing advanced statistical analysis from Excel datasets, including ROC curve generation and intergroup comparison. Although not capable of directly interpreting DICOM images, ChatGPT provided meaningful assistance in image post-processing and interpretation when images were converted to standard formats. These findings highlight its current strengths and limitations as a supportive tool for radiologists.
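The ROC analysis mentioned above (from an Excel dataset) maps onto a few lines of standard Python tooling. The sketch below assumes a hypothetical spreadsheet with a binary "outcome" column and a continuous "score" column; the file name and column names are illustrative, not taken from the paper.

```python
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

# Minimal ROC analysis sketch over a hypothetical Excel dataset.
df = pd.read_excel("measurements.xlsx")          # hypothetical file and columns
fpr, tpr, thresholds = roc_curve(df["outcome"], df["score"])
auc = roc_auc_score(df["outcome"], df["score"])

# Youden's J statistic gives one common choice of operating threshold.
best = (tpr - fpr).argmax()
print(f"AUC = {auc:.3f}, optimal threshold = {thresholds[best]:.3f}")
```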

Harmonization in Magnetic Resonance Imaging: A Survey of Acquisition, Image-level, and Feature-level Methods

Qinqin Yang, Firoozeh Shomal-Zadeh, Ali Gholipour

arXiv preprint · Jul 22 2025
Modern medical imaging technologies have greatly advanced neuroscience research and clinical diagnostics. However, imaging data collected across different scanners, acquisition protocols, or imaging sites often exhibit substantial heterogeneity, known as "batch effects" or "site effects". These non-biological sources of variability can obscure true biological signals, reduce reproducibility and statistical power, and severely impair the generalizability of learning-based models across datasets. Image harmonization aims to eliminate or mitigate such site-related biases while preserving meaningful biological information, thereby improving data comparability and consistency. This review provides a comprehensive overview of key concepts, methodological advances, publicly available datasets, current challenges, and future directions in the field of medical image harmonization, with a focus on magnetic resonance imaging (MRI). We systematically cover the full imaging pipeline, and categorize harmonization approaches into prospective acquisition and reconstruction strategies, retrospective image-level and feature-level methods, and traveling-subject-based techniques. Rather than providing an exhaustive survey, we focus on representative methods, with particular emphasis on deep learning-based approaches. Finally, we summarize the major challenges that remain and outline promising avenues for future research.
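As a toy illustration of the retrospective feature-level harmonization category surveyed above, the sketch below standardizes radiomic-style features per site and maps them onto a reference site's distribution, a simplified stand-in for ComBat-style methods; real harmonization additionally preserves biological covariates (e.g., age, diagnosis), which this sketch ignores. All arrays and site labels are synthetic assumptions.

```python
import numpy as np

# Simplified feature-level harmonization: per-site standardization rescaled
# to a reference site (illustrative stand-in for ComBat-style methods).
def harmonize_features(features, sites, reference_site):
    """features: (n_subjects, n_features); sites: (n_subjects,) site labels."""
    features = np.asarray(features, dtype=float)
    sites = np.asarray(sites)
    ref = features[sites == reference_site]
    ref_mean, ref_std = ref.mean(axis=0), ref.std(axis=0) + 1e-8
    harmonized = features.copy()
    for site in np.unique(sites):
        idx = sites == site
        mean, std = features[idx].mean(axis=0), features[idx].std(axis=0) + 1e-8
        # remove site-specific location/scale, then map onto the reference site
        harmonized[idx] = (features[idx] - mean) / std * ref_std + ref_mean
    return harmonized

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 2, (20, 5))])
site_labels = np.array(["A"] * 20 + ["B"] * 20)
out = harmonize_features(feats, site_labels, reference_site="A")
print(out[site_labels == "B"].mean(axis=0).round(2))  # close to site A's mean
```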