
Artificial intelligence in precision medicine: transforming disease subtyping, medical imaging, and pharmacogenomics.

Rodriguez-Martinez A, Kothalawala D, Carrillo-Larco RM, Poulakakis-Daktylidis A

PubMed · Aug 20, 2025
Precision medicine marks a transformative shift towards a patient-centric treatment approach, aiming to match 'the right patients with the right drugs at the right time'. The exponential growth of data from diverse omics modalities, electronic health records, and medical imaging has created unprecedented opportunities for precision medicine. This explosion of data requires advanced processing and analytical tools. At the forefront of this revolution is artificial intelligence (AI), which excels at uncovering hidden patterns within these high-dimensional and complex datasets. AI facilitates the integration and analysis of diverse data types, unlocking unparalleled potential to characterise complex diseases, improve prognosis, and predict treatment response. Despite the enormous potential of AI, challenges related to interpretability, reliability, generalisability, and ethical considerations emerge when translating these tools from research settings into clinical practice.

Evolution and integration of artificial intelligence across the cancer continuum in women: advances in risk assessment, prevention, and early detection.

Desai M, Desai B

PubMed · Aug 20, 2025
Artificial Intelligence (AI) is transforming breast cancer prevention and control by improving risk assessment, prevention, and early diagnosis. With an emphasis on AI applications across the breast cancer spectrum in women, this review summarizes recent developments, existing applications, and future prospects. We conducted an in-depth review of the literature on AI applications in breast cancer risk prediction, prevention, and early detection from 2000 to 2025, with particular emphasis on Explainable AI (XAI), deep learning (DL), and machine learning (ML). We examined algorithmic fairness, model transparency, dataset representation, and clinical performance indicators. Compared with traditional methods, AI-based models consistently improved risk categorization, screening sensitivity, and early detection (AUCs ranging from 0.65 to 0.975). However, challenges remain in algorithmic bias, underrepresentation of minority populations, and limited external validation. Notably, 58% of public datasets focused on mammography, leaving gaps in modalities such as tomosynthesis and histopathology. AI technologies offer numerous opportunities for enhancing the diagnosis and treatment of breast cancer. However, transparent models, inclusive datasets, and standardized frameworks for explainability and external validation should be prioritized in future studies to ensure equitable and effective implementation.

Physician-in-the-Loop Active Learning in Radiology Artificial Intelligence Workflows: Opportunities, Challenges, and Future Directions.

Luo M, Yousefirizi F, Rouzrokh P, Jin W, Alberts I, Gowdy C, Bouchareb Y, Hamarneh G, Klyuzhin I, Rahmim A

PubMed · Aug 20, 2025
Artificial intelligence (AI) is being explored for a growing range of applications in radiology, including image reconstruction, image segmentation, synthetic image generation, disease classification, worklist triage, and examination scheduling. However, training accurate AI models typically requires substantial amounts of expert-labeled data, which can be time-consuming and expensive to obtain. Active learning offers a potential strategy for mitigating the impact of such labeling requirements. In contrast with other machine-learning approaches used for data-limited situations, active learning aims to build labeled datasets by identifying the most informative or uncertain data for human annotation, thereby reducing the labeling burden while improving model performance on constrained datasets. This Review explores the application of active learning to radiology AI, focusing on its role in reducing the resources needed to train radiology AI models while enhancing physician-AI interaction and collaboration. We discuss how active learning can be incorporated into radiology workflows to promote physician-in-the-loop AI systems, presenting key active learning concepts and use cases for radiology-based tasks, drawing on examples from the literature. Finally, we provide summary recommendations for the integration of active learning in radiology workflows while highlighting relevant opportunities, challenges, and future directions.
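As a concrete illustration of the uncertainty-sampling idea described in this abstract, the sketch below ranks unlabeled cases by predictive entropy and returns those a physician should label next. The model interface, pool, and batch size are illustrative assumptions, not anything specified in the Review.

```python
import numpy as np

def select_for_annotation(model, unlabeled_pool, batch_size=10):
    """Return indices of the most uncertain cases for physician labeling."""
    # Assumes model.predict_proba returns class probabilities of shape
    # (n_samples, n_classes); any probabilistic classifier would work.
    probs = model.predict_proba(unlabeled_pool)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # The highest-entropy cases are the most informative to annotate next.
    return np.argsort(entropy)[::-1][:batch_size]
```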

Machine Learning in Venous Thromboembolism - Why and What Next?

Gurumurthy G, Kisiel F, Reynolds L, Thomas W, Othman M, Arachchillage DJ, Thachil J

PubMed · Aug 19, 2025
Venous thromboembolism (VTE) remains a leading cause of cardiovascular morbidity and mortality, despite advances in imaging and anticoagulation. VTE arises from diverse and overlapping risk factors, such as inherited thrombophilia, immobility, malignancy, surgery or trauma, pregnancy, hormonal therapy, obesity, chronic medical conditions (e.g., heart failure, inflammatory disease), and advancing age. Clinicians, therefore, face challenges in balancing the benefits of thromboprophylaxis against the bleeding risk. Existing clinical risk scores often exhibit only modest discrimination and calibration across heterogeneous patient populations. Machine learning (ML) has emerged as a promising tool to address these limitations. In imaging, convolutional neural networks and hybrid algorithms can detect VTE on CT pulmonary angiography with areas under the curve (AUCs) of 0.85 to 0.96. In surgical cohorts, gradient-boosting models outperform traditional risk scores, achieving AUCs between 0.70 and 0.80 in predicting postoperative VTE. In cancer-associated venous thrombosis, advanced ML models demonstrate AUCs between 0.68 and 0.82. However, concerns about bias and external validation persist. Bleeding-risk prediction remains challenging in extended anticoagulation settings, where ML models often perform no better than conventional models. Neural-network models for predicting recurrent VTE achieved AUCs of 0.93 to 0.99 in initial studies, but these lack transparency and prospective validation. Most ML models suffer from limited external validation, "black box" algorithms, and integration hurdles within clinical workflows. Future efforts should focus on standardized reporting (e.g., Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis [TRIPOD]-ML), transparent model interpretation, prospective impact assessments, and seamless incorporation into electronic health records to realize the full potential of ML in VTE.
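For context on the gradient-boosting results cited above, the following is a minimal sketch of how such a postoperative-VTE risk model could be trained and evaluated by AUC. The features and labels are synthetic placeholders, not data from any of the cited studies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))  # stand-ins for clinical features (age, BMI, labs, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)  # synthetic VTE labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Synthetic postoperative-VTE AUC: {auc:.2f}")
```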

Interpreting convolutional neural network explainability for head-and-neck cancer radiotherapy organ-at-risk segmentation.

Strijbis VIJ, Gurney-Champion OJ, Grama DI, Slotman BJ, Verbakel WFAR

PubMed · Aug 19, 2025
Convolutional neural networks (CNNs) have emerged as a way to reduce clinical workload and standardize auto-contouring of organs-at-risk (OARs). Although CNNs perform adequately for most patients, understanding when a CNN might fail is critical for effective and safe clinical deployment. However, the limitations of CNNs are poorly understood because of their black-box nature. Explainable artificial intelligence (XAI) can expose the inner mechanisms of CNNs for classification. Here, we investigate the inner mechanisms of CNNs for segmentation and explore a novel computational approach to flag potentially insufficient parotid gland (PG) contours a priori. First, 3D UNets were trained in three PG segmentation settings using (1) synthetic cases; (2) 1925 clinical computed tomography (CT) scans with typical contours; and (3) more consistent contours curated through a previously validated auto-curation step. We then generated attribution maps for seven XAI methods and qualitatively assessed them for congruency between simulated and clinical contours and for agreement with expert reasoning. To objectify these observations, we explored persistent-homology intensity filtrations to capture essential topological characteristics of the XAI attributions. Principal component (PC) eigenvalues of Euler characteristic profiles were correlated with spatial agreement (Dice-Sørensen similarity coefficient; DSC). Evaluation used sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC) on an external AAPM dataset, where, as proof of principle, we regard the lowest 15% of DSC values as insufficient. PatternNet attributions (PNet-A) focused on soft-tissue structures, whereas guided backpropagation (GBP) highlighted both soft-tissue and high-density structures (e.g. the mandible), congruent with the synthetic situations. Both methods typically showed higher and denser activations in better auto-contoured medial and anterior lobes. Curated models produced "cleaner" gradient class-activation mapping (GCAM) attributions. Quantitative analysis showed that PC λ1 of guided GCAM's (GGCAM) Euler characteristic (EC) profile had good predictive value for DSC (sensitivity >0.85, specificity >0.90) on AAPM cases, with AUROC = 0.66, 0.74, 0.94, and 0.83 for GBP, GCAM, GGCAM, and PNet-A, respectively. For λ1 < -1.8e3 of GGCAM's EC profile, 87% of cases were insufficient. GBP and PNet-A agreed most with expert reasoning on directly important features (structure borders) and indirectly important features (proxies used to identify structure borders) for PG segmentation. Additionally, this work demonstrated, as proof of principle, how topological data analysis could be used for quantitative XAI signal analysis to flag potentially inadequate CNN segmentations a priori, using only features from inside the predicted PG. The PG was used here as a well-understood segmentation paradigm; the approach may extend to target volumes and other OARs.
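The exact pipeline is specific to the paper, but the following sketch illustrates the general intensity-filtration idea: sweep a threshold over an XAI attribution map, record the Euler characteristic of each super-level set, and summarize the resulting profiles with a principal component that can be thresholded to flag cases. Array shapes, level counts, and the cutoff are assumptions for illustration only.

```python
import numpy as np
from skimage.measure import euler_number
from sklearn.decomposition import PCA

def ec_profile(attribution, n_levels=32):
    """Euler characteristic of super-level sets of a 3D attribution map."""
    levels = np.linspace(attribution.min(), attribution.max(), n_levels)
    return np.array([euler_number(attribution >= t, connectivity=1) for t in levels])

# One profile per case; project the stacked profiles onto their first principal component.
attribution_maps = [np.random.rand(32, 32, 32) for _ in range(20)]  # placeholder XAI maps
profiles = np.stack([ec_profile(a) for a in attribution_maps])
pc1 = PCA(n_components=1).fit_transform(profiles).ravel()
flagged = pc1 < pc1.mean() - pc1.std()  # illustrative cutoff, not the paper's -1.8e3 threshold
```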

State of Abdominal CT Datasets: A Critical Review of Bias, Clinical Relevance, and Real-world Applicability

Saeide Danaei, Zahra Dehghanian, Elahe Meftah, Nariman Naderi, Seyed Amir Ahmad Safavi-Naini, Faeze Khorasanizade, Hamid R. Rabiee

arXiv preprint · Aug 19, 2025
This systematic review critically evaluates publicly available abdominal CT datasets and their suitability for artificial intelligence (AI) applications in clinical settings. We examined 46 publicly available abdominal CT datasets (50,256 studies). Across all 46 datasets, we found substantial redundancy (59.1% case reuse) and a Western geographic skew (75.3% from North America and Europe). A bias assessment was performed on the 19 datasets with ≥100 cases; within this subset, the most prevalent high-risk categories were domain shift (63%) and selection bias (57%), both of which may undermine model generalizability across diverse healthcare environments, particularly in resource-limited settings. To address these challenges, we propose targeted strategies for dataset improvement, including multi-institutional collaboration, adoption of standardized protocols, and deliberate inclusion of diverse patient populations and imaging technologies. These efforts are crucial in supporting the development of more equitable and clinically robust AI models for abdominal imaging.
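As an illustration of how such dataset-level bias statistics might be tabulated from a dataset inventory, a minimal sketch follows; the inventory columns and numbers are hypothetical, not taken from the review.

```python
import pandas as pd

# Hypothetical inventory; column names and counts are for illustration only.
inventory = pd.DataFrame({
    "dataset":  ["A", "B", "C"],
    "n_cases":  [1200, 800, 500],
    "n_reused": [700, 500, 300],   # cases also appearing in another public dataset
    "region":   ["North America", "Europe", "Asia"],
})

total = inventory["n_cases"].sum()
reuse_rate = inventory["n_reused"].sum() / total
western_share = inventory.loc[
    inventory["region"].isin(["North America", "Europe"]), "n_cases"
].sum() / total
print(f"Case reuse: {reuse_rate:.1%}; North America/Europe share: {western_share:.1%}")
```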

Emerging modalities for neuroprognostication in neonatal encephalopathy: harnessing the potential of artificial intelligence.

Chawla V, Cizmeci MN, Sullivan KM, Gritz EC, Q Cardona V, Menkiti O, Natarajan G, Rao R, McAdams RM, Dizon ML

PubMed · Aug 19, 2025
Neonatal encephalopathy (NE) from presumed hypoxic-ischemic encephalopathy (pHIE) is a leading cause of morbidity and mortality in infants worldwide. Recent advancements in HIE research have introduced promising tools that improve screening of high-risk infants, shorten time to diagnosis, and increase the accuracy of neurologic injury assessment to guide management and predict outcomes, some of which integrate artificial intelligence (AI) and machine learning (ML). This review begins with an overview of AI/ML before examining emerging prognostic approaches for predicting outcomes in pHIE. It explores various modalities including placental and fetal biomarkers, gene expression, electroencephalography, brain magnetic resonance imaging and other advanced neuroimaging techniques, clinical video assessment tools, and transcranial magnetic stimulation paired with electromyography. Each of these approaches may come to play a crucial role in predicting outcomes in pHIE. We also discuss the application of AI/ML to enhance these emerging prognostic tools. While further validation is needed for widespread clinical adoption, these tools and their multimodal integration hold the potential to better leverage the neuroplasticity windows of affected infants. IMPACT: This article provides an overview of placental pathology, biomarkers, gene expression, electroencephalography, motor assessments, brain imaging, and transcranial magnetic stimulation tools for long-term neurodevelopmental outcome prediction following neonatal encephalopathy, all of which lend themselves to augmentation by artificial intelligence/machine learning (AI/ML). Emerging AI/ML tools may create opportunities for enhanced prognostication through multimodal analyses.

Multimodal large language models for medical image diagnosis: Challenges and opportunities.

Zhang A, Zhao E, Wang R, Zhang X, Wang J, Chen E

PubMed · Aug 18, 2025
The integration of artificial intelligence (AI) into radiology has significantly improved diagnostic accuracy and workflow efficiency. Multimodal large language models (MLLMs), which combine natural language processing (NLP) and computer vision techniques, hold the potential to further revolutionize medical image analysis. Despite these advances, the widespread clinical adoption of MLLMs remains limited by challenges such as data quality, interpretability, ethical and regulatory compliance (including adherence to frameworks such as the General Data Protection Regulation, GDPR), computational demands, and generalizability across diverse patient populations. Addressing these interconnected challenges presents opportunities to enhance MLLM performance and reliability. Priorities for future research include improving model transparency, safeguarding data privacy through federated learning, optimizing multimodal fusion strategies, and establishing standardized evaluation frameworks. By overcoming these barriers, MLLMs can become essential tools in radiology, supporting clinical decision-making and improving patient outcomes.
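As a toy illustration of multimodal fusion, one of the research priorities listed above, the sketch below concatenates image and report-text embeddings in a simple late-fusion classification head; the embedding dimensions and architecture are arbitrary assumptions, not the design of any specific MLLM.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Concatenate image and text embeddings and classify (toy example)."""
    def __init__(self, img_dim=512, txt_dim=768, n_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, img_emb, txt_emb):
        return self.classifier(torch.cat([img_emb, txt_emb], dim=-1))

head = LateFusionHead()
logits = head(torch.randn(4, 512), torch.randn(4, 768))  # a batch of 4 cases
```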

A systematic review of comparisons of AI and radiologists in the diagnosis of HCC in multiphase CT: implications for practice.

Younger J, Morris E, Arnold N, Athulathmudali C, Pinidiyapathirage J, MacAskill W

PubMed · Aug 18, 2025
This systematic review aims to examine the literature on artificial intelligence (AI) algorithms for the diagnosis of hepatocellular carcinoma (HCC) among focal liver lesions compared to radiologists on multiphase CT images, focusing on performance metrics that include sensitivity and specificity as a minimum. We searched Embase, PubMed and Web of Science for studies published from January 2018 to May 2024. Eligible studies evaluated AI algorithms for diagnosing HCC using multiphase CT, with radiologist interpretation as a comparator. The performance of AI models and radiologists was recorded using sensitivity and specificity from each study. TRIPOD+AI was used for quality appraisal, and PROBAST was used to assess risk of bias. Seven of the 3532 studies reviewed were included. All seven studies analysed the performance of AI models and radiologists. Two studies additionally assessed performance with and without supplementary clinical information to assist the AI model in diagnosis. Three studies additionally evaluated the performance of radiologists with assistance of the AI algorithm in diagnosis. The AI algorithms demonstrated a sensitivity ranging from 63.0 to 98.6% and a specificity of 82.0-98.6%. In comparison, junior radiologists (with fewer than 10 years of experience) exhibited a sensitivity of 41.2-92.0% and a specificity of 72.2-100%, while senior radiologists (with more than 10 years of experience) achieved a sensitivity between 63.9% and 93.7% and a specificity ranging from 71.9 to 99.9%. AI algorithms demonstrate adequate performance in the diagnosis of HCC from focal liver lesions on multiphase CT images. Across geographic settings, AI could help streamline workflows and improve access to timely diagnosis. However, thoughtful implementation strategies are still needed to mitigate bias and overreliance.
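For reference, the sensitivity and specificity figures compared above reduce to simple confusion-matrix ratios; the sketch below computes them for a hypothetical AI reader on placeholder labels.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # 1 = HCC on the reference standard
ai_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # hypothetical AI reads
sens, spec = sensitivity_specificity(y_true, ai_pred)
print(f"AI sensitivity {sens:.2f}, specificity {spec:.2f}")
```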

Artificial Intelligence Approaches for Early Prediction of Parkinson's Disease.

Gond A, Kumar A, Kumar A, Kushwaha SKS

PubMed · Aug 18, 2025
Parkinson's disease (PD) is a progressive neurodegenerative disorder that affects both motor and non-motor functions, primarily due to the gradual loss of dopaminergic neurons in the substantia nigra. Traditional diagnostic methods largely depend on clinical symptom evaluation, which often leads to delays in detection and treatment. In recent years, however, artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has emerged as a groundbreaking approach to the diagnosis and management of PD. This review explores the emerging role of AI-driven techniques in early disease detection, continuous monitoring, and the development of personalized treatment strategies. Advanced AI applications, including medical imaging analysis, speech pattern recognition, gait assessment, and the identification of digital biomarkers, have shown remarkable potential in improving diagnostic accuracy and patient care. Additionally, AI-driven telemedicine solutions enable remote and real-time disease monitoring, addressing challenges related to accessibility and early intervention. Despite these promising advancements, several hurdles remain, such as concerns over data privacy, the interpretability of AI models, and the need for rigorous validation before clinical implementation. With PD cases expected to rise significantly by 2030, further research and interdisciplinary collaboration are crucial to refining AI technologies and ensuring their reliability in medical practice. By bridging the gap between technology and neurology, AI has the potential to revolutionize PD management, paving the way for precision medicine and better patient outcomes.