Page 122 of 241 (2410 results)

Multiparametric ultrasound techniques are superior to AI-assisted ultrasound for assessment of solid thyroid nodules: a prospective study.

Li Y, Li X, Yan L, Xiao J, Yang Z, Zhang M, Luo Y

pubmed · Jul 10 2025
To evaluate the diagnostic performance of multiparametric ultrasound (mpUS) and AI-assisted B-mode ultrasound (AI-US) relative to B-mode alone, and their potential to reduce unnecessary biopsies of solid thyroid nodules. This prospective study enrolled 226 solid thyroid nodules (145 malignant and 81 benign on pathology) from 189 patients (35 men and 154 women; age range, 19-73 years; mean age, 45 years). Each nodule was examined using B-mode, microvascular flow imaging (MVFI), elastography with the elasticity contrast index (ECI), and an AI system, and image data were recorded for each modality. Ten readers with different experience levels independently evaluated the B-mode images of each nodule and rendered a "benign" or "malignant" diagnosis, both blinded and unblinded to the AI reports. The most accurate ECI value and MVFI mode were selected and combined with the dichotomous predictions of all readers. Descriptive statistics and AUCs were used to evaluate the diagnostic performance of mpUS and AI-US. Triple mpUS combining B-mode, MVFI, and ECI exhibited the highest diagnostic performance (average AUC = 0.811 vs. 0.677 for B-mode, p = 0.001), followed by AI-US (average AUC = 0.718, p = 0.315). Triple mpUS significantly reduced the unnecessary biopsy rate, by up to 12% (p = 0.007), and its AUC and specificity were significantly higher than those of AI-US (both p < 0.05). Compared with AI-US, triple mpUS (B-mode, MVFI, and ECI) exhibited better diagnostic performance for thyroid cancer diagnosis and significantly reduced the unnecessary biopsy rate. AI systems are expected to take advantage of multimodal information to facilitate diagnosis.

Understanding Dataset Bias in Medical Imaging: A Case Study on Chest X-rays

Ethan Dack, Chengliang Dai

arxiv · Jul 10 2025
Recent works have revisited the infamous "Name That Dataset" task, demonstrating that non-medical datasets contain underlying biases and that the dataset-origin task can be solved with high accuracy. In this work, we revisit the same task applied to popular open-source chest X-ray datasets. Medical images are naturally more difficult to release as open source due to their sensitive nature, which has led to certain open-source datasets becoming extremely popular for research purposes. By performing the same task, we wish to explore whether dataset bias also exists in these datasets. To extend our work, we apply simple transformations to the datasets, repeat the same task, and perform an analysis to identify and explain any detected biases. Given the importance of AI applications in medical imaging, it is vital to establish whether modern methods are taking shortcuts or are focused on the relevant pathology. We implement a range of different network architectures on the datasets: NIH, CheXpert, MIMIC-CXR and PadChest. We hope this work will encourage more explainable research to be performed in medical imaging and the creation of more open-source datasets in the medical domain. Our code can be found here: https://github.com/eedack01/x_ray_ds_bias.

A two-stage dual-task learning strategy for early prediction of pathological complete response to neoadjuvant chemotherapy for breast cancer using dynamic contrast-enhanced magnetic resonance images.

Jing B, Wang J

pubmed · Jul 10 2025
Early prediction of treatment response can facilitate personalized treatment for breast cancer patients. Studies on the I-SPY 2 clinical trial demonstrate that multi-time-point dynamic contrast-enhanced magnetic resonance (DCEMR) imaging improves the accuracy of predicting pathological complete response (pCR) to chemotherapy. However, previous image-based prediction models usually rely on mid- or post-treatment images to ensure the accuracy of prediction, which may offset the benefit of a response-based adaptive treatment strategy. Accurately predicting pCR at an early time point is desirable yet remains challenging. To improve prediction accuracy at the early time point of treatment, we proposed a two-stage dual-task learning strategy to train a deep neural network for early prediction using only early-treatment data. We developed and evaluated our proposed method using the I-SPY 2 dataset, which included DCEMR images acquired at three time points: pretreatment (T0), after 3 weeks of treatment (T1), and after 12 weeks of treatment (T2). In the first stage, we trained a convolutional long short-term memory (LSTM) model using all the data to predict pCR and extract the latent-space image representation at T2. In the second stage, we trained a dual-task model to simultaneously predict pCR and the image representation at T2 using images from T0 and T1. This allowed us to predict pCR earlier without using images from T2. Using the conventional single-stage single-task strategy, the area under the receiver operating characteristic curve (AUROC) was 0.799. Using the proposed two-stage dual-task learning strategy, the AUROC improved to 0.820. Our proposed two-stage dual-task learning strategy significantly improved model performance (p = 0.0025) for predicting pCR at the early time point (3rd week) of neoadjuvant chemotherapy for high-risk breast cancer patients.
The early prediction model can potentially help physicians to intervene early and develop personalized plans at the early stage of chemotherapy.
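The two-stage idea above can be sketched in miniature: a stage-2 "student" sees only early-treatment features and is trained on a joint loss that both classifies pCR and regresses the stage-1 model's latent T2 representation. The numpy sketch below uses linear layers and synthetic data purely for illustration; the shapes, loss weight `lam`, and training details are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (shapes are assumptions): early-treatment image features
# (T0 + T1), the stage-1 model's latent T2 representation, and pCR labels.
n, d_in, d_rep = 200, 16, 8
X = rng.normal(size=(n, d_in))
W_true = rng.normal(size=(d_in, d_rep))
rep_t2 = X @ W_true + 0.1 * rng.normal(size=(n, d_rep))  # stage-1 "teacher" output
y = (rep_t2.sum(axis=1) > 0).astype(float)               # synthetic pCR label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 2: a shared linear trunk with two heads, trained on the dual-task loss
#   L = BCE(pCR head) + lam * MSE(representation head vs. stage-1 output).
W = 0.01 * rng.normal(size=(d_in, d_rep))  # trunk -> representation head
w_cls = 0.01 * rng.normal(size=d_rep)      # classification head on the trunk
lam, lr = 0.5, 0.05

for _ in range(500):
    rep_hat = X @ W                 # predicted T2 representation (task 2)
    p = sigmoid(rep_hat @ w_cls)    # predicted pCR probability (task 1)
    g_logit = (p - y) / n           # gradient of mean BCE w.r.t. the logit
    g_rep = np.outer(g_logit, w_cls) + lam * 2 * (rep_hat - rep_t2) / n
    W -= lr * (X.T @ g_rep)
    w_cls -= lr * (rep_hat.T @ g_logit)

acc = ((sigmoid((X @ W) @ w_cls) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the regression target ties the student's representation to the teacher's, the classifier can exploit T2-level structure while only ever reading T0/T1 inputs, which is the point of the second stage.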

Objective assessment of diagnostic image quality in CT scans: what radiologists and researchers need to know.

Hoeijmakers EJI, Martens B, Wildberger JE, Flohr TG, Jeukens CRLPN

pubmed · Jul 10 2025
Quantifying diagnostic image quality (IQ) is not straightforward but essential for optimizing the balance between IQ and radiation dose, and for ensuring consistent high-quality images in CT imaging. This review provides a comprehensive overview of advanced objective reference-free IQ assessment methods for CT scans, beyond standard approaches. A literature search was performed in PubMed and Web of Science up to June 2024 to identify studies using advanced objective image quality methods on clinical CT scans. Only reference-free methods, which do not require a predefined reference image, were included. Traditional methods relying on the standard deviation of the Hounsfield units, the signal-to-noise ratio or contrast-to-noise ratio, all within a manually selected region-of-interest, were excluded. Eligible results were categorized by IQ metric (i.e., noise, contrast, spatial resolution and other) and assessment method (manual, automated, and artificial intelligence (AI)-based). Thirty-five studies were included that proposed or employed reference-free IQ methods, identifying 12 noise assessment methods, 4 contrast assessment methods, 14 spatial resolution assessment methods and 7 others, based on manual, automated or AI-based approaches. This review emphasizes the transition from manual to fully automated approaches for IQ assessment, including the potential of AI-based methods, and it provides a reference tool for researchers and radiologists who need to make a well-considered choice in how to evaluate IQ in CT imaging. This review examines the challenge of quantifying diagnostic CT image quality, essential for optimization studies and ensuring consistent high-quality images, by providing an overview of objective reference-free diagnostic image quality assessment methods beyond standard methods. Quantifying diagnostic CT image quality remains a key challenge. This review summarizes objective diagnostic image quality assessment techniques beyond standard metrics. 
A decision tree is provided to help select optimal image quality assessment techniques.
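As one concrete illustration of the reference-free, automated noise metrics this review surveys: estimate image noise as the mode of a histogram of local standard deviations computed over soft-tissue patches, with no reference image required. The patch size, HU range, and histogram binning below are illustrative assumptions, not a recommendation from the review.

```python
import numpy as np

def global_noise_level(ct_hu, tissue_range=(0, 100), patch=7):
    """Reference-free noise estimate: mode of the local-SD histogram over
    patches whose mean HU falls in a soft-tissue window (illustrative
    thresholds, not a validated protocol)."""
    img = ct_hu.astype(float)
    h, w = img.shape
    sds = []
    for i in range(0, h - patch, patch):
        for j in range(0, w - patch, patch):
            p = img[i:i + patch, j:j + patch]
            if tissue_range[0] <= p.mean() <= tissue_range[1]:
                sds.append(p.std(ddof=1))
    sds = np.asarray(sds)
    if sds.size == 0:
        return np.nan
    hist, edges = np.histogram(sds, bins=50)
    k = hist.argmax()
    return 0.5 * (edges[k] + edges[k + 1])  # center of the modal bin

# Synthetic demo: uniform 40-HU phantom with sigma = 10 HU Gaussian noise.
rng = np.random.default_rng(1)
phantom = 40 + rng.normal(0, 10, size=(256, 256))
est = global_noise_level(phantom)
print(f"estimated noise: {est:.1f} HU")
```

On the synthetic phantom the modal local SD recovers the injected noise level without any manually drawn region of interest, which is what distinguishes these methods from the excluded manual-ROI approaches.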

Attention-based multimodal deep learning for interpretable and generalizable prediction of pathological complete response in breast cancer.

Nishizawa T, Maldjian T, Jiao Z, Duong TQ

pubmed · Jul 10 2025
Accurate prediction of pathological complete response (pCR) to neoadjuvant chemotherapy has significant clinical utility in the management of breast cancer treatment. Although multimodal deep learning models have shown promise for predicting pCR from medical imaging and other clinical data, their adoption has been limited due to challenges with interpretability and generalizability across institutions. We developed a multimodal deep learning model combining post-contrast-enhanced whole-breast MRI at pre- and post-treatment timepoints with non-imaging clinical features. The model integrates 3D convolutional neural networks and self-attention to capture spatial and cross-modal interactions. We utilized two public multi-institutional datasets to perform internal and external validation of the model: data from the I-SPY 2 trial (N = 660) for model training and validation, and the I-SPY 1 dataset (N = 114) for external validation. Of the 660 patients in I-SPY 2, 217 achieved pCR (32.88%); of the 114 patients in I-SPY 1, 29 achieved pCR (25.44%). The attention-based multimodal model yielded the best predictive performance, with an AUC of 0.73 ± 0.04 on the internal data and 0.71 ± 0.02 on the external dataset. The MRI-only model (internal AUC = 0.68 ± 0.03, external AUC = 0.70 ± 0.04) and the clinical-features-only model (internal AUC = 0.66 ± 0.08, external AUC = 0.71 ± 0.03) trailed in performance, indicating that the combination of both modalities is most effective. We present a robust and interpretable deep learning framework for pCR prediction in breast cancer patients undergoing neoadjuvant chemotherapy. By combining imaging and clinical data with attention-based fusion, the model achieves strong predictive performance and generalizes across institutions.
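The attention-based fusion described above can be illustrated with a single-head scaled dot-product cross-attention step in which an embedded clinical-feature vector attends over imaging tokens (e.g., pooled 3D-CNN features). All shapes and weights below are synthetic stand-ins, not the authors' architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product cross-attention: the clinical
    embedding (query) attends over the imaging tokens (keys/values)."""
    q = query @ Wq                                   # (1, d)
    k = tokens @ Wk                                  # (T, d)
    v = tokens @ Wv                                  # (T, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (1, T), rows sum to 1
    return attn @ v, attn                            # fused (1, d) representation

rng = np.random.default_rng(0)
d = 8
img_tokens = rng.normal(size=(10, d))  # stand-in for pooled 3D-CNN MRI features
clinical = rng.normal(size=(1, d))     # stand-in for embedded clinical features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused, attn = cross_attention(clinical, img_tokens, Wq, Wk, Wv)
print("fused shape:", fused.shape)
```

The attention weights are also what makes this style of fusion inspectable: each weight says how much a given imaging region contributed to the fused prediction for that patient.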

Non-invasive identification of TKI-resistant NSCLC: a multi-model AI approach for predicting EGFR/TP53 co-mutations.

Li J, Xu R, Wang D, Liang Z, Li Y, Wang Q, Bi L, Qi Y, Zhou Y, Li W

pubmed · Jul 10 2025
To investigate the value of a multi-model approach based on preoperative CT scans in predicting EGFR/TP53 co-mutation status. We retrospectively included 2,171 patients with non-small cell lung cancer (NSCLC) who had pre-treatment computed tomography (CT) scans and epidermal growth factor receptor (EGFR) gene sequencing results from West China Hospital between January 2013 and April 2024. A deep-learning model was built to predict EGFR/tumor protein 53 (TP53) co-mutation status. Model performance was evaluated by area under the curve (AUC) and Kaplan-Meier analysis. We further compared the multi-dimensional model with three single-dimensional models separately, and we explored the value of combining clinical factors with machine-learning factors. Additionally, we investigated 546 patients with 56-panel next-generation sequencing and low-dose computed tomography (LDCT) to explore the biological mechanisms underlying the radiomics features. In our cohort of 2,171 patients (1,153 males, 1,018 females; median age 60 years), single-dimensional models were developed using data from 1,055 eligible patients. The multi-dimensional model utilizing a Random Forest classifier achieved superior performance, yielding the highest AUC of 0.843 for predicting EGFR/TP53 co-mutations in the test set. The multi-dimensional model demonstrates promising potential for non-invasive prediction of EGFR and TP53 co-mutations, facilitating early and informed clinical decision-making in NSCLC patients at risk of treatment resistance.
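To make the "Random Forest classifier" step concrete, a bootstrap-aggregated ensemble can be sketched with decision stumps in plain numpy; this toy forest on synthetic features is only a stand-in for the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Pick the single-feature threshold rule with best training accuracy."""
    best_acc, best = -1.0, None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            for flip in (False, True):
                pred = (X[:, j] > t).astype(int)
                if flip:
                    pred = 1 - pred
                a = (pred == y).mean()
                if a > best_acc:
                    best_acc, best = a, (j, t, flip)
    return best

def predict_stump(stump, X):
    j, t, flip = stump
    pred = (X[:, j] > t).astype(int)
    return 1 - pred if flip else pred

def bagged_stumps(X, y, n_trees=25):
    """Bootstrap aggregation: each stump is fit on a resampled cohort."""
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), size=len(y))
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def forest_score(stumps, X):
    return np.mean([predict_stump(s, X) for s in stumps], axis=0)

# Synthetic multi-dimensional features: two informative columns, four noise.
n = 300
X = rng.normal(size=(n, 6))
y = ((X[:, 0] + X[:, 1]) > 0).astype(int)
scores = forest_score(bagged_stumps(X, y), X)
acc = ((scores > 0.5).astype(int) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Real random forests grow deeper trees and subsample features at each split; the sketch keeps only the bagging-plus-voting core that gives the ensemble its robustness.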

The potential of machine learning for personalized medicine in Neurogenetics: Current trends and future directions.

Ghorbian M, Ghorbian S

pubmed · Jul 10 2025
Neurogenetic disorders (NeD) are a group of neurological conditions resulting from inherited genetic defects. By affecting the normal functioning of the nervous system, these diseases lead to serious problems in movement, cognition, and other bodily functions. In recent years, machine learning (ML) approaches have proven highly effective, enabling the analysis and processing of vast amounts of medical data. By analyzing genetic data, medical imaging, and other clinical data, these techniques can contribute to earlier diagnosis and more effective treatment of NeD. However, their use is challenged by issues including data variability, model explainability, and the requirement for interdisciplinary collaboration. This paper investigates the impact of ML on the diagnosis and care of common NeD, such as Alzheimer's disease (AD), Parkinson's disease (PD), Huntington's disease (HD), and multiple sclerosis (MS). The purpose of this research is to determine the opportunities and challenges of using these techniques in the field of neurogenetic medicine. Our findings show that using ML can increase detection accuracy by 85% and reduce detection time by 60%. Additionally, the use of these techniques in predicting patient prognosis has been 70% more accurate than traditional methods. Ultimately, this research will enable medical professionals and researchers to leverage ML approaches in advancing the diagnostic and therapeutic processes of NeD by identifying the relevant opportunities and challenges.

Predicting Thoracolumbar Vertebral Osteoporotic Fractures: Value Assessment of Chest CT-Based Machine Learning.

Chen Y, Che M, Yang H, Yu M, Yang Z, Qin J

pubmed · Jul 10 2025
To assess the value of a chest CT-based machine learning model in predicting osteoporotic vertebral fractures (OVFs) of the thoracolumbar vertebral bodies. We monitored 8,910 patients aged ≥50 years who underwent chest CT (2021-2024), identifying 54 incident OVF cases. Using propensity score matching, 108 controls were selected. The 162 patients were randomly assigned to training (n=113) and testing (n=49) cohorts. Clinical models were developed through logistic regression. Radiomics features were extracted from the thoracolumbar vertebral bodies (T11-L2), and the top 10 features were selected via minimum-redundancy maximum-relevance (mRMR) and the least absolute shrinkage and selection operator (LASSO) to construct a Radscore model. A nomogram model combining clinical and radiomics features was established and evaluated using receiver operating characteristic curves, decision curve analysis (DCA), and calibration plots. Volumetric bone mineral density (vBMD) (OR=0.95, 95%CI=0.93-0.97) and hemoglobin (HGB) (OR=0.96, 95%CI=0.94-0.98) were selected as independent risk factors for the clinical model. From 2,288 radiomics features, 10 were selected for Radscore calculation. The nomogram model (Radscore + vBMD + HGB) achieved an area under the curve (AUC) of 0.938/0.906 in the training/testing cohorts, outperforming both the Radscore (AUC=0.902/0.871) and clinical (AUC=0.802/0.820) models. DCA and calibration plots confirmed the nomogram model's superior predictive capability. The nomogram model combining radiomics and clinical features has high predictive performance, and its predictions of thoracolumbar OVFs can inform clinical decision-making.
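A nomogram of this form is just a logistic model whose linear predictor sums weighted risk factors. The sketch below converts the reported odds ratios (0.95 for vBMD, 0.96 for HGB) to log-odds coefficients; the intercept, Radscore weight, and units are illustrative assumptions, since the abstract does not report them:

```python
import math

# Log-odds coefficients from the reported odds ratios; intercept, Radscore
# weight, and units are illustrative assumptions (not given in the abstract).
beta_vbmd = math.log(0.95)  # per unit vBMD (protective: OR < 1)
beta_hgb = math.log(0.96)   # per unit HGB (protective: OR < 1)
beta_rad = 2.0              # assumed Radscore weight
intercept = 8.0             # assumed

def ovf_risk(vbmd, hgb, radscore):
    """Nomogram-style predicted probability of osteoporotic vertebral fracture."""
    z = intercept + beta_vbmd * vbmd + beta_hgb * hgb + beta_rad * radscore
    return 1.0 / (1.0 + math.exp(-z))

# Lower bone density -> higher predicted fracture risk, all else equal.
r80 = ovf_risk(vbmd=80, hgb=130, radscore=0.5)
r60 = ovf_risk(vbmd=60, hgb=130, radscore=0.5)
print(f"risk at vBMD 80: {r80:.3f}; at vBMD 60: {r60:.3f}")
```

A printed nomogram simply turns each of these weighted terms into a point scale so the sum can be read off without computing the logistic by hand.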

Label-Efficient Chest X-ray Diagnosis via Partial CLIP Adaptation

Heet Nitinkumar Dalsania

arxiv · Jul 9 2025
Modern deep learning implementations for medical imaging usually rely on large labeled datasets. These datasets are often difficult to obtain due to privacy concerns, high costs, and even scarcity of cases. In this paper, a label-efficient strategy is proposed for chest X-ray diagnosis that seeks to reflect real-world hospital scenarios. The experiments use the NIH Chest X-ray14 dataset and a pre-trained CLIP ViT-B/32 model. The model is adapted via partial fine-tuning of its visual encoder and then evaluated using zero-shot and few-shot learning with 1-16 labeled examples per disease class. The tests demonstrate that CLIP's pre-trained vision-language features can be effectively adapted to few-shot medical imaging tasks, achieving over 20% improvement in mean AUC score compared to the zero-shot baseline. The key aspect of this work is to attempt to simulate internal hospital workflows, where image archives exist but annotations are sparse. This work evaluates a practical and scalable solution for both common and rare disease diagnosis. Additionally, this research is intended for academic and experimental purposes only and has not been peer reviewed yet. All code is found at https://github.com/heet007-code/CLIP-disease-xray.
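The few-shot evaluation protocol described (1-16 labeled examples per class, scored by AUC) can be sketched as follows: sample k support examples per class from frozen encoder features, score the remaining cases against a class-centroid direction, and compute AUC via the Mann-Whitney statistic. The nearest-centroid scorer and synthetic features are assumptions for illustration, not the paper's classifier:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic (ties ignored for brevity)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

def few_shot_auc(feats, labels, k, rng):
    """k-shot protocol sketch: k support examples per class; remaining
    cases scored by projection onto the centroid-difference direction."""
    query = np.ones(len(labels), dtype=bool)
    centroids = {}
    for c in (0, 1):
        idx = rng.choice(np.where(labels == c)[0], size=k, replace=False)
        centroids[c] = feats[idx].mean(axis=0)
        query[idx] = False  # support examples are held out of evaluation
    scores = feats[query] @ (centroids[1] - centroids[0])
    return auc(scores, labels[query])

rng = np.random.default_rng(0)
n, d = 400, 32
labels = rng.integers(0, 2, size=n)
feats = rng.normal(size=(n, d)) + 0.8 * labels[:, None]  # separable frozen features
results = {k: few_shot_auc(feats, labels, k, rng) for k in (1, 4, 16)}
for k, a in results.items():
    print(f"{k:>2}-shot AUC: {a:.2f}")
```

Sweeping k this way mirrors the paper's 1-16 shot axis: with well-adapted frozen features, even a handful of labeled examples per class pins down a usable decision direction.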
