
Virtual lung screening trial (VLST): An in silico study inspired by the national lung screening trial for lung cancer detection.

Tushar FI, Vancoillie L, McCabe C, Kavuri A, Dahal L, Harrawood B, Fryling M, Zarei M, Sotoudeh-Paima S, Ho FC, Ghosh D, Harowicz MR, Tailor TD, Luo S, Segars WP, Abadi E, Lafata KJ, Lo JY, Samei E

PubMed · Jul 1 2025
Clinical imaging trials play a crucial role in advancing medical innovation but are often costly, inefficient, and ethically constrained. Virtual Imaging Trials (VITs) present a solution by simulating clinical trial components in a controlled, risk-free environment. The Virtual Lung Screening Trial (VLST), an in silico study inspired by the National Lung Screening Trial (NLST), illustrates the potential of VITs to expedite clinical trials, minimize risks to participants, and promote optimal use of imaging technologies in healthcare. This study aimed to show that a virtual imaging trial platform could investigate some key elements of a major clinical trial, specifically the NLST, which compared computed tomography (CT) and chest radiography (CXR) for lung cancer screening. With simulated cancerous lung nodules, a virtual patient cohort of 294 subjects was created using XCAT human models. Each virtual patient underwent both CT and CXR imaging, with deep learning models, the AI CT-Reader and AI CXR-Reader, acting as virtual readers that recalled patients with suspected lung cancer. The primary outcome was the difference in diagnostic performance between CT and CXR, measured by the area under the curve (AUC). The AI CT-Reader showed superior diagnostic accuracy, achieving an AUC of 0.92 (95% CI: 0.90-0.95) compared to the AI CXR-Reader's AUC of 0.72 (95% CI: 0.67-0.77). Furthermore, at the same 94% CT sensitivity reported by the NLST, the VLST specificity of 73% was similar to the NLST specificity of 73.4%. This CT performance highlights the potential of VITs to replicate certain aspects of clinical trials effectively, paving the way toward a safe and efficient method for advancing imaging-based diagnostics.
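As an illustrative aside (not the VLST pipeline itself), an AUC comparison like the one reported above can be reproduced on toy data using the Mann-Whitney formulation of the AUC, with a percentile bootstrap standing in for the paper's confidence-interval method:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins; ties count as half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        lab, sc = labels[idx], scores[idx]
        if lab.min() == lab.max():  # a resample must contain both classes
            continue
        aucs.append(auc_mann_whitney(lab, sc))
    return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
```

On a perfectly separable toy set this returns an AUC of 1.0; the bootstrap interval narrows as the cohort grows.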

Integrated brain connectivity analysis with fMRI, DTI, and sMRI powered by interpretable graph neural networks.

Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP

PubMed · Jul 1 2025
Multimodal neuroimaging data modeling has become a widely used approach but confronts considerable challenges due to data heterogeneity, which encompasses variability in data types, scales, and formats across modalities. This variability necessitates the deployment of advanced computational methods to integrate and interpret diverse datasets within a cohesive analytical framework. In our research, we combine functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) for joint analysis. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Utilizing the Glasser atlas for parcellation, we integrate imaging-derived features from multiple modalities - functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI - within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating an amalgamation of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved prediction accuracy and uncovers crucial anatomical features and neural connections, deepening our understanding of brain structure and function. This study not only advances multimodal neuroimaging analytics by offering a novel method for integrative analysis of diverse imaging modalities but also improves the understanding of intricate relationships between the brain's structural and functional networks and cognitive development.

MDAL: Modality-difference-based active learning for multimodal medical image analysis via contrastive learning and pointwise mutual information.

Wang H, Jin Q, Du X, Wang L, Guo Q, Li H, Wang M, Song Z

PubMed · Jul 1 2025
Multimodal medical images reveal different characteristics of the same anatomy or lesion, offering significant clinical value. Deep learning has achieved widespread success in medical image analysis with large-scale labeled datasets. However, annotating medical images is expensive and labor-intensive for doctors, and the variations between different modalities further increase the annotation cost for multimodal images. This study aims to minimize the annotation cost for multimodal medical image analysis. We propose MDAL, a novel active learning framework based on modality differences for multimodal medical images. MDAL quantifies sample-wise modality differences through pointwise mutual information estimated by multimodal contrastive learning. We hypothesize that samples with larger modality differences are more informative for annotation and further propose two sampling strategies based on these differences: MaxMD and DiverseMD. Moreover, MDAL can select informative samples in one shot without initial labeled data. We evaluated MDAL on public brain glioma and meningioma segmentation datasets and an in-house ovarian cancer classification dataset. MDAL outperforms other advanced active learning competitors. Moreover, when using only 20%, 20%, and 15% of labeled samples in these datasets, MDAL reaches 99.6%, 99.9%, and 99.3% of the performance of supervised training with the fully labeled datasets, respectively. The results show that our proposed MDAL could significantly reduce the annotation cost for multimodal medical image analysis. We expect MDAL could be further extended to other multimodal medical data for lower annotation costs.
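The core idea of selecting by modality difference can be sketched in a few lines. This toy version uses an InfoNCE-style PMI estimate over paired modality embeddings in place of the paper's learned estimator (the embeddings, temperature, and the MaxMD selection rule shown here are illustrative assumptions):

```python
import numpy as np

def pmi_from_embeddings(emb_a, emb_b, temperature=0.1):
    """Toy InfoNCE-style PMI estimate between paired modality embeddings:
    the matched pair's similarity minus its log-marginal over the batch.
    Low PMI suggests a large difference between the two modalities."""
    emb_a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    emb_b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = emb_a @ emb_b.T / temperature              # all pairwise similarities
    log_marginal = np.log(np.exp(sim).mean(axis=1))  # batch-averaged denominator
    return np.diag(sim) - log_marginal               # PMI of matched pairs only

def max_md_select(pmi, k):
    """MaxMD-style strategy: lower PMI means larger modality difference,
    so annotate the k samples with the smallest PMI first."""
    return np.argsort(pmi)[:k]
```

With a batch where one sample's two modality embeddings are misaligned, that sample gets the lowest PMI and is selected first.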

Coronary p-Graph: Automatic classification and localization of coronary artery stenosis from Cardiac CTA using DSA-based annotations.

Zhang Y, Zhang X, He Y, Zang S, Liu H, Liu T, Zhang Y, Chen Y, Shu H, Coatrieux JL, Tang H, Zhang L

PubMed · Jul 1 2025
Coronary artery disease (CAD) is a prevalent cardiovascular condition with profound health implications. Digital subtraction angiography (DSA) remains the gold standard for diagnosing vascular disease, but its invasiveness and procedural demands underscore the need for alternative diagnostic approaches. Coronary computed tomography angiography (CCTA) has emerged as a promising non-invasive method for accurately classifying and localizing coronary artery stenosis. However, the complexity of CCTA images and their dependence on manual interpretation highlight the essential role of artificial intelligence in supporting clinicians in stenosis detection. This paper introduces a novel framework, the Coronary proposal-based Graph Convolutional Network (Coronary p-Graph), designed for the automated detection of coronary stenosis from CCTA scans. The framework transforms CCTA data into curved multi-planar reformation (CMPR) images that delineate the coronary artery centerline. After aligning the CMPR volume along this centerline, the entire vasculature is analyzed using a convolutional neural network (CNN) for initial feature extraction. Based on predefined criteria informed by prior knowledge, the model generates candidate stenotic segments, termed "proposals," which serve as graph nodes. The spatial relationships between nodes are then modeled as edges, constructing a graph representation that is processed using a graph convolutional network (GCN) for precise classification and localization of stenotic segments. All CCTA images were rigorously annotated by three expert radiologists, using DSA reports as the reference standard. This novel methodology offers diagnostic performance equivalent to invasive DSA based solely on non-invasive CCTA, potentially reducing the need for invasive procedures. The proposed method was evaluated on a retrospective dataset comprising 259 cases, each with paired CCTA and corresponding DSA reports.
Quantitative analyses demonstrated the superior performance of our approach compared to existing methods, with the following metrics: accuracy of 0.844, specificity of 0.910, area under the receiver operating characteristic curve (AUC) of 0.74, and mean absolute error (MAE) of 0.157.
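The propagation step over proposal nodes can be illustrated with one generic, symmetrically normalized graph-convolution layer in NumPy. This is a textbook GCN sketch, not the authors' implementation; the adjacency here would encode the spatial relationships between stenosis proposals:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step over stenosis 'proposal' nodes:
    a symmetrically normalized adjacency (with self-loops) mixes each
    proposal's CNN features with those of spatially adjacent proposals."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)  # linear map + ReLU
```

Stacking a few such layers and adding a per-node classifier head yields the classification-and-localization pattern the abstract describes.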

Deep Learning Based on Ultrasound Images Differentiates Parotid Gland Pleomorphic Adenomas and Warthin Tumors.

Li Y, Zou M, Zhou X, Long X, Liu X, Yao Y

PubMed · Jul 1 2025
To explore the clinical significance of employing deep learning on ultrasound images to develop an automated model that accurately identifies pleomorphic adenomas and Warthin tumors in the salivary glands. A retrospective study was conducted on 91 patients who underwent ultrasonography examinations between January 2016 and December 2023 and were subsequently diagnosed with pleomorphic adenoma or Warthin's tumor based on postoperative pathological findings. A total of 526 ultrasonography images were collected for analysis. Convolutional neural network (CNN) models, including ResNet18, MobileNetV3Small, and InceptionV3, were trained and validated using these images for the differentiation of pleomorphic adenoma and Warthin's tumor. Performance evaluation metrics such as receiver operating characteristic (ROC) curves, area under the curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value were utilized. Two ultrasound physicians, with varying levels of expertise, conducted independent evaluations of the ultrasound images. Subsequently, a comparative analysis was performed between the diagnostic outcomes of the ultrasound physicians and the results obtained from the best-performing model. Inter-rater agreement between routine ultrasonography interpretation by the two expert ultrasonographers and the automatic identification diagnosis of the best model in relation to pathological results was assessed using kappa tests. The deep learning models achieved favorable performance in differentiating pleomorphic adenoma from Warthin's tumor. The ResNet18, MobileNetV3Small, and InceptionV3 models exhibited diagnostic accuracies of 82.4% (AUC: 0.932), 87.0% (AUC: 0.946), and 77.8% (AUC: 0.811), respectively. Among these models, MobileNetV3Small demonstrated the highest performance.
The experienced ultrasonographer achieved a diagnostic accuracy of 73.5%, with sensitivity, specificity, positive predictive value, and negative predictive value of 73.7%, 73.3%, 77.8%, and 68.8%, respectively. The less-experienced ultrasonographer achieved a diagnostic accuracy of 69.0%, with sensitivity, specificity, positive predictive value, and negative predictive value of 66.7%, 71.4%, 71.4%, and 66.7%, respectively. The kappa test revealed strong consistency between the best-performing deep learning model and postoperative pathological diagnoses (kappa value: 0.778, p-value < 0.001). In contrast, the less-experienced ultrasonographer demonstrated poor consistency in image interpretations (kappa value: 0.380, p-value < 0.05). The diagnostic accuracy of the best deep learning model was significantly higher than that of the ultrasonographers, and the experienced ultrasonographer exhibited higher diagnostic accuracy than the less-experienced one. This study demonstrates the promising performance of a deep learning-based method utilizing ultrasonography images for the differentiation of pleomorphic adenoma and Warthin's tumor. The approach reduces subjective errors, provides decision support for clinicians, and improves diagnostic consistency.
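The kappa statistic used for the agreement analysis above has a short closed form: observed agreement corrected for the agreement expected by chance. A plain-Python sketch (illustrative only, not the study's statistical software):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two label sequences of equal length:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)
```

Perfect agreement gives kappa = 1.0; values near the study's 0.778 are conventionally read as substantial agreement, and values near 0.380 as fair.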

A vision transformer-convolutional neural network framework for decision-transparent dual-energy X-ray absorptiometry recommendations using chest low-dose CT.

Kuo DP, Chen YC, Cheng SJ, Hsieh KL, Li YT, Kuo PC, Chang YC, Chen CY

PubMed · Jul 1 2025
This study introduces an ensemble framework that integrates Vision Transformer (ViT) and Convolutional Neural Networks (CNN) models to leverage their complementary strengths, generating visualized and decision-transparent recommendations for dual-energy X-ray absorptiometry (DXA) scans from chest low-dose computed tomography (LDCT). The framework was developed using data from 321 individuals and validated with an independent test cohort of 186 individuals. It addresses two classification tasks: (1) distinguishing normal from abnormal bone mineral density (BMD) and (2) differentiating osteoporosis from non-osteoporosis. Three field-of-view (FOV) settings-fitFOV (entire vertebra), halfFOV (vertebral body only), and largeFOV (fitFOV + 20 %)-were analyzed to assess their impact on model performance. Model predictions were weighted and combined to enhance classification accuracy, and visualizations were generated to improve decision transparency. DXA scans were recommended for individuals classified as having abnormal BMD or osteoporosis. The ensemble framework significantly outperformed individual models in both classification tasks (McNemar test, p < 0.001). In the development cohort, it achieved 91.6 % accuracy for task 1 with largeFOV (area under the receiver operating characteristic curve [AUROC]: 0.97) and 86.0 % accuracy for task 2 with fitFOV (AUROC: 0.94). In the test cohort, it demonstrated 86.6 % accuracy for task 1 (AUROC: 0.93) and 76.9 % accuracy for task 2 (AUROC: 0.99). DXA recommendation accuracy was 91.6 % and 87.1 % in the development and test cohorts, respectively, with notably high accuracy for osteoporosis detection (98.7 % and 100 %). This combined ViT-CNN framework effectively assesses bone status from LDCT images, particularly when utilizing fitFOV and largeFOV settings. 
By visualizing classification confidence and vertebral abnormalities, the proposed framework enhances decision transparency and supports clinicians in making informed DXA recommendations following opportunistic osteoporosis screening.
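The "weighted and combined" prediction step can be sketched as a convex combination of the two models' output probabilities. The weights and threshold below are assumed placeholders, not the paper's tuned values:

```python
import numpy as np

def weighted_ensemble(prob_vit, prob_cnn, w_vit=0.5, threshold=0.5):
    """Combine ViT and CNN abnormality probabilities with a convex weight
    (w_vit would be tuned on a validation split) and threshold the result
    into a binary DXA-recommendation decision."""
    combined = w_vit * np.asarray(prob_vit) + (1 - w_vit) * np.asarray(prob_cnn)
    return combined, (combined >= threshold).astype(int)
```

In practice such weights are often selected per task and per field of view, which matches the abstract's observation that the best FOV setting differs between the two classification tasks.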

Development and validation of a nomogram for predicting bone marrow involvement in lymphoma patients based on <sup>18</sup>F-FDG PET radiomics and clinical factors.

Lu D, Zhu X, Mu X, Huang X, Wei F, Qin L, Liu Q, Fu W, Deng Y

PubMed · Jul 1 2025
This study aimed to develop and validate a nomogram combining <sup>18</sup>F-FDG PET radiomics and clinical factors to non-invasively predict bone marrow involvement (BMI) in patients with lymphoma. A radiomics nomogram was developed using monocentric data, randomly divided into a training set (70%) and a test set (30%). Bone marrow biopsy (BMB) served as the gold standard for BMI diagnosis. Independent clinical risk factors were identified through univariate and multivariate logistic regression analyses to construct a clinical model. Radiomics features were extracted from PET and CT images and selected using least absolute shrinkage and selection operator (LASSO) regression, yielding a radiomics score (Rad<sub>score</sub>) for each patient. Models based on clinical factors, CT Rad<sub>score</sub>, and PET Rad<sub>score</sub> were established and evaluated using eight machine learning algorithms to identify the optimal prediction model. A combined model was constructed and presented as a nomogram. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis (DCA). A total of 160 patients were included, of whom 70 had BMI based on BMB results. The training group comprised 112 patients (BMI: 56, without BMI: 56), while the test group included 48 patients (BMI: 14, without BMI: 34). Independent risk factors, including the number of extranodal involvements and B symptoms, were incorporated into the clinical model. For the clinical model, CT Rad<sub>score</sub>, and PET Rad<sub>score</sub>, the AUCs in the test set were 0.820 (95% CI: 0.705-0.935), 0.538 (95% CI: 0.351-0.723), and 0.836 (95% CI: 0.686-0.986), respectively. Due to the limited diagnostic performance of CT Rad<sub>score</sub>, the nomogram was constructed using PET Rad<sub>score</sub> and the clinical model.
The radiomics nomogram achieved AUCs of 0.916 (95% CI: 0.865-0.967) in the training set and 0.863 (95% CI: 0.763-0.964) in the test set. Calibration curves and DCA confirmed the nomogram's discrimination, calibration, and clinical utility in both sets. By integrating PET Rad<sub>score</sub>, the number of extranodal involvements, and B symptoms, this <sup>18</sup>F-FDG PET radiomics-based nomogram offers a non-invasive method to predict bone marrow status in lymphoma patients, providing nuclear medicine physicians with valuable decision support for pre-treatment evaluation.

Development of Multiparametric Prognostic Models for Stereotactic Magnetic Resonance Guided Radiation Therapy of Pancreatic Cancers.

Michalet M, Valenzuela G, Nougaret S, Tardieu M, Azria D, Riou O

PubMed · Jul 1 2025
Stereotactic magnetic resonance guided adaptive radiation therapy (SMART) is a new option for local treatment of unresectable pancreatic ductal adenocarcinoma, showing interesting survival and local control (LC) results. Despite this, some patients will experience early local and/or metastatic recurrence leading to death. We aimed to develop multiparametric prognostic models for these patients. All patients treated in our institution with SMART for an unresectable pancreatic ductal adenocarcinoma between October 21, 2019, and August 5, 2022 were included. Several initial clinical characteristics as well as dosimetric data of SMART were recorded. Radiomics data from 0.35-T simulation magnetic resonance imaging were extracted. All these data were combined to build prognostic models of overall survival (OS) and LC using machine learning algorithms. Eighty-three patients with a median age of 64.9 years were included. A majority of patients had a locally advanced pancreatic cancer (77%). The median OS was 21 months after SMART completion and 27 months after chemotherapy initiation. The 6- and 12-month post-SMART OS was 87.8% (95% CI, 78.2%-93.2%) and 70.9% (95% CI, 58.8%-80.0%), respectively. The best model for OS was the Cox proportional hazard survival analysis using clinical data, with an inverse-probability-of-censoring-weighted (IPCW) concordance index of 0.87. Tested on its 12-month OS prediction capacity, this model had good performance (sensitivity 67%, specificity 71%, and area under the curve 0.90). The median LC was not reached. The 6- and 12-month post-SMART LC was 92.4% (95% CI, 83.7%-96.6%) and 76.3% (95% CI, 62.6%-85.5%), respectively. The best model for LC was the component-wise gradient boosting survival analysis using clinical and radiomics data, with an IPCW concordance index of 0.80. Tested on its 9-month LC prediction capacity, this model had good performance (sensitivity 50%, specificity 97%, and area under the curve 0.78).
Combining clinical and radiomics data in multiparametric prognostic models using machine learning algorithms showed good performance for the prediction of OS and LC. External validation of these models will be needed.
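The concordance indices reported above measure how often a higher predicted risk corresponds to an earlier event. As a toy illustration, here is plain Harrell's C for right-censored data; the paper's IPCW variant additionally reweights comparable pairs to correct for censoring, a correction omitted in this sketch:

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data:
    among comparable pairs (i has an observed event before time j),
    the fraction where the higher-risk patient fails first; risk ties
    count as half."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 0.5 means the risk scores are uninformative; the 0.87 and 0.80 reported above indicate strong discrimination.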

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed · Jul 1 2025
The rapid advancement of artificial intelligence (AI) has great potential to impact healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, utilizing the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT demonstrated high performance in identifying normal chest X-rays, with a sensitivity of 98.9%, specificity of 93.9%, and accuracy of 94.7%. However, the model's performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2%, specificity 93.7%) and pneumothorax (sensitivity 77.4%, specificity 89.1%), while performance for atelectasis and emphysema was lower. ChatGPT demonstrates potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies like pneumonia. However, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
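The sensitivity, specificity, and accuracy figures quoted throughout these abstracts all derive from a 2x2 confusion table. A minimal helper makes the definitions concrete:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic metrics from a 2x2 confusion table:
    sensitivity = recall on positives, specificity = recall on negatives,
    accuracy = overall fraction correct."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

For example, 90 true positives, 10 false negatives, 80 true negatives, and 20 false positives give sensitivity 0.90, specificity 0.80, and accuracy 0.85.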

A deep-learning model to predict the completeness of cytoreductive surgery in colorectal cancer with peritoneal metastasis.

Lin Q, Chen C, Li K, Cao W, Wang R, Fichera A, Han S, Zou X, Li T, Zou P, Wang H, Ye Z, Yuan Z

PubMed · Jul 1 2025
Colorectal cancer (CRC) with peritoneal metastasis (PM) is associated with poor prognosis. The Peritoneal Cancer Index (PCI) is used to evaluate the extent of PM and to select patients for Cytoreductive Surgery (CRS). However, the PCI score is not accurate enough to guide patient selection for CRS. We developed a novel AI framework of decoupling feature alignment and fusion (DeAF) by deep learning to aid the selection of PM patients and predict the surgical completeness of CRS. A total of 186 CRC patients with PM recruited from four tertiary hospitals were enrolled. In the training cohort, deep learning was used to train the DeAF model using the SimSiam algorithm on contrast-enhanced CT images, and clinicopathological parameters were then fused to increase performance. Accuracy, sensitivity, specificity, and ROC AUC were evaluated both in the internal validation cohort and in three external cohorts. The DeAF model demonstrated robust accuracy in predicting the completeness of CRS, with an AUC of 0.9 (95% CI: 0.793-1.000) in the internal validation cohort. The model can guide the selection of suitable patients and predict potential benefits from CRS. The high performance in predicting CRS completeness was validated in three external cohorts, with AUC values of 0.906 (95% CI: 0.812-1.000), 0.960 (95% CI: 0.885-1.000), and 0.933 (95% CI: 0.791-1.000), respectively. The novel DeAF framework can help surgeons select suitable PM patients for CRS and predict the completeness of CRS. The model can change surgical decision-making and provide potential benefits for PM patients.