Page 142 of 3993984 results

Identifying signatures of image phenotypes to track treatment response in liver disease.

Perkonigg M, Bastati N, Ba-Ssalamah A, Mesenbrink P, Goehler A, Martic M, Zhou X, Trauner M, Langs G

PubMed · Jul 21, 2025
Quantifiable image patterns associated with disease progression and treatment response are critical tools for guiding individual treatment and for developing novel therapies. Here, we show that unsupervised machine learning can identify a pattern vocabulary of liver tissue in magnetic resonance images that quantifies treatment response in diffuse liver disease. Deep clustering networks simultaneously encode and cluster patches of medical images into a low-dimensional latent space to establish a tissue vocabulary. The resulting tissue types capture differential tissue change and its location in the liver associated with treatment response. We demonstrate the utility of the vocabulary in a randomized controlled trial cohort of patients with nonalcoholic steatohepatitis. First, we use the vocabulary to compare longitudinal liver change in a placebo and a treatment cohort. Results show that the method identifies specific liver tissue change pathways associated with treatment and enables a better separation between treatment groups than established non-imaging measures. Moreover, we show that the vocabulary can predict biopsy-derived features from non-invasive imaging data. Finally, we validate the method in a separate replication cohort to demonstrate its applicability.
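The core pipeline described above (encode patches into a low-dimensional latent space, cluster them into a tissue vocabulary, then summarize each scan by its tissue-type frequencies) can be sketched in a few lines. This is a toy illustration only: the random linear projection stands in for the trained deep clustering encoder, and plain k-means stands in for its clustering head.

```python
import numpy as np

def encode_patches(patches, proj):
    """Stand-in encoder: project flattened patches into a low-dim latent space.
    In the paper, a trained deep clustering network plays this role."""
    flat = patches.reshape(len(patches), -1)
    return flat @ proj

def kmeans(z, k, iters=20, seed=0):
    """Plain k-means over latent codes; cluster ids act as tissue 'words'."""
    rng = np.random.default_rng(seed)
    centers = z[rng.choice(len(z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((z[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = z[labels == j].mean(0)
    return labels

def tissue_histogram(labels, k):
    """Per-scan tissue-type frequencies: the quantitative signature."""
    return np.bincount(labels, minlength=k) / len(labels)

rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 8, 8))   # toy 8x8 image patches
proj = rng.normal(size=(64, 4))          # toy 'encoder' weights
z = encode_patches(patches, proj)
labels = kmeans(z, k=5)
hist = tissue_histogram(labels, k=5)
```

Comparing such histograms longitudinally (baseline vs. follow-up) is what turns the vocabulary into a treatment-response readout.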

DREAM: A framework for discovering mechanisms underlying AI prediction of protected attributes

Gadgil, S. U., DeGrave, A. J., Janizek, J. D., Xu, S., Nwandu, L., Fonjungo, F., Lee, S.-I., Daneshjou, R.

medRxiv preprint · Jul 21, 2025
Recent advances in Artificial Intelligence (AI) have started disrupting the healthcare industry, especially medical imaging, and AI devices are increasingly being deployed into clinical practice. Such classifiers have previously demonstrated the ability to discern a range of protected demographic attributes (like race, age, and sex) from medical images with unexpectedly high performance, a sensitive task that is difficult even for trained physicians. In this study, we motivate and introduce a general explainable AI (XAI) framework called DREAM (DiscoveRing and Explaining AI Mechanisms) for interpreting how AI models trained on medical images predict protected attributes. Focusing on two modalities, radiology and dermatology, we successfully train high-performing classifiers for predicting race from chest x-rays (ROC-AUC score of ~0.96) and sex from dermoscopic lesions (ROC-AUC score of ~0.78). We highlight how incorrect use of these demographic shortcuts can have a detrimental effect on the performance of a clinically relevant downstream task, such as disease diagnosis under a domain shift. Further, we employ various XAI techniques to identify specific signals which can be leveraged to predict sex. Finally, we propose a technique, which we call removal via balancing, to quantify how much a signal contributes to the classification performance. Using this technique and the signals identified, we are able to explain ~15% of the total performance for radiology and ~42% of the total performance for dermatology. We envision DREAM to be broadly applicable to other modalities and demographic attributes. This analysis not only underscores the importance of cautious AI application in healthcare but also opens avenues for improving the transparency and reliability of AI-driven diagnostic tools.
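The "removal via balancing" idea, quantifying a signal's contribution by re-evaluating performance on a subsample where the signal is equally prevalent in both classes, can be sketched on synthetic data. The data, the balancing procedure, and the attribution formula below are assumptions for illustration; the authors' exact method may differ.

```python
import numpy as np

def auc(y, score):
    """Rank-based ROC-AUC (Mann-Whitney U formulation; assumes no ties)."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    pos = y == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def balance_signal(y, s, rng):
    """Subsample so the binary signal s is equally prevalent in both classes,
    removing its value as a shortcut. A plain sketch of the balancing idea;
    the paper's exact procedure may differ."""
    keep = []
    for cls in (0, 1):
        with_s = np.where((y == cls) & (s == 1))[0]
        no_s = np.where((y == cls) & (s == 0))[0]
        n = min(len(with_s), len(no_s))
        keep.extend(rng.choice(with_s, n, replace=False))
        keep.extend(rng.choice(no_s, n, replace=False))
    return np.array(keep)

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                                      # toy labels
s = (rng.random(n) < np.where(y == 1, 0.8, 0.2)).astype(int)   # shortcut signal
score = s + 0.1 * rng.normal(size=n)                           # classifier leaning on s

auc_before = auc(y, score)
idx = balance_signal(y, s, rng)
auc_after = auc(y[idx], score[idx])
contribution = auc_before - auc_after   # performance attributable to s
```

Here the classifier's score depends almost entirely on the shortcut, so balancing collapses its AUC toward 0.5 and the drop quantifies the signal's contribution.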

Prediction of OncotypeDX recurrence score using H&E stained WSI images

Cohen, S., Shamai, G., Sabo, E., Cretu, A., Barshack, I., Goldman, T., Bar-Sela, G., Pearson, A. T., Huo, D., Howard, F. M., Kimmel, R., Mayer, C.

medRxiv preprint · Jul 21, 2025
The OncotypeDX 21-gene assay is a widely adopted tool for estimating recurrence risk and informing chemotherapy decisions in early-stage, hormone receptor-positive, HER2-negative breast cancer. Although informative, its high cost and long turnaround time limit accessibility and delay treatment in low- and middle-income countries, creating a need for alternative solutions. This study presents a deep learning-based approach for predicting OncotypeDX recurrence scores directly from hematoxylin and eosin-stained whole slide images. Our approach leverages a deep learning foundation model pre-trained on 171,189 slides via self-supervised learning, which is fine-tuned for our task. The model was developed and validated using five independent cohorts, three of which are external. On the two external cohorts that include OncotypeDX scores, the model achieved AUCs of 0.825 and 0.817, and identified 21.9% and 25.1% of the patients as low-risk with sensitivities of 0.97 and 0.95 and negative predictive values of 0.97 and 0.96, showing strong generalizability despite variations in staining protocols and imaging devices. Kaplan-Meier analysis demonstrated that patients classified as low-risk by the model had a significantly better prognosis than those classified as high-risk, with hazard ratios of 4.1 (P < 0.001) and 2.0 (P < 0.01) on the two external cohorts that include patient outcomes. This artificial intelligence-driven solution offers a rapid, cost-effective, and scalable alternative to genomic testing, with the potential to enhance personalized treatment planning, especially in resource-constrained settings.
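Identifying a low-risk group at fixed sensitivity and reporting its negative predictive value, as above, amounts to a simple threshold search. The rule below is hypothetical; the abstract does not describe how the low-risk cutoff was chosen.

```python
import numpy as np

def low_risk_rule(scores, labels, min_sensitivity=0.95):
    """Find the highest threshold t such that flagging 'score >= t' as
    high-risk still keeps sensitivity >= min_sensitivity; everyone below t
    is called low-risk. Illustrative only; the paper does not spell out its
    thresholding rule. Returns (t, low_risk_fraction, sensitivity, NPV)."""
    best = None
    for t in np.unique(scores):          # np.unique sorts ascending
        flagged = scores >= t
        sens = flagged[labels == 1].mean()
        low = ~flagged
        npv = (labels[low] == 0).mean() if low.any() else 1.0
        if sens >= min_sensitivity:
            best = (t, low.mean(), sens, npv)   # keep highest qualifying t
    return best

rng = np.random.default_rng(0)
n = 1000
labels = (rng.random(n) < 0.2).astype(int)   # ~20% truly high-risk (toy data)
scores = labels + 0.7 * rng.normal(size=n)   # toy model scores
t, frac_low, sens, npv = low_risk_rule(scores, labels)
```

The fraction flagged low-risk and the resulting NPV are exactly the quantities the abstract reports for its two external cohorts.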

Lysophospholipid metabolism, clinical characteristics, and artificial intelligence-based quantitative assessments of chest CT in patients with stable COPD and healthy smokers.

Zhou Q, Xing L, Ma M, Qiongda B, Li D, Wang P, Chen Y, Liang Y, ChuTso M, Sun Y

PubMed · Jul 21, 2025
The specific role of lysophospholipids (LysoPLs) in the pathogenesis of chronic obstructive pulmonary disease (COPD) is not yet fully understood. We determined serum LysoPLs in 20 patients with stable COPD and 20 healthy smokers using liquid chromatography-mass spectrometry (LC-MS) and matching with the lipidIMMS library, and integrated these data with spirometry, systemic inflammation markers, and quantitative chest CT generated by an automated 3D-U-Net artificial intelligence algorithm model. Our findings identified three differential LysoPLs, lysophosphatidylcholine (LPC) (18:0), LPC (18:1), and LPC (18:2), which were significantly lower in the COPD group than in healthy smokers. Significant negative correlations were observed between these LPCs and the inflammatory markers C-reactive protein and interleukin-6. LPC (18:0) and (18:2) correlated with higher post-bronchodilator FEV1, and the latter also correlated with FEV1% predicted, forced vital capacity (FVC), and FEV1/FVC ratio. Additionally, these three LPCs were negatively correlated with the volume and percentage of low-attenuation areas (LAA), high-attenuation areas (HAA), honeycombing, reticular patterns, ground-glass opacities (GGO), and consolidation on CT imaging. In the patients with COPD, the three LPCs were most significantly associated with HAA and GGO. In conclusion, patients with stable COPD exhibited a unique LysoPL metabolism profile, with LPC (18:0), LPC (18:1), and LPC (18:2) being the most significantly altered lipid molecules. The reduction in these three LPCs was associated with impaired pulmonary function and was also linked to a greater extent of emphysema and interstitial lung abnormalities.
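The reported associations are correlation analyses between lipid levels and inflammatory or CT-derived measures. A minimal sketch with a hand-rolled Pearson coefficient and hypothetical data (the abstract does not state which correlation statistic was used):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement vectors."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Toy data standing in for the study's measurements: LPC(18:2) level vs.
# CT-quantified ground-glass opacity volume (hypothetical numbers).
rng = np.random.default_rng(0)
lpc_18_2 = rng.normal(size=100)
ggo_volume = -0.8 * lpc_18_2 + 0.3 * rng.normal(size=100)
r = pearson_r(lpc_18_2, ggo_volume)
```

A strongly negative r here mirrors the paper's finding that lower LPC levels accompany greater GGO and HAA extent.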

Establishment of AI-assisted diagnosis of the infraorbital posterior ethmoid cells based on deep learning.

Ni T, Qian X, Zeng Q, Ma Y, Xie Z, Dai Y, Che Z

PubMed · Jul 21, 2025
To construct an artificial intelligence (AI)-assisted model for identifying the infraorbital posterior ethmoid cells (IPECs) based on deep learning using sagittal CT images. Sagittal CT images of 277 samples with and 142 samples without IPECs were retrospectively collected. An experienced radiologist in this field selected the sagittal CT image that best displayed the IPECs. The images were randomly assigned to the training and test sets, with 541 sides in the training set and 97 sides in the test set. The training set was used to perform a five-fold cross-validation, and the results of each fold were used to predict the test set. The model was built using nnUNet, and its performance was evaluated using Dice and standard classification metrics. The model achieved a Dice coefficient of 0.900 in the training set and 0.891 in the test set. Precision was 0.965 for the training set and 1.000 for the test set, while sensitivity was 0.981 and 0.967, respectively. A comparison of the diagnostic efficacy between manual outlining by a less-experienced radiologist and AI-assisted outlining showed a significant improvement in detection efficiency (P < 0.05). With AI assistance, all IPECs were correctly identified and outlined, including 12 sides that the less-experienced radiologist had delineated suboptimally. AI models can help radiologists identify IPECs, which can further prompt relevant clinical interventions.
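The Dice coefficient used to evaluate the segmentation model measures overlap between the predicted and reference masks; a minimal sketch on toy 2D masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ gt| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1   # toy ground-truth mask (4x4 square)
b = np.zeros((8, 8), int); b[3:7, 2:6] = 1   # toy prediction, shifted one row
```

Here `dice(a, b)` is 0.75 (12 overlapping pixels out of 16 + 16), while a perfect prediction scores 1.0.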

Deep learning using nasal endoscopy and T2-weighted MRI for prediction of sinonasal inverted papilloma-associated squamous cell carcinoma: an exploratory study.

Ren J, Ren Z, Zhang D, Yuan Y, Qi M

PubMed · Jul 21, 2025
Detecting malignant transformation of sinonasal inverted papilloma (SIP) into squamous cell carcinoma (SIP-SCC) before surgery is a clinical need. We aimed to explore the value of deep learning (DL) that leverages nasal endoscopy and T2-weighted magnetic resonance imaging (T2W-MRI) for automated tumor segmentation and differentiation between SIP and SIP-SCC. We conducted a retrospective analysis of 174 patients diagnosed with SIPs, who were divided into a training cohort (n = 121) and a testing cohort (n = 53). Three DL architectures were utilized to train automated segmentation models for endoscopic and T2W-MRI images. DL scores predicting SIP-SCC were generated using DenseNet121 from both modalities and combined to create a dual-modality DL nomogram. The diagnostic performance of the DL models and of two radiologists was assessed using the area under the receiver operating characteristic curve (AUROC), with comparisons made using the DeLong method. In the testing cohort, FCN_ResNet101 and VNet exhibited superior performance in automated segmentation, achieving mean dice similarity coefficients of 0.95 ± 0.03 for endoscopy and 0.93 ± 0.02 for T2W-MRI, respectively. The dual-modality DL nomogram based on automated segmentation demonstrated the highest predictive performance for SIP-SCC (AUROC 0.865), outperforming the radiology resident (AUROC 0.672, p = 0.071) and the attending radiologist (AUROC 0.707, p = 0.066), with a trend toward significance. Notably, both radiologists improved their diagnostic performance with the assistance of the DL nomogram (AUROCs 0.734 and 0.834). The DL framework integrating endoscopy and T2W-MRI offers a fully automated predictive tool for SIP-SCC. The integration of endoscopy and T2W-MRI within a well-established DL framework enables fully automated prediction of SIP-SCC, potentially improving decision-making for patients with suspicious SIP.
Detecting the transformation of SIP into SIP-SCC before surgery is both critical and challenging. Endoscopy and T2W-MRI were integrated using DL for predicting SIP-SCC. The dual-modality DL nomogram outperformed two radiologists. The nomogram may improve decision-making for patients with suspicious SIP.
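How a "dual-modality DL nomogram" combines the per-modality DL scores is not specified in the abstract; a common choice is late fusion with logistic regression, sketched here on synthetic scores (all data and variable names are hypothetical):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=500):
    """Minimal logistic regression by gradient descent. Stands in for the
    dual-modality nomogram combining the two DL scores; the paper's exact
    combination model is not described in the abstract."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n).astype(float)       # toy SIP (0) vs SIP-SCC (1) labels
endo_score = y + 0.8 * rng.normal(size=n)     # toy endoscopy DL score
mri_score = y + 0.8 * rng.normal(size=n)      # toy T2W-MRI DL score
X = np.column_stack([endo_score, mri_score])
w, b = fit_logistic(X, y)
p = predict(X, w, b)
```

Because the two modalities carry partly independent information, the fused probability separates the classes better than either score alone, which is the rationale for the dual-modality design.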

Fully automated pedicle screw manufacturer identification in plain radiograph with deep learning methods.

Waranusast R, Riyamongkol P, Weerakul S, Chaibhuddanugul N, Laoruengthana A, Mahatthanatrakul A

PubMed · Jul 21, 2025
Pedicle screw manufacturer identification is crucial for revision surgery planning; however, this information is occasionally unavailable. We developed a deep learning-based algorithm to identify the pedicle screw manufacturer from plain radiographs. We collected anteroposterior (AP) and lateral radiographs from 276 patients who had thoracolumbar spine surgery with pedicle screws from three international manufacturers. The samples were randomly assigned to a training set (178), a validation set (40), and a test set (58). The algorithm incorporated a convolutional neural network (CNN) model to classify the radiograph as AP or lateral, followed by YOLO object detection to locate the pedicle screws. Another CNN classifier model then identified the manufacturer of each pedicle screw in the AP and lateral views. A voting scheme determined the final classification. For comparison, two spine surgeons independently evaluated the same test set, and the accuracies were compared. The mean age of the patients was 59.5 years, with 1,887 pedicle screws included. The algorithm achieved 100% accuracy for the AP radiographs, 98.9% for the lateral radiographs, and 100% when both views were considered. By comparison, the spine surgeons achieved 97.1% accuracy. Statistical analysis revealed near-perfect agreement between the algorithm and the surgeons. We have successfully developed an algorithm for pedicle screw manufacturer identification that demonstrated excellent accuracy, comparable to experienced spine surgeons.
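The final-classification step, a vote over the per-screw, per-view CNN predictions, can be sketched directly (the manufacturer names are hypothetical):

```python
from collections import Counter

def vote(predictions):
    """Majority vote over per-screw manufacturer predictions gathered
    from the AP and lateral views."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-view CNN outputs for one patient's three screws:
ap_preds = ["MakerA", "MakerA", "MakerB"]
lat_preds = ["MakerA", "MakerA", "MakerB"]
final = vote(ap_preds + lat_preds)
```

Pooling both views before voting is what lets the combined result (100%) exceed the lateral-only accuracy (98.9%): a misread in one view is outvoted by the other.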

Advances in IPMN imaging: deep learning-enhanced HASTE improves lesion assessment.

Kolck J, Pivetta F, Hosse C, Cao H, Fehrenbach U, Malinka T, Wagner M, Walter-Rittel T, Geisel D

PubMed · Jul 21, 2025
The prevalence of asymptomatic pancreatic cysts is increasing due to advances in imaging techniques. Among these, intraductal papillary mucinous neoplasms (IPMNs) are the most common, with potential for malignant transformation, often necessitating close follow-up. This study evaluates novel MRI techniques for the assessment of IPMN. From May to December 2023, 59 patients undergoing abdominal MRI were retrospectively enrolled. Examinations were conducted on 3-Tesla scanners using a Deep-Learning Accelerated Half-Fourier Single-Shot Turbo Spin-Echo (HASTE<sub>DL</sub>) and a standard HASTE (HASTE<sub>S</sub>) sequence. Two readers assessed minimum detectable lesion size and lesion-to-parenchyma contrast quantitatively; qualitative assessments focused on image quality. Statistical analyses included the Wilcoxon signed-rank and chi-squared tests. HASTE<sub>DL</sub> demonstrated superior overall image quality (p < 0.001), with higher sharpness (p < 0.001) and contrast (p = 0.112) ratings. HASTE<sub>DL</sub> showed enhanced conspicuity of IPMN (p < 0.001) and lymph nodes (p < 0.001), with more frequent visualization of IPMN communication with the pancreatic duct (p < 0.001). Visualization of complex features (dilated pancreatic duct, septa, and mural nodules) was superior with HASTE<sub>DL</sub> (p < 0.001). The minimum detectable cyst size was significantly smaller for HASTE<sub>DL</sub> (4.17 ± 3.00 mm vs. 5.51 ± 4.75 mm; p < 0.001). Inter-reader agreement was high for HASTE<sub>DL</sub> (κ = 0.936) and slightly lower for HASTE<sub>S</sub> (κ = 0.885). HASTE<sub>DL</sub> in IPMN imaging provides superior image quality with significantly reduced scan times. Given the increasing prevalence of IPMN and the ensuing clinical need for fast and precise imaging, HASTE<sub>DL</sub> improves the availability and quality of patient care. Question Are there advantages of deep-learning-accelerated MRI in imaging and assessing intraductal papillary mucinous neoplasms (IPMN)?
Findings Deep-Learning Accelerated Half-Fourier Single-Shot Turbo Spin-Echo (HASTE<sub>DL</sub>) demonstrated superior image quality, improved conspicuity of "worrisome features" and detection of smaller cysts, with significantly reduced scan times. Clinical relevance HASTE<sub>DL</sub> provides faster, high-quality MR imaging, enabling improved diagnostic accuracy and timely risk stratification for IPMN, potentially enhancing patient care and addressing the growing clinical demand for efficient imaging of IPMN.
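The inter-reader agreement values above are κ statistics; Cohen's kappa for two readers can be computed as follows (the example ratings are hypothetical):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical ratings: observed
    agreement corrected for the agreement expected by chance."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = (r1 == r2).mean()                          # observed agreement
    pe = sum((r1 == c).mean() * (r2 == c).mean()    # chance agreement
             for c in np.union1d(r1, r2))
    return float((po - pe) / (1 - pe))

# Hypothetical image-quality scores (1-5 Likert) from two readers:
reader1 = [3, 4, 4, 5, 3, 4]
reader2 = [3, 4, 4, 4, 3, 4]
k = cohens_kappa(reader1, reader2)
```

Values near 1 (such as the reported κ = 0.936) indicate that the two readers' ratings almost always coincide beyond what chance would produce.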

PXseg: automatic tooth segmentation, numbering and abnormal morphology detection based on CBCT and panoramic radiographs.

Wang R, Cheng F, Dai G, Zhang J, Fan C, Yu J, Li J, Jiang F

PubMed · Jul 21, 2025
PXseg, a novel approach for tooth segmentation, numbering, and abnormal morphology detection in panoramic X-ray (PX) images, was designed and improved through optimized annotation and pre-training. CT-generated panoramic images (ctPXs), produced from multicenter cone beam computed tomography (CBCT) scans and carrying accurate 3D labels, were utilized for pre-training, while conventional PXs (cPXs) with 2D labels were used for training. Visual and statistical analyses were conducted on the internal dataset to assess the segmentation and numbering performance of PXseg against the model without ctPX pre-training, while the accuracy of PXseg in detecting abnormal teeth was evaluated on the external dataset, which consisted of cPXs with complex dental diseases. In addition, a diagnostic test was performed to compare diagnostic efficiency with and without PXseg's assistance. The DSC and F1-score of PXseg in tooth segmentation reached 0.882 and 0.902, increases of 4.6% and 4.0% over the model without pre-training. For tooth numbering, the F1-score of PXseg reached 0.943, an increase of 2.2%. Building on the improved segmentation, the accuracy of abnormal tooth morphology detection exceeded 0.957, 4.3% higher than without pre-training. A website was constructed to assist PX interpretation, and diagnostic efficiency was greatly enhanced with the assistance of PXseg. The accurate labels of ctPXs strengthened PXseg's pre-training and improved the training effect, yielding gains in tooth segmentation, numbering, and abnormal morphology detection. The rapid and accurate results provided by PXseg streamline the workflow of PX diagnosis, giving the method significant potential for clinical application.
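The F1-score reported for tooth numbering balances precision and recall; a minimal set-based sketch over FDI tooth numbers (a simplification of the matched-instance evaluation a real pipeline would use):

```python
def f1_score(pred, truth):
    """Set-based F1 for tooth numbering: compare the FDI numbers predicted
    on a radiograph against the ground-truth numbers present."""
    pred, truth = set(pred), set(truth)
    tp = len(pred & truth)
    if not pred or not truth or not tp:
        return 0.0
    precision, recall = tp / len(pred), tp / len(truth)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: model predicts tooth 13 instead of the true tooth 22.
pred_numbers = [11, 12, 13, 21]
true_numbers = [11, 12, 21, 22]
score = f1_score(pred_numbers, true_numbers)
```

With three of four numbers correct on both sides, precision and recall are both 0.75, so F1 is 0.75.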

Trueness of artificial intelligence-based, manual, and global thresholding segmentation protocols for human mandibles.

Hernandez AKT, Dutra V, Chu TG, Yang CC, Lin WS

PubMed · Jul 21, 2025
To compare the trueness of artificial intelligence (AI)-based, manual, and global segmentation protocols by superimposing the resulting segmented 3D models onto reference gold standard surface scan models. Twelve dry human mandibles were used. A cone beam computed tomography (CBCT) scanner was used to scan the mandibles, and the acquired digital imaging and communications in medicine (DICOM) files were segmented using three protocols: global thresholding, manual, and AI-based segmentation (Diagnocat; Diagnocat, San Francisco, CA). The segmented files were exported as study 3D models. A structured light surface scanner (GoSCAN Spark; Creaform 3D, Levis, Canada) was used to scan all mandibles, and the resulting reference 3D models were exported. The study 3D models were compared with the respective reference 3D models using mesh comparison software (Geomagic Design X; 3D Systems Inc, Rock Hill, SC). Root mean square (RMS) error values were recorded to measure the magnitude of deviation (trueness), and color maps were obtained to visualize the differences. Differences in RMS among the three segmentation methods were compared using repeated-measures analysis of variance (ANOVA). A two-sided 5% significance level was used for all tests. AI-based segmentations had significantly higher RMS values than manual segmentations for the entire mandible (p < 0.001), alveolar process (p < 0.001), and body of the mandible (p < 0.001). AI-based segmentations had significantly lower RMS values than manual segmentations for the condyles (p = 0.018) and ramus (p = 0.013). No significant differences were found between the AI-based and manual segmentations for the coronoid process (p = 0.275), symphysis (p = 0.346), and angle of the mandible (p = 0.344).
Global thresholding had significantly higher RMS values than manual segmentations for the alveolar process (p < 0.001), angle of the mandible (p < 0.001), body of the mandible (p < 0.001), condyles (p < 0.001), coronoid process (p = 0.002), entire mandible (p < 0.001), ramus (p < 0.001), and symphysis (p < 0.001). Global thresholding had significantly higher RMS values than AI-based segmentation for the alveolar process (p = 0.002), angle of the mandible (p < 0.001), body of the mandible (p < 0.001), condyles (p < 0.001), coronoid process (p = 0.017), entire mandible (p < 0.001), ramus (p < 0.001), and symphysis (p < 0.001). AI-based segmentations produced lower RMS values, indicating truer 3D models, compared to global thresholding, and showed no significant differences in some areas compared to manual segmentation. Thus, AI-based segmentation offers a level of segmentation trueness acceptable for use as an alternative to manual or global thresholding segmentation protocols.
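The RMS trueness metric is the root-mean-square of distances from each point of a study model to the reference surface. A brute-force sketch on toy point clouds (dedicated mesh software uses spatial indexing and true point-to-surface distance rather than point-to-point):

```python
import numpy as np

def rms_deviation(study_pts, reference_pts):
    """RMS of nearest-neighbour distances from a segmented model's surface
    points to the reference scan's points: the trueness metric. Brute force
    for clarity; O(N*M) pairwise distances."""
    d2 = ((study_pts[:, None, :] - reference_pts[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(axis=1).mean()))

# Toy check: a flat 3x3 grid offset 0.5 mm along z from its reference copy
# should yield an RMS deviation of exactly 0.5 mm.
xs, ys = np.meshgrid(np.arange(3.0), np.arange(3.0))
reference = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(9)])
study = reference + np.array([0.0, 0.0, 0.5])
rms = rms_deviation(study, reference)
```

Lower RMS means the segmented model sits closer to the reference surface scan, which is how the study ranks the three protocols.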