Page 179 of 1961951 results

A comparison of performance of DeepSeek-R1 model-generated responses to musculoskeletal radiology queries against ChatGPT-4 and ChatGPT-4o - A feasibility study.

Uldin H, Saran S, Gandikota G, Iyengar KP, Vaishya R, Parmar Y, Rasul F, Botchu R

pubmed logopapers · May 12, 2025
Artificial intelligence (AI) has transformed society, and chatbots built on large language models (LLMs) are playing an increasing role in scientific research. This study aims to assess and compare the efficacy of the newer DeepSeek-R1 model against ChatGPT-4 and ChatGPT-4o in answering scientific questions about recent research. We compared output generated by ChatGPT-4, ChatGPT-4o, and DeepSeek-R1 in response to ten standardized questions in the setting of musculoskeletal (MSK) radiology. Responses were independently analyzed by one MSK radiologist and one final-year MSK radiology trainee and graded on a Likert scale from 1 (inaccurate) to 5 (accurate). Five DeepSeek-R1 answers were significantly inaccurate and, when prompted for sources, provided only fictitious references. All ChatGPT-4 and ChatGPT-4o answers were well written with good content, the latter including useful and comprehensive references. ChatGPT-4o generated structured research answers with useful references in all our cases, enabling reliable usage. DeepSeek-R1, on the other hand, generates answers that may appear authentic to the unsuspecting eye but, in its current version, contain a higher amount of falsified and inaccurate information. Further iterations may improve this accuracy.

LiteMIL: A Computationally Efficient Transformer-Based MIL for Cancer Subtyping on Whole Slide Images.

Kussaibi, H.

medrxiv logopreprint · May 12, 2025
Purpose: Accurate cancer subtyping is crucial for effective treatment; however, it presents challenges due to overlapping morphology and variability among pathologists. Although deep learning (DL) methods have shown potential, their application to gigapixel whole slide images (WSIs) is often hindered by high computational demands and the need for efficient, context-aware feature aggregation. This study introduces LiteMIL, a computationally efficient transformer-based multiple instance learning (MIL) network combined with Phikon, a pathology-tuned self-supervised feature extractor, for robust and scalable cancer subtyping on WSIs. Methods: Initially, patches were extracted from the TCGA-THYM dataset (242 WSIs, six subtypes) and fed in real time to Phikon for feature extraction. To train the MIL models, features were arranged into uniform bags using a chunking strategy that maintains tissue context while increasing training data. LiteMIL uses a learnable query vector within an optimized multi-head attention module for effective feature aggregation. The model's performance was evaluated against established MIL methods on the thymic dataset and three additional TCGA datasets (breast, lung, and kidney cancer). Results: LiteMIL achieved an F1 score of 0.89 ± 0.01 and an AUC of 0.99 on the thymic dataset, outperforming the other MIL methods. LiteMIL demonstrated strong generalizability across the external datasets, scoring best on the breast and kidney cancer datasets. Compared to TransMIL, LiteMIL significantly reduces training time and GPU memory usage. Ablation studies confirmed the critical role of the learnable query and layer normalization in enhancing performance and stability. Conclusion: LiteMIL offers a resource-efficient, robust solution. Its streamlined architecture, combined with the compact Phikon features, makes it suitable for integration into routine histopathological workflows, particularly in resource-limited settings.
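The core aggregation step described above, a learnable query attending over a bag of patch features, can be sketched in a few lines. This is a single-head NumPy illustration of query-based attention pooling, not LiteMIL's actual multi-head module; all names and dimensions are hypothetical.

```python
import numpy as np

def attention_mil_pool(instance_feats: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Aggregate a bag of instance features into one bag embedding.

    instance_feats: (n_instances, dim) patch features (e.g., from a frozen
    extractor such as Phikon); query: (dim,) learnable query vector.
    """
    dim = instance_feats.shape[1]
    # Scaled dot-product scores of the query against each instance.
    scores = instance_feats @ query / np.sqrt(dim)   # (n_instances,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax: weights sum to 1
    return weights @ instance_feats                  # (dim,) bag embedding

rng = np.random.default_rng(0)
bag = rng.normal(size=(32, 768))   # 32 patches, 768-dim features (illustrative)
q = rng.normal(size=768)
embedding = attention_mil_pool(bag, q)
```

In training, the query (and any projection layers around it) would be learned end-to-end with the classifier head.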

Automated scout-image-based estimation of contrast agent dosing: a deep learning approach

Schirrmeister, R., Taleb, L., Friemel, P., Reisert, M., Bamberg, F., Weiss, J., Rau, A.

medrxiv logopreprint · May 12, 2025
We developed and tested a deep-learning-based algorithm for the approximation of contrast agent dosage based on computed tomography (CT) scout images. We prospectively enrolled 817 patients undergoing clinically indicated CT imaging, predominantly of the thorax and/or abdomen. Patient weight was collected by study staff prior to the examination 1) with a weight scale and 2) as self-reported. Based on the scout images, we developed an EfficientNet convolutional neural network pipeline to estimate the optimal contrast agent dose from patient weight, and we provide a browser-based user interface as a versatile open-source tool to account for different contrast agent compounds. We additionally analyzed the body-weight-informative CT features by synthesizing representative examples for different weights using in-context learning and dataset distillation. The cohort consisted of 533 thoracic, 70 abdominal, and 229 thoracic-abdominal CT scout scans. Self-reported patient weight was statistically significantly lower than manual measurements (75.13 kg vs. 77.06 kg; p < 10^-5, Wilcoxon signed-rank test). Our pipeline predicted patient weight with a mean absolute error of 3.90 ± 0.20 kg (corresponding to a roughly 4.48–11.70 ml difference in contrast agent, depending on the agent) in 5-fold cross-validation and is publicly available at https://tinyurl.com/ct-scout-weight. Interpretability analysis revealed that both larger anatomical shape and higher overall attenuation were predictive of body weight. Our open-source deep learning pipeline allows for the automatic estimation of accurate contrast agent dosing based on scout images in routine CT imaging studies. This approach has the potential to streamline contrast agent dosing workflows, improve efficiency, and enhance patient safety by providing quick and accurate weight estimates without additional measurements or reliance on potentially outdated records.
The model's performance may vary depending on patient positioning and scout image quality, and the approach requires validation on larger patient cohorts and at other clinical centers. Author Summary: Automation of medical workflows using AI has the potential to increase reproducibility while saving costs and time. Here, we investigated automating the estimation of the required contrast agent dosage for CT examinations. We trained a deep neural network to predict body weight from the initial 2D CT scout images that are acquired prior to the actual CT examination. The predicted weight is then converted to a contrast agent dosage based on contrast-agent-specific conversion factors. To facilitate application in clinical routine, we developed a user-friendly browser-based interface that allows clinicians to select a contrast agent or input a custom conversion factor to receive dosage suggestions, with local data processing in the browser. We also investigated which image characteristics predict body weight and found plausible relationships, such as higher attenuation and larger anatomical shapes correlating with higher body weights. Our work goes beyond prior work by implementing a single model for a variety of anatomical regions, providing an accessible user interface, and investigating the predictive characteristics of the images.
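The final step the pipeline automates is a simple weight-to-volume conversion using agent-specific factors. A minimal sketch, with made-up agent names and factors (real values depend on the agent's iodine concentration and the local protocol):

```python
# Hypothetical ml-per-kg conversion factors for illustration only; real
# factors depend on the contrast agent and institutional protocol.
CONVERSION_ML_PER_KG = {"agent_a": 1.15, "agent_b": 3.0}

def contrast_dose_ml(predicted_weight_kg: float, agent: str) -> float:
    """Convert a model-predicted body weight into a contrast agent dose (ml)."""
    if predicted_weight_kg <= 0:
        raise ValueError("weight must be positive")
    return round(predicted_weight_kg * CONVERSION_ML_PER_KG[agent], 1)
```

The browser tool described above additionally lets clinicians supply a custom conversion factor, which would replace the dictionary lookup here.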

Automatic Quantification of Ki-67 Labeling Index in Pediatric Brain Tumors Using QuPath

Spyretos, C., Pardo Ladino, J. M., Blomstrand, H., Nyman, P., Snodahl, O., Shamikh, A., Elander, N. O., Haj-Hosseini, N.

medrxiv logopreprint · May 12, 2025
The quantification of the Ki-67 labeling index (LI) is critical for assessing tumor proliferation and prognosis, yet manual scoring remains common practice. This study presents an automated workflow for Ki-67 scoring in whole slide images (WSIs) using an Apache Groovy script for QuPath, complemented by a Python-based post-processing script that provides cell density maps and summary tables. Tissue and cell segmentation are performed using StarDist, a deep learning model, with adaptive thresholding to classify Ki-67-positive and -negative nuclei. The pipeline was applied to a cohort of 632 pediatric brain tumor cases with 734 Ki-67-stained WSIs from the Children's Brain Tumor Network. Medulloblastoma showed the highest Ki-67 LI (median: 19.84), followed by atypical teratoid rhabdoid tumor (median: 19.36). Moderate values were observed in brainstem glioma-diffuse intrinsic pontine glioma (median: 11.50), high-grade glioma (grades 3 & 4) (median: 9.50), and ependymoma (median: 5.88). Lower indices were found in meningioma (median: 1.84), while the lowest were seen in low-grade glioma (grades 1 & 2) (median: 0.85), dysembryoplastic neuroepithelial tumor (median: 0.63), and ganglioglioma (median: 0.50). The results aligned with established oncological consensus, demonstrating a significant correlation in Ki-67 LI across most tumor families/types, with high-malignancy tumors showing the highest proliferation indices and lower-malignancy tumors exhibiting lower Ki-67 LI. The automated approach facilitates the assessment of large numbers of Ki-67 WSIs in research settings.
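Once nuclei are classified as positive or negative, the per-slide score reduces to the fraction of positive nuclei. A minimal sketch of the labeling-index computation, with illustrative counts:

```python
def ki67_labeling_index(positive_nuclei: int, negative_nuclei: int) -> float:
    """Ki-67 LI = percentage of Ki-67-positive nuclei among all tumor nuclei."""
    total = positive_nuclei + negative_nuclei
    if total == 0:
        raise ValueError("no nuclei detected")
    return 100.0 * positive_nuclei / total

# Illustrative counts, not taken from the study's data.
li = ki67_labeling_index(positive_nuclei=200, negative_nuclei=800)  # 20.0
```

In the described workflow, the counts would come from StarDist segmentation plus adaptive-threshold classification, aggregated per WSI.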

Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).

Yan W, Xu Y, Yan S

pubmed logopapers · May 11, 2025
Background: Computed tomography (CT) is widely used in the clinical diagnosis of lung diseases. Automatic segmentation of lesions in CT images aids the development of intelligent lung disease diagnosis. Objective: This study aims to address imprecise segmentation in CT images caused by the blurred detail features of lesions, which are easily confused with surrounding tissues. Methods: We propose a promptable segmentation method based on an improved U-Net and the Segment Anything Model (SAM) to improve segmentation accuracy of lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module based on the ECA (Efficient Channel Attention) channel attention mechanism to improve recognition of detail features at lesion edges, and a promptable clipping module to incorporate physicians' prior knowledge into the model and reduce background interference. SAM has a strong ability to recognize lesions, pulmonary atelectasis, and organs; we combine the two to improve overall segmentation performance. Results: On the LUNA16 dataset and a lung CT dataset provided by Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06%, and positive predictive values of 81.25% and 91.91%, superior to most existing mainstream segmentation methods. Conclusion: The proposed method can improve segmentation accuracy of lung lesions in CT images, raise the automation level of existing computer-aided diagnostic systems, and provide more effective assistance to radiologists in clinical practice.
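The ECA mechanism referenced above gates feature channels using a lightweight 1D convolution over globally pooled channel statistics. A NumPy sketch under simplifying assumptions (a fixed averaging kernel stands in for the learned 1D filter, and the kernel size would normally be chosen adaptively from the channel count):

```python
import numpy as np

def eca_attention(feat: np.ndarray, kernel_size: int = 3, conv_weights=None) -> np.ndarray:
    """Efficient Channel Attention sketch for a (C, H, W) feature map."""
    c = feat.shape[0]
    if conv_weights is None:
        # Stand-in for the learned 1D kernel.
        conv_weights = np.full(kernel_size, 1.0 / kernel_size)
    gap = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    pad = kernel_size // 2
    padded = np.pad(gap, pad)
    # 1D convolution across channels (captures local cross-channel interaction).
    conv = np.array([padded[i:i + kernel_size] @ conv_weights for i in range(c)])
    gates = 1.0 / (1.0 + np.exp(-conv))          # sigmoid channel gates in (0, 1)
    return feat * gates[:, None, None]           # reweight each channel

out = eca_attention(np.random.default_rng(0).normal(size=(8, 5, 5)))
```

Unlike squeeze-and-excitation blocks, ECA avoids channel dimensionality reduction, which keeps the module cheap enough to insert at multiple scales of a U-Net.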

Learning-based multi-material CBCT image reconstruction with ultra-slow kV switching.

Ma C, Zhu J, Zhang X, Cui H, Tan Y, Guo J, Zheng H, Liang D, Su T, Sun Y, Ge Y

pubmed logopapers · May 11, 2025
Objective: The purpose of this study is to perform multiple (≥3) material decomposition with a deep learning method for spectral cone-beam CT (CBCT) imaging based on ultra-slow kV switching. Approach: In this work, a novel deep neural network called SkV-Net is developed to reconstruct multiple material density images from the ultra-sparse spectral CBCT projections acquired with the ultra-slow kV switching technique. In particular, SkV-Net has a U-Net backbone, and a multi-head axial attention module is adopted to enlarge the perceptual field. It takes the CT images reconstructed from each kV as input and outputs the basis material images automatically based on their energy-dependent attenuation characteristics. Numerical simulations and experimental studies are carried out to evaluate the performance of this new approach. Main Results: SkV-Net is able to generate four different material density images, i.e., fat, muscle, bone, and iodine, from five spans of kV-switched spectral projections. Physical experiments show that the decomposition errors of iodine and CaCl2 are less than 6%, indicating the high precision of this novel approach in distinguishing materials. Significance: SkV-Net provides a promising multi-material decomposition approach for spectral CBCT imaging systems implemented with the ultra-slow kV switching scheme.
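The physics underlying the network's task, recovering basis-material densities from attenuation measured at several kV settings, can be illustrated with a classical linear least-squares decomposition. The coefficients below are made up for illustration; SkV-Net learns this mapping from data rather than solving it explicitly:

```python
import numpy as np

# Hypothetical effective mass-attenuation coefficients (cm^2/g) of two basis
# materials at three kV settings; real values come from spectrum calibration.
A = np.array([[0.20, 0.35],
              [0.18, 0.28],
              [0.17, 0.22]])
true_density = np.array([1.0, 0.5])   # g/cm^3 for basis materials 1 and 2

# Forward model: measured attenuation at each kV is a linear mix of the
# basis-material densities weighted by their attenuation coefficients.
mu = A @ true_density

# Inverse problem: recover densities from the multi-kV measurements.
est, *_ = np.linalg.lstsq(A, mu, rcond=None)
```

With noisy, ultra-sparse projections and more than two materials, this linear inverse becomes ill-conditioned, which is the motivation for a learned decomposition such as SkV-Net.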

Altered intrinsic ignition dynamics linked to Amyloid-β and tau pathology in Alzheimer's disease

Patow, G. A., Escrichs, A., Martinez-Molina, N., Ritter, P., Deco, G.

biorxiv logopreprint · May 11, 2025
Alzheimer's disease (AD) progressively alters brain structure and function, yet the associated changes in large-scale brain network dynamics remain poorly understood. We applied the intrinsic ignition framework to resting-state functional MRI (rs-fMRI) data from AD patients, individuals with mild cognitive impairment (MCI), and cognitively healthy controls (HC) to elucidate how AD shapes intrinsic brain activity. We assessed node-metastability at the whole-brain level and in 7 canonical resting-state networks (RSNs). Our results revealed a progressive decline in dynamical complexity across the disease continuum. HC exhibited the highest node-metastability, whereas it was substantially reduced in MCI and AD patients. The cortical hierarchy of information processing was also disrupted, indicating that rich-club hubs may be selectively affected in AD progression. Furthermore, we used linear mixed-effects models to evaluate the influence of Amyloid-β (Aβ) and tau pathology on brain dynamics at both regional and whole-brain levels. We found significant associations between both protein burdens and alterations in node-metastability. Lastly, a machine learning classifier trained on brain dynamics, Aβ, and tau burden features achieved high accuracy in discriminating between disease stages. Together, our findings highlight the progressive disruption of intrinsic ignition across whole-brain and RSNs in AD and support the use of node-metastability in conjunction with proteinopathy as a novel framework for tracking disease progression.
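Metastability measures of this kind are typically computed as the temporal variability of a synchrony index. A simplified global sketch (not the paper's per-node formulation), assuming instantaneous phases have already been extracted from the rs-fMRI signals, e.g. via a Hilbert transform:

```python
import numpy as np

def metastability(phases: np.ndarray) -> float:
    """Std over time of the Kuramoto order parameter R(t).

    phases: (n_timepoints, n_regions) instantaneous phases in radians.
    R(t) near 1 means regions are synchronized at time t; the variability
    of R(t) over time is a common metastability index.
    """
    r_t = np.abs(np.exp(1j * phases).mean(axis=1))   # synchrony per timepoint
    return float(r_t.std())

rng = np.random.default_rng(1)
toy_phases = rng.uniform(0, 2 * np.pi, size=(200, 90))   # toy data, 90 regions
m = metastability(toy_phases)
```

A per-node variant would compare each region's phase against the rest of the network before taking the temporal standard deviation.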

Study on predicting breast cancer Ki-67 expression using a combination of radiomics and deep learning based on multiparametric MRI.

Wang W, Wang Z, Wang L, Li J, Pang Z, Qu Y, Cui S

pubmed logopapers · May 11, 2025
To develop a multiparametric breast MRI radiomics and deep learning-based multimodal model for predicting preoperative Ki-67 expression status in breast cancer, with the potential to advance individualized treatment and precision medicine for breast cancer patients. We included 176 invasive breast cancer patients who underwent breast MRI and had Ki-67 results. The dataset was randomly split into training (70%) and test (30%) sets. Features were extracted from T1-weighted imaging (T1WI), diffusion-weighted imaging (DWI), T2-weighted imaging (T2WI), and dynamic contrast-enhanced MRI (DCE-MRI). Separate models were created for each sequence (T1, DWI, T2, and DCE), and a multiparametric MRI (mp-MRI) model was then developed by fusing features from all sequences. Models were trained using five-fold cross-validation and evaluated on the test set with receiver operating characteristic (ROC) area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. DeLong's test compared the mp-MRI model with the other models, with P < 0.05 indicating statistical significance. All five models demonstrated good performance, with AUCs of 0.83 for the T1 model, 0.85 for the DWI model, 0.90 for the T2 model, 0.92 for the DCE model, and 0.96 for the mp-MRI model. DeLong's test indicated statistically significant differences between the mp-MRI model and the other four models (P < 0.05). The multiparametric breast MRI radiomics and deep learning-based multimodal model performs well in predicting preoperative Ki-67 expression status in breast cancer.
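The AUC values reported here are equivalent to the Mann-Whitney probability that a random positive case outscores a random negative one, which is also the quantity DeLong's test compares between models. A minimal sketch with illustrative scores:

```python
import numpy as np

def auc_score(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC as P(positive score > negative score), counting ties as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 1])            # illustrative Ki-67 status labels
scores = np.array([0.1, 0.4, 0.35, 0.8])   # illustrative model outputs
auc = auc_score(labels, scores)
```

DeLong's test builds on this same rank formulation to estimate the covariance between two correlated AUCs computed on the same test set.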

A systematic review and meta-analysis of the utility of quantitative, imaging-based approaches to predict radiation-induced toxicity in lung cancer patients.

Tong D, Midroni J, Avison K, Alnassar S, Chen D, Parsa R, Yariv O, Liu Z, Ye XY, Hope A, Wong P, Raman S

pubmed logopapers · May 11, 2025
To conduct a systematic review and meta-analysis of the performance of radiomics, dosiomics, and machine learning in predicting toxicity from thoracic radiotherapy. An electronic database search was conducted and dual-screened by independent authors to identify eligible studies for systematic review and meta-analysis. Data were extracted, and study quality was assessed using TRIPOD for machine learning studies, RQS for radiomics, and RoB for dosiomics. 10,703 studies were identified, and 5,252 entered screening. 106 studies including 23,373 patients were eligible for systematic review. The primary toxicity predicted was radiation pneumonitis (81 studies), followed by esophagitis (12) and lymphopenia (4). Forty-two studies of radiation pneumonitis were eligible for meta-analysis, with a pooled area under the curve (AUC) of 0.82 (95% CI 0.79-0.85). Studies using machine learning had the best performance, with classical and deep learning models performing similarly. There is a trend toward improved model performance with year of publication. Study quality varied among the three study categories, with dosiomic studies scoring highest. Publication bias was not observed. The majority of existing literature using radiomics, dosiomics, and machine learning has focused on radiation pneumonitis prediction. Future research should focus on toxicity prediction for other organs at risk and the adoption of these models into clinical practice.
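Pooling per-study AUCs as in this meta-analysis is commonly done by inverse-variance weighting. A fixed-effect sketch with made-up study values (the review itself may have used a random-effects model, which additionally estimates between-study heterogeneity):

```python
import numpy as np

def pooled_estimate(estimates: np.ndarray, std_errors: np.ndarray):
    """Fixed-effect inverse-variance pooling of per-study estimates."""
    w = 1.0 / std_errors ** 2                 # precision weights
    pooled = (w * estimates).sum() / w.sum()
    pooled_se = np.sqrt(1.0 / w.sum())
    return pooled, pooled_se

# Illustrative per-study AUCs and standard errors, not the review's data.
aucs = np.array([0.80, 0.85, 0.78])
ses = np.array([0.03, 0.02, 0.04])
pooled, se = pooled_estimate(aucs, ses)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # 95% confidence interval
```

Precise studies (small standard errors) dominate the pooled value, which is why a handful of large cohorts can drive a meta-analytic AUC.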

Creation of an Open-Access Lung Ultrasound Image Database For Deep Learning and Neural Network Applications

Kumar, A., Nandakishore, P., Gordon, A. J., Baum, E., Madhok, J., Duanmu, Y., Kugler, J.

medrxiv logopreprint · May 11, 2025
Background: Lung ultrasound (LUS) offers advantages over traditional imaging for diagnosing pulmonary conditions, with superior accuracy compared to chest X-ray and similar performance to CT at lower cost. Despite these benefits, widespread adoption is limited by operator dependency, moderate interrater reliability, and training requirements. Deep learning (DL) could potentially address these challenges, but the development of effective algorithms is hindered by the scarcity of comprehensive image repositories with proper metadata. Methods: We created an open-source dataset of LUS images derived from a multi-center study involving N=226 adult patients presenting with respiratory symptoms to emergency departments between March 2020 and April 2022. Images were acquired using a standardized scanning protocol (12-zone or modified 8-zone) with various point-of-care ultrasound devices. Three blinded researchers independently analyzed each image following consensus guidelines, with disagreements adjudicated to provide definitive interpretations. Videos were pre-processed to remove identifiers, and frames were extracted and resized to 128×128 pixels. Results: The dataset contains 1,874 video clips comprising 303,977 frames. Half of the participants (50%) had COVID-19 pneumonia. Among all clips, 66% contained no abnormalities, 18% contained B-lines, 4.5% contained consolidations, 6.4% contained both B-lines and consolidations, and 5.2% had indeterminate findings. Pathological findings varied significantly by lung zone, with anterior zones more frequently normal and less likely to show consolidations compared to lateral and posterior zones. Discussion: This dataset represents one of the largest annotated LUS repositories to date, including both COVID-19 and non-COVID-19 patients. The comprehensive metadata and expert interpretations enhance its utility for DL applications. Despite limitations, including potential device-specific characteristics and COVID-19 predominance, this repository provides a valuable resource for developing AI tools to improve LUS acquisition and interpretation.
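The frame pre-processing described (extracting frames and resizing to 128×128) can be sketched with a dependency-free nearest-neighbor resize; a production pipeline would typically use OpenCV or PIL interpolation instead, and the frame dimensions below are illustrative:

```python
import numpy as np

def resize_nearest(frame: np.ndarray, out_h: int = 128, out_w: int = 128) -> np.ndarray:
    """Nearest-neighbor resize of a single-channel ultrasound frame."""
    in_h, in_w = frame.shape
    # Map each output pixel back to its nearest source pixel.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return frame[rows[:, None], cols[None, :]]

# Toy 480x640 grayscale frame standing in for an extracted video frame.
frame = (np.arange(480 * 640) % 256).astype(np.uint8).reshape(480, 640)
small = resize_nearest(frame)
```

Nearest-neighbor keeps the original intensity values exactly, which can matter when downstream models are sensitive to interpolation artifacts such as smoothed B-lines.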