Tripathi, A. G., Waqas, A., Schabath, M. B., Yilmaz, Y., Rasool, G.

medRxiv preprint · Aug 27, 2025
HONeYBEE (Harmonized ONcologY Biomedical Embedding Encoder) is an open-source framework that integrates multimodal biomedical data for oncology applications. It processes clinical data (structured and unstructured), whole-slide images, radiology scans, and molecular profiles to generate unified patient-level embeddings using domain-specific foundation models and fusion strategies. These embeddings enable survival prediction, cancer-type classification, patient similarity retrieval, and cohort clustering. In an evaluation of more than 11,400 patients across 33 cancer types from The Cancer Genome Atlas (TCGA), clinical embeddings showed the strongest single-modality performance, with 98.5% classification accuracy and 96.4% precision@10 in patient retrieval. They also achieved the highest survival prediction concordance indices across most cancer types. Multimodal fusion provided complementary benefits for specific cancers, improving overall survival prediction beyond clinical features alone. Comparative evaluation of four large language models revealed that general-purpose models such as Qwen3 outperformed specialized medical models for clinical text representation, though task-specific fine-tuning improved performance on heterogeneous data such as pathology reports.
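
The fusion step is straightforward to illustrate. Below is a minimal sketch, not HONeYBEE's actual API, of concatenation-based late fusion of per-modality embeddings into a single patient-level vector; all names and dimensions are illustrative assumptions.

```python
# Hypothetical late-fusion sketch: per-modality embeddings are L2-normalized
# and concatenated into one patient-level vector. Names/dimensions are assumed.
import numpy as np

def fuse_patient_embeddings(modality_embeddings: dict) -> np.ndarray:
    """Concatenate L2-normalized per-modality embeddings into one vector."""
    parts = []
    for name in sorted(modality_embeddings):  # fixed order keeps dims stable
        v = modality_embeddings[name].astype(np.float64)
        parts.append(v / (np.linalg.norm(v) + 1e-12))
    return np.concatenate(parts)

patient = fuse_patient_embeddings({
    "clinical": np.random.randn(768),   # e.g., LLM text embedding
    "pathology": np.random.randn(512),  # e.g., WSI foundation-model embedding
    "radiology": np.random.randn(512),
    "molecular": np.random.randn(256),
})
print(patient.shape)  # (2048,)
```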

Rasheed M, Jaffar MA, Akram A, Rashid J, Alshalali TAN, Irshad A, Sarwar N

PubMed · Aug 27, 2025
Brain tumors profoundly affect a person's health by allowing abnormal cells to grow unchecked in the brain, so early and accurate diagnosis is essential for effective treatment. Many current diagnostic methods are time-consuming, rely heavily on manual interpretation, and frequently yield unsatisfactory results. This work detects brain tumors in MRI data using a DenseNet121 architecture with transfer learning. Model training used a Kaggle dataset. In the preprocessing stage, the MRI images were resized and denoised to help the model perform better. From a single MRI scan, the proposed approach classifies brain tissue into four groups: benign tumors, gliomas, meningiomas, and pituitary gland malignancies. The designed DenseNet121 architecture classifies brain cancers precisely. We assessed the model's performance in terms of accuracy, precision, recall, and F1-score. The proposed approach proved successful in the multi-class categorization of brain tumors, attaining an average accuracy of 96.90%. Compared with previous diagnostic techniques, such as visual inspection and other machine learning models, the proposed DenseNet121-based approach is more accurate, takes less time to analyze, and requires less human input. Whereas human error introduces variability into conventional methods, the automated method delivers consistent and reproducible results. Building on MRI-based detection and transfer learning, this paper proposes an automated method for the classification of brain cancers. The method improves the precision and speed of brain tumor diagnosis, benefiting both MRI-based classification research and clinical use. Further development of deep-learning models may improve tumor identification and prognosis prediction even more.
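
For readers who want to reproduce the general setup, a minimal PyTorch transfer-learning sketch follows: a DenseNet121 backbone pretrained on ImageNet with its classifier swapped for a four-class head. The hyperparameters and freezing strategy are assumptions, not the paper's exact configuration.

```python
# Transfer-learning sketch: pretrained DenseNet121, new 4-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 4)  # 4 tumor classes

# Optionally freeze the backbone and train only the new head at first.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)   # batch of resized MRI slices
logits = model(x)                 # shape: (8, 4)
loss = criterion(logits, torch.randint(0, 4, (8,)))
loss.backward()
optimizer.step()
```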

Kim GY, Yang HS, Hwang J, Lee K, Choi JW, Jung WS, Kim REY, Kim D, Lee M

PubMed · Aug 27, 2025
Volumetric estimation of affected brain volumes using computed tomography perfusion (CTP) is crucial in the management of acute ischemic stroke (AIS) and relies on commercial software, which has limitations such as variations in results due to image quality. To predict affected brain volume accurately and robustly, we propose a hybrid approach that integrates singular value decomposition (SVD), deep learning (DL), and machine learning (ML) techniques. We included 449 CTP images of patients with AIS with manually annotated vessel landmarks provided by expert radiologists, collected between 2021 and 2023. We developed a CNN-based approach for predicting eight vascular landmarks from CTP images, integrating ML components. We then used SVD-related methods to generate perfusion maps and compared the results with those of the RapidAI software (RapidAI, Menlo Park, California). The proposed CNN model achieved an average Euclidean distance error of 4.63 ± 2.00 mm on vessel localization. Without the ML components, compared to RapidAI, our method yielded concordance correlation coefficient (CCC) scores of 0.898 for estimating volumes with cerebral blood flow (CBF) < 30% and 0.715 for Tmax > 6 s. With the ML components, it achieved CCC scores of 0.905 for CBF < 30% and 0.879 for Tmax > 6 s. For data quality assessment, it achieved an accuracy of 0.8. We developed a robust hybrid model combining DL and ML techniques for volumetric estimation of affected brain volumes using CTP in patients with AIS, demonstrating improved accuracy and robustness compared to existing commercial solutions.
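
The SVD step referenced here is, in standard CTP processing, a truncated-SVD deconvolution of the tissue curve by the arterial input function (AIF); CBF is proportional to the peak of the recovered residue function. A toy sketch under that standard formulation (not the paper's exact method):

```python
# Truncated-SVD deconvolution of a tissue curve by the AIF (toy data).
import numpy as np

def tsvd_residue(aif: np.ndarray, tissue: np.ndarray, dt: float,
                 lam: float = 0.2) -> np.ndarray:
    """Deconvolve the tissue curve by the AIF via truncated SVD."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s[0], 1.0 / s, 0.0)  # drop small singular values
    return Vt.T @ (s_inv * (U.T @ tissue))

t = np.arange(0, 60, 1.0)                        # seconds
aif = np.exp(-((t - 15) / 5) ** 2)               # toy arterial input function
tissue = 0.6 * np.convolve(aif, np.exp(-t / 8))[:len(t)]
r = tsvd_residue(aif, tissue, dt=1.0)
cbf_proportional = r.max()   # CBF is proportional to the residue-function peak
```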

Timpano G, Veltri P, Vizza P, Cascini GL, Manti F

PubMed · Aug 27, 2025
Automated segmentation of skeletal muscle from computed tomography (CT) images is essential for large-scale quantitative body composition analysis. However, manual segmentation is time-consuming and impractical for routine or high-throughput use. This study presents a systematic comparison of two-dimensional (2D) and three-dimensional (3D) deep learning architectures for segmenting skeletal muscle at the anatomically standardized level of the third lumbar vertebra (L3) in low-dose computed tomography (LDCT) scans. We implemented and evaluated the DeepLabv3+ (2D) and UNet3+ (3D) architectures on a curated dataset of 537 LDCT scans, applying preprocessing protocols, L3 slice selection, and region of interest extraction. The model performance was evaluated using a comprehensive set of evaluation metrics, including Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95). DeepLabv3+ achieved the highest segmentation accuracy (DSC = 0.982 ± 0.010, HD95 = 1.04 ± 0.46 mm), while UNet3+ showed competitive performance (DSC = 0.967 ± 0.013, HD95 = 1.27 ± 0.58 mm) with 26 times fewer parameters (1.27 million vs. 33.6 million) and lower inference time. Both models exceeded or matched results reported in the recent CT-based muscle segmentation literature. This work offers practical insights into architecture selection for automated LDCT-based muscle segmentation workflows, with a focus on the L3 vertebral level, which remains the gold standard in muscle quantification protocols.
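
DSC and HD95 are the two headline metrics in this comparison. A minimal sketch of both on binary masks, assuming isotropic spacing; this is not the authors' evaluation code:

```python
# Dice similarity coefficient and 95th-percentile Hausdorff distance.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)

def _surface(m: np.ndarray) -> np.ndarray:
    return m & ~binary_erosion(m)   # boundary voxels of the mask

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """95th-percentile symmetric surface distance (isotropic spacing assumed)."""
    sa, sb = _surface(a), _surface(b)
    d_a_to_b = distance_transform_edt(~sb)[sa]  # a-surface to nearest b-surface
    d_b_to_a = distance_transform_edt(~sa)[sb]
    return spacing * float(np.percentile(np.concatenate([d_a_to_b, d_b_to_a]), 95))

a = np.zeros((64, 64), bool); a[16:40, 16:40] = True
b = np.zeros((64, 64), bool); b[18:42, 18:42] = True
print(dice(a, b), hd95(a, b))
```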

Tian L, Lu Y, Fei X, Lu J

PubMed · Aug 27, 2025
This study aims to identify common errors in head and neck CTA reports using GPT-4, ERNIE Bot, and SparkDesk, evaluating their potential for supporting quality control of Chinese radiological reports. We collected 10,000 head and neck CTA imaging reports from Xuanwu Hospital (Dataset 1) and 5,000 multi-center reports (Dataset 2). We identified six common types of errors and detected them using three large language models: GPT-4, ERNIE Bot, and SparkDesk. The overall quality of the reports was assessed using a 5-point Likert scale. We conducted Wilcoxon rank-sum and Friedman tests to compare error detection rates and evaluate the models' performance on different error types and overall scores. For Dataset 2, after manual review, we annotated the six error types and provided overall scoring, while also recording the time taken for manual scoring and model detection. Model performance was evaluated using accuracy, precision, recall, and F1 score. The intraclass correlation coefficient (ICC) measured consistency between manual and model scores, and ANOVA compared evaluation times. In Dataset 1, the error detection rates for final reports were significantly lower than those for preliminary reports across all three models. The Friedman test indicated significant differences in error rates among the three models. In Dataset 2, the detection accuracy of the three LLMs for the six error types was above 95%. GPT-4 had moderate consistency with manual scores (ICC = 0.517), while ERNIE Bot and SparkDesk showed slightly lower consistency (ICC = 0.431 and 0.456, respectively; P < 0.001). The models evaluated one hundred radiology reports significantly faster than human reviewers. LLMs can differentiate the quality of radiology reports and identify error types, significantly enhancing the efficiency of quality-control reviews and providing substantial research and practical value in this field.
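
The screening setup can be sketched as a structured prompt per report. The six error-type labels below are placeholders (the abstract does not enumerate them), and call_llm stands in for whichever model API (GPT-4, ERNIE Bot, or SparkDesk) is under evaluation:

```python
# Prompt-based error screening sketch; labels and call_llm are hypothetical.
import json

ERROR_TYPES = ["laterality", "omission", "contradiction",
               "terminology", "measurement", "template"]  # placeholder labels

PROMPT = """You are a radiology QC assistant. For the head and neck CTA report
below, answer with a JSON object mapping each error type in {types} to true
(error present) or false.

Report:
{report}
"""

def screen_report(report: str, call_llm) -> dict:
    """Ask the LLM for per-error-type flags and parse its JSON answer."""
    raw = call_llm(PROMPT.format(types=ERROR_TYPES, report=report))
    flags = json.loads(raw)
    return {t: bool(flags.get(t, False)) for t in ERROR_TYPES}
```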

Xie Z, Yang X, Zhang S, Yang J, Zhu Y, Zhang A, Sun H, Dai Q, Li L, Liu H, Ming W, Dou M

PubMed · Aug 27, 2025
To explore the potential of quantum computing for advancing transformer-based deep learning models in breast cancer screening, this study introduces the Quantum-Enhanced Swin Transformer (QEST). The model integrates a Variational Quantum Circuit (VQC) in place of the fully connected classification layer of the Swin Transformer architecture. In simulations, QEST exhibited competitive accuracy and generalization performance compared to the original Swin Transformer, while also helping to mitigate overfitting. Specifically, in 16-qubit simulations, the VQC reduced the parameter count by 62.5% compared with the replaced fully connected layer and improved the Balanced Accuracy (BACC) by 3.62% in external validation. Furthermore, validation experiments conducted on an actual quantum computer corroborated the effectiveness of QEST.
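
A hedged sketch of the core idea, replacing a dense classification head with a variational quantum circuit, is shown below using PennyLane's TorchLayer. The embedding, ansatz, qubit count, depth, and surrounding linear projections are assumptions rather than the paper's exact circuit:

```python
# VQC classification head sketch with PennyLane + PyTorch (illustrative only).
import pennylane as qml
import torch

n_qubits, n_layers = 16, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))            # encode features
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits)) # trainable ansatz
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

vqc_head = qml.qnn.TorchLayer(circuit, {"weights": (n_layers, n_qubits, 3)})
classifier = torch.nn.Sequential(
    torch.nn.Linear(768, n_qubits),  # project backbone features to 16 angles
    vqc_head,                        # quantum layer in place of a dense head
    torch.nn.Linear(n_qubits, 2),    # assumed two-class screening output
)
logits = classifier(torch.randn(4, 768))
```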

Sauter AP, Thalhammer J, Meurer F, Dorosti T, Sasse D, Ritter J, Leonhardt Y, Pfeiffer F, Schaff F, Pfeiffer D

PubMed · Aug 27, 2025
This retrospective study evaluates U-Net-based artifact reduction for dose-reduced sparse-sampling CT (SpSCT) in terms of image quality and diagnostic performance using a reader study and automated detection. CT pulmonary angiograms from 89 patients were used to generate SpSCT data with 16 to 512 views. Twenty patients were reserved for a reader study and test set; the remaining 69 were used to train (53) and validate (16) a dual-frame U-Net for artifact reduction. U-Net post-processed images were assessed for image quality, diagnostic performance, and automated pulmonary embolism (PE) detection using the top-performing network from the 2020 RSNA PE detection challenge. Statistical comparisons were made using two-sided Wilcoxon signed-rank and DeLong tests. Post-processing with the dual-frame U-Net significantly improved image quality in the internal test set, with structural similarity indices of 0.634/0.378/0.234/0.152 for FBP and 0.894/0.892/0.866/0.778 for U-Net at 128/64/32/16 views, respectively. The reader study showed significantly enhanced image quality (3.15 vs. 3.53 for 256 views, 0.00 vs. 2.52 for 32 views), increased diagnostic confidence (0.00 vs. 2.38 for 32 views), and fewer artifacts across all subsets (P < 0.05). Diagnostic performance, measured by the Sørensen-Dice coefficient, was significantly better for 64- and 32-view images (0.23 vs. 0.44 and 0.00 vs. 0.09, P < 0.05). Automated PE detection was better at fewer views (64 views: 0.77 vs. 0.80, 16 views: 0.59 vs. 0.80), although the differences were not statistically significant. U-Net-based post-processing of SpSCT data significantly enhances image quality and diagnostic performance, supporting substantial dose reduction in CT pulmonary angiography.
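
SSIM is the image-quality metric quoted above. A minimal sketch of how sparse-view FBP and U-Net outputs could be scored against a full-view reference (the arrays and data range are assumed):

```python
# SSIM scoring sketch for FBP vs. U-Net outputs against a full-view reference.
import numpy as np
from skimage.metrics import structural_similarity as ssim

reference = np.random.rand(512, 512).astype(np.float32)   # full-view CT slice
fbp_sparse = (reference + 0.3 * np.random.randn(512, 512)).astype(np.float32)
unet_output = (reference + 0.05 * np.random.randn(512, 512)).astype(np.float32)

data_range = 1.0  # assumed dynamic range of the normalized slices
print("FBP   SSIM:", ssim(reference, fbp_sparse, data_range=data_range))
print("U-Net SSIM:", ssim(reference, unet_output, data_range=data_range))
```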

Shin C, Eom D, Lee SM, Park JE, Kim K, Lee KH

PubMed · Aug 27, 2025
Large language models (LLMs) hold transformative potential for medical image labeling in radiology, addressing the challenges posed by linguistic variability in reports. We developed a two-stage natural language processing pipeline that combines Bidirectional Encoder Representations from Transformers (BERT) and an LLM to analyze radiology reports. In the first stage (Entity Key Classification), a BERT model identifies and classifies clinically relevant entities mentioned in the text. In the second stage (Relationship Mapping), the extracted entities are passed to the LLM to infer relationships between entity pairs, taking into account whether each entity is actually present. The pipeline targets lesion-location mapping in chest CT and diagnosis-episode mapping in brain MRI, both of which are clinically important for structuring radiologic findings and capturing temporal patterns of disease progression. Using over 400,000 reports from Seoul Asan Medical Center, our pipeline achieved a macro F1-score of 77.39 for chest CT and 70.58 for brain MRI. These results highlight the effectiveness of integrating BERT with an LLM to enhance diagnostic accuracy in radiology report analysis.
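
The two-stage design can be sketched as follows: a BERT token-classification model proposes entities, and an LLM is then prompted to map relations between entity pairs. The NER checkpoint and call_llm below are placeholder assumptions, not the authors' models:

```python
# Two-stage sketch: BERT NER (stage 1) feeding an LLM relation prompt (stage 2).
from transformers import pipeline

# Stand-in general-purpose NER checkpoint, not the paper's fine-tuned BERT.
ner = pipeline("token-classification", model="dslim/bert-base-NER",
               aggregation_strategy="simple")

def map_relations(report: str, call_llm) -> str:
    """Extract entities, then ask the LLM to relate them (hypothetical prompt)."""
    entities = [(e["word"], e["entity_group"]) for e in ner(report)]
    prompt = (
        "Given this radiology report and its extracted entities, list each "
        "lesion with its anatomical location, considering whether each entity "
        f"is actually present (not negated).\n\nReport: {report}\n"
        f"Entities: {entities}"
    )
    return call_llm(prompt)
```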

Chen C, Zhang L, Xing Y, Chen Z

PubMed · Aug 27, 2025
While deep learning (DL) methods have exhibited promising results in mitigating streaking artifacts caused by limited-view computed tomography (CT), their generalization to practical applications remains challenging. To address this challenge, we aim to develop a novel approach that integrates DL priors with targeted-case data consistency for improved artifact suppression and robust reconstruction.

Approach: We propose an alternative Penalized Weighted Least Squares reconstruction framework by Strategic Optimization of a DL Model (PWLS-SOM). This framework combines data-driven DL priors with data consistency constraints in a three-stage process: (1) Group-level embedding: DL network parameters are optimized on a large-scale paired dataset to learn general artifact elimination. (2) Significance evaluation: A novel significance score quantifies the contribution of DL model parameters, guiding the subsequent strategic adaptation. (3) Individual-level consistency adaptation: PWLS-driven strategic optimization further adapts DL parameters for target-specific projection data.

Main Results: Experiments were conducted on sparse-view (90 views) circular-trajectory CT data and a multi-segment linear-trajectory CT scan with a mixed data-missing problem. PWLS-SOM reconstruction demonstrated superior generalization across variations in patients, anatomical structures, and data distributions. It outperformed supervised DL methods in recovering contextual structures and adapting to practical CT scenarios. The method was validated with real experiments on a dead rat, showcasing its applicability to real-world CT scans.

Significance: PWLS-SOM reconstruction advances the field of limited-view CT reconstruction by uniting DL priors with PWLS adaptation. This approach facilitates robust and personalized imaging. The introduction of the significance score provides an efficient metric to evaluate generalization and guide the strategic optimization of DL parameters, enhancing adaptability across diverse data and practical imaging conditions.
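
For context, the PWLS data-consistency term that the framework adapts has the standard form below; the notation is assumed rather than taken from the paper, with y the measured projections, A the system matrix, W the statistical weights, R the regularizer (here supplied by the DL prior), and beta the trade-off parameter.

```latex
\hat{x} \,=\, \arg\min_{x}\; (y - A x)^{\top} W \,(y - A x) \;+\; \beta\, R(x)
```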

Liu Y, Wang Y, Huang J, Pei S, Wang Y, Cui Y, Yan L, Yao M, Wang Y, Zhu Z, Huang C, Liu Z, Liang C, Shi J, Li Z, Pei X, Wu L

PubMed · Aug 27, 2025
Noninvasive biomarkers that capture the longitudinal multiregional tumour burden in patients with breast cancer may improve the assessment of residual nodal disease and guide axillary surgery. Additionally, a significant barrier to the clinical translation of current data-driven deep learning models is their lack of interpretability. This study aims to develop and validate an information shared-private (iShape) model to predict axillary pathological complete response in patients with axillary lymph node (ALN)-positive breast cancer receiving neoadjuvant therapy (NAT) by learning common and specific image representations from longitudinal primary tumour and ALN ultrasound images. A total of 1135 patients with biopsy-proven ALN-positive breast cancer who received NAT were included in this multicentre, retrospective study. The iShape model was trained on a dataset of 371 patients and validated on three external validation sets (EVS1-3), with 295, 244, and 225 patients, respectively. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). The false-negative rates (FNRs) of iShape alone and in combination with sentinel lymph node biopsy (SLNB) were also evaluated. Imaging feature visualisation and RNA sequencing analysis were performed to explore the underlying basis of iShape. The iShape model achieved AUCs of 0.950-0.971 for EVS1-3, which were better than those of the clinical model and the image signatures derived from the primary tumour, longitudinal primary tumour, or ALN (P < 0.05, as per the DeLong test). The performance of iShape remained satisfactory in subgroup analyses stratified by age, menstrual status, T stage, molecular subtype, treatment regimens, and machine type (AUCs of 0.812-1.000). More importantly, the FNR of iShape was 7.7%-8.1% in the EVSs, and the FNR of SLNB decreased from 13.4% to 3.6% with the aid of iShape in patients receiving SLNB and ALN dissection. The decision-making process of iShape was explained by feature visualisation. Additionally, RNA sequencing analysis revealed that a lower deep learning score was associated with immune infiltration and tumour proliferation pathways. The iShape model demonstrated good performance for the precise quantification of ALN status in patients with ALN-positive breast cancer receiving NAT, potentially benefiting individualised decision-making and avoiding unnecessary axillary lymph node dissection. This study was supported by (1) Noncommunicable Chronic Diseases-National Science and Technology Major Project (No. 2024ZD0531100); (2) Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006); (3) National Natural Science Foundation of China (No. 82472051, 82471947, 82271941, 82272088); (4) National Science Foundation for Young Scientists of China (No. 82402270, 82202095, 82302190); (5) Guangzhou Municipal Science and Technology Planning Project (No. 2025A04J4773, 2025A04J4774); (6) the Natural Science Foundation of Guangdong Province of China (No. 2025A1515011607); (7) Medical Scientific Research Foundation of Guangdong Province of China (No. A2024403); (8) Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); (9) Outstanding Youth Science Foundation of Yunnan Basic Research Project (No. 202401AY070001-316); (10) Innovative Research Team of Yunnan Province (No. 202505AS350013).
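
A hedged sketch of a shared-private encoder layout of the kind the abstract describes: one shared encoder learns representations common to primary-tumour and lymph-node ultrasound images, while private encoders capture input-specific features. All module sizes and the fusion choice are illustrative assumptions:

```python
# Shared-private encoder sketch (illustrative, not the iShape architecture).
import torch
import torch.nn as nn

def small_cnn(out_dim: int = 128) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, out_dim),
    )

class SharedPrivateNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = small_cnn()          # common representation, both inputs
        self.private_tumour = small_cnn()  # primary-tumour-specific features
        self.private_node = small_cnn()    # lymph-node-specific features
        self.head = nn.Linear(128 * 4, 2)  # axillary pCR: yes / no

    def forward(self, tumour_img, node_img):
        feats = torch.cat([self.shared(tumour_img), self.private_tumour(tumour_img),
                           self.shared(node_img), self.private_node(node_img)], dim=1)
        return self.head(feats)

model = SharedPrivateNet()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
```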