
Internal Target Volume Estimation for Liver Cancer Radiation Therapy Using an Ultra Quality 4-Dimensional Magnetic Resonance Imaging.

Liao YP, Xiao H, Wang P, Li T, Aguilera TA, Visak JD, Godley AR, Zhang Y, Cai J, Deng J

PubMed · Jun 1, 2025
Accurate internal target volume (ITV) estimation is essential for effective and safe radiation therapy in liver cancer. This study evaluates the clinical value of an ultraquality 4-dimensional magnetic resonance imaging (UQ 4D-MRI) technique for ITV estimation. The UQ 4D-MRI technique maps motion information from a low-spatial-resolution dynamic volumetric MRI onto the high-resolution 3-dimensional MRI used for radiation treatment planning. It was validated using a motion phantom and data from 13 patients with liver cancer. The ITV generated from UQ 4D-MRI (ITV4D) was compared with those obtained through isotropic expansions (ITV2mm and ITV5mm) and those measured using conventional 4-dimensional computed tomography (computed tomography-based ITV, ITVCT) for each patient. Phantom studies showed a displacement measurement difference of <5% between UQ 4D-MRI and single-slice 2-dimensional cine MRI. In patient studies, the maximum superior-inferior displacements of the tumor on UQ 4D-MRI showed no significant difference compared with single-slice 2-dimensional cine imaging (P = .985). ITVCT showed no significant difference from ITV4D (P = .72), whereas ITV2mm and ITV5mm significantly overestimated the volume by 29.0% (P = .002) and 120.7% (P < .001) compared with ITV4D, respectively. UQ 4D-MRI enables accurate motion assessment for liver tumors, facilitating precise ITV delineation for radiation treatment planning. Despite uncertainties from artificial intelligence-based delineation and variations in patients' respiratory patterns, UQ 4D-MRI excels at capturing tumor motion trajectories, potentially improving treatment planning accuracy and reducing margins in liver cancer radiation therapy.
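
To make the margin comparison concrete, here is a minimal sketch of the two ITV constructions discussed above (union of per-phase tumor masks vs. an isotropic expansion), using a synthetic drifting-sphere phantom rather than the study's data:

```python
# Minimal sketch: motion-encompassing ITV (union of per-phase GTV masks)
# versus an isotropic margin expansion. Hypothetical geometry; not the study's code.
import numpy as np
from scipy.ndimage import binary_dilation

voxel_mm = (1.0, 1.0, 1.0)         # assumed isotropic 1 mm voxels
phases = 10                        # respiratory phases of the 4D acquisition

x, y, z = np.indices((64, 64, 64))
gtv_phases = []
for p in range(phases):
    # Fake GTV: an 8 mm-radius sphere drifting along the superior-inferior axis.
    cx = 32 + 6 * np.sin(2 * np.pi * p / phases)
    dist = np.sqrt((x - cx) ** 2 + (y - 32) ** 2 + (z - 32) ** 2)
    gtv_phases.append(dist <= 8)

# ITV_4D: union of the tumor position over all respiratory phases.
itv_4d = np.any(np.stack(gtv_phases), axis=0)

# ITV_5mm: isotropic 5 mm expansion of the reference-phase GTV.
itv_5mm = binary_dilation(gtv_phases[0], iterations=5)

vol = lambda m: m.sum() * np.prod(voxel_mm) / 1000.0    # cm^3
print(f"ITV_4D  = {vol(itv_4d):.1f} cm^3")
print(f"ITV_5mm = {vol(itv_5mm):.1f} cm^3 "
      f"(+{100 * (vol(itv_5mm) / vol(itv_4d) - 1):.0f}% vs ITV_4D)")
```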

Accuracy of an Automated Bone Scan Index Measurement System Enhanced by Deep Learning of the Female Skeletal Structure in Patients with Breast Cancer.

Fukai S, Daisaki H, Yamashita K, Kuromori I, Motegi K, Umeda T, Shimada N, Takatsu K, Terauchi T, Koizumi M

PubMed · Jun 1, 2025
VSBONE® BSI (VSBONE), an automated bone scan index (BSI) measurement system, was updated from version 2.1 (ver.2) to 3.0 (ver.3). VSBONE ver.3 incorporates deep learning of the skeletal structures of 957 additional women and can be applied to patients with breast cancer. However, the performance of the updated VSBONE remains unclear. This study aimed to validate the diagnostic accuracy of the VSBONE system in patients with breast cancer. In total, 220 Japanese patients with breast cancer who underwent bone scintigraphy with single-photon emission computed tomography/computed tomography (SPECT/CT) were retrospectively analyzed. The patients were classified as having active bone metastases (n = 20) or no bone metastases (n = 200) according to the physicians' radiographic image interpretation. The patients were assessed using VSBONE ver.2 and VSBONE ver.3, and the BSI findings were compared with the physicians' interpretation results. The occurrence of segmentation errors, the association of BSI between VSBONE ver.2 and VSBONE ver.3, and the diagnostic accuracy of the systems were evaluated. VSBONE ver.2 and VSBONE ver.3 had segmentation errors in four and two patients, respectively. A significant positive linear correlation between the two versions of BSI was confirmed (r = 0.92). The diagnostic accuracy was 54.1% with VSBONE ver.2 and 80.5% with VSBONE ver.3 (P < 0.001). The diagnostic accuracy of VSBONE was improved through deep learning of female skeletal structures. The updated VSBONE ver.3 can be a reliable automated system for measuring BSI in patients with breast cancer.
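
A hedged sketch of the kind of agreement and accuracy analysis reported here, using synthetic BSI values and an assumed positivity threshold in place of the study data:

```python
# Correlate BSI values from two software versions and compute diagnostic accuracy
# against the physicians' interpretation. Synthetic arrays; not the study data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(42)
n = 220
truth = rng.integers(0, 2, n)                    # 1 = active bone metastasis per physician read
bsi_v2 = np.clip(truth * rng.normal(1.5, 1.0, n) + rng.normal(0.3, 0.3, n), 0, None)
bsi_v3 = np.clip(0.9 * bsi_v2 + rng.normal(0.0, 0.1, n), 0, None)

r, _ = pearsonr(bsi_v2, bsi_v3)                  # agreement between software versions
print(f"Pearson r between versions: {r:.2f}")

threshold = 0.5                                  # assumed BSI cutoff for a "positive" scan
for name, bsi in [("ver.2", bsi_v2), ("ver.3", bsi_v3)]:
    pred = (bsi > threshold).astype(int)
    print(name, f"accuracy = {accuracy_score(truth, pred):.1%}",
          "confusion (tn, fp, fn, tp):", confusion_matrix(truth, pred).ravel())
```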

Advanced Three-Dimensional Assessment and Planning for Hallux Valgus.

Forin Valvecchi T, Marcolli D, De Cesar Netto C

PubMed · Jun 1, 2025
The article discusses advanced three-dimensional evaluation of hallux valgus deformity using weightbearing computed tomography. Conventional two-dimensional radiographs fall short in assessing the complexity of hallux valgus deformities, whereas weightbearing computed tomography provides detailed insights into bone alignment and joint stability in a weightbearing state. Recent studies have highlighted the significance of first ray hypermobility and intrinsic metatarsal rotation in hallux valgus, influencing surgical planning and outcomes. The integration of semiautomatic and artificial intelligence-assisted tools with weightbearing computed tomography is enhancing the precision of deformity assessment, leading to more personalized and effective hallux valgus management.

Large Language Models for Diagnosing Focal Liver Lesions From CT/MRI Reports: A Comparative Study With Radiologists.

Sheng L, Chen Y, Wei H, Che F, Wu Y, Qin Q, Yang C, Wang Y, Peng J, Bashir MR, Ronot M, Song B, Jiang H

PubMed · Jun 1, 2025
Whether large language models (LLMs) can be integrated into the diagnostic workflow for focal liver lesions (FLLs) remains unclear. We aimed to investigate the diagnostic accuracy of two general-purpose LLMs (ChatGPT-4o and Gemini) based on CT/MRI reports, compared with and combined with radiologists of different experience levels. From April 2022 to April 2024, this single-center retrospective study included consecutive adult patients who underwent contrast-enhanced CT/MRI for a single FLL and subsequent histopathologic examination. The LLMs were prompted three times with the clinical information and the "findings" section of the radiology report to provide differential diagnoses in descending order of likelihood, with the first considered the final diagnosis. In the research setting, six radiologists (three junior and three middle-level) independently reviewed the CT/MRI images and clinical information in two rounds (first alone, then with LLM assistance). In the clinical setting, diagnoses were retrieved from the "impressions" section of radiology reports. Diagnostic accuracy was assessed against histopathology. A total of 228 patients (median age, 59 years; 155 males) with 228 FLLs (median size, 3.6 cm) were included. Regarding the final diagnosis, the accuracy of two-step ChatGPT-4o (78.9%) was higher than that of single-step ChatGPT-4o (68.0%, p < 0.001) and single-step Gemini (73.2%, p = 0.004), similar to real-world radiology reports (80.0%, p = 0.34) and junior radiologists (78.9%-82.0%; p-values, 0.21 to > 0.99), but lower than that of middle-level radiologists (84.6%-85.5%; p-values, 0.001 to 0.02). No incremental diagnostic value of ChatGPT-4o was observed for any radiologist (p-values, 0.63 to > 0.99). Two-step ChatGPT-4o matched the accuracy of real-world radiology reports and junior radiologists for diagnosing FLLs but was less accurate than middle-level radiologists and provided little incremental diagnostic value.
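
The prompting pattern described above (clinical information plus the report's "findings" section, answered with a ranked differential) could look roughly like the following sketch; it assumes the openai Python client, and the prompt wording, model name, and example inputs are placeholders rather than the study's exact protocol:

```python
# Illustrative sketch of report-based LLM prompting for a ranked differential
# diagnosis of a focal liver lesion. Not the study's prompts or settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rank_differentials(clinical_info: str, findings: str, n_runs: int = 3) -> list[str]:
    prompt = (
        "You are assisting with the diagnosis of a single focal liver lesion.\n"
        f"Clinical information: {clinical_info}\n"
        f"CT/MRI findings: {findings}\n"
        "List the three most likely diagnoses in descending order of likelihood, "
        "one per line, most likely first."
    )
    answers = []
    for _ in range(n_runs):  # the study prompted each case three times
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers  # the first-listed diagnosis of each run is taken as the final answer

print(rank_differentials("59-year-old man, chronic HBV, elevated AFP",
                         "3.6 cm arterially hyperenhancing lesion with washout"))
```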

Predicting lung cancer bone metastasis using CT and pathological imaging with a Swin Transformer model.

Li W, Zou X, Zhang J, Hu M, Chen G, Su S

PubMed · Jun 1, 2025
Bone metastasis is a common and serious complication in lung cancer patients, leading to severe pain, pathological fractures, and reduced quality of life. Early prediction of bone metastasis can enable timely interventions and improve patient outcomes. In this study, we developed a multimodal Swin Transformer-based deep learning model for predicting bone metastasis risk in lung cancer patients by integrating CT imaging and pathological data. A total of 215 patients with confirmed lung cancer diagnoses, including those with and without bone metastasis, were included. The model was designed to process high-resolution CT images and digitized histopathological images, with the features extracted independently by two Swin Transformer networks. These features were then fused using decision-level fusion techniques to improve classification accuracy. The Swin-Dual Fusion Model achieved superior performance compared to single-modality models and conventional architectures such as ResNet50, with an AUC of 0.966 on the test data and 0.967 on the training data. This integrated model demonstrated high accuracy, sensitivity, and specificity, making it a promising tool for clinical application in predicting bone metastasis risk. The study emphasizes the potential of transformer-based models to revolutionize bone oncology through advanced multimodal analysis and early prediction of metastasis, ultimately improving patient care and treatment outcomes.
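
A minimal sketch of decision-level fusion of two Swin Transformer branches, assuming torchvision backbones; layer sizes, preprocessing, and the averaging rule are illustrative choices, not the paper's exact design:

```python
# Two independent Swin Transformer feature extractors (CT and pathology), fused
# at the decision level by averaging per-modality class probabilities.
import torch
import torch.nn as nn
from torchvision.models import swin_t

class DualSwinFusion(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.ct_branch = swin_t(weights=None)      # pretrained weights could be loaded here
        self.path_branch = swin_t(weights=None)
        # Replace each default head with a task-specific classifier.
        for branch in (self.ct_branch, self.path_branch):
            branch.head = nn.Linear(branch.head.in_features, num_classes)

    def forward(self, ct_img: torch.Tensor, path_img: torch.Tensor) -> torch.Tensor:
        ct_logits = self.ct_branch(ct_img)
        path_logits = self.path_branch(path_img)
        # Decision-level fusion: average the per-modality predictions.
        return (ct_logits.softmax(dim=1) + path_logits.softmax(dim=1)) / 2

model = DualSwinFusion()
probs = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(probs)   # fused probability of bone metastasis vs. no metastasis
```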

An Optimized Framework of QSM Mask Generation Using Deep Learning: QSMmask-Net.

Lee G, Jung W, Sakaie KE, Oh SH

PubMed · Jun 1, 2025
Quantitative susceptibility mapping (QSM) provides the spatial distribution of magnetic susceptibility within tissues through sequential steps: phase unwrapping and echo combination, mask generation, background field removal, and dipole inversion. Accurate mask generation is crucial, as masks that exclude regions outside the brain and contain no holes are necessary to minimize errors and streaking artifacts during QSM reconstruction. Variations in susceptibility values can arise from different mask generation methods, highlighting the importance of optimizing mask creation. In this study, we propose QSMmask-net, a deep neural network-based method for generating precise QSM masks. QSMmask-net achieved the highest Dice score compared with other mask generation methods. Mean susceptibility values using QSMmask-net masks showed the smallest differences from manual masks (ground truth) in simulations and healthy controls (no significant difference, p > 0.05). Linear regression analysis confirmed a strong correlation with manual masks for hemorrhagic lesions (slope = 0.9814 ± 0.007, intercept = 0.0031 ± 0.001, R² = 0.9992, p < 0.05). We demonstrate that the mask generation method can affect susceptibility value estimation. QSMmask-net reduces the labor required for mask generation while providing mask quality comparable to manual methods. The proposed method enables users without specialized expertise to create optimized masks, potentially broadening the applicability of QSM.
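
The Dice score used to compare predicted and manual masks is a simple overlap measure; a minimal NumPy version with toy masks:

```python
# Dice similarity coefficient between two binary masks, plus a toy example.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A and B."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Toy masks; real use would compare a QSMmask-net output against a manual brain mask.
rng = np.random.default_rng(0)
manual = rng.random((64, 64, 64)) > 0.5
predicted = manual.copy()
predicted[:2] = ~predicted[:2]          # perturb a few slices to simulate errors
print(f"Dice = {dice_score(predicted, manual):.3f}")
```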

Toward Noninvasive High-Resolution In Vivo pH Mapping in Brain Tumors by ³¹P-Informed deepCEST MRI.

Schüre JR, Rajput J, Shrestha M, Deichmann R, Hattingen E, Maier A, Nagel AM, Dörfler A, Steidl E, Zaiss M

PubMed · Jun 1, 2025
The intracellular pH (pHi) is critical for understanding various pathologies, including brain tumors. While conventional pHi measurement through ³¹P-MRS suffers from low spatial resolution and long scan times, ¹H-based APT-CEST imaging offers higher resolution with shorter scan times. This study aims to directly predict ³¹P-pHi maps from CEST data using a fully connected neural network. Fifteen tumor patients were scanned on a 3-T Siemens PRISMA scanner and underwent ¹H-based CEST and T1 measurements, as well as ³¹P-MRS. A neural network was trained voxel-wise on CEST and T1 data to predict ³¹P-pHi values, using data from 11 patients for training and 4 for testing. The predicted pHi maps were additionally down-sampled to the original ³¹P-pHi resolution to allow calculation of the RMSE and analysis of the correlation, while the higher-resolution predictions were compared with conventional CEST metrics. The results demonstrated a general correspondence between the predicted deepCEST pHi maps and the measured ³¹P-pHi in test patients. However, slight discrepancies were also observed, with an RMSE of 0.04 pH units in tumor regions. High-resolution predictions revealed tumor heterogeneity and features not visible in conventional CEST data, suggesting that the model captures unique pH information and is not simply a T1 segmentation. The deepCEST pHi network exposes the pH sensitivity hidden in APT-CEST data and offers pHi maps with higher spatial resolution and shorter scan time than ³¹P-MRS. Although this approach is constrained by the limitations of the acquired data, it can be extended with additional CEST features in future studies, offering a promising approach for 3D pH imaging in a clinical environment.
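
A sketch of a voxel-wise fully connected regressor mapping CEST features plus T1 to a pH value, in the spirit of the deepCEST approach; the input dimensionality and layer widths are assumptions, not the authors' architecture:

```python
# Voxel-wise MLP: CEST features + T1 in, predicted intracellular pH out.
import torch
import torch.nn as nn

n_cest_offsets = 56                  # assumed number of Z-spectrum offsets per voxel
in_features = n_cest_offsets + 1     # CEST features plus the T1 value

model = nn.Sequential(
    nn.Linear(in_features, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),                # predicted pHi for one voxel
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a synthetic batch of voxels (features x, 31P-pH targets y).
x = torch.randn(1024, in_features)
y = 7.0 + 0.1 * torch.randn(1024, 1)     # plausible pHi range around 7.0
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```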

Fully automated image quality assessment based on deep learning for carotid computed tomography angiography: A multicenter study.

Fu W, Ma Z, Yang Z, Yu S, Zhang Y, Zhang X, Mei B, Meng Y, Ma C, Gong X

PubMed · Jun 1, 2025
To develop and evaluate the performance of a fully automated model, based on deep learning and multiple logistic regression, for image quality assessment (IQA) of carotid computed tomography angiography (CTA) images. This study retrospectively collected 840 carotid CTA images from four tertiary hospitals. Three radiologists independently assessed image quality using a 3-point Likert scale, based on the degree of noise, vessel enhancement, arterial vessel contrast, vessel edge sharpness, and overall diagnostic acceptability. An automated assessment model was developed using a training dataset of 600 carotid CTA images. The assessment steps were: (i) selection of objective representative slices; (ii) use of a 3D Res U-Net approach to extract objective indices from the representative slices; and (iii) use of single objective indices and combined multiple indices to develop logistic regression models for IQA. In the internal and external test datasets (n = 240), model performance was evaluated using sensitivity, specificity, precision, F-score, accuracy, and the area under the receiver operating characteristic curve (AUC), and the IQA results of the models were compared with the radiologists' consensus. The representative slices were determined based on the same-length model. The multi-index model performed excellently in the internal and external test datasets, with AUCs of 0.98 and 0.97. The agreement between the model and the radiologists reached 91.8% (95% CI: 87.0-96.5) and 92.6% (95% CI: 86.9-98.4) in the internal and external test datasets, respectively. The fully automated multi-index model showed performance equivalent to the subjective perceptions of radiologists, with greater efficiency for IQA.
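
Step (iii) amounts to fitting a logistic regression on a handful of objective indices; a hedged sketch with placeholder features standing in for the indices extracted by the 3D Res U-Net:

```python
# Combine several objective image-quality indices in one logistic regression and
# score it with ROC AUC. Feature names and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 600
# Hypothetical per-image indices: noise, vessel enhancement, arterial contrast, edge sharpness.
X = rng.normal(size=(n, 4))
quality_ok = (0.8 * X[:, 1] + 0.6 * X[:, 3] - 0.7 * X[:, 0]
              + rng.normal(scale=0.5, size=n)) > 0    # 1 = diagnostically acceptable

X_train, X_test, y_train, y_test = train_test_split(
    X, quality_ok.astype(int), test_size=0.4, random_state=0)

multi_index_model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, multi_index_model.predict_proba(X_test)[:, 1])
print(f"multi-index model AUC = {auc:.2f}")
```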

Artificial intelligence in pediatric osteopenia diagnosis: evaluating deep network classification and model interpretability using wrist X-rays.

Harris CE, Liu L, Almeida L, Kassick C, Makrogiannis S

PubMed · Jun 1, 2025
Osteopenia is a bone disorder that causes low bone density and affects millions of people worldwide. Diagnosis of this condition is commonly achieved through clinical assessment of bone mineral density (BMD). State-of-the-art machine learning (ML) techniques, such as convolutional neural networks (CNNs) and transformer models, have gained increasing popularity in medicine. In this work, we employ six deep networks for osteopenia vs. healthy bone classification using X-ray imaging from the pediatric wrist dataset GRAZPEDWRI-DX. We apply two explainable AI techniques to analyze and interpret visual explanations for network decisions. Experimental results show that deep networks are able to effectively learn osteopenic and healthy bone features, achieving high classification accuracy. Among the six evaluated networks, DenseNet201 with transfer learning yielded the top classification accuracy of 95.2%. Furthermore, visual explanations of CNN decisions provide valuable insight into the black-box inner workings and present interpretable results. Our evaluation of deep network classification results highlights their capability to accurately differentiate between osteopenic and healthy bones in pediatric wrist X-rays. The combination of high classification accuracy and interpretable visual explanations underscores the promise of incorporating machine learning techniques into clinical workflows for the early and accurate diagnosis of osteopenia.
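
A sketch of DenseNet201 transfer learning for binary osteopenia vs. healthy classification, assuming torchvision; the preprocessing and training schedule of the paper are not reproduced:

```python
# DenseNet201 with a frozen ImageNet feature extractor and a new two-class head.
import torch
import torch.nn as nn
from torchvision.models import densenet201, DenseNet201_Weights

model = densenet201(weights=DenseNet201_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # freeze the pretrained feature extractor
model.classifier = nn.Linear(model.classifier.in_features, 2)  # osteopenic vs. healthy

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of wrist radiographs.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```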

Broadening the Net: Overcoming Challenges and Embracing Novel Technologies in Lung Cancer Screening.

Czerlanis CM, Singh N, Fintelmann FJ, Damaraju V, Chang AEB, White M, Hanna N

PubMed · Jun 1, 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide, with most cases diagnosed at advanced stages where curative treatment options are limited. Low-dose computed tomography (LDCT) for lung cancer screening (LCS) of individuals selected based on age and smoking history has shown a significant reduction in lung cancer-specific mortality. The number needed to screen to prevent one death from lung cancer is lower than that for breast cancer, cervical cancer, and colorectal cancer. Despite the substantial impact on reducing lung cancer-related mortality and proof that LCS with LDCT is effective, uptake of LCS has been low and LCS eligibility criteria remain imperfect. While LCS programs have historically faced patient recruitment challenges, research suggests that there are novel opportunities to both identify and improve screening for at-risk populations. In this review, we discuss the global obstacles to implementing LCS programs and strategies to overcome barriers in resource-limited settings. We explore successful approaches to promote LCS through robust engagement with community partners. Finally, we examine opportunities to enhance LCS in at-risk populations not captured by current eligibility criteria, including never smokers and individuals with a family history of lung cancer, with a focus on early detection through novel artificial intelligence technologies.