Large Language Models for Diagnosing Focal Liver Lesions From CT/MRI Reports: A Comparative Study With Radiologists.

Sheng L, Chen Y, Wei H, Che F, Wu Y, Qin Q, Yang C, Wang Y, Peng J, Bashir MR, Ronot M, Song B, Jiang H

PubMed | Jun 1, 2025
Whether large language models (LLMs) can be integrated into the diagnostic workflow for focal liver lesions (FLLs) remains unclear. We aimed to investigate the diagnostic accuracy of two generic LLMs (ChatGPT-4o and Gemini) based on CT/MRI reports, compared with and combined with radiologists of different experience levels. From April 2022 to April 2024, this single-center retrospective study included consecutive adult patients who underwent contrast-enhanced CT/MRI for a single FLL and subsequent histopathologic examination. The LLMs were prompted with clinical information and the "findings" section of the radiology reports three times to provide differential diagnoses in descending order of likelihood, with the first considered the final diagnosis. In the research setting, six radiologists (three junior and three middle-level) independently reviewed the CT/MRI images and clinical information in two rounds (first alone, then with LLM assistance). In the clinical setting, diagnoses were retrieved from the "impressions" section of the radiology reports. Diagnostic accuracy was evaluated against histopathology. In total, 228 patients (median age, 59 years; 155 males) with 228 FLLs (median size, 3.6 cm) were included. Regarding the final diagnosis, the accuracy of two-step ChatGPT-4o (78.9%) was higher than that of single-step ChatGPT-4o (68.0%, p < 0.001) and single-step Gemini (73.2%, p = 0.004), similar to that of real-world radiology reports (80.0%, p = 0.34) and junior radiologists (78.9%-82.0%; p-values, 0.21 to > 0.99), but lower than that of middle-level radiologists (84.6%-85.5%; p-values, 0.001 to 0.02). No incremental diagnostic value of ChatGPT-4o was observed for any radiologist (p-values, 0.63 to > 0.99). In conclusion, two-step ChatGPT-4o matched the accuracy of real-world radiology reports and junior radiologists for diagnosing FLLs but was less accurate than middle-level radiologists and demonstrated little incremental diagnostic value.
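
As an illustration of the prompting protocol described in the abstract, here is a minimal sketch using the OpenAI Python client. The exact prompt wording, model settings, and response parsing used in the study are not reported, so everything below beyond the protocol itself (repeated queries, first-listed diagnosis taken as final) is an assumption.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def differential_diagnoses(clinical_info: str, findings: str) -> list[str]:
        """Ask the LLM for differential diagnoses of a focal liver lesion,
        in descending order of likelihood (one per line)."""
        prompt = (
            "Clinical information:\n" + clinical_info + "\n\n"
            "CT/MRI report findings:\n" + findings + "\n\n"
            "List the differential diagnoses for this focal liver lesion "
            "in descending order of likelihood, one per line."
        )
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        lines = resp.choices[0].message.content.splitlines()
        return [ln.strip() for ln in lines if ln.strip()]

    # Per the abstract, each case was queried three times; the first-listed
    # diagnosis of a run was treated as that run's final diagnosis:
    # runs = [differential_diagnoses(info, findings) for _ in range(3)]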

Internal Target Volume Estimation for Liver Cancer Radiation Therapy Using an Ultra Quality 4-Dimensional Magnetic Resonance Imaging.

Liao YP, Xiao H, Wang P, Li T, Aguilera TA, Visak JD, Godley AR, Zhang Y, Cai J, Deng J

PubMed | Jun 1, 2025
Accurate internal target volume (ITV) estimation is essential for effective and safe radiation therapy in liver cancer. This study evaluates the clinical value of an ultra-quality 4-dimensional magnetic resonance imaging (UQ 4D-MRI) technique for ITV estimation. The UQ 4D-MRI technique maps motion information from a low-spatial-resolution dynamic volumetric MRI onto a high-resolution 3-dimensional MRI used for radiation treatment planning. It was validated using a motion phantom and data from 13 patients with liver cancer. The ITV generated from UQ 4D-MRI (ITV4D) was compared with those obtained through isotropic expansions (ITV2mm and ITV5mm) and those measured using conventional 4D computed tomography (ITVCT) for each patient. Phantom studies showed a displacement measurement difference of <5% between UQ 4D-MRI and single-slice 2-dimensional cine MRI. In patient studies, the maximum superior-inferior displacements of the tumor on UQ 4D-MRI showed no significant difference compared with single-slice 2-dimensional cine imaging (P = .985). ITVCT showed no significant difference from ITV4D (P = .72), whereas ITV2mm and ITV5mm significantly overestimated the volume by 29.0% (P = .002) and 120.7% (P < .001) compared with ITV4D, respectively. UQ 4D-MRI enables accurate motion assessment for liver tumors, facilitating precise ITV delineation for radiation treatment planning. Despite uncertainties from artificial intelligence-based delineation and variations in patients' respiratory patterns, UQ 4D-MRI excels at capturing tumor motion trajectories, potentially improving treatment planning accuracy and reducing margins in liver cancer radiation therapy.
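
To make the ITV comparison concrete, the sketch below contrasts the two ways of building an ITV mentioned above: a union of tumor masks over respiratory phases (the ITV4D analog) versus an isotropic expansion of a single-phase GTV (the ITV2mm/ITV5mm analog). This is a simplified voxel-based approximation, not the study's delineation pipeline; the handling of anisotropic spacing here is an assumption.

    import numpy as np
    from scipy.ndimage import binary_dilation

    def itv_from_phases(phase_masks):
        """ITV as the voxelwise union of tumor masks across all phases."""
        itv = np.zeros_like(phase_masks[0], dtype=bool)
        for m in phase_masks:
            itv |= m.astype(bool)
        return itv

    def itv_isotropic(gtv_mask, margin_mm, spacing_mm):
        """Isotropic GTV expansion, approximated by repeated one-voxel dilation
        along the finest axis; real planning systems expand in physical space."""
        iters = max(1, round(margin_mm / min(spacing_mm)))
        return binary_dilation(gtv_mask.astype(bool), iterations=iters)

    def volume_ml(mask, spacing_mm):
        """Mask volume in millilitres (mm^3 / 1000)."""
        return mask.sum() * float(np.prod(spacing_mm)) / 1000.0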

3-D contour-aware U-Net for efficient rectal tumor segmentation in magnetic resonance imaging.

Lu Y, Dang J, Chen J, Wang Y, Zhang T, Bai X

PubMed | Jun 1, 2025
Magnetic resonance imaging (MRI), as a non-invasive detection method, is crucial for the clinical diagnosis and treatment planning of rectal cancer. However, because the rectal tumor signal in MRI has low contrast, segmentation is often inaccurate. In this paper, we propose CAU-Net, a new three-dimensional rectal tumor segmentation method based on T2-weighted MRI images. The method adopts a convolutional neural network to extract multi-scale features from MRI images and uses a contour-aware decoder and attention fusion block (AFB) for contour enhancement. We also introduce an adversarial constraint to improve augmentation performance. Furthermore, we construct a dataset of 108 MRI-T2 volumes for the segmentation of locally advanced rectal cancer. CAU-Net achieved a DSC of 0.7112 and an ASD of 2.4707, outperforming other state-of-the-art methods. Various experiments on this dataset show that CAU-Net has high accuracy and efficiency in rectal tumor segmentation. In summary, the proposed method has important clinical application value and can provide important support for medical image analysis and the clinical treatment of rectal cancer. With further development and application, this method has the potential to improve the accuracy of rectal cancer diagnosis and treatment.
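
The two reported metrics are standard; for reference, here is a small sketch of how a Dice similarity coefficient (DSC) and a symmetric average surface distance (ASD, in the units given by the voxel spacing) can be computed for binary 3-D masks. This reflects the usual definitions, not code from the paper.

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def dice_coefficient(pred, gt):
        """DSC = 2 * |P and G| / (|P| + |G|) for binary masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

    def average_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
        """Symmetric ASD: mean distance from each surface to the other surface."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        pred_surf = pred & ~binary_erosion(pred)   # boundary voxels of prediction
        gt_surf = gt & ~binary_erosion(gt)         # boundary voxels of ground truth
        dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
        dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
        return (dist_to_gt[pred_surf].mean() + dist_to_pred[gt_surf].mean()) / 2.0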

Habitat Radiomics Based on MRI for Predicting Metachronous Liver Metastasis in Locally Advanced Rectal Cancer: a Two‑center Study.

Shi S, Jiang T, Liu H, Wu Y, Singh A, Wang Y, Xie J, Li X

PubMed | Jun 1, 2025
This study aimed to explore the feasibility of using habitat radiomics based on magnetic resonance imaging (MRI) to predict metachronous liver metastasis (MLM) in locally advanced rectal cancer (LARC) patients. A nomogram was developed by integrating multiple factors to enhance predictive accuracy. Retrospective data from 385 LARC patients across two centers were gathered. The data from Center 1 were split into a training set of 203 patients and an internal validation set of 87 patients, while Center 2 provided an external test set of 95 patients. K-means clustering was used on T2-weighted images, and the region of interest was extended at different thicknesses. After feature extraction and selection, four machine learning algorithms were utilized to build radiomics models. A nomogram was created by combining habitat radiomics, conventional radiomics, and clinical independent predictors. Model performance was evaluated by the AUC, and clinical utility was assessed through calibration curves and DCA. Habitat radiomics outperformed other single models in predicting MLM, with AUCs of 0.926, 0.864, and 0.851 in the respective sets. The integrated nomogram achieved even higher AUCs of 0.959, 0.925, and 0.889. DCA and calibration curve analysis showed its high net benefit and good calibration. MRI-based habitat radiomics can effectively predict MLM in LARC patients. The integrated nomogram has optimal predictive performance and significantly improves model accuracy.
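
The habitat step above is voxelwise K-means inside the tumor region of interest. A minimal sketch, assuming clustering on T2 intensities alone and a hypothetical choice of three habitats (the abstract does not state the feature set or cluster count):

    import numpy as np
    from sklearn.cluster import KMeans

    def tumor_habitats(t2_image, roi_mask, n_habitats=3, seed=0):
        """Cluster intra-tumoral voxel intensities into habitats;
        returns a label map with 0 = background, 1..n = habitat index."""
        voxels = t2_image[roi_mask > 0].reshape(-1, 1).astype(float)
        labels = KMeans(n_clusters=n_habitats, n_init=10,
                        random_state=seed).fit_predict(voxels)
        habitat_map = np.zeros(t2_image.shape, dtype=np.int8)
        habitat_map[roi_mask > 0] = labels + 1
        return habitat_map

Radiomics features would then be extracted per habitat (and from the ROI extended at different thicknesses, as described above) before model building.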

Significant reduction in manual annotation costs in ultrasound medical image database construction through step by step artificial intelligence pre-annotation.

Zheng F, XingMing L, JuYing X, MengYing T, BaoJian Y, Yan S, KeWei Y, ZhiKai L, Cheng H, KeLan Q, XiHao C, WenFei D, Ping H, RunYu W, Ying Y, XiaoHui B

PubMed | Jun 1, 2025
This study investigates the feasibility of reducing manual image annotation costs in medical image database construction through a step-by-step approach in which an artificial intelligence model (AI model) trained on a previous batch of data automatically pre-annotates the next batch of image data, using thyroid nodule annotation on ultrasound images as an example. The study used YOLOv8 as the AI model. During AI model training, in addition to conventional image augmentation techniques, augmentation methods specifically tailored for ultrasound images were employed to balance the quantity differences between thyroid nodule classes and enhance model training effectiveness. The study found that training the model with augmented data significantly outperformed training with raw image data. When the number of original images was only 1,360, with 7 thyroid nodule classifications, pre-annotation using the AI model trained on augmented data could save at least 30% of the manual annotation workload for junior physicians. When the number of original images reached 6,800, the classification accuracy of the AI model trained on augmented data was very close to that of junior physicians, eliminating the need for manual preliminary annotation.
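
The bootstrapping loop described here maps naturally onto the Ultralytics YOLOv8 API. A hedged sketch, assuming hypothetical batch directories and dataset YAML; the paper's actual training settings and ultrasound-specific augmentations are not reproduced:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # pretrained starting point (assumed variant)
    batches = ["batch1/images", "batch2/images", "batch3/images"]  # hypothetical paths

    for i, batch_dir in enumerate(batches):
        if i > 0:
            # Pre-annotate the incoming batch with the current model; physicians
            # then only correct these labels instead of annotating from scratch.
            model.predict(source=batch_dir, save_txt=True, conf=0.25)
        # After manual review/correction of the batch, retrain on all
        # annotated (and augmented) data accumulated so far.
        model.train(data="thyroid_nodules.yaml", epochs=100, imgsz=640)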

[Capabilities and Advances of Transrectal Ultrasound in 2025].

Kaufmann S, Kruck S

PubMed | Jun 1, 2025
Transrectal ultrasound, particularly the combination of high-frequency ultrasound and MR-TRUS fusion technologies, provides a highly precise and effective method for the correlation and targeted biopsy of suspicious intraprostatic lesions detected by MRI. Advances in imaging technology, driven by 29 MHz micro-ultrasound transducers, robotic-assisted systems, and the integration of AI-based analyses, promise further improvements in diagnostic accuracy and a reduction in unnecessary biopsies. Further technological advancements and improved TRUS training could contribute to a decentralized and cost-effective diagnostic evaluation of prostate cancer in the future.

CT-Based Deep Learning Predicts Prognosis in Esophageal Squamous Cell Cancer Patients Receiving Immunotherapy Combined with Chemotherapy.

Huang X, Huang Y, Li P, Xu K

PubMed | Jun 1, 2025
Immunotherapy combined with chemotherapy has improved outcomes for some esophageal squamous cell carcinoma (ESCC) patients, but accurate pre-treatment risk stratification remains a critical gap. This study developed a deep learning (DL) model to predict survival outcomes in ESCC patients receiving immunotherapy combined with chemotherapy. Retrospective data from 482 patients across three institutions were split into training (N=322), internal test (N=79), and external test (N=81) sets. Unenhanced computed tomography (CT) scans were processed to analyze tumor and peritumoral regions. The model evaluated multiple input configurations: original tumor regions of interest (ROIs), ROI subregions, and ROIs expanded by 1 and 3 pixels. Performance was assessed using Harrell's C-index and receiver operating characteristic (ROC) curves. A multimodal model combined DL-derived risk scores with five key clinical and laboratory features. The Shapley Additive Explanations (SHAP) method elucidated the contribution of individual features to model predictions. The DL model with 1-pixel peritumoral expansion achieved the best accuracy, yielding a C-index of 0.75 for the internal test set and 0.60 for the external test set. The hazard ratio for high-risk patients was 1.82 (95% CI: 1.19-2.46; P=0.02) in the internal test set. The multimodal model achieved C-indices of 0.74 and 0.61 for the internal and external test sets, respectively. Kaplan-Meier analysis revealed significant survival differences between high- and low-risk groups (P<0.05). SHAP analysis identified tumor response, risk score, and age as critical contributors to predictions. This DL model demonstrates efficacy in stratifying ESCC patients by survival risk, particularly when integrating peritumoral imaging and clinical features, and could serve as a pre-treatment tool to support personalized treatment strategies for ESCC patients undergoing immunotherapy and chemotherapy.
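
Two pieces of this pipeline are easy to illustrate: growing the ROI by one pixel to capture the peritumoral rim, and scoring a risk model with Harrell's C-index. A sketch with toy numbers, using scipy and lifelines (neither is confirmed as the study's tooling):

    import numpy as np
    from scipy.ndimage import binary_dilation
    from lifelines.utils import concordance_index

    def expand_roi(mask, pixels=1):
        """Grow a binary tumor ROI by N pixels to include peritumoral tissue
        (the 1-pixel expansion was the best-performing input configuration here)."""
        return binary_dilation(mask.astype(bool), iterations=pixels)

    # Harrell's C-index for a DL risk score. lifelines treats higher scores
    # as better survival, so a risk score is negated before scoring.
    times = np.array([14.0, 7.0, 22.0, 5.0])   # follow-up durations, toy values
    events = np.array([1, 1, 0, 1])            # 1 = event observed, 0 = censored
    risk = np.array([0.3, 0.8, 0.1, 0.9])      # model-derived risk scores
    c_index = concordance_index(times, -risk, events)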

Intra-Individual Reproducibility of Automated Abdominal Organ Segmentation-Performance of TotalSegmentator Compared to Human Readers and an Independent nnU-Net Model.

Abel L, Wasserthal J, Meyer MT, Vosshenrich J, Yang S, Donners R, Obmann M, Boll D, Merkle E, Breit HC, Segeroth M

PubMed | Jun 1, 2025
The purpose of this study is to assess the segmentation reproducibility of the artificial intelligence-based algorithm TotalSegmentator across 34 anatomical structures using multiphasic abdominal CT scans, comparing unenhanced, arterial, and portal venous phases in the same patients. A total of 1252 multiphasic abdominal CT scans acquired at our institution between January 1, 2012, and December 31, 2022, were retrospectively included. TotalSegmentator was used to derive volumetric measurements of 34 abdominal organs and structures from the 3756 CT series. Reproducibility was evaluated across the three contrast phases per CT and compared to two human readers and an independent nnU-Net trained on the BTCV dataset. Relative deviations in segmented volumes and absolute volume deviations (AVD) were reported. A volume deviation within 5% was considered reproducible; thus, non-inferiority testing was conducted using a 5% margin. Twenty-nine of 34 structures had volume deviations within 5% and were considered reproducible. Volume deviations for the adrenal glands, gallbladder, spleen, and duodenum were above 5%. The highest reproducibility was observed for bones (-0.58% [95% CI: -0.58, -0.57]) and muscles (-0.33% [-0.35, -0.32]). Among abdominal organs, the volume deviation was 1.67% (1.60, 1.74). TotalSegmentator outperformed the reproducibility of the nnU-Net trained on the BTCV dataset, with an AVD of 6.50% (6.41, 6.59) vs. 10.03% (9.86, 10.20; p < 0.0001), most notably in cases with pathologic findings. Similarly, TotalSegmentator's AVD between different contrast phases was superior to the interreader AVD for the same contrast phase (p = 0.036). TotalSegmentator demonstrated high intra-individual reproducibility for most abdominal structures in multiphasic abdominal CT scans. Although reproducibility was lower in pathologic cases, it outperforms both human readers and an nnU-Net trained on the BTCV dataset.
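
The reproducibility criterion reduces to comparing per-organ volumes across contrast phases. A minimal sketch of that computation with nibabel, assuming binary per-organ masks such as those TotalSegmentator writes out; the file paths are hypothetical:

    import numpy as np
    import nibabel as nib

    def organ_volume_ml(mask_path):
        """Volume of a binary organ mask in millilitres."""
        img = nib.load(mask_path)
        voxel_ml = float(np.prod(img.header.get_zooms()[:3])) / 1000.0
        return (np.asanyarray(img.dataobj) > 0).sum() * voxel_ml

    def relative_deviation_pct(v_phase, v_reference):
        """Relative volume deviation between two phases; the study treated
        deviations within 5% as reproducible."""
        return 100.0 * (v_phase - v_reference) / v_reference

    # e.g. deviation of the arterial-phase liver volume vs. the portal venous phase:
    # dev = relative_deviation_pct(organ_volume_ml("arterial/liver.nii.gz"),
    #                              organ_volume_ml("portal/liver.nii.gz"))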

A Robust Deep Learning Method with Uncertainty Estimation for the Pathological Classification of Renal Cell Carcinoma Based on CT Images.

Yao N, Hu H, Chen K, Huang H, Zhao C, Guo Y, Li B, Nan J, Li Y, Han C, Zhu F, Zhou W, Tian L

PubMed | Jun 1, 2025
This study developed and validated a deep learning-based diagnostic model with uncertainty estimation to aid radiologists in the preoperative differentiation of pathological subtypes of renal cell carcinoma (RCC) based on computed tomography (CT) images. Data from 668 consecutive patients with pathologically confirmed RCC were retrospectively collected from Center 1, and the model was trained using fivefold cross-validation to classify RCC subtypes into clear cell RCC (ccRCC), papillary RCC (pRCC), and chromophobe RCC (chRCC). An external validation with 78 patients from Center 2 was conducted to evaluate the performance of the model. In the fivefold cross-validation, the area under the receiver operating characteristic curve (AUC) for the classification of ccRCC, pRCC, and chRCC was 0.868 (95% CI, 0.826-0.923), 0.846 (95% CI, 0.812-0.886), and 0.839 (95% CI, 0.802-0.880), respectively. In the external validation set, the AUCs were 0.856 (95% CI, 0.838-0.882), 0.787 (95% CI, 0.757-0.818), and 0.793 (95% CI, 0.758-0.831) for ccRCC, pRCC, and chRCC, respectively. The model demonstrated robust performance in predicting the pathological subtypes of RCC, while the incorporated uncertainty emphasized the importance of understanding model confidence. The proposed approach, integrated with uncertainty estimation, offers clinicians a dual advantage: accurate RCC subtype predictions complemented by diagnostic confidence metrics, thereby promoting informed decision-making for patients with RCC.
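
The abstract does not specify how uncertainty was estimated; one common recipe that fits the description is Monte Carlo dropout, sketched below in PyTorch purely as an assumed example, with predictive entropy as the confidence metric:

    import torch
    import torch.nn.functional as F

    def predict_with_uncertainty(model, x, n_samples=20):
        """MC dropout: keep dropout active at inference, average the softmax
        over repeated stochastic passes, and report predictive entropy.
        (In practice one would enable train mode only on dropout modules.)"""
        model.train()  # enables dropout layers at inference time
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=1) for _ in range(n_samples)]
            )
        mean_probs = probs.mean(dim=0)  # shape (batch, 3) for ccRCC/pRCC/chRCC
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
        return mean_probs, entropy      # high entropy = low diagnostic confidence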

Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence.

Wang Q, He B, Yu J, Zhang B, Yang J, Liu J, Ma X, Wei S, Li S, Zheng H, Tang Z

PubMed | Jun 1, 2025
Ultrasound-guided quadratus lumborum block (QLB) technology has become a widely used perioperative analgesia method during abdominal and pelvic surgeries. Due to the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks rely heavily on anesthesiologist experience. Therefore, using artificial intelligence (AI) to identify different tissue regions in ultrasound images is crucial. In our study, we retrospectively collected data from 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM, the external oblique muscle, the internal oblique muscle, and the transversus abdominis muscle (the latter three collectively referred to as the EIT), as well as the bones. In evaluation, Q-VUM demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. The IoU, recall, precision, and dice coefficient achieved for the QLM were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, the Q-VUM predictions showed that 85% of the pixels in the blocked area fell within the actual blocked area. Finally, our model exhibited stronger segmentation performance than common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720, respectively). In summary, we propose Q-VUM, a model that can accurately identify the anatomical structure of the quadratus lumborum in real time. This model aids anesthesiologists in precisely locating the nerve block site, thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.
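
For reference, the two headline metrics reported for Q-VUM can be computed as below; these are the standard multi-class definitions, not code from the study:

    import numpy as np

    def mean_iou(pred, gt, n_classes):
        """Mean intersection-over-union across classes present in either map."""
        ious = []
        for c in range(n_classes):
            p, g = pred == c, gt == c
            union = np.logical_or(p, g).sum()
            if union > 0:
                ious.append(np.logical_and(p, g).sum() / union)
        return float(np.mean(ious))

    def mean_pixel_accuracy(pred, gt, n_classes):
        """Per-class pixel accuracy (recall), averaged over classes."""
        accs = []
        for c in range(n_classes):
            g = gt == c
            if g.sum() > 0:
                accs.append(np.logical_and(pred == c, g).sum() / g.sum())
        return float(np.mean(accs))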