
Comparing respiratory-triggered T2WI MRI with an artificial intelligence-assisted technique and motion-suppressed respiratory-triggered T2WI in abdominal imaging.

Wang N, Liu Y, Ran J, An Q, Chen L, Zhao Y, Yu D, Liu A, Zhuang L, Song Q

PubMed · Sep 1, 2025
Magnetic resonance imaging (MRI) plays a crucial role in the diagnosis of abdominal conditions. A comprehensive assessment, especially of the liver, requires multi-planar T2-weighted sequences. To mitigate the effect of respiratory motion on image quality, the combination of acquisition and reconstruction with motion suppression (ARMS) and respiratory triggering (RT) is commonly employed. While this method maintains image quality, it does so at the expense of longer acquisition times. We evaluated the effectiveness of free-breathing, artificial intelligence-assisted compressed-sensing respiratory-triggered T2-weighted imaging (ACS-RT T2WI) compared with conventional acquisition and reconstruction with motion-suppression respiratory-triggered T2-weighted imaging (ARMS-RT T2WI) in abdominal MRI, assessing both qualitative and quantitative measures of image quality and lesion detection. In this retrospective study, 334 patients with upper abdominal discomfort were examined on a 3.0T MRI system. Each patient underwent both ARMS-RT T2WI and ACS-RT T2WI. Image quality was analyzed by two independent readers using a five-point Likert scale. The quantitative measurements included the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and sharpness. Lesion detection rates and contrast ratios (CRs) were also evaluated for liver, biliary system, and pancreatic lesions. The ACS-RT T2WI protocol had a significantly shorter scanning time than the ARMS-RT T2WI protocol (13.86±1.72 vs. 148.22±38.37 seconds). However, ARMS-RT T2WI had a higher PSNR than ACS-RT T2WI (39.87±2.72 vs. 38.69±3.00, P<0.05). Of the 201 liver lesions, ARMS-RT T2WI detected 193 (96.0%) and ACS-RT T2WI detected 192 (95.5%) (P=0.787). Of the 97 biliary system lesions, ARMS-RT T2WI detected 92 (94.8%) and ACS-RT T2WI detected 94 (96.9%) (P=0.721). Of the 110 pancreatic lesions, ARMS-RT T2WI detected 102 (92.7%) and ACS-RT T2WI detected 104 (94.5%) (P=0.784). The CR analysis showed superior performance of ACS-RT T2WI in certain lesion types (hemangioma, 0.58±0.11 vs. 0.55±0.12; biliary tumor, 0.47±0.09 vs. 0.38±0.09; pancreatic cystic lesions, 0.59±0.12 vs. 0.48±0.14; pancreatic cancer, 0.48±0.18 vs. 0.43±0.17), but no significant difference was found in others such as focal nodular hyperplasia (FNH), hepatic abscess, hepatocellular carcinoma (HCC), cholangiocarcinoma, metastatic tumors, and biliary calculus. ACS-RT T2WI ensures clinical reliability with a substantial scan time reduction (>80%). Despite minor losses in detail and a lower SNR, ACS-RT T2WI does not impair lesion detection, demonstrating its efficacy in abdominal imaging.
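
The quantitative comparison above rests on standard image-quality metrics. As a rough reference, here is a minimal numpy sketch of how SNR, CNR, and PSNR are commonly computed from ROI statistics; the ROI-based noise estimate and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def snr(roi: np.ndarray, background: np.ndarray) -> float:
    # SNR: mean tissue signal over the standard deviation of background noise
    return roi.mean() / background.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, background: np.ndarray) -> float:
    # CNR: absolute difference of two tissue means over background noise
    return abs(roi_a.mean() - roi_b.mean()) / background.std()

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    # PSNR in dB against the reference image's peak intensity
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(float(reference.max()) ** 2 / mse)
```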

Analysis of intra- and inter-observer variability in 4D liver ultrasound landmark labeling.

Wulff D, Ernst F

PubMed · Sep 1, 2025
Four-dimensional (4D) ultrasound imaging is widely used in clinics for diagnostics and therapy guidance. Accurate target tracking in 4D ultrasound is crucial for autonomous therapy guidance systems, such as radiotherapy, where precise tumor localization ensures effective treatment. Supervised deep learning approaches rely on reliable ground truth, making accurate labels essential. We investigate the reliability of expert-labeled ground truth data by evaluating intra- and inter-observer variability in landmark labeling for 4D ultrasound imaging in the liver. Eight 4D liver ultrasound sequences were labeled by eight expert observers, each labeling eight landmarks three times. Intra- and inter-observer variability was quantified, and an observer survey and motion analysis were conducted to determine factors influencing labeling accuracy, such as ultrasound artifacts and motion amplitude. The mean intra-observer variability ranged from 1.58 ± 0.90 mm to 2.05 ± 1.22 mm, depending on the observer. The inter-observer variability for the two observer groups was 2.68 ± 1.69 mm and 3.06 ± 1.74 mm. The observer survey and motion analysis revealed that ultrasound artifacts significantly affected labeling accuracy due to limited landmark visibility, whereas motion amplitude had no measurable effect. The measured mean landmark motion was 11.56 ± 5.86 mm. We highlight variability in expert-labeled ground truth data for 4D ultrasound imaging and identify ultrasound artifacts as a major source of labeling inaccuracies. These findings underscore the importance of addressing observer variability and artifact-related challenges to improve the reliability of ground truth data for evaluating target tracking algorithms in 4D ultrasound applications.
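
For context on how such variability figures are typically derived, below is a minimal numpy sketch that treats intra-observer variability as the mean Euclidean distance of repeated labels from their per-landmark mean position, and inter-observer variability analogously across observers; the array shapes and aggregation are assumptions, not the paper's exact protocol.

```python
import numpy as np

# labels: (n_repeats, n_landmarks, 3) array of one observer's repeated
# 3D annotations of the same landmarks, in millimeters
def intra_observer_variability(labels: np.ndarray) -> float:
    centroid = labels.mean(axis=0)                      # per-landmark mean position
    dists = np.linalg.norm(labels - centroid, axis=-1)  # distance of each repeat to the mean
    return float(dists.mean())

# all_labels: (n_observers, n_landmarks, 3) mean label positions per observer
def inter_observer_variability(all_labels: np.ndarray) -> float:
    consensus = all_labels.mean(axis=0)                 # cross-observer consensus position
    return float(np.linalg.norm(all_labels - consensus, axis=-1).mean())
```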

Artificial intelligence-enhanced ultrasound imaging for thyroid nodule detection and malignancy classification: a study on YOLOv11.

Yang J, Luo Z, Wen Y, Zhang J

PubMed · Sep 1, 2025
Thyroid nodules are a common clinical concern, with accurate diagnosis being critical for effective treatment and improved patient outcomes. Traditional ultrasound examinations rely heavily on the physician's experience, which can lead to diagnostic variability. The integration of artificial intelligence (AI) into medical imaging offers a promising solution for enhancing diagnostic accuracy and efficiency. This study aimed to evaluate the effectiveness of the You Only Look Once v. 11 (YOLOv11) model in detecting and classifying thyroid nodules in ultrasound images, with the goal of supporting real-time clinical decision-making and improving diagnostic workflows. We used the YOLOv11 model to analyze a dataset of 1,503 thyroid ultrasound images, divided into training (1,203 images), validation (150 images), and test (150 images) sets, comprising 742 benign and 778 malignant nodules. Advanced data augmentation and transfer learning techniques were applied to optimize model performance. Comparative analysis was conducted with other YOLO variants (YOLOv3 to YOLOv10) and residual network 50 (ResNet50) to assess their diagnostic capabilities. The YOLOv11 model exhibited superior performance in thyroid nodule detection compared to the other YOLO variants (YOLOv3 to YOLOv10) and ResNet50. At an intersection over union (IoU) of 0.5, YOLOv11 achieved a precision (P) of 0.841 and recall (R) of 0.823, outperforming ResNet50's P of 0.8333 and R of 0.8025. Among the YOLO variants, YOLOv11 consistently achieved the highest P and R values. For benign nodules, YOLOv11 obtained a P of 0.835 and an R of 0.833, while for malignant nodules, it reached a P of 0.846 and an R of 0.813. Within the YOLOv11 model itself, performance varied across IoU thresholds (0.25, 0.5, 0.7, and 0.9): lower thresholds generally yielded better metrics, with P and R decreasing as the IoU threshold increased. YOLOv11 proved to be a powerful tool for thyroid nodule detection and malignancy classification, offering high precision and real-time performance. These attributes are vital for dynamic ultrasound examinations and enhancing diagnostic efficiency. Future research will focus on expanding datasets and validating the model's clinical utility in real-time settings.
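
Since the reported precision and recall depend on the IoU threshold at which a predicted box counts as a true positive, a minimal sketch of box IoU and threshold-dependent matching may help; the greedy one-to-one matching scheme and names here are illustrative, not YOLOv11's internal evaluation code.

```python
def box_iou(a, b):
    # boxes as (x1, y1, x2, y2); IoU = intersection area / union area
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    # greedy matching: each ground-truth box may be claimed at most once
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, iou_thr
        for j, g in enumerate(gts):
            iou = box_iou(p, g)
            if j not in matched and iou >= best_iou:
                best, best_iou = j, iou
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

Raising `iou_thr` from 0.25 toward 0.9 tightens the localization requirement, which is why the paper sees P and R fall as the threshold increases.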

Multidisciplinary Consensus Prostate Contours on Magnetic Resonance Imaging: Educational Atlas and Reference Standard for Artificial Intelligence Benchmarking.

Song Y, Dornisch AM, Dess RT, Margolis DJA, Weinberg EP, Barrett T, Cornell M, Fan RE, Harisinghani M, Kamran SC, Lee JH, Li CX, Liss MA, Rusu M, Santos J, Sonn GA, Vidic I, Woolen SA, Dale AM, Seibert TM

PubMed · Sep 1, 2025
Evaluation of artificial intelligence (AI) algorithms for prostate segmentation is challenging because ground truth is lacking. We aimed to: (1) create a reference standard data set with precise prostate contours by expert consensus, and (2) evaluate various AI tools against this standard. We obtained prostate magnetic resonance imaging cases from six institutions from the Qualitative Prostate Imaging Consortium. A panel of 4 experts (2 genitourinary radiologists and 2 prostate radiation oncologists) meticulously developed consensus prostate segmentations on axial T2-weighted series. We evaluated the performance of 6 AI tools (3 commercially available and 3 academic) using Dice scores, distance from reference contour, and volume error. The panel achieved consensus prostate segmentation on each slice of all 68 patient cases included in the reference data set. We present 2 patient examples to serve as contouring guides. Depending on the AI tool, median Dice scores (across patients) ranged from 0.80 to 0.94 for whole prostate segmentation. For a typical (median) patient, AI tools had a mean error over the prostate surface ranging from 1.3 to 2.4 mm. They maximally deviated 3.0 to 9.4 mm outside the prostate and 3.0 to 8.5 mm inside the prostate for a typical patient. Error in prostate volume measurement for a typical patient ranged from 4.3% to 31.4%. We established an expert consensus benchmark for prostate segmentation. The best-performing AI tools have typical accuracy greater than that reported for radiation oncologists using computed tomography scans (the most common clinical approach for radiation therapy planning). Physician review remains essential to detect occasional major errors.
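
The Dice score and mean surface error used to benchmark the AI tools are standard segmentation metrics; below is a minimal SciPy sketch for binary masks, where extracting the surface as the mask minus its erosion is one common convention assumed here, not necessarily the study's exact method.

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(pred: np.ndarray, ref: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    # surface voxels = mask minus its erosion
    pred_b, ref_b = pred.astype(bool), ref.astype(bool)
    pred_surf = pred_b & ~ndimage.binary_erosion(pred_b)
    ref_surf = ref_b & ~ndimage.binary_erosion(ref_b)
    # Euclidean distance transform gives distance to the nearest reference surface voxel
    dist_to_ref = ndimage.distance_transform_edt(~ref_surf, sampling=spacing)
    return float(dist_to_ref[pred_surf].mean())
```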

Deep learning model for predicting lymph node metastasis around rectal cancer based on rectal tumor core area and mesangial imaging features.

Guo L, Fu K, Wang W, Zhou L, Chen L, Jiang M

PubMed · Sep 1, 2025
Assessing lymph node metastasis (LNM) involvement in patients with rectal cancer (RC) is fundamental to disease management. In this study, we used artificial intelligence (AI) technology to develop a segmentation model that automatically segments the tumor core area and mesangial tissue from magnetic resonance T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) images collected from 122 RC patients, with the aim of improving the accuracy of LNM prediction; radiomics machine-learning modeling was then performed on the segmented regions of interest (ROIs). An automatic segmentation model was developed using nn-UNet. The pipeline integrates deep learning (DL), specifically 3D U-Net, for semantic segmentation with image-processing techniques such as resampling, normalization, connected-component analysis, and image registration, and couples radiomics features with machine learning. The results showed that the DL segmentation method could effectively segment the tumor and mesangial areas from MR sequences (median Dice coefficient, tumor: 0.90 ± 0.08; mesorectum: 0.85 ± 0.36), and that the radiological characteristics of rectal and mesangial tissues in T2WI and ADC images could help inform RC treatment. The nn-UNet model demonstrated promising preliminary results, achieving the highest area under the curve (AUC) values in various scenarios. In the evaluation encompassing both tumor lesions and mesorectum involvement, the model exhibited an AUC of 0.743, highlighting its discriminatory ability to predict a combined outcome involving both elements. Specifically targeting tumor lesions, the model achieved an AUC of 0.731, emphasizing its effectiveness in distinguishing between positive and negative cases of tumor lesions. In assessing the prediction of mesorectum involvement, the model displayed moderate predictive utility with an AUC of 0.753. The nn-UNet model thus demonstrated consistent performance across all evaluated scenarios: combined tumor lesions and mesorectum involvement, tumor lesions alone, and mesorectum involvement alone. The online version contains supplementary material available at 10.1186/s12880-025-01878-9.
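
One concrete step named in this pipeline, connected-component analysis, is commonly applied as nn-UNet-style post-processing that keeps only the largest predicted component; a minimal sketch under that assumption (the function name is hypothetical, and the actual pipeline may post-process differently):

```python
import numpy as np
from scipy import ndimage

def largest_component(mask: np.ndarray) -> np.ndarray:
    # keep only the largest connected component of a binary segmentation,
    # discarding small spurious islands
    mask = mask.astype(bool)
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return labeled == (int(np.argmax(sizes)) + 1)
```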

Deep Learning-Based Multimodal Prediction of NAC Response in LARC by Integrating MRI and Proteomics.

Li Y, Ding J, Du F, Wang Z, Liu Z, Liu Y, Zhou Y, Zhang Q

PubMed · Sep 1, 2025
Locally advanced rectal cancer (LARC) exhibits significant heterogeneity in response to neoadjuvant chemotherapy (NAC), with poor responders facing delayed treatment and unnecessary toxicity. Although MRI provides spatial pathophysiological information and proteomics reveals molecular mechanisms, current single-modal approaches cannot integrate these complementary perspectives, resulting in limited predictive accuracy and biological insight. This retrospective study developed a multimodal deep learning framework using a cohort of 274 LARC patients treated with NAC (2012-2021). Graph neural networks analyzed proteomic profiles from FFPE tissues, incorporating KEGG/GO pathways and PPI networks, while a spatially enhanced 3D ResNet152 processed T2WI. A LightGBM classifier integrated both modalities with clinical features, using zero-imputation for missing data. Model performance was assessed through AUC-ROC, decision curve analysis, and interpretability techniques (SHAP and Grad-CAM). The integrated model achieved superior NAC response prediction (test AUC 0.828, sensitivity 0.875, specificity 0.750), significantly outperforming single-modal approaches (MRI ΔAUC +0.109; proteomics ΔAUC +0.125). SHAP analysis revealed that MRI-derived features contributed 57.7% of the predictive power, primarily through quantification of peritumoral stromal heterogeneity. Proteomics identified 10 key chemoresistance proteins: CYBA, GUSB, ATP6AP2, DYNC1I2, DAD1, ACOX1, COPG1, FBP1, DHRS7, and SSR3. Decision curve analysis confirmed clinical utility across threshold probabilities (0-0.75). Our study established a novel MRI-proteomics integration framework for NAC response prediction, with MRI defining spatial resistance patterns and proteomics deciphering molecular drivers, enabling early organ-preservation strategies. The zero-imputation design ensured deployability in diverse clinical settings.
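
As a rough illustration of the late-fusion design described (modality embeddings concatenated with clinical features, missing modalities zero-imputed, then classified with LightGBM), here is a hedged sketch; the embedding dimensions, names, and synthetic data are hypothetical, and the GNN/ResNet encoders are assumed to have already produced the embeddings.

```python
import numpy as np
from lightgbm import LGBMClassifier

def fuse(mri_emb, prot_emb, clinical, mri_dim=128, prot_dim=64):
    # zero-impute a missing modality so every patient has a fixed-length vector
    mri_emb = np.zeros(mri_dim) if mri_emb is None else mri_emb
    prot_emb = np.zeros(prot_dim) if prot_emb is None else prot_emb
    return np.concatenate([mri_emb, prot_emb, clinical])

# synthetic stand-in data: some patients lack the proteomics modality
rng = np.random.default_rng(0)
X = np.stack([fuse(rng.random(128),
                   None if i % 3 == 0 else rng.random(64),
                   rng.random(5))
              for i in range(200)])
y = rng.integers(0, 2, size=200)  # NAC response labels

clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X, y)
probs = clf.predict_proba(X)[:, 1]  # predicted probability of response
```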

Army Medic Performance in Trauma Sonography: The Impact of Artificial Intelligence Assistance in Focused Assessments With Sonography in Trauma-A Prospective Randomized Controlled Trial.

CPT Hartline AD, MAJ Hartvickson S, CPT Perdue MJ, CPT Sandoval C, LTC Walker JD, CPT Soules A, COL Mitchell CA

PubMed · Aug 31, 2025
Noncompressible truncal hemorrhage is a leading cause of preventable death in military prehospital settings, particularly in combat environments where advanced imaging is unavailable. The Focused Assessment with Sonography in Trauma (FAST) exam is critical for diagnosing intra-abdominal bleeding. However, Army medics typically lack formal ultrasound training. This study examines whether artificial intelligence (AI) assistance can enhance medics' proficiency in performing FAST exams, thereby improving the speed and accuracy of trauma triage in austere conditions. This prospective, randomized controlled trial involved 60 Army medics who performed 3-view abdominal FAST exams, both with and without AI assistance, using the EchoNous Kosmos device. Investigators randomized participants into 2 groups and evaluated them on time to completion, adequacy of imaging, and confidence in using the device. Two trained investigators assessed adequacy, and participants reported confidence in the device on a 5-point Likert scale. Data were analyzed using the t-test for parametric data, the Wilcoxon rank-sum test for nonparametric data, and Cohen's kappa for interrater reliability. The AI-assisted group completed the FAST exam in an average of 142.57 seconds compared to 143.87 seconds (P = .9) for the non-AI-assisted group, demonstrating no statistically significant difference in time. However, the AI-assisted group demonstrated significantly higher adequacy in the left upper quadrant and pelvic views (P = .008 and P = .004, respectively). Participants reported significantly higher confidence in the AI-assisted group, with a median score of 4.00 versus 2.50 (P = .006). Interrater agreement was moderate to substantial, with Cohen's kappa values indicating significant reliability. AI assistance did not significantly reduce the time required to complete a FAST exam but improved image adequacy and user confidence. These findings suggest that AI tools can enhance the quality of FAST exams conducted by minimally trained medics in combat settings. Further research is needed to explore integrating AI-assisted ultrasound training into military medic curricula to optimize trauma care in austere environments.
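
The statistical workflow described (t-test, Wilcoxon rank-sum, and Cohen's kappa) maps directly onto standard SciPy and scikit-learn calls; a minimal sketch with synthetic stand-in data, not the study's dataset:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
ai_times = rng.normal(142.6, 30, 30)       # hypothetical completion times (s)
control_times = rng.normal(143.9, 30, 30)

# parametric comparison of completion times
t_stat, p_time = stats.ttest_ind(ai_times, control_times)

# nonparametric comparison of ordinal Likert confidence scores
ai_conf = rng.integers(3, 6, 30)
control_conf = rng.integers(1, 5, 30)
u_stat, p_conf = stats.ranksums(ai_conf, control_conf)

# interrater agreement between the two adequacy raters
rater1 = rng.integers(0, 2, 60)
rater2 = rater1.copy()
rater2[:10] = 1 - rater2[:10]  # inject some disagreement for the example
kappa = cohen_kappa_score(rater1, rater2)
```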

Adaptive Contrast Adjustment Module: A Clinically-Inspired Plug-and-Play Approach for Enhanced Fetal Plane Classification

Yang Chen, Sanglin Zhao, Baoyu Chen, Mans Gustaf

arXiv preprint · Aug 31, 2025
Fetal ultrasound standard plane classification is essential for reliable prenatal diagnosis but faces inherent challenges, including low tissue contrast, boundary ambiguity, and operator-dependent image quality variations. To overcome these limitations, we propose a plug-and-play adaptive contrast adjustment module (ACAM), whose core design is inspired by the clinical practice of doctors adjusting image contrast to obtain clearer and more discriminative structural information. The module employs a shallow texture-sensitive network to predict clinically plausible contrast parameters, transforms input images into multiple contrast-enhanced views through differentiable mapping, and fuses them within downstream classifiers. Validated on a multi-center dataset of 12,400 images across six anatomical categories, the module consistently improves performance across diverse models, with accuracy gains of 2.02 percent for lightweight models, 1.29 percent for traditional models, and 1.15 percent for state-of-the-art models. The innovation of the module lies in its content-aware adaptation capability, replacing random preprocessing with physics-informed transformations that align with sonographer workflows while improving robustness to imaging heterogeneity through multi-view fusion. This approach effectively bridges low-level image features with high-level semantics, establishing a new paradigm for medical image analysis under real-world image quality variations.
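
The module is described as predicting contrast parameters with a shallow network and applying them through a differentiable mapping; the PyTorch sketch below is one minimal interpretation using a gamma-style mapping, with the architecture and parameter range as assumptions rather than the authors' ACAM.

```python
import torch
import torch.nn as nn

class ContrastAdjust(nn.Module):
    """Predict per-image contrast parameters and apply a differentiable mapping."""
    def __init__(self, n_views: int = 3):
        super().__init__()
        self.n_views = n_views
        # shallow texture-sensitive predictor: conv features -> one gamma per view
        self.predictor = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_views),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1, H, W) in [0, 1]; constrain gammas to a plausible range (0.5, 2.0)
        gammas = 0.5 + 1.5 * torch.sigmoid(self.predictor(x))  # (B, n_views)
        views = [x.clamp(1e-6, 1.0) ** g.view(-1, 1, 1, 1)
                 for g in gammas.unbind(dim=1)]
        # stack the contrast-enhanced views as channels for a downstream classifier
        return torch.cat(views, dim=1)  # (B, n_views, H, W)
```

Because the gamma mapping is differentiable, the predictor can be trained end-to-end with the downstream classifier's loss, which is the property the abstract emphasizes over random preprocessing.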

Utilisation of artificial intelligence to enhance the detection rates of renal cancer on cross-sectional imaging: protocol for a systematic review and meta-analysis.

Ofagbor O, Bhardwaj G, Zhao Y, Baana M, Arkwazi M, Lami M, Bolton E, Heer R

PubMed · Aug 31, 2025
The incidence of renal cell carcinoma has been steadily increasing due to the growing use of imaging, which identifies incidental masses. Although survival has improved because of early detection, overdiagnosis and overtreatment of benign renal masses are associated with significant morbidity, as patients with a suspected renal malignancy on imaging undergo invasive and risky procedures for a definitive diagnosis. Therefore, accurately characterising a renal mass as benign or malignant on imaging is paramount to improving patient outcomes. Artificial intelligence (AI) offers a promising solution, augmenting traditional radiological diagnosis to increase detection accuracy. This review aims to investigate and summarise the current evidence on the diagnostic accuracy of AI in characterising renal masses on imaging. It will involve systematically searching the PubMed, MEDLINE, Embase, Web of Science, Scopus and Cochrane databases. Studies that evaluated the use of fully or partially automated AI in cross-sectional imaging for diagnosing or characterising malignant renal tumours will be included if published in English between July 2016 and June 2025. The protocol adheres to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols 2015 checklist. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool will be used to evaluate the quality and risk of bias across included studies. Furthermore, in line with the Checklist for Artificial Intelligence in Medical Imaging recommendations, studies will be evaluated for inclusion of the minimum necessary information on AI research reporting. Ethical clearance will not be necessary for conducting this systematic review, and results will be disseminated through peer-reviewed publications and presentations at both national and international conferences. PROSPERO registration number: CRD42024529929.

Multi-DECT Image-based Interpretable Model Incorporating Habitat Radiomics and Vision Transformer Deep Learning for Preoperative Prediction of Muscle Invasion in Bladder Cancer.

Du C, Wei W, Hu M, He J, Shen J, Liu Y, Li J, Liu L

PubMed · Aug 30, 2025
This research aims to evaluate the effectiveness of a multi-dual-energy CT (DECT) image-based interpretable model that integrates habitat radiomics with a 3D Vision Transformer (ViT) deep learning (DL) model for preoperatively predicting muscle invasion in bladder cancer (BCa). This retrospective study analyzed 200 BCa patients, who were divided into a training cohort (n=140) and a test cohort (n=60) in a 7:3 ratio. Univariate and multivariate analyses were performed on the DECT quantitative parameters to identify independent predictors, which were subsequently used to develop a DECT model. The K-means algorithm was employed to generate habitat sub-regions of BCa. A traditional radiomics (Rad) model, habitat model, ResNet18 model, ViT model, and fusion models were constructed from the 40, 70, and 100 keV virtual monochromatic images (VMIs) in DECT. All models were evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, decision curve analysis (DCA), the net reclassification index (NRI), and the integrated discrimination improvement (IDI). The SHAP method was employed to interpret the optimal model and visualize its decision-making process. The Habitat-ViT model demonstrated superior performance compared to the other single models, achieving an AUC of 0.997 (95% CI 0.992, 1.000) in the training cohort and 0.892 (95% CI 0.814, 0.971) in the test cohort. Incorporating the DECT quantitative parameters did not improve performance. DCA and calibration curve assessments indicated that the Habitat-ViT model provided a favorable net benefit and demonstrated strong calibration. Furthermore, SHAP clarified the decision-making processes underlying the model's predicted outcomes. A multi-DECT image-based interpretable model that integrates habitat radiomics with ViT DL holds promise for predicting muscle invasion status in BCa, providing valuable insights for personalized treatment planning and prognostic assessment.
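
Habitat radiomics, as used here, typically partitions the tumor into intensity-defined sub-regions with K-means; a minimal scikit-learn sketch under that assumption (the single-intensity feature and cluster count are illustrative, not the study's exact configuration):

```python
import numpy as np
from sklearn.cluster import KMeans

def habitat_subregions(volume: np.ndarray, mask: np.ndarray,
                       n_habitats: int = 3) -> np.ndarray:
    # cluster intratumoral voxel intensities into habitat sub-regions
    voxels = volume[mask > 0].reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_habitats, n_init=10,
                    random_state=0).fit_predict(voxels)
    habitat_map = np.zeros_like(mask, dtype=int)
    habitat_map[mask > 0] = labels + 1  # 0 = background, 1..n = habitats
    return habitat_map
```

Radiomics features can then be extracted per habitat label rather than from the whole tumor, which is what gives habitat models their sensitivity to intratumoral heterogeneity.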