Page 103 of 2252246 results

3-D contour-aware U-Net for efficient rectal tumor segmentation in magnetic resonance imaging.

Lu Y, Dang J, Chen J, Wang Y, Zhang T, Bai X

PubMed · Jun 1 2025
Magnetic resonance imaging (MRI), as a non-invasive detection method, is crucial for the clinical diagnosis and treatment planning of rectal cancer. However, due to the low contrast of the rectal tumor signal in MRI, segmentation is often inaccurate. In this paper, we propose CAU-Net, a new three-dimensional rectal tumor segmentation method based on T2-weighted MRI images. The method adopts a convolutional neural network to extract multi-scale features from MRI images and uses a Contour-Aware decoder and an attention fusion block (AFB) for contour enhancement. We also introduce an adversarial constraint to improve augmentation performance. Furthermore, we construct a dataset of 108 MRI-T2 volumes for the segmentation of locally advanced rectal cancer. Finally, CAU-Net achieved a DSC of 0.7112 and an ASD of 2.4707, outperforming other state-of-the-art methods. Various experiments on this dataset show that CAU-Net segments rectal tumors with high accuracy and efficiency. In summary, the proposed method has important clinical application value and can provide important support for medical image analysis and the clinical treatment of rectal cancer. With further development and application, this method has the potential to improve the accuracy of rectal cancer diagnosis and treatment.
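For readers unfamiliar with the DSC metric quoted above, a minimal sketch of how the Dice similarity coefficient is computed between two binary segmentation masks (the example masks are made up for illustration, not from the paper's data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 2x3 masks: 2 overlapping foreground voxels out of 3 each
pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) -> 0.667
```

The ASD (average surface distance) metric also reported above measures boundary agreement in physical units rather than voxel overlap, which is why the paper quotes both.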

Ensemble learning of deep CNN models and two stage level prediction of Cobb angle on surface topography in adolescents with idiopathic scoliosis.

Hassan M, Gonzalez Ruiz JM, Mohamed N, Burke TN, Mei Q, Westover L

PubMed · Jun 1 2025
This study employs Convolutional Neural Networks (CNNs) as feature extractors with appended regression layers for the non-invasive prediction of Cobb Angle (CA) from Surface Topography (ST) scans in adolescents with Idiopathic Scoliosis (AIS). The aim is to minimize radiation exposure during critical growth periods by offering a reliable, non-invasive assessment tool. The efficacy of various CNN-based feature extractors (DenseNet121, EfficientNetB0, ResNet18, SqueezeNet, and a modified U-Net) was evaluated on a dataset of 654 ST scans using a regression analysis framework for accurate CA prediction. The dataset comprised 590 training and 64 testing scans. Performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and accuracy in classifying scoliosis severity (mild, moderate, severe) based on CA measurements. The EfficientNetB0 feature extractor outperformed the other models, demonstrating strong performance on the training set (R = 0.96, R² = 0.93) and achieving an MAE of 6.13° and an RMSE of 7.5° on the test set. In terms of scoliosis severity classification, it achieved high precision (84.62%) and specificity (95.65% for mild cases and 82.98% for severe cases), highlighting its clinical applicability in AIS management. The regression-based approach using EfficientNetB0 as a feature extractor presents a significant advancement for accurately determining CA from ST scans, offering a promising tool for improving scoliosis severity categorization and management in adolescents.
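A minimal sketch of the MAE and RMSE metrics used above for the regression evaluation (the Cobb angle values below are invented for illustration, not study data):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Hypothetical Cobb angles in degrees: ground truth vs. model prediction
true_ca = [12.0, 28.0, 45.0, 60.0]
pred_ca = [15.0, 25.0, 50.0, 55.0]
print(mae(true_ca, pred_ca))   # (3+3+5+5)/4 = 4.0
print(rmse(true_ca, pred_ca))  # sqrt((9+9+25+25)/4) = sqrt(17)
```

RMSE penalizes large errors more heavily than MAE, which is why both are commonly reported side by side for angle regression.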

Healthcare resource utilization for the management of neonatal head shape deformities: a propensity-matched analysis of AI-assisted and conventional approaches.

Shin J, Caron G, Stoltz P, Martin JE, Hersh DS, Bookland MJ

PubMed · Jun 1 2025
Overuse of radiography studies and underuse of conservative therapies for cranial deformities in neonates is a known inefficiency in pediatric craniofacial healthcare. This study sought to establish whether the introduction of artificial intelligence (AI)-generated craniometrics and craniometric interpretations into craniofacial clinical workflow improved resource utilization patterns in the initial evaluation and management of neonatal cranial deformities. A retrospective chart review of pediatric patients referred for head shape concerns between January 2019 and June 2023 was conducted. Patient demographics, final encounter diagnosis, review of an AI analysis, and provider orders were documented. Patients were divided based on whether an AI cranial deformity analysis was documented as reviewed during the index evaluation, then both groups were propensity matched. Rates of index-encounter radiology studies, physical therapy (PT), orthotic therapy, and craniofacial specialist follow-up evaluations were compared using logistic regression and ANOVA analyses. One thousand patient charts were reviewed (663 conventional encounters, 337 AI-assisted encounters). One-to-one propensity matching was performed between these groups. AI models were significantly more likely to be reviewed during telemedicine encounters and advanced practice provider (APP) visits (54.8% telemedicine vs 11.4% in-person, p < 0.0001; 12.3% physician vs 44.4% APP, p < 0.0001). All AI diagnoses of craniosynostosis versus benign deformities were congruent with final diagnoses. AI model review was associated with a significant increase in the use of orthotic therapies for neonatal cranial deformities (31.5% vs 38.6%, p = 0.0132) but not PT or specialist follow-up evaluations. Radiology ordering rates did not correlate with AI-interpreted data review. 
As neurosurgeons and pediatricians continue to work to limit neonatal radiation exposure and contain healthcare costs, AI-assisted clinical care could be a cheap and easily scalable diagnostic adjunct for reducing reliance on radiography and encouraging adherence to established clinical guidelines. In practice, however, providers appear to default to preexisting diagnostic biases and underweight AI-generated data and interpretations, ultimately negating any potential advantages offered by AI. AI engineers and specialty leadership should prioritize provider education and user interface optimization to improve future adoption of validated AI diagnostic tools.
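The one-to-one propensity matching described above can be sketched as a greedy nearest-neighbor match on precomputed propensity scores. This is a generic illustration under a hypothetical caliper, not the study's actual matching procedure:

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """One-to-one greedy nearest-neighbor matching on propensity scores.

    Returns (treated_index, control_index) pairs; a control is used at most
    once, and a pair is only formed if the score gap is within the caliper.
    """
    pairs = []
    available = dict(enumerate(control_ps))  # unmatched controls
    for i, p in enumerate(treated_ps):
        if not available:
            break
        j = min(available, key=lambda k: abs(available[k] - p))
        if abs(available[j] - p) <= caliper:
            pairs.append((i, j))
            del available[j]  # each control matched at most once
    return pairs

# Hypothetical scores (e.g., from a logistic model of AI-review likelihood)
treated = [0.31, 0.62, 0.45]
control = [0.30, 0.50, 0.61, 0.90]
print(greedy_match(treated, control))
```

In practice the propensity scores themselves would come from a logistic regression on the patient covariates (demographics, diagnosis, provider type) before matching.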

Artificial Intelligence for Teaching Case Curation: Evaluating Model Performance on Imaging Report Discrepancies.

Bartley M, Huemann Z, Hu J, Tie X, Ross AB, Kennedy T, Warner JD, Bradshaw T, Lawrence EM

PubMed · Jun 1 2025
To assess the feasibility of using a large language model (LLM) to identify valuable radiology teaching cases through report discrepancy detection. This retrospective study included after-hours head CT and musculoskeletal radiograph exams from January 2017 to December 2021. The discrepancy level between the trainee's preliminary interpretation and the final attending report was annotated on a 5-point scale. RadBERT, an LLM pretrained on a vast corpus of radiology text, was fine-tuned for discrepancy detection. For comparison, and to ensure the robustness of the approach, Mixtral 8x7B, Mistral 7B, and Llama 2 were also evaluated. The models' performance in detecting discrepancies was evaluated using a randomly selected hold-out test set. A subset of discrepant cases identified by the LLM was compared to a random case set by recording clinical parameters and discrepant pathology and evaluating possible educational value. The F1 statistic was used for model comparison. Pearson's chi-squared test was employed to assess discrepancy prevalence and score between groups (significance set at p < 0.05). The fine-tuned LLM achieved an overall accuracy of 90.5%, with a specificity of 95.5% and a sensitivity of 66.3% for discrepancy detection. Model sensitivity improved significantly with higher discrepancy scores: 49% (34/70) for score 2 versus 67% (47/62) for score 3 and 81% (35/43) for scores 4-5 (p < 0.05 compared to score 2). The LLM-curated set showed a significant increase in the prevalence of all discrepancies and of major discrepancies (scores 4 or 5) compared to a random case set (p < 0.05 for both). Evaluation of the clinical characteristics of both the random and discrepant case sets demonstrated a broad mix of pathologies and discrepancy types. An LLM can detect trainee report discrepancies, including both higher- and lower-scoring discrepancies, and may improve case-set curation for resident education as well as serve as a trainee oversight tool.
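A minimal sketch of how sensitivity, specificity, and F1 are derived from a 2x2 confusion matrix. The counts below are illustrative (chosen to roughly echo the reported sensitivity and specificity), not the study's actual matrix:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard classification metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)          # recall on discrepant reports
    specificity = tn / (tn + fp)          # recall on concordant reports
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "accuracy": accuracy}

# Illustrative counts only, not the paper's confusion matrix
print(binary_metrics(tp=53, fp=9, tn=191, fn=27))
```

The asymmetry visible here (high specificity, moderate sensitivity) is typical when the positive class (discrepant reports) is both rarer and harder to characterize than the negative class.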

Prediction of Lymph Node Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images With Size on CT and PET-CT Findings.

Oh JE, Chung HS, Gwon HR, Park EY, Kim HY, Lee GK, Kim TS, Hwangbo B

PubMed · Jun 1 2025
Echo features of lymph nodes (LNs) influence target selection during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). This study evaluates deep learning's diagnostic capabilities on EBUS images for detecting mediastinal LN metastasis in lung cancer, emphasising the added value of integrating a region of interest (ROI), LN size on CT, and PET-CT findings. We analysed 2901 EBUS images from 2055 mediastinal LN stations in 1454 lung cancer patients. ResNet18-based deep learning models were developed to classify images of true positive malignant and true negative benign LNs diagnosed by EBUS-TBNA using different inputs: original images, ROI images, and CT size and PET-CT data. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and other diagnostic metrics. The model using only original EBUS images showed the lowest AUROC (0.870) and accuracy (80.7%) in classifying LN images. Adding ROI information slightly increased the AUROC (0.896) without a significant difference (p = 0.110). Further adding CT size resulted in a minimal change in AUROC (0.897), while adding PET-CT (original + ROI + PET-CT) showed a significant improvement (0.912, p = 0.008 vs. original; p = 0.002 vs. original + ROI + CT size). The model combining original and ROI EBUS images with CT size and PET-CT findings achieved the highest AUROC (0.914, p = 0.005 vs. original; p = 0.018 vs. original + ROI + PET-CT) and accuracy (82.3%). Integrating an ROI, LN size on CT, and PET-CT findings into the deep learning analysis of EBUS images significantly enhances the diagnostic capability of models for detecting mediastinal LN metastasis in lung cancer, with the integration of PET-CT data having a substantial impact.
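The AUROC metric compared across model variants above can be computed from ranks via the Mann-Whitney U statistic; a minimal sketch with made-up labels and scores (not the study's data):

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    # assign average ranks to tied scores
    for s in np.unique(scores):
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: 3 metastatic (1) and 3 benign (0) nodes with model scores
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(auroc(labels, scores))  # 8 of 9 positive/negative pairs ranked correctly -> 8/9
```

This rank formulation makes explicit why AUROC is threshold-free: it is the probability that a randomly chosen positive outscores a randomly chosen negative.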

SAMBV: A fine-tuned SAM with interpolation consistency regularization for semi-supervised bi-ventricle segmentation from cardiac MRI.

Wang Y, Zhou S, Lu K, Wang Y, Zhang L, Liu W, Wang Z

PubMed · Jun 1 2025
The Segment Anything Model (SAM) is a foundation model for general-purpose image segmentation; however, when applied to a specific medical task, such as segmentation of both ventricles from 2D cardiac MRI, its results are not satisfactory. The scarcity of labeled medical image data further increases the difficulty of applying the SAM to medical image processing. To address these challenges, we propose SAMBV, which fine-tunes the SAM for semi-supervised bi-ventricle segmentation from 2D cardiac MRI. The SAM is tuned in three aspects: (i) position and feature adapters are introduced so that the SAM can adapt to bi-ventricle segmentation; (ii) a dual-branch encoder is incorporated to collect local feature information missing from the SAM, so as to improve bi-ventricle segmentation; and (iii) interpolation consistency regularization (ICR) is applied in a semi-supervised manner, allowing SAMBV to achieve competitive performance with only 40% of the labeled data in the ACDC dataset. Experimental results demonstrate that the proposed SAMBV achieves an average Dice score improvement of 17.6% over the original SAM, raising its performance from 74.49% to 92.09%. Furthermore, SAMBV outperforms other supervised SAM fine-tuning methods, showing its effectiveness in semi-supervised medical image segmentation tasks. Notably, the proposed method is specifically designed for 2D MRI data.
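A toy sketch of the interpolation consistency idea behind ICR: predictions on a mixed (interpolated) unlabeled input should match the same mix of the individual predictions. The sigmoid "model" below is a stand-in for the segmentation network, and all data are synthetic; this is the general technique, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, w):
    """Toy stand-in for the segmentation network: sigmoid of a linear map."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def icr_loss(x_u1, x_u2, w, lam):
    """Interpolation consistency: f(mix(x1, x2)) should match mix(f(x1), f(x2)).

    In semi-supervised training this unsupervised penalty is added to the
    supervised loss computed on the labeled subset.
    """
    x_mix = lam * x_u1 + (1 - lam) * x_u2
    pred_mix = model(x_mix, w)
    target = lam * model(x_u1, w) + (1 - lam) * model(x_u2, w)
    return float(np.mean((pred_mix - target) ** 2))

# Two batches of unlabeled "images" (flattened to 8 features here)
x1, x2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
w = rng.normal(size=8)
print(icr_loss(x1, x2, w, lam=0.3))
```

Because the penalty needs no labels, it lets the unlabeled 60% of the ACDC-style data still shape the decision function.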

[Applications of artificial intelligence in cardiovascular imaging: advantages, limitations, and future challenges].

Fortuni F, Petrina SM, Nicolosi GL

PubMed · Jun 1 2025
Artificial intelligence (AI) is rapidly transforming cardiovascular imaging, offering innovative solutions to enhance diagnostic precision, prognostic accuracy, and therapeutic decision-making. This review explores the role of AI in cardiovascular imaging, highlighting its applications, advantages, limitations, and future challenges. The discussion is structured by imaging modalities, including echocardiography, cardiac and coronary computed tomography, cardiac magnetic resonance, and nuclear cardiology. For each modality, we examine AI's contributions across the patient care continuum: from patient selection and image acquisition to quantitative and qualitative analysis, interpretation support, prognostic stratification, therapeutic guidance, and integration with other clinical data. AI applications demonstrate significant potential to streamline workflows, improve diagnostic accuracy, and provide advanced insights for complex clinical scenarios. However, several limitations must be addressed. Many AI algorithms are developed using data from single, high-expertise centers, raising concerns about their generalizability to routine clinical practice. In some cases, these algorithms may even produce misleading results. Additionally, the "black box" nature of certain AI systems poses challenges for cardiologists, making discrepancies difficult to interpret or rectify. Importantly, AI should be seen as a complementary tool rather than a replacement for cardiologists, designed to expedite routine tasks and allow clinicians to focus on complex cases. Future challenges include fostering clinician involvement in algorithm development and extending AI implementation to peripheral healthcare centers. This approach aims to enhance accessibility, understanding, and applicability of AI in everyday clinical practice, ultimately democratizing its benefits and ensuring equitable integration into healthcare systems.

Prognostic assessment of osteolytic lesions and mechanical properties of bones bearing breast cancer using neural network and finite element analysis.

Wang S, Chu T, Wasi M, Guerra RM, Yuan X, Wang L

PubMed · Jun 1 2025
The management of skeletal-related events (SREs), particularly the prevention of pathological fractures, is crucial for cancer patients. Current clinical assessment of fracture risk is mostly based on medical images, but incorporating sequential images in the assessment remains challenging. This study addressed this issue by leveraging a comprehensive dataset of 260 longitudinal micro-computed tomography (μCT) scans acquired in normal and breast cancer-bearing mice. A machine learning (ML) model based on a spatial-temporal neural network was built to forecast bone structures from previous μCT scans, and its predictions had an overall similarity coefficient (Dice) of 0.814 with ground truths. Although the predicted lesion volumes (18.5% ± 15.3%) underestimated the ground truths' (22.1% ± 14.8%) by about 21%, the time course of lesion growth was better represented in the predicted images than in the preceding scans (10.8% ± 6.5%). Under virtual biomechanical testing using finite element analysis (FEA), the predicted bone structures recapitulated the load-carrying behaviors of the ground truth structures with a positive correlation (y = 0.863x) and a high coefficient of determination (R² = 0.955). Interestingly, the compliances of the predicted and ground truth structures demonstrated nearly identical linear relationships with the lesion volumes. In summary, we have demonstrated that bone deterioration can be proficiently predicted using machine learning in our preclinical dataset, suggesting the importance of large longitudinal clinical imaging datasets in fracture risk assessment for cancer bone metastasis.
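The y = 0.863x and R² = 0.955 comparison above is a through-origin linear fit between predicted and ground-truth mechanical responses; a minimal sketch with synthetic data (not the study's measurements):

```python
import numpy as np

def linear_fit_r2(x, y):
    """Least-squares slope through the origin (y = a*x) and R^2 of the fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = (x @ y) / (x @ x)            # closed-form through-origin slope
    residuals = y - slope * x
    ss_res = float(residuals @ residuals)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return slope, 1.0 - ss_res / ss_tot

# Synthetic ground-truth vs. predicted load values lying exactly on y = 0.9x
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 0.9 * x
slope, r2 = linear_fit_r2(x, y)
print(slope, r2)  # 0.9, 1.0 for this noiseless example
```

A slope near 1 with high R² is what indicates the predicted structures reproduce the ground-truth mechanics rather than merely correlating with them.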

Evaluation of large language models in generating pulmonary nodule follow-up recommendations.

Wen J, Huang W, Yan H, Sun J, Dong M, Li C, Qin J

PubMed · Jun 1 2025
To evaluate the performance of large language models (LLMs) in generating clinical follow-up recommendations for pulmonary nodules by leveraging radiological report findings and management guidelines. This retrospective study included CT follow-up reports of pulmonary nodules documented by senior radiologists from September 1, 2023, to April 30, 2024. Additionally, sixty reports were collected for prompt engineering based on few-shot learning and the Chain-of-Thought methodology. Radiological findings of pulmonary nodules, along with the final prompt, were input into GPT-4o-mini or ERNIE-4.0-Turbo-8K to generate follow-up recommendations. The AI-generated recommendations were evaluated against radiologist-defined, guideline-based standards through binary classification, assessing nodule risk classification, follow-up intervals, and harmfulness. Performance metrics included sensitivity, specificity, positive/negative predictive values, and F1 score. On 1009 reports from 996 patients (median age 50.0 years, IQR 39.0-60.0 years; 511 male patients), ERNIE-4.0-Turbo-8K and GPT-4o-mini demonstrated comparable performance in both accuracy of follow-up recommendations (94.6% vs 92.8%, p = 0.07) and harmfulness rates (2.9% vs 3.5%, p = 0.48). In nodule classification, ERNIE-4.0-Turbo-8K and GPT-4o-mini performed similarly, with accuracy of 99.8% vs 99.9%, sensitivity of 96.9% vs 100.0%, specificity of 99.9% vs 99.9%, positive predictive value of 96.9% vs 96.9%, negative predictive value of 100.0% vs 99.9%, and F1 score of 96.9% vs 98.4%, respectively. LLMs show promise in providing guideline-based follow-up recommendations for pulmonary nodules but require rigorous validation and supervision to mitigate potential clinical risks. This study offers insights into their potential role in automated radiological decision support.

Exploring the Limitations of Virtual Contrast Prediction in Brain Tumor Imaging: A Study of Generalization Across Tumor Types and Patient Populations.

Caragliano AN, Macula A, Colombo Serra S, Fringuello Mingo A, Morana G, Rossi A, Alì M, Fazzini D, Tedoldi F, Valbusa G, Bifone A

PubMed · Jun 1 2025
Accurate and timely diagnosis of brain tumors is critical for patient management and treatment planning. Magnetic resonance imaging (MRI) is a widely used modality for brain tumor detection and characterization, often aided by the administration of gadolinium-based contrast agents (GBCAs) to improve tumor visualization. Recently, deep learning models have shown remarkable success in predicting contrast enhancement in medical images, thereby reducing the need for GBCAs and potentially minimizing patient discomfort and risks. In this paper, we present a study investigating the generalization capabilities of a neural network trained to predict full-contrast brain tumor images from noncontrast MRI scans. While initial results exhibited promising performance on a specific tumor type at a certain stage using a specific dataset, our attempts to extend this success to other tumor types and diverse patient populations revealed unexpected challenges and limitations. Through a rigorous analysis of the factors contributing to these negative results, we aim to shed light on the complexities associated with generalizing contrast-enhancement prediction in medical brain tumor imaging, offering valuable insights for future research and clinical applications.