SAMBV: A fine-tuned SAM with interpolation consistency regularization for semi-supervised bi-ventricle segmentation from cardiac MRI.

Wang Y, Zhou S, Lu K, Wang Y, Zhang L, Liu W, Wang Z

PubMed · Jun 1 2025
The Segment Anything Model (SAM) is a foundation model for general-purpose image segmentation; however, for a specific medical application such as segmenting both ventricles from 2D cardiac MRI, its results are not satisfactory. The scarcity of labeled medical image data further complicates applying SAM to medical image processing. To address these challenges, we propose SAMBV, a fine-tuned SAM for semi-supervised bi-ventricle segmentation from 2D cardiac MRI. SAM is tuned in three aspects: (i) position and feature adapters are introduced so that SAM can adapt to bi-ventricle segmentation; (ii) a dual-branch encoder is incorporated to recover local feature information missing in SAM and thereby improve bi-ventricle segmentation; (iii) interpolation consistency regularization (ICR) is used for semi-supervised training, allowing SAMBV to achieve competitive performance with only 40% of the labeled data in the ACDC dataset. Experimental results demonstrate that the proposed SAMBV improves the average Dice score by 17.6 percentage points over the original SAM, from 74.49% to 92.09%. Furthermore, SAMBV outperforms other supervised SAM fine-tuning methods, showing its effectiveness for semi-supervised medical image segmentation. Notably, the proposed method is specifically designed for 2D MRI data.
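The interpolation consistency idea in step (iii) can be sketched in a few lines of PyTorch: the prediction on a mixup of two unlabeled images is pushed toward the mixup of their individual predictions. The function and variable names below are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def icr_loss(model, x_u1, x_u2, alpha=1.0):
    """Interpolation consistency regularization on a pair of unlabeled batches.

    Encourages model(mix(x1, x2)) to agree with mix(model(x1), model(x2)).
    `model` is assumed to return per-pixel class logits of shape (B, C, H, W).
    """
    lam = Beta(alpha, alpha).sample().item()          # mixing coefficient
    with torch.no_grad():                             # targets come from the current model
        p1 = torch.softmax(model(x_u1), dim=1)
        p2 = torch.softmax(model(x_u2), dim=1)
        target = lam * p1 + (1.0 - lam) * p2          # interpolated pseudo-label
    x_mix = lam * x_u1 + (1.0 - lam) * x_u2           # interpolated input
    p_mix = torch.softmax(model(x_mix), dim=1)
    return F.mse_loss(p_mix, target)

# Per training step (sketch): supervised Dice/cross-entropy on the labeled batch
# plus a weighted icr_loss on unlabeled batches.
```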

[Applications of artificial intelligence in cardiovascular imaging: advantages, limitations, and future challenges].

Fortuni F, Petrina SM, Nicolosi GL

PubMed · Jun 1 2025
Artificial intelligence (AI) is rapidly transforming cardiovascular imaging, offering innovative solutions to enhance diagnostic precision, prognostic accuracy, and therapeutic decision-making. This review explores the role of AI in cardiovascular imaging, highlighting its applications, advantages, limitations, and future challenges. The discussion is structured by imaging modalities, including echocardiography, cardiac and coronary computed tomography, cardiac magnetic resonance, and nuclear cardiology. For each modality, we examine AI's contributions across the patient care continuum: from patient selection and image acquisition to quantitative and qualitative analysis, interpretation support, prognostic stratification, therapeutic guidance, and integration with other clinical data. AI applications demonstrate significant potential to streamline workflows, improve diagnostic accuracy, and provide advanced insights for complex clinical scenarios. However, several limitations must be addressed. Many AI algorithms are developed using data from single, high-expertise centers, raising concerns about their generalizability to routine clinical practice. In some cases, these algorithms may even produce misleading results. Additionally, the "black box" nature of certain AI systems poses challenges for cardiologists, making discrepancies difficult to interpret or rectify. Importantly, AI should be seen as a complementary tool rather than a replacement for cardiologists, designed to expedite routine tasks and allow clinicians to focus on complex cases. Future challenges include fostering clinician involvement in algorithm development and extending AI implementation to peripheral healthcare centers. This approach aims to enhance accessibility, understanding, and applicability of AI in everyday clinical practice, ultimately democratizing its benefits and ensuring equitable integration into healthcare systems.

Prognostic assessment of osteolytic lesions and mechanical properties of bones bearing breast cancer using neural network and finite element analysis.

Wang S, Chu T, Wasi M, Guerra RM, Yuan X, Wang L

PubMed · Jun 1 2025
The management of skeletal-related events (SREs), particularly the prevention of pathological fractures, is crucial for cancer patients. Current clinical assessment of fracture risk is based mostly on medical images, but incorporating sequential images in the assessment remains challenging. This study addressed this issue by leveraging a comprehensive dataset of 260 longitudinal micro-computed tomography (μCT) scans acquired in normal and breast cancer-bearing mice. A machine learning (ML) model based on a spatial-temporal neural network was built to forecast bone structures from previous μCT scans; the forecasts had an overall Dice similarity coefficient of 0.814 with the ground truths. Although the predicted lesion volumes (18.5% ± 15.3%) were underestimated by ∼21% relative to the ground truths' (22.1% ± 14.8%), the time course of lesion growth was better represented in the predicted images than in the preceding scans (10.8% ± 6.5%). Under virtual biomechanical testing using finite element analysis (FEA), the predicted bone structures recapitulated the load-carrying behaviors of the ground-truth structures with a positive correlation (y = 0.863x) and a high coefficient of determination (R² = 0.955). Interestingly, the compliances of the predicted and ground-truth structures demonstrated nearly identical linear relationships with the lesion volumes. In summary, we have demonstrated that bone deterioration can be proficiently predicted using machine learning in our preclinical dataset, suggesting the importance of large longitudinal clinical imaging datasets for fracture risk assessment in cancer bone metastasis.
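For reference, the Dice similarity coefficient used to compare forecast and ground-truth bone structures can be computed as below; the array names are placeholders.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary volumes (e.g., segmented bone masks)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# A Dice of 0.814, as reported, means 2*|A∩B| / (|A|+|B|) ≈ 0.814 for the
# predicted (A) and ground-truth (B) bone masks.
```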

Evaluation of large language models in generating pulmonary nodule follow-up recommendations.

Wen J, Huang W, Yan H, Sun J, Dong M, Li C, Qin J

PubMed · Jun 1 2025
To evaluate the performance of large language models (LLMs) in generating clinical follow-up recommendations for pulmonary nodules by leveraging radiological report findings and management guidelines. This retrospective study included CT follow-up reports of pulmonary nodules documented by senior radiologists from September 1st, 2023, to April 30th, 2024. An additional sixty reports were collected for prompt engineering based on few-shot learning and chain-of-thought methodology. Radiological findings of pulmonary nodules, along with the final prompt, were input into GPT-4o-mini or ERNIE-4.0-Turbo-8K to generate follow-up recommendations. The AI-generated recommendations were evaluated against radiologist-defined, guideline-based standards through binary classification, assessing nodule risk classification, follow-up intervals, and harmfulness. Performance metrics included sensitivity, specificity, positive/negative predictive values, and F1 score. On 1009 reports from 996 patients (median age, 50.0 years; IQR, 39.0-60.0 years; 511 male patients), ERNIE-4.0-Turbo-8K and GPT-4o-mini demonstrated comparable performance in both accuracy of follow-up recommendations (94.6% vs 92.8%, P = 0.07) and harmfulness rates (2.9% vs 3.5%, P = 0.48). In nodule classification, ERNIE-4.0-Turbo-8K and GPT-4o-mini performed similarly, with accuracy of 99.8% vs 99.9%, sensitivity of 96.9% vs 100.0%, specificity of 99.9% vs 99.9%, positive predictive value of 96.9% vs 96.9%, negative predictive value of 100.0% vs 99.9%, and F1 score of 96.9% vs 98.4%, respectively. LLMs show promise in providing guideline-based follow-up recommendations for pulmonary nodules but require rigorous validation and supervision to mitigate potential clinical risks. This study offers insights into their potential role in automated radiological decision support.
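The reported metrics all follow from a 2×2 confusion matrix of AI-generated versus guideline-based classifications; a minimal sketch (the example counts are hypothetical):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, PPV, NPV, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall on guideline-positive cases
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                    # positive predictive value (precision)
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f1": f1}

# Hypothetical example, not the study's counts:
# binary_metrics(tp=62, fp=2, tn=940, fn=2)
```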

Exploring the Limitations of Virtual Contrast Prediction in Brain Tumor Imaging: A Study of Generalization Across Tumor Types and Patient Populations.

Caragliano AN, Macula A, Colombo Serra S, Fringuello Mingo A, Morana G, Rossi A, Alì M, Fazzini D, Tedoldi F, Valbusa G, Bifone A

PubMed · Jun 1 2025
Accurate and timely diagnosis of brain tumors is critical for patient management and treatment planning. Magnetic resonance imaging (MRI) is a widely used modality for brain tumor detection and characterization, often aided by the administration of gadolinium-based contrast agents (GBCAs) to improve tumor visualization. Recently, deep learning models have shown remarkable success in predicting contrast enhancement in medical images, thereby reducing the need for GBCAs and potentially minimizing patient discomfort and risks. In this paper, we present a study investigating the generalization capabilities of a neural network trained to predict full-contrast brain tumor images from noncontrast MRI scans. While initial results showed promising performance on a specific tumor type at a certain stage using a specific dataset, our attempts to extend this success to other tumor types and diverse patient populations revealed unexpected challenges and limitations. Through a rigorous analysis of the factors contributing to these negative results, we aim to shed light on the complexities of generalizing contrast-enhancement prediction in brain tumor imaging, offering valuable insights for future research and clinical applications.

Internal Target Volume Estimation for Liver Cancer Radiation Therapy Using an Ultra Quality 4-Dimensional Magnetic Resonance Imaging.

Liao YP, Xiao H, Wang P, Li T, Aguilera TA, Visak JD, Godley AR, Zhang Y, Cai J, Deng J

PubMed · Jun 1 2025
Accurate internal target volume (ITV) estimation is essential for effective and safe radiation therapy in liver cancer. This study evaluates the clinical value of an ultra-quality 4-dimensional magnetic resonance imaging (UQ 4D-MRI) technique for ITV estimation. The UQ 4D-MRI technique maps motion information from a low-spatial-resolution dynamic volumetric MRI onto a high-resolution 3-dimensional MRI used for radiation treatment planning. It was validated using a motion phantom and data from 13 patients with liver cancer. For each patient, the ITV generated from UQ 4D-MRI (ITV_4D) was compared with those obtained through isotropic expansions (ITV_2mm and ITV_5mm) and with that measured using conventional 4D computed tomography (ITV_CT). Phantom studies showed a displacement measurement difference of <5% between UQ 4D-MRI and single-slice 2-dimensional cine MRI. In patient studies, the maximum superior-inferior displacements of the tumor on UQ 4D-MRI showed no significant difference compared with single-slice 2-dimensional cine imaging (P = .985). ITV_CT showed no significant difference from ITV_4D (P = .72), whereas ITV_2mm and ITV_5mm significantly overestimated the volume by 29.0% (P = .002) and 120.7% (P < .001) relative to ITV_4D, respectively. UQ 4D-MRI enables accurate motion assessment for liver tumors, facilitating precise ITV delineation for radiation treatment planning. Despite uncertainties from artificial intelligence-based delineation and variations in patients' respiratory patterns, UQ 4D-MRI excels at capturing tumor motion trajectories, potentially improving treatment planning accuracy and reducing margins in liver cancer radiation therapy.
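Conceptually, the 4D-based ITV is the voxel-wise union of the tumor masks across respiratory phases, while ITV_2mm and ITV_5mm come from isotropically expanding a single-phase tumor mask. A minimal sketch, assuming binary 3D masks and isotropic voxel spacing (not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def itv_from_phases(gtv_masks: list[np.ndarray]) -> np.ndarray:
    """ITV as the voxel-wise union of tumor masks over all respiratory phases."""
    itv = np.zeros_like(gtv_masks[0], dtype=bool)
    for mask in gtv_masks:
        itv |= mask.astype(bool)
    return itv

def itv_isotropic(gtv_mask: np.ndarray, margin_mm: float, voxel_mm: float = 1.0) -> np.ndarray:
    """ITV approximated by an isotropic margin (e.g., 2 mm or 5 mm) around one mask.

    Iterated 6-connected dilation only approximates a spherical margin, which is
    sufficient for this sketch.
    """
    n_iter = int(round(margin_mm / voxel_mm))
    structure = generate_binary_structure(3, 1)   # 6-connected 3D neighborhood
    return binary_dilation(gtv_mask.astype(bool), structure=structure, iterations=n_iter)

# Relative overestimation, as compared in the study:
# (itv_expanded.sum() - itv_4d.sum()) / itv_4d.sum() * 100
```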

Accuracy of an Automated Bone Scan Index Measurement System Enhanced by Deep Learning of the Female Skeletal Structure in Patients with Breast Cancer.

Fukai S, Daisaki H, Yamashita K, Kuromori I, Motegi K, Umeda T, Shimada N, Takatsu K, Terauchi T, Koizumi M

PubMed · Jun 1 2025
VSBONE® BSI (VSBONE), an automated bone scan index (BSI) measurement system, was updated from version 2.1 (ver.2) to 3.0 (ver.3). VSBONE ver.3 incorporates deep learning of the skeletal structures of 957 new women and can be applied to patients with breast cancer. However, the performance of the updated VSBONE remains unclear. This study aimed to validate the diagnostic accuracy of the VSBONE system in patients with breast cancer. In total, 220 Japanese patients with breast cancer who underwent bone scintigraphy with single-photon emission computed tomography/computed tomography (SPECT/CT) were retrospectively analyzed. The patients were diagnosed with active bone metastases (n = 20) or non-bone metastases (n = 200) according to the physicians' radiographic image interpretation. The patients were assessed using VSBONE ver.2 and VSBONE ver.3, and the BSI findings were compared with the physicians' interpretation results. The occurrence of segmentation errors, the association of BSI between VSBONE ver.2 and VSBONE ver.3, and the diagnostic accuracy of the systems were evaluated. VSBONE ver.2 and VSBONE ver.3 had segmentation errors in four and two patients, respectively. Significant positive linear correlations were confirmed between the two versions of BSI (r = 0.92). The diagnostic accuracy was 54.1% for VSBONE ver.2 and 80.5% for VSBONE ver.3 (P < 0.001). The diagnostic accuracy of VSBONE was improved through deep learning of female skeletal structures. The updated VSBONE ver.3 can be a reliable automated system for measuring BSI in patients with breast cancer.
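For context, a bone scan index expresses metastatic burden as a percentage of total skeletal mass. The simplified sketch below uses illustrative region weights and involvement fractions; it is not the segmentation or weighting used by VSBONE:

```python
def bone_scan_index(involved_fraction: dict, mass_fraction: dict) -> float:
    """BSI (%) = sum over skeletal regions of
    (fraction of the region involved by metastasis) x (region's share of total
    skeletal mass) x 100."""
    return 100.0 * sum(involved_fraction[r] * mass_fraction[r] for r in involved_fraction)

# Hypothetical example: 20% of the spine (~28% of skeletal mass) and 10% of the
# pelvis (~18% of skeletal mass) involved:
# bone_scan_index({"spine": 0.20, "pelvis": 0.10}, {"spine": 0.28, "pelvis": 0.18})
# -> ~7.4
```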

Advanced Three-Dimensional Assessment and Planning for Hallux Valgus.

Forin Valvecchi T, Marcolli D, De Cesar Netto C

PubMed · Jun 1 2025
The article discusses advanced three-dimensional evaluation of hallux valgus deformity using weightbearing computed tomography. Conventional two-dimensional radiographs fall short in assessing the complexity of hallux valgus deformities, whereas weightbearing computed tomography provides detailed insights into bone alignment and joint stability in a weightbearing state. Recent studies have highlighted the significance of first ray hypermobility and intrinsic metatarsal rotation in hallux valgus, influencing surgical planning and outcomes. The integration of semiautomatic and artificial intelligence-assisted tools with weightbearing computed tomography is enhancing the precision of deformity assessment, leading to more personalized and effective hallux valgus management.

Large Language Models for Diagnosing Focal Liver Lesions From CT/MRI Reports: A Comparative Study With Radiologists.

Sheng L, Chen Y, Wei H, Che F, Wu Y, Qin Q, Yang C, Wang Y, Peng J, Bashir MR, Ronot M, Song B, Jiang H

PubMed · Jun 1 2025
Whether large language models (LLMs) could be integrated into the diagnostic workflow of focal liver lesions (FLLs) remains unclear. We aimed to investigate two generic LLMs (ChatGPT-4o and Gemini) regarding their diagnostic accuracy based on CT/MRI reports, compared to and combined with radiologists of different experience levels. From April 2022 to April 2024, this single-center retrospective study included consecutive adult patients who underwent contrast-enhanced CT/MRI for a single FLL and subsequent histopathologic examination. The LLMs were prompted with clinical information and the "findings" section of radiology reports three times to provide differential diagnoses in descending order of likelihood, with the first considered the final diagnosis. In the research setting, six radiologists (three junior and three middle-level) independently reviewed the CT/MRI images and clinical information in two rounds (first alone, then with LLM assistance). In the clinical setting, diagnoses were retrieved from the "impressions" section of radiology reports. Diagnostic accuracy was assessed against histopathology. 228 patients (median age, 59 years; 155 males) with 228 FLLs (median size, 3.6 cm) were included. Regarding the final diagnosis, the accuracy of two-step ChatGPT-4o (78.9%) was higher than that of single-step ChatGPT-4o (68.0%, p < 0.001) and single-step Gemini (73.2%, p = 0.004), similar to that of real-world radiology reports (80.0%, p = 0.34) and junior radiologists (78.9%-82.0%; p-values, 0.21 to > 0.99), but lower than that of middle-level radiologists (84.6%-85.5%; p-values, 0.001 to 0.02). No incremental diagnostic value of ChatGPT-4o was observed for any radiologist (p-values, 0.63 to > 0.99). Two-step ChatGPT-4o showed accuracy matching real-world radiology reports and junior radiologists for diagnosing FLLs but was less accurate than middle-level radiologists and demonstrated little incremental diagnostic value.
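The prompting protocol described here (repeated queries, a ranked differential diagnosis, top-ranked item taken as the final diagnosis) could be reproduced roughly as sketched below with the OpenAI Python SDK; the prompt wording, model name, and response parsing are assumptions, not the authors' actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rank_differentials(clinical_info: str, findings: str, n_runs: int = 3) -> list[list[str]]:
    """Query the model n_runs times for a ranked differential diagnosis of one FLL."""
    prompt = (
        "Clinical information:\n" + clinical_info + "\n\n"
        "CT/MRI report findings:\n" + findings + "\n\n"
        "List the most likely diagnoses for this focal liver lesion in descending "
        "order of likelihood, one per line."
    )
    runs = []
    for _ in range(n_runs):
        resp = client.chat.completions.create(
            model="gpt-4o",  # model choice is illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        ranked = [line.strip() for line in resp.choices[0].message.content.splitlines()
                  if line.strip()]
        runs.append(ranked)
    return runs

# Following the study's rule, the top-ranked item is taken as the final diagnosis,
# e.g. final = rank_differentials(info, findings)[0][0]
```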

Predicting lung cancer bone metastasis using CT and pathological imaging with a Swin Transformer model.

Li W, Zou X, Zhang J, Hu M, Chen G, Su S

PubMed · Jun 1 2025
Bone metastasis is a common and serious complication in lung cancer patients, leading to severe pain, pathological fractures, and reduced quality of life. Early prediction of bone metastasis can enable timely interventions and improve patient outcomes. In this study, we developed a multimodal Swin Transformer-based deep learning model for predicting bone metastasis risk in lung cancer patients by integrating CT imaging and pathological data. A total of 215 patients with confirmed lung cancer diagnoses, including those with and without bone metastasis, were included. The model was designed to process high-resolution CT images and digitized histopathological images, with the features extracted independently by two Swin Transformer networks. These features were then fused using decision-level fusion techniques to improve classification accuracy. The Swin-Dual Fusion Model achieved superior performance compared to single-modality models and conventional architectures such as ResNet50, with an AUC of 0.966 on the test data and 0.967 on the training data. This integrated model demonstrated high accuracy, sensitivity, and specificity, making it a promising tool for clinical application in predicting bone metastasis risk. The study emphasizes the potential of transformer-based models to revolutionize bone oncology through advanced multimodal analysis and early prediction of metastasis, ultimately improving patient care and treatment outcomes.
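Decision-level fusion of the two modality-specific Swin branches might look like the sketch below, built on timm; the backbone variant, class count, and equal-weight averaging are assumptions for illustration, not the authors' exact configuration:

```python
import timm
import torch
import torch.nn as nn

class SwinDualFusion(nn.Module):
    """Two Swin Transformer branches (CT and pathology) fused at the decision level."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.ct_branch = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=True, num_classes=num_classes)
        self.path_branch = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=True, num_classes=num_classes)

    def forward(self, ct_img: torch.Tensor, path_img: torch.Tensor) -> torch.Tensor:
        # Each branch produces its own class probabilities ...
        p_ct = torch.softmax(self.ct_branch(ct_img), dim=1)
        p_path = torch.softmax(self.path_branch(path_img), dim=1)
        # ... and the two decisions are fused by simple averaging.
        return 0.5 * (p_ct + p_path)

# model = SwinDualFusion()
# probs = model(ct_batch, pathology_batch)   # both batches shaped (B, 3, 224, 224)
```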