
Kidney volume after endovascular exclusion of abdominal aortic aneurysms by EVAR and FEVAR.

B S, C V, Turkia J B, Weydevelt E V, R P, F L, A K

PubMed · Aug 9, 2025
Decreased kidney volume is a sign of renal aging and/or decreased vascularization. The aim of this study was to determine whether renal volume changes 24 months after exclusion of an abdominal aortic aneurysm (AAA), and to compare fenestrated (FEVAR) and infrarenal (EVAR) stent grafts. Retrospective single-center study from a prospective registry, including patients between 60 and 80 years with normal preoperative renal function (eGFR ≥60 mL/min/1.73 m²) who underwent fenestrated (FEVAR) or infrarenal (EVAR) stent grafting between 2015 and 2021. Patients had to have had a CT scan at 24 months to be included. Exclusion criteria were renal branch devices, preoperative renal insufficiency, a single kidney, embolization or coverage of an accessory renal artery, occlusion of a renal artery during follow-up, and AAA rupture. Renal volume was measured using sizing software (EndoSize, Therenva) based on fully automatic deep-learning segmentation of several anatomical structures (arterial lumen, bone structure, thrombus, heart, etc.), including the kidneys. Renal cysts, when present, were manually excluded from the segmentation. Forty-eight patients were included (24 EVAR vs. 24 FEVAR), and 96 kidneys were segmented. There was no difference between groups in age (78.9±6.7 years vs. 69.4±6.8, p=0.89), eGFR (85.8 ± 12.4 [62-107] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36), or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). At 24 months in the EVAR group, there was a non-significant reduction in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36) and in renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). In the FEVAR group at 24 months, there was a non-significant fall in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 73.8 ± 21.4 [40-110], p=0.09), while renal volume decreased significantly (182 ± 37.8 [123-293] mL vs. 158.9 ± 40.2 [45-258], p=0.007). In this study, there appears to be a significant decrease in renal volume without a drop in eGFR 24 months after fenestrated stenting. This decrease may reflect changes in renal perfusion and could potentially be predictive of long-term renal impairment, although this cannot be confirmed within the limits of this small sample. Further studies with long-term follow-up are needed.
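
As an illustrative aside (not the authors' EndoSize pipeline): once a kidney has been segmented, its volume is simply the voxel count times the voxel volume, and baseline vs. follow-up volumes can be compared with a paired test. The Wilcoxon test and the values below are assumptions for illustration; the abstract does not state which paired test was used.

```python
import numpy as np
from scipy.stats import wilcoxon

def kidney_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume in mL from a boolean kidney mask and CT voxel spacing (mm)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Hypothetical paired baseline vs. 24-month volumes (mL) for a few kidneys:
baseline = np.array([182.0, 175.5, 190.3, 168.2, 201.7])
followup = np.array([160.1, 158.0, 171.2, 162.5, 180.9])
stat, p = wilcoxon(baseline, followup)  # paired, non-parametric
print(f"Wilcoxon p = {p:.3f}")
```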

Towards MR-Based Trochleoplasty Planning

Michael Wehrli, Alicia Durrer, Paul Friedrich, Sidaty El Hadramy, Edwin Li, Luana Brahaj, Carol C. Hasler, Philippe C. Cattin

arXiv preprint · Aug 8, 2025
To treat Trochlear Dysplasia (TD), current approaches rely mainly on low-resolution clinical Magnetic Resonance (MR) scans and surgical intuition. Surgeries are planned based on the surgeon's experience, have limited adoption of minimally invasive techniques, and lead to inconsistent outcomes. We propose a pipeline that generates super-resolved, patient-specific 3D pseudo-healthy target morphologies from conventional clinical MR scans. First, we compute an isotropic super-resolved MR volume using an Implicit Neural Representation (INR). Next, we segment the femur, tibia, patella, and fibula with a multi-label custom-trained network. Finally, we train a Wavelet Diffusion Model (WDM) to generate pseudo-healthy target morphologies of the trochlear region. In contrast to prior work producing pseudo-healthy low-resolution 3D MR images, our approach enables the generation of sub-millimeter-resolved 3D shapes suitable for pre- and intraoperative use. These can serve as preoperative blueprints for reshaping the femoral groove while preserving the native patellar articulation. Furthermore, and in contrast to other work, our pipeline does not require a CT, reducing radiation exposure. We evaluated our approach on 25 TD patients and show that our target morphologies significantly improve the sulcus angle (SA) and trochlear groove depth (TGD). The code and interactive visualization are available at https://wehrlimi.github.io/sr-3d-planning/.
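
For context, the sulcus angle the authors evaluate is a standard geometric measure: the angle at the deepest point of the trochlear groove between lines to the medial and lateral facet peaks. A minimal sketch, with landmark coordinates assumed purely for illustration:

```python
import numpy as np

def sulcus_angle_deg(groove, medial_peak, lateral_peak) -> float:
    """Angle (degrees) at the groove point between the two facet-peak directions."""
    v1 = np.asarray(medial_peak, float) - np.asarray(groove, float)
    v2 = np.asarray(lateral_peak, float) - np.asarray(groove, float)
    cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Hypothetical landmarks in image coordinates (mm); a flatter (dysplastic)
# groove yields a larger angle.
print(sulcus_angle_deg([0.0, 0.0], [-15.0, 6.0], [15.0, 5.0]))  # ~139.7
```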

Automated coronary artery segmentation / tissue characterization and detection of lipid-rich plaque: An integrated backscatter intravascular ultrasound study.

Masuda Y, Takeshita R, Tsujimoto A, Sahashi Y, Watanabe T, Fukuoka D, Hara T, Kanamori H, Okura H

PubMed · Aug 8, 2025
Intravascular ultrasound (IVUS)-based tissue characterization has been used to detect vulnerable or lipid-rich plaque (LRP). Recently, advances in artificial intelligence (AI) have enabled automated coronary plaque segmentation and tissue characterization. The purpose of this study was to evaluate the feasibility and diagnostic accuracy of a deep learning model for plaque segmentation, tissue characterization, and identification of LRP. A total of 1,098 IVUS images from 67 patients who underwent IVUS-guided percutaneous coronary intervention were selected for the training group, while 1,100 IVUS images from 100 vessels (88 patients) were used for the validation group. A seven-layer U-Net++ was applied for automated coronary artery segmentation and tissue characterization. Segmentation and quantification of the external elastic membrane (EEM), lumen, and guidewire artifact were performed and compared with manual measurements. Plaque tissue characterization was conducted using integrated backscatter (IB)-IVUS as the gold standard. LRP was defined as a %lipid area of ≥65%. The deep learning model accurately segmented the EEM and lumen. AI-predicted %lipid area (R = 0.90, P < 0.001), %fibrosis area (R = 0.89, P < 0.001), %dense fibrosis area (R = 0.81, P < 0.001), and %calcification area (R = 0.89, P < 0.001) showed strong correlations with IB-IVUS measurements. The model predicted LRP with a sensitivity of 62%, specificity of 94%, positive predictive value of 69%, negative predictive value of 92%, and an area under the receiver operating characteristic curve of 0.919 (95% CI: 0.902-0.934). The deep learning model demonstrated accurate automatic segmentation and tissue characterization of human coronary arteries, showing promise for identifying LRP.
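
A minimal sketch of the decision rule reported above (LRP declared when %lipid area reaches 65%), assuming a per-pixel tissue-class map and a plaque mask; the integer class codes are hypothetical, not from the study:

```python
import numpy as np

# Hypothetical integer class codes for the per-pixel tissue map:
LIPID, FIBROSIS, DENSE_FIBROSIS, CALCIFICATION = 1, 2, 3, 4

def is_lipid_rich(tissue_map: np.ndarray, plaque_mask: np.ndarray, thresh: float = 0.65) -> bool:
    """Flag LRP when the lipid fraction of the plaque area reaches the threshold."""
    plaque_px = int(plaque_mask.sum())
    if plaque_px == 0:
        return False
    lipid_px = int(np.logical_and(tissue_map == LIPID, plaque_mask).sum())
    return lipid_px / plaque_px >= thresh

# Toy frame: a 3x3 plaque region that is 7/9 lipid (~78%) -> LRP.
tissue = np.full((3, 3), LIPID); tissue[0, 0] = FIBROSIS; tissue[2, 2] = CALCIFICATION
print(is_lipid_rich(tissue, np.ones((3, 3), bool)))  # True
```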

GAN-MRI enhanced multi-organ MRI segmentation: a deep learning perspective.

Channarayapatna Srinivasa A, Bhat SS, Baduwal D, Sim ZTJ, Patil SS, Amarapur A, Prakash KNB

PubMed · Aug 8, 2025
Clinical magnetic resonance imaging (MRI) is a high-resolution tool widely used for detailed anatomical imaging. However, prolonged scan times often lead to motion artefacts and patient discomfort. Fast acquisition techniques can reduce scan times but often produce noisy, low-contrast images, compromising the segmentation accuracy essential for diagnosis and treatment planning. To address these limitations, we developed an end-to-end framework that incorporates a BIDS-based data organiser and anonymizer, a GAN-based MR image enhancement model (GAN-MRI), AssemblyNet for brain region segmentation, and an attention-residual U-Net with guided loss for abdominal and thigh segmentation. Thirty brain scans (5,400 slices), 32 abdominal scans (1,920 slices), and 55 thigh scans (2,200 slices) acquired from multiple MRI scanners (GE, Siemens, Toshiba) underwent evaluation. Image quality improved significantly, with SNR and CNR for brain scans increasing from 28.44 to 42.92 (p < 0.001) and from 11.88 to 18.03 (p < 0.001), respectively. Abdominal scans exhibited SNR increases from 35.30 to 50.24 (p < 0.001) and CNR increases from 10,290.93 to 93,767.22 (p < 0.001). Double-blind evaluations highlighted improved visualisation of anatomical structures and bias field correction. Segmentation performance improved substantially in the thigh (muscle: +21%, IMAT: +9%) and abdominal regions (SSAT: +1%, DSAT: +2%, VAT: +12%), while brain segmentation metrics remained largely stable, reflecting the robustness of the baseline model. The proposed framework is designed to handle data from multiple anatomies, MRI scanners, and centres, enhancing MRI scans and improving segmentation accuracy, diagnostic precision, and treatment planning while reducing scan times and maintaining patient comfort.
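
For reference, the SNR and CNR figures above are region-of-interest statistics; the definitions below are the conventional ones and are an assumption, since the abstract does not give the paper's exact ROI protocol:

```python
import numpy as np

def snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean signal over background standard deviation."""
    return float(signal_roi.mean() / background_roi.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, background_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissue ROIs."""
    return float(abs(roi_a.mean() - roi_b.mean()) / background_roi.std())

# Toy ROIs drawn from a synthetic image:
rng = np.random.default_rng(1)
tissue_a, tissue_b = rng.normal(100, 5, 500), rng.normal(60, 5, 500)
background = rng.normal(0, 3, 500)
print(snr(tissue_a, background), cnr(tissue_a, tissue_b, background))
```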

Thyroid Volume Measurement With AI-Assisted Freehand 3D Ultrasound Compared to 2D Ultrasound-A Clinical Trial.

Rask KB, Makouei F, Wessman MHJ, Kristensen TT, Todsen T

PubMed · Aug 8, 2025
Accurate thyroid volume assessment is critical in thyroid disease diagnostics, yet conventional high-resolution 2D ultrasound has limitations. Freehand 3D ultrasound with AI-assisted segmentation presents a potential advancement, but its clinical accuracy requires validation. This prospective clinical trial included 14 patients scheduled for total thyroidectomy. Preoperative thyroid volume was measured using both 2D ultrasound (ellipsoid method) and freehand 3D ultrasound with AI segmentation. Postoperative thyroid volume, determined via the water displacement method, served as the reference standard. The median postoperative thyroid volume was 14.8 mL (IQR 8.8-20.2). The median volume difference was 1.7 mL (IQR 1.2-3.3) for 3D ultrasound and 3.6 mL (IQR 2.3-6.6) for 2D ultrasound (p = 0.02). The inter-operator reliability coefficient for 3D ultrasound was 0.986 (p < 0.001). These findings suggest that freehand 3D ultrasound with AI-assisted segmentation provides superior accuracy and reproducibility compared to 2D ultrasound and may enhance clinical thyroid volume assessment. ClinicalTrials.gov identifier: NCT05510609.
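
The ellipsoid method referenced above conventionally estimates each lobe's volume as length × width × depth × π/6; a minimal sketch with hypothetical caliper measurements (the exact correction factor varies slightly across protocols):

```python
import math

def lobe_volume_ml(length_cm: float, width_cm: float, depth_cm: float) -> float:
    """Ellipsoid estimate for one thyroid lobe: L x W x D x pi/6 (cm^3 = mL)."""
    return length_cm * width_cm * depth_cm * math.pi / 6.0

# Hypothetical caliper measurements for right and left lobes (cm):
total_ml = lobe_volume_ml(5.0, 2.0, 2.0) + lobe_volume_ml(4.8, 1.9, 2.1)
print(f"Estimated thyroid volume: {total_ml:.1f} mL")
```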

A Deep Learning Model to Detect Acute MCA Occlusion on High Resolution Non-Contrast Head CT.

Fussell DA, Lopez JL, Chang PD

PubMed · Aug 8, 2025
To assess the feasibility and accuracy of a deep learning (DL) model to identify acute middle cerebral artery (MCA) occlusion using high-resolution non-contrast CT (NCCT) imaging data. In this study, a total of 4,648 consecutive exams (July 2021 to December 2023) were retrospectively used for model training and validation, while an additional 1,011 consecutive exams (January 2024 to August 2024) were used for independent testing. Using high-resolution NCCT acquired at 1.0 mm slice thickness or less, MCA thrombus was labeled using same-day CTA as ground truth. A 3D DL model was trained for per-voxel thrombus segmentation, with the sum of positive voxels used to estimate the likelihood of acute MCA occlusion. For detection of MCA M1 segment acute occlusion, the model yielded an AUROC of 0.952 [0.904-1.00], accuracy of 93.6% [88.1-98.2], sensitivity of 90.9% [83.1-100], and specificity of 93.6% [88.0-98.3]. Inclusion of M2 segment occlusions reduced performance only slightly, yielding an AUROC of 0.884 [0.825-0.942], accuracy of 93.2% [85.1-97.2], sensitivity of 77.4% [69.3-92.2], and specificity of 93.6% [85.1-97.8]. A DL model can detect acute MCA occlusion from high-resolution NCCT with accuracy approaching that of CTA. Using this tool, a majority of candidate thrombectomy patients may be identified with NCCT alone, which could aid stroke triage in settings that lack CTA or are otherwise resource constrained. DL = deep learning.
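
A minimal sketch of the scoring step the abstract describes (summed positive voxels as an exam-level occlusion likelihood, scored against CTA ground truth); the 0.5 voxel threshold and the toy data are assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def occlusion_score(prob_volume: np.ndarray, voxel_thresh: float = 0.5) -> int:
    """Exam-level score: count of voxels the model calls thrombus."""
    return int((prob_volume >= voxel_thresh).sum())

rng = np.random.default_rng(0)
# Toy per-voxel probability volumes: two "negative" and two "positive" exams.
prob_volumes = [rng.random((4, 8, 8)) * s for s in (0.3, 0.9, 0.4, 0.95)]
labels = [0, 1, 0, 1]  # CTA-confirmed MCA occlusion
scores = [occlusion_score(v) for v in prob_volumes]
print(roc_auc_score(labels, scores))
```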

Text Embedded Swin-UMamba for DeepLesion Segmentation

Ruida Cheng, Tejas Sudharshan Mathai, Pritam Mukherjee, Benjamin Hou, Qingqing Zhu, Zhiyong Lu, Matthew McAuliffe, Ronald M. Summers

arXiv preprint · Aug 8, 2025
Segmentation of lesions on CT enables automatic measurement for clinical assessment of chronic diseases (e.g., lymphoma). Integrating large language models (LLMs) into the lesion segmentation workflow offers the potential to combine imaging features with descriptions of lesion characteristics from radiology reports. In this study, we investigate the feasibility of integrating text into the Swin-UMamba architecture for the task of lesion segmentation. The publicly available ULS23 DeepLesion dataset was used along with short-form descriptions of the findings from the reports. On the test dataset, a high Dice score of 82% and a low Hausdorff distance of 6.58 pixels were obtained for lesion segmentation. The proposed Text-Swin-UMamba model outperformed prior approaches: a 37% improvement over the LLM-driven LanGuideMedSeg model (p < 0.001), and it surpassed the purely image-based xLSTM-UNet and nnUNet models by 1.74% and 0.22%, respectively. The dataset and code can be accessed at https://github.com/ruida/LLM-Swin-UMamba
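
For reference, the two metrics reported above can be computed as follows under their standard definitions (a sketch; the ULS23 evaluation code may differ in details such as boundary extraction):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff_px(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (pixels) between mask point sets."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 21:41] = True
print(dice(pred, gt), hausdorff_px(pred, gt))
```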

Can Diffusion Models Bridge the Domain Gap in Cardiac MR Imaging?

Xin Ci Wong, Duygu Sarikaya, Kieran Zucker, Marc De Kamps, Nishant Ravikumar

arXiv preprint · Aug 8, 2025
Magnetic resonance (MR) imaging, including cardiac MR, is prone to domain shift due to variations in imaging devices and acquisition protocols. This challenge limits the deployment of trained AI models in real-world scenarios, where performance degrades on unseen domains. Traditional solutions involve increasing the size of the dataset through ad-hoc image augmentation or additional online training/transfer learning, which have several limitations. Synthetic data offers a promising alternative, but anatomical/structural consistency constraints limit the effectiveness of generative models in creating image-label pairs. To address this, we propose a diffusion model (DM) trained on a source domain that generates synthetic cardiac MR images resembling a given reference. The synthetic data maintains spatial and structural fidelity, ensuring similarity to the source domain and compatibility with the segmentation mask. We assess the utility of our generative approach in multi-centre cardiac MR segmentation, using the 2D nnU-Net, 3D nnU-Net, and vanilla U-Net segmentation networks. We explore domain generalisation, where domain-invariant segmentation models are trained on synthetic source-domain data, and domain adaptation, where we shift target-domain data towards the source domain using the DM. Both strategies significantly improved segmentation performance on data from an unseen target domain, in terms of surface-based metrics (Welch's t-test, p < 0.01), compared to training segmentation models on real data alone. The proposed method reduces the need for transfer learning or online training to address domain shift in cardiac MR image analysis, and is especially useful in data-scarce settings.
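
The significance test named above is Welch's t-test; a minimal sketch with scipy on hypothetical surface-distance values (equal_var=False is what distinguishes Welch's test from Student's):

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical surface-distance scores (mm): lower is better.
with_synthetic = np.array([1.8, 2.1, 1.6, 2.0, 1.9])
real_data_only = np.array([3.2, 2.9, 3.5, 3.1, 3.4])
t, p = ttest_ind(with_synthetic, real_data_only, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```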

XAG-Net: A Cross-Slice Attention and Skip Gating Network for 2.5D Femur MRI Segmentation

Byunghyun Ko, Anning Tian, Jeongkyu Lee

arXiv preprint · Aug 8, 2025
Accurate segmentation of femur structures from Magnetic Resonance Imaging (MRI) is critical for orthopedic diagnosis and surgical planning but remains challenging due to the limitations of existing 2D and 3D deep learning-based segmentation approaches. In this study, we propose XAG-Net, a novel 2.5D U-Net-based architecture that incorporates pixel-wise cross-slice attention (CSA) and skip attention gating (AG) mechanisms to enhance inter-slice contextual modeling and intra-slice feature refinement. Unlike previous CSA-based models, XAG-Net applies pixel-wise softmax attention across adjacent slices at each spatial location for fine-grained inter-slice modeling. Extensive evaluations demonstrate that XAG-Net surpasses baseline 2D, 2.5D, and 3D U-Net models in femur segmentation accuracy while maintaining computational efficiency. Ablation studies further validate the critical role of the CSA and AG modules, establishing XAG-Net as a promising framework for efficient and accurate femur MRI segmentation.
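
One plausible reading of the pixel-wise cross-slice attention described above, sketched in PyTorch (an illustration under assumptions, not the authors' implementation): each adjacent slice gets a per-pixel scalar score, a softmax over the slice axis turns those scores into per-pixel weights, and the weighted slices are summed.

```python
import torch
import torch.nn as nn

class PixelwiseCrossSliceAttention(nn.Module):
    """Per-pixel softmax over the slice axis; weighted sum fuses adjacent slices."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel scalar score

    def forward(self, slices: torch.Tensor) -> torch.Tensor:
        # slices: (B, S, C, H, W) features from S adjacent slices
        b, s, c, h, w = slices.shape
        scores = self.score(slices.reshape(b * s, c, h, w)).reshape(b, s, 1, h, w)
        weights = torch.softmax(scores, dim=1)  # softmax across slices, per pixel
        return (weights * slices).sum(dim=1)    # (B, C, H, W) fused features

fused = PixelwiseCrossSliceAttention(64)(torch.randn(2, 3, 64, 32, 32))
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```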

Few-Shot Deployment of Pretrained MRI Transformers in Brain Imaging Tasks

Mengyu Li, Guoyao Shen, Chad W. Farris, Xin Zhang

arXiv preprint · Aug 7, 2025
Machine learning using transformers has shown great potential in medical imaging, but its real-world applicability remains limited due to the scarcity of annotated data. In this study, we propose a practical framework for the few-shot deployment of pretrained MRI transformers in diverse brain imaging tasks. By utilizing the Masked Autoencoder (MAE) pretraining strategy on a large-scale, multi-cohort brain MRI dataset comprising over 31 million slices, we obtain highly transferable latent representations that generalize well across tasks and datasets. For high-level tasks such as classification, a frozen MAE encoder combined with a lightweight linear head achieves state-of-the-art accuracy in MRI sequence identification with minimal supervision. For low-level tasks such as segmentation, we propose MAE-FUnet, a hybrid architecture that fuses multiscale CNN features with pretrained MAE embeddings. This model consistently outperforms other strong baselines in both skull stripping and multi-class anatomical segmentation under data-limited conditions. With extensive quantitative and qualitative evaluations, our framework demonstrates efficiency, stability, and scalability, suggesting its suitability for low-resource clinical environments and broader neuroimaging applications.
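
A minimal sketch of the few-shot classification recipe described above (frozen encoder, trainable linear head); the stand-in encoder, 768-dim feature width, and class count are placeholders, not the paper's MAE:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained MAE encoder producing 768-dim features; placeholder only.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 768))
for p in encoder.parameters():
    p.requires_grad = False  # frozen: only the linear head is trained

head = nn.Linear(768, 5)  # e.g., 5 MRI sequence classes (count is illustrative)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(8, 1, 224, 224)  # toy batch of single-channel MRI slices
y = torch.randint(0, 5, (8,))
with torch.no_grad():
    feats = encoder(x)  # (8, 768) frozen features
loss = nn.functional.cross_entropy(head(feats), y)
loss.backward()
optimizer.step()
```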