Page 75 of 134 (1,333 results)

Establishment and evaluation of an automatic multi-sequence MRI segmentation model of primary central nervous system lymphoma based on the nnU-Net deep learning network method.

Wang T, Tang X, Du J, Jia Y, Mou W, Lu G

PubMed | Jul 1 2025
Accurate quantitative assessment using gadolinium-contrast magnetic resonance imaging (MRI) is crucial in therapy planning, surveillance and prognostic assessment of primary central nervous system lymphoma (PCNSL). The present study aimed to develop a multimodal artificial intelligence deep learning segmentation model to address the challenges associated with traditional 2D measurements and manual volume assessments in MRI. Data from 49 pathologically-confirmed patients with PCNSL from six Chinese medical centers were analyzed, and regions of interest were manually segmented on contrast-enhanced T1-weighted and T2-weighted MRI scans for each patient, followed by fully automated voxel-wise segmentation of tumor components using a 3-dimensional convolutional deep neural network. Furthermore, the efficiency of the model was evaluated using practical indicators, and its consistency and accuracy were compared with those of traditional methods. The performance of the models was assessed using the Dice similarity coefficient (DSC). The Mann-Whitney U test was used to compare continuous clinical variables and the χ<sup>2</sup> test was used for comparisons between categorical clinical variables. T1WI sequences exhibited the optimal performance (training Dice: 0.923, testing Dice: 0.830, outer validation Dice: 0.801), while T2WI showed relatively poor performance (training Dice: 0.761, testing Dice: 0.647, outer validation Dice: 0.643). In conclusion, the automatic multi-sequence MRI segmentation model for PCNSL in the present study displayed a high spatial overlap ratio and tumor volumes similar to those of routine manual segmentation, indicating its significant potential.
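The Dice similarity coefficient (DSC) used to score the models above is the standard voxel-overlap metric, ranging from 0 (no overlap) to 1 (perfect overlap). A minimal NumPy sketch — not the authors' implementation, with hypothetical toy masks — is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Voxel-wise Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 3D example: two partially overlapping cubes
a = np.zeros((10, 10, 10)); a[2:6, 2:6, 2:6] = 1
b = np.zeros((10, 10, 10)); b[3:7, 3:7, 3:7] = 1
print(round(dice_coefficient(a, b), 3))  # 0.422
```

By this 2|A∩B|/(|A|+|B|) definition, a testing Dice of 0.830 indicates substantial agreement between automated and manual tumor masks.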

Foundation Model and Radiomics-Based Quantitative Characterization of Perirenal Fat in Renal Cell Carcinoma Surgery.

Mei H, Chen H, Zheng Q, Yang R, Wang N, Jiao P, Wang X, Chen Z, Liu X

PubMed | Jul 1 2025
To quantitatively characterize the degree of perirenal fat adhesion using artificial intelligence in renal cell carcinoma. This retrospective study analyzed a total of 596 patients from three cohorts, utilizing corticomedullary phase computed tomography urography (CTU) images. The nnUNet v2 network combined with numerical computation was employed to segment the perirenal fat region. Pyradiomics algorithms and a computed tomography foundation model were used to extract features from CTU images separately, creating single-modality predictive models for identifying perirenal fat adhesion. By concatenating the Pyradiomics and foundation model features, an early fusion multimodal predictive signature was developed. The prognostic performance of the single-modality and multimodality models was further validated in two independent cohorts. The nnUNet v2 segmentation model accurately segmented both kidneys. The neural network and thresholding approach effectively delineated the perirenal fat region. Single-modality models based on radiomic and computed tomography foundation features demonstrated a certain degree of accuracy in diagnosing and identifying perirenal fat adhesion, while the early feature fusion diagnostic model outperformed the single-modality models. Also, the perirenal fat adhesion score showed a positive correlation with surgical time and intraoperative blood loss. AI-based radiomics and foundation models can accurately identify the degree of perirenal fat adhesion and have the potential to be used for surgical risk assessment.
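The "early fusion" signature described above concatenates per-patient feature vectors before classification. A schematic NumPy sketch — the feature dimensions (107 Pyradiomics features, a 512-dimensional foundation-model embedding) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 8
radiomics_feats = rng.normal(size=(n_patients, 107))   # e.g. a Pyradiomics feature vector
foundation_feats = rng.normal(size=(n_patients, 512))  # e.g. a CT foundation-model embedding

# Early fusion: concatenate the per-patient vectors, then train one classifier on the result
fused = np.concatenate([radiomics_feats, foundation_feats], axis=1)
print(fused.shape)  # (8, 619)
```

The design choice here is to fuse at the feature level (before any model sees the data) rather than averaging the predictions of two single-modality models, which lets the classifier learn cross-modality interactions.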

MCAUnet: a deep learning framework for automated quantification of body composition in liver cirrhosis patients.

Wang J, Xia S, Zhang J, Wang X, Zhao C, Zheng W

PubMed | Jul 1 2025
Traditional methods for measuring body composition in CT scans rely on labor-intensive manual delineation, which is time-consuming and imprecise. This study proposes a deep learning-driven framework, MCAUnet, for accurate and automated quantification of body composition and comprehensive survival analysis in cirrhotic patients. A total of 11,362 L3-level lumbar CT slices were collected to train and validate the segmentation model. The proposed model incorporates an attention mechanism from the channel perspective, enabling adaptive fusion of critical channel features. Experimental results demonstrate that our approach achieves an average Dice coefficient of 0.952 for visceral fat segmentation, significantly outperforming existing segmentation models. Based on the quantified body composition, sarcopenic visceral obesity (SVO) was defined, and an association model was developed to analyze the relationship between SVO and survival rates in cirrhotic patients. The study revealed that 3-year and 5-year survival rates of SVO patients were significantly lower than those of non-SVO patients. Regression analysis further validated the strong correlation between SVO and mortality in cirrhotic patients. In summary, the MCAUnet framework provides a novel, precise, and automated tool for body composition quantification and survival analysis in cirrhotic patients, offering potential support for clinical decision-making and personalized treatment strategies.

Deep learning-based clinical decision support system for intracerebral hemorrhage: an imaging-based AI-driven framework for automated hematoma segmentation and trajectory planning.

Gan Z, Xu X, Li F, Kikinis R, Zhang J, Chen X

PubMed | Jul 1 2025
Intracerebral hemorrhage (ICH) remains a critical neurosurgical emergency with high mortality and long-term disability. Despite advancements in minimally invasive techniques, procedural precision remains limited by hematoma complexity and resource disparities, particularly in underserved regions where 68% of global ICH cases occur. Therefore, the authors aimed to introduce a deep learning-based decision support and planning system to democratize surgical planning and reduce operator dependence. A retrospective cohort of 347 patients (31,024 CT slices) from a single hospital (March 2016-June 2024) was analyzed. The framework integrated nnU-Net-based hematoma and skull segmentation, CT reorientation via ocular landmarks (mean angular correction 20.4° [SD 8.7°]), safety zone delineation with dual anatomical corridors, and trajectory optimization prioritizing maximum hematoma traversal and critical structure avoidance. A validated scoring system was implemented for risk stratification. With the artificial intelligence (AI)-driven system, the automated segmentation accuracy reached clinical-grade performance (Dice similarity coefficient 0.90 [SD 0.14] for hematoma and 0.99 [SD 0.035] for skull), with strong interrater reliability (intraclass correlation coefficient 0.91). For trajectory planning of supratentorial hematomas, the system achieved a low-risk trajectory in 80.8% (252/312) and a moderate-risk trajectory in 15.4% (48/312) of patients, while replanning was required due to high-risk designations in 3.8% of patients (12/312). This AI-driven system demonstrated robust efficacy for supratentorial ICH, addressing 60% of prevalent hemorrhage subtypes. While limitations remain in infratentorial hematomas, this novel automated hematoma segmentation and surgical planning system could be helpful in assisting less-experienced neurosurgeons with limited resources in primary healthcare settings.

Artificial intelligence image analysis for Hounsfield units in preoperative thoracolumbar CT scans: an automated screening for osteoporosis in patients undergoing spine surgery.

Feng E, Jayasuriya NM, Nathani KR, Katsos K, Machlab LA, Johnson GW, Freedman BA, Bydon M

PubMed | Jul 1 2025
This study aimed to develop an artificial intelligence (AI) model for automatically detecting Hounsfield unit (HU) values at the L1 vertebra in preoperative thoracolumbar CT scans. This model serves as a screening tool for osteoporosis in patients undergoing spine surgery, offering an alternative to traditional bone mineral density measurement methods like dual-energy x-ray absorptiometry. The authors utilized two CT scan datasets, comprising 501 images, which were split into training, validation, and test subsets. The nnU-Net framework was used for segmentation, followed by an algorithm to calculate HU values from the L1 vertebra. The model's performance was validated against manual HU calculations by expert raters on 56 CT scans. Statistical measures included the Dice coefficient, Pearson correlation coefficient, intraclass correlation coefficient (ICC), and Bland-Altman plots to assess the agreement between AI and human-derived HU measurements. The AI model achieved a high Dice coefficient of 0.91 for vertebral segmentation. The Pearson correlation coefficient between AI-derived HU and human-derived HU values was 0.96, indicating strong agreement. ICC values for interrater reliability were 0.95 and 0.94 for raters 1 and 2, respectively. The mean difference between AI and human HU values was 7.0 HU, with limits of agreement ranging from -21.1 to 35.2 HU. A paired t-test showed no significant difference between AI and human measurements (p = 0.21). The AI model demonstrated strong agreement with human experts in measuring HU values, validating its potential as a reliable tool for automated osteoporosis screening in spine surgery patients. This approach can enhance preoperative risk assessment and perioperative bone health optimization. Future research should focus on external validation and inclusion of diverse patient demographics to ensure broader applicability.
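The HU measurement itself reduces to averaging CT intensities over the segmented L1 mask. A minimal sketch — the toy volume and mask are hypothetical, and the 110 HU cutoff shown is an illustrative screening threshold from the wider opportunistic-screening literature, not a value from this study:

```python
import numpy as np

def mean_hu(ct_volume: np.ndarray, vertebra_mask: np.ndarray) -> float:
    """Mean Hounsfield units over the voxels of a segmented vertebral body."""
    return float(ct_volume[vertebra_mask.astype(bool)].mean())

# Toy volume: air background (-1000 HU) with a trabecular bone region around 150 HU
ct = np.full((5, 5, 5), -1000.0)
mask = np.zeros((5, 5, 5), dtype=bool)
mask[1:4, 1:4, 1:4] = True
ct[mask] = 150.0

hu = mean_hu(ct, mask)
print(hu)  # 150.0
# Illustrative opportunistic-screening rule: flag L1 HU below a chosen threshold
print(hu < 110)  # False
```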

Artificial Intelligence in Obstetric and Gynecological MR Imaging.

Saida T, Gu W, Hoshiai S, Ishiguro T, Sakai M, Amano T, Nakahashi Y, Shikama A, Satoh T, Nakajima T

PubMed | Jul 1 2025
This review explores the significant progress and applications of artificial intelligence (AI) in obstetrics and gynecological MRI, charting its development from foundational algorithmic techniques to deep learning strategies and advanced radiomics. This review features research published over the last few years that has used AI with MRI to identify specific conditions such as uterine leiomyosarcoma, endometrial cancer, cervical cancer, ovarian tumors, and placenta accreta. In addition, it covers studies on the application of AI for segmentation and quality improvement in obstetrics and gynecology MRI. The review also outlines the existing challenges and envisions future directions for AI research in this domain. The growing accessibility of extensive datasets across various institutions and the application of multiparametric MRI are significantly enhancing the accuracy and adaptability of AI. This progress has the potential to enable more accurate and efficient diagnosis, offering opportunities for personalized medicine in the field of obstetrics and gynecology.

Deep Learning-enhanced Opportunistic Osteoporosis Screening in Ultralow-Voltage (80 kV) Chest CT: A Preliminary Study.

Li Y, Liu S, Zhang Y, Zhang M, Jiang C, Ni M, Jin D, Qian Z, Wang J, Pan X, Yuan H

PubMed | Jul 1 2025
To explore the feasibility of deep learning (DL)-enhanced, fully automated bone mineral density (BMD) measurement using the ultralow-voltage 80 kV chest CT scans performed for lung cancer screening. This study involved 987 patients who underwent 80 kV chest and 120 kV lumbar CT from January to July 2024. Patients were collected from six CT scanners and divided into the training, validation, and test sets 1 and 2 (561:177:112:137). Four convolutional neural networks (CNNs) were employed for automated segmentation (3D VB-Net and SCN), region of interest extraction (3D VB-Net), and BMD calculation (DenseNet and ResNet) of the target vertebrae (T12-L2). The BMD values of T12-L2 were obtained using 80 and 120 kV quantitative CT (QCT), the latter serving as the standard reference. Linear regression and Bland-Altman analyses were used to compare BMD values between 120 kV QCT and 80 kV CNNs, and between 120 kV QCT and 80 kV QCT. Receiver operating characteristic curve analysis was used to assess the diagnostic performance of the 80 kV CNNs and 80 kV QCT in distinguishing osteoporosis and low BMD from normal BMD. Linear regression and Bland-Altman analyses revealed a stronger correlation (R<sup>2</sup>=0.991-0.998 vs. 0.990-0.991, P<0.001) and better agreement (mean error, -1.36 to 1.62 vs. 1.72 to 2.27 mg/cm<sup>3</sup>; 95% limits of agreement, -9.73 to 7.01 vs. -5.71 to 10.19 mg/cm<sup>3</sup>) for BMD between 120 kV QCT and 80 kV CNNs than between 120 kV QCT and 80 kV QCT. The areas under the curve of the 80 kV CNNs and 80 kV QCT in detecting osteoporosis and low BMD were 0.997-1.000 and 0.997-0.998, and 0.998-1.000 and 0.997, respectively. The DL method could achieve fully automated BMD calculation for opportunistic osteoporosis screening with high accuracy using ultralow-voltage 80 kV chest CT performed for lung cancer screening.
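The Bland-Altman analysis used above to compare 80 kV CNN and 120 kV QCT measurements reports the mean difference (bias) and 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch with hypothetical paired BMD values, not the study's data:

```python
import numpy as np

def bland_altman(ref: np.ndarray, test: np.ndarray):
    """Bias and 95% limits of agreement for paired measurements (test vs. reference)."""
    diff = test - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired BMD values (mg/cm^3) from the reference and index methods
ref = np.array([120.0, 95.0, 140.0, 110.0, 88.0])
test = np.array([121.5, 93.0, 142.0, 111.0, 87.5])
bias, lo, hi = bland_altman(ref, test)
print(round(bias, 2), round(lo, 2), round(hi, 2))  # 0.4 -2.81 3.61
```

Narrow limits of agreement, as reported for the 80 kV CNNs, mean an individual patient's automated BMD is expected to fall close to the reference value.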

Cross-domain subcortical brain structure segmentation algorithm based on low-rank adaptation fine-tuning SAM.

Sui Y, Hu Q, Zhang Y

PubMed | Jul 1 2025
Accurate and robust segmentation of anatomical structures in brain MRI provides a crucial basis for the subsequent observation, analysis, and treatment planning of various brain diseases. Deep learning foundation models trained and designed on large-scale natural scene image datasets experience significant performance degradation when applied to subcortical brain structure segmentation in MRI, limiting their direct applicability in clinical diagnosis. This paper proposes a subcortical brain structure segmentation algorithm based on Low-Rank Adaptation (LoRA) to fine-tune SAM (Segment Anything Model), freezing SAM's image encoder and applying LoRA to approximate low-rank matrix updates to the encoder's weights, while also fine-tuning SAM's lightweight prompt encoder and mask decoder. The fine-tuned model's learnable parameters (5.92 MB) occupy only 6.39% of the original model's parameter size (92.61 MB). For training, model preheating is employed to stabilize the fine-tuning process. During inference, adaptive prompt learning with point or box prompts is introduced to enhance the model's accuracy for arbitrary brain MRI segmentation. This interactive prompt learning approach provides clinicians with a means of intelligent segmentation for deep brain structures, effectively addressing the challenges of limited data labels and high manual annotation costs in medical image segmentation. Experiments on five MRI datasets (IBSR, MALC, LONI LPBA, Hammers, and CANDI) across various segmentation scenarios, including cross-domain settings with inference samples from diverse MRI datasets as well as supervised fine-tuning settings, demonstrate the proposed algorithm's generalization and effectiveness compared with current mainstream and supervised segmentation algorithms.
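The LoRA idea referenced above keeps the pretrained weight matrix W frozen and learns only a low-rank update BA, which is why the trainable fraction is so small (6.39% in this paper). A schematic NumPy sketch with illustrative dimensions, not SAM's actual layer sizes:

```python
import numpy as np

rng = np.random.default_rng(42)
d_out, d_in, r = 256, 256, 8  # rank r much smaller than d; sizes are illustrative

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init => BA = 0 at start)

W_adapted = W + B @ A  # effective weight during fine-tuning; only A and B receive gradients

# Parameter count: full matrix vs. low-rank adapter
full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # r*(d_in + d_out)/(d_in*d_out) = 0.0625 here
```

Initializing B to zero makes the adapted model exactly equal the pretrained one before training, so fine-tuning starts from the foundation model's behavior.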

Medical Image Segmentation Using Advanced Unet: VMSE-Unet and VM-Unet CBAM+

Sayandeep Kanrar, Raja Piyush, Qaiser Razi, Debanshi Chakraborty, Vikas Hassija, GSS Chalapathi

arXiv preprint | Jul 1 2025
In this paper, we present VMSE-Unet and VM-Unet CBAM+, two cutting-edge deep learning architectures designed to enhance medical image segmentation. Our approach integrates Squeeze-and-Excitation (SE) and Convolutional Block Attention Module (CBAM) techniques into the traditional VM-Unet framework, significantly improving segmentation accuracy, feature localization, and computational efficiency. Both models show superior performance compared with the baseline VM-Unet across multiple datasets. Notably, VMSE-Unet achieves the highest accuracy, IoU, precision, and recall while maintaining low loss values. It also exhibits exceptional computational efficiency, with faster inference times and lower memory usage on both GPU and CPU. Overall, the study suggests that the enhanced VMSE-Unet architecture is a valuable tool for medical image analysis. These findings highlight its potential for real-world clinical applications, emphasizing the importance of further research to optimize accuracy, robustness, and computational efficiency.
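The Squeeze-and-Excitation block integrated here recalibrates feature channels: global average pooling "squeezes" each channel to a scalar, and a small gated bottleneck "excites" the informative ones. A framework-free NumPy sketch — shapes and weights are illustrative, not the paper's configuration:

```python
import numpy as np

def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-Excitation over a (C, H, W) feature map."""
    s = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)              # excitation: bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # sigmoid channel weights in (0, 1)
    return x * gate[:, None, None]           # recalibrate: rescale each channel

rng = np.random.default_rng(0)
C, r = 16, 4  # channels and reduction-bottleneck size (illustrative)
x = rng.normal(size=(C, 8, 8))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (16, 8, 8)
```

Because the gate is a per-channel scalar in (0, 1), the block adds very few parameters while letting the network emphasize or suppress whole feature channels.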

Deep Learning Models for CT Segmentation of Invasive Pulmonary Aspergillosis, Mucormycosis, Bacterial Pneumonia and Tuberculosis: A Multicentre Study.

Li Y, Huang F, Chen D, Zhang Y, Zhang X, Liang L, Pan J, Tan L, Liu S, Lin J, Li Z, Hu G, Chen H, Peng C, Ye F, Zheng J

PubMed | Jul 1 2025
The differential diagnosis of invasive pulmonary aspergillosis (IPA), pulmonary mucormycosis (PM), bacterial pneumonia (BP) and pulmonary tuberculosis (PTB) is challenging due to overlapping clinical and imaging features. Manual CT lesion segmentation is time-consuming; deep-learning (DL)-based segmentation models offer a promising solution, yet disease-specific models for these infections remain underexplored. We aimed to develop and validate dedicated CT segmentation models for IPA, PM, BP and PTB to enhance diagnostic accuracy. Methods: Retrospective multi-centre data (115 IPA, 53 PM, 130 BP, 125 PTB) were used for training/internal validation, with 21 IPA, 8 PM, 30 BP and 31 PTB cases for external validation. Expert-annotated lesions served as ground truth. An improved 3D U-Net architecture was employed for segmentation, with preprocessing steps including normalisation, cropping and data augmentation. Performance was evaluated using Dice coefficients. Results: Internal validation achieved Dice scores of 78.83% (IPA), 93.38% (PM), 80.12% (BP) and 90.47% (PTB). External validation showed slightly reduced but robust performance: 75.09% (IPA), 77.53% (PM), 67.40% (BP) and 80.07% (PTB). The PM model demonstrated exceptional generalisability, scoring 83.41% on IPA data. Cross-validation revealed mutual applicability, with the IPA and PTB models achieving >75% Dice for each other's lesions. BP segmentation showed lower but clinically acceptable performance (>72%), likely due to complex radiological patterns. Disease-specific DL segmentation models exhibited high accuracy, particularly for PM and PTB. While the IPA and BP models require refinement, all demonstrated cross-disease utility, suggesting immediate clinical value for preliminary lesion annotation. Future efforts should enhance datasets and optimise models for intricate cases.