Page 22 of 1331328 results

LoRA-PT: Low-rank adapting UNETR for hippocampus segmentation using principal tensor singular values and vectors.

He G, Cheng W, Zhu H, Yu G

pubmed logopapers · Sep 1 2025
The hippocampus is an important brain structure involved in various psychiatric disorders, and its automatic and accurate segmentation is vital for studying these diseases. Recently, deep learning-based methods have made significant progress in hippocampus segmentation. However, training deep neural network models requires substantial computational resources, time, and a large amount of labeled training data, which is frequently scarce in medical image segmentation. To address these issues, we propose LoRA-PT, a novel parameter-efficient fine-tuning (PEFT) method that transfers the pre-trained UNETR model from the BraTS2021 dataset to the hippocampus segmentation task. Specifically, LoRA-PT groups the parameter matrices of the transformer structure into three distinct sizes, stacking each group into a third-order tensor. These tensors are decomposed using tensor singular value decomposition to generate low-rank tensors consisting of the principal singular values and vectors, with the remaining singular values and vectors forming the residual tensor. During fine-tuning, only the low-rank tensors (i.e., the principal tensor singular values and vectors) are updated, while the residual tensors remain unchanged. We validated the proposed method on three public hippocampus datasets, and the experimental results show that LoRA-PT outperformed state-of-the-art PEFT methods in segmentation accuracy while significantly reducing the number of parameter updates. Our source code is available at https://github.com/WangangCheng/LoRA-PT/tree/LoRA-PT.
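The principal-versus-residual split can be illustrated with an ordinary matrix SVD. This is only a simplified sketch of the idea (the paper operates on third-order tensors via tensor SVD, and the rank `r` here is a hypothetical hyperparameter):

```python
import numpy as np

def split_principal_residual(W, r):
    """Split a weight matrix W into a rank-r principal part (the trainable
    piece, built from the leading singular values/vectors) and a frozen
    residual. Matrix analogue of the paper's tensor-SVD split."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    principal = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
    residual = W - principal           # frozen during fine-tuning
    return principal, residual

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))
P, R = split_principal_residual(W, r=2)
assert np.allclose(P + R, W)           # the split loses nothing
assert np.linalg.matrix_rank(P) == 2   # principal part is low-rank
```

Fine-tuning then updates only the factors of `P`, so the number of trained parameters scales with `r` rather than with the full matrix size.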

Cross-channel feature transfer 3D U-Net for automatic segmentation of the perilymph and endolymph fluid spaces in hydrops MRI.

Yoo TW, Yeo CD, Lee EJ, Oh IS

pubmed logopapers · Sep 1 2025
The identification of endolymphatic hydrops (EH) using magnetic resonance imaging (MRI) is crucial for understanding inner ear disorders such as Meniere's disease and sudden low-frequency hearing loss. The EH ratio is calculated as the ratio of the endolymphatic fluid space to the perilymphatic fluid space. We propose a novel cross-channel feature transfer (CCFT) 3D U-Net for fully automated segmentation of the perilymphatic and endolymphatic fluid spaces in hydrops MRI. The model exhibits state-of-the-art performance in segmenting the endolymphatic fluid space by transferring magnetic resonance cisternography (MRC) features to HYDROPS-Mi2 (HYbriD of Reversed image Of Positive endolymph signal and native image of positive perilymph Signal multiplied with the heavily T2-weighted MR cisternography). Experimental results using the CCFT module showed that the segmentation performance of the perilymphatic space was 0.9459 for the Dice similarity coefficient (DSC) and 0.8975 for the intersection over union (IOU), and that of the endolymphatic space was 0.8053 for the DSC and 0.6778 for the IOU.
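The two reported overlap metrics are the standard Dice similarity coefficient and intersection over union; a minimal sketch on toy binary masks (the mask values are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection over union between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

a = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
b = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
assert dice(a, b) == 2 * 2 / 6   # 2 overlapping voxels, 3 + 3 total
assert iou(a, b) == 2 / 4        # 2 in the intersection, 4 in the union
```

The same per-voxel overlap counts drive both scores, which is why Dice is systematically higher than IoU on the same segmentation.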

Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment III Trial Revisited: Objective Classification of Traumatic Brain Injury With Brain Imaging Segmentation and Biomarker Levels.

Cheong S, Gupta R, Kadaba Sridhar S, Hall AJ, Frankel M, Wright DW, Sham YY, Samadani U

pubmed logopapers · Sep 1 2025
This post hoc study of the Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment (ProTECT) III trial investigates whether improving traumatic brain injury (TBI) classification, using serum biomarkers (glial fibrillary acidic protein [GFAP] and ubiquitin carboxyl-terminal esterase L1 [UCH-L1]) and algorithmically assessed total lesion volume, could identify a subset of responders to progesterone treatment, beyond broad measures like the Glasgow Coma Scale (GCS) and Glasgow Outcome Scale-Extended (GOS-E), which may fail to capture subtle changes in TBI recovery. Brain lesion volumes on CT scans were quantified using the Brain Lesion Analysis and Segmentation Tool for CT. Patients were classified into true-positive and true-negative groups based on an optimization scheme to determine a threshold that maximizes agreement between radiological assessment and objectively measured lesion volume. True-positives were further categorized into low (> 0.2-10 mL), medium (> 10-50 mL), and high (> 50 mL) lesion volumes for analysis with protein biomarkers and injury severity. Correlation analyses linked Rotterdam scores (RSs) with biomarker levels and lesion volumes, whereas Welch's t-test evaluated biomarker differences between groups and progesterone's effects. The study was conducted at forty-nine level 1 trauma centers in the United States; participants were patients with moderate-to-severe TBI, and the intervention was progesterone. GFAP and UCH-L1 levels were significantly higher in true-positive cases with low to medium lesion volume. Only UCH-L1 differed between progesterone and placebo groups at 48 hours. Both biomarkers and lesion volume in the true-positive group correlated with the RS. No sex-specific or treatment differences were found. This study reaffirms elevated levels of GFAP and UCH-L1 as biomarkers for detecting TBI in patients with brain lesions and for predicting clinical outcomes.
Despite improved classification using CT-imaging segmentation and serum biomarkers, we did not identify a subset of progesterone responders within 24 or 48 hours of progesterone treatment. More rigorous and quantifiable measures for classifying the nature of injury may be needed to enable development of therapeutics, as neither serum markers nor algorithmic CT analysis performed better than the older Rotterdam or GCS metrics.
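The volume strata described in the abstract amount to a small binning function; a sketch (the `"below threshold"` label for volumes of 0.2 mL or less is an assumption, not named in the abstract):

```python
def lesion_volume_category(volume_ml):
    """Bin an algorithmically measured lesion volume (mL) into the study's
    true-positive strata: low (> 0.2-10), medium (> 10-50), high (> 50)."""
    if volume_ml > 50:
        return "high"
    if volume_ml > 10:
        return "medium"
    if volume_ml > 0.2:
        return "low"
    return "below threshold"  # <= 0.2 mL: no measurable lesion (assumed label)

assert lesion_volume_category(5.0) == "low"
assert lesion_volume_category(25.0) == "medium"
assert lesion_volume_category(80.0) == "high"
```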

Deep Learning for Automated 3D Assessment of Rotator Cuff Muscle Atrophy and Fat Infiltration prior to Total Shoulder Arthroplasty.

Levin JM, Satir OB, Hurley ET, Colasanti C, Becce F, Terrier A, Eghbali P, Goetti P, Klifto C, Anakwenze O, Frankle MA, Namdari S, Büchler P

pubmed logopapers · Sep 1 2025
Rotator cuff muscle pathology affects outcomes following total shoulder arthroplasty, yet current assessment methods lack reliability in quantifying muscle atrophy and fat infiltration. We developed a deep learning-based model for automated segmentation of rotator cuff muscles on computed tomography (CT) and propose a T-score classification of volumetric muscle atrophy. We further characterized distinct atrophy phenotypes, 3D fat infiltration percentage (3DFI%), and anterior-posterior (AP) balance, which were compared between healthy controls, anatomic total shoulder arthroplasty (aTSA), and reverse total shoulder arthroplasty (rTSA) patients. A total of 952 shoulder CT scans were included (762 controls, 103 undergoing aTSA for glenohumeral osteoarthritis, and 87 undergoing rTSA for cuff tear arthropathy). A deep learning model was developed to allow automated segmentation of the supraspinatus (SS), subscapularis (SC), infraspinatus (IS), and teres minor (TM). Muscle volumes were normalized to scapula volume, and control muscle volumes were referenced to calculate T-scores for each muscle. T-scores were classified as no atrophy (>-1.0), moderate atrophy (-1.0 to -2.5), and severe atrophy (<-2.5). 3DFI% was quantified as the proportion of fat within each muscle using Hounsfield unit thresholds. The T-scores, 3DFI%, and AP balance were compared between the three cohorts. The aTSA cohort had significantly greater atrophy in all muscles compared to control (p<0.001), whereas the rTSA cohort had significantly greater atrophy in SS, SC, and IS than aTSA (p<0.001). In the aTSA cohort, the most common phenotype was SS<sub>severe</sub>/SC<sub>moderate</sub>/IS+TM<sub>moderate</sub>, while in the rTSA cohort it was SS<sub>severe</sub>/SC<sub>moderate</sub>/IS+TM<sub>severe</sub>. The aTSA group had significantly higher 3DFI% compared to controls for all muscles (p<0.001), while the rTSA cohort had significantly higher 3DFI% than the aTSA and control cohorts for all muscles (p<0.001).
Additionally, the aTSA cohort had a significantly lower AP muscle volume ratio (1.06 vs. 1.14, p<0.001), whereas the rTSA group had a significantly higher AP muscle volume ratio than the control cohort (1.31 vs. 1.14, p<0.001). Our study demonstrates successful development of a deep learning model for automated volumetric assessment of rotator cuff muscle atrophy, 3DFI% and AP balance on shoulder CT scans. We found that aTSA patients had significantly greater muscle atrophy and 3DFI% than controls, while the rTSA patients had the most severe muscle atrophy and 3DFI%. Additionally, distinct phenotypes of muscle atrophy and AP muscle balance exist in aTSA and rTSA that warrant further investigation with regards to shoulder arthroplasty outcomes.
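The T-score grading reads like a z-score of the normalized muscle volume against the control reference distribution; a sketch under that assumption (the control values are invented):

```python
import numpy as np

def t_score(volume_ratio, control_ratios):
    """T-score of a muscle volume (normalized to scapula volume) against a
    control reference distribution, computed z-score style."""
    mu, sd = np.mean(control_ratios), np.std(control_ratios)
    return (volume_ratio - mu) / sd

def atrophy_grade(t):
    """Classify per the abstract: > -1.0 none, -1.0 to -2.5 moderate, < -2.5 severe."""
    if t > -1.0:
        return "no atrophy"
    if t >= -2.5:
        return "moderate atrophy"
    return "severe atrophy"

controls = np.array([0.10, 0.12, 0.11, 0.13, 0.09, 0.11])  # toy reference
assert atrophy_grade(t_score(0.11, controls)) == "no atrophy"
assert atrophy_grade(-3.0) == "severe atrophy"
```

Referencing each muscle against the control distribution is what lets the same cut-offs apply across muscles of very different absolute volumes.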

FocalTransNet: A Hybrid Focal-Enhanced Transformer Network for Medical Image Segmentation.

Liao M, Yang R, Zhao Y, Liang W, Yuan J

pubmed logopapers · Sep 1 2025
CNNs have demonstrated superior performance in medical image segmentation. To overcome the limitation of using only a local receptive field, previous work has attempted to integrate Transformers into convolutional network components such as encoders, decoders, or skip connections. However, these methods can only establish long-distance dependencies for some specific patterns and usually neglect the loss of fine-grained details during downsampling in multi-scale feature extraction. To address these issues, we present a novel hybrid Transformer network called FocalTransNet. Specifically, we construct a focal-enhanced (FE) Transformer module by introducing dense cross-connections into a CNN-Transformer dual-path structure and deploy the FE Transformer throughout the entire encoder. Different from existing hybrid networks that employ embedding or stacking strategies, the proposed model allows for a comprehensive extraction and deep fusion of both local and global features at different scales. Besides, we propose a symmetric patch merging (SPM) module for downsampling, which can retain fine-grained details by establishing a specific information compensation mechanism. We evaluated the proposed method on four different medical image segmentation benchmarks. The proposed method outperforms previous state-of-the-art convolutional networks, Transformers, and hybrid networks. The code for FocalTransNet is publicly available at https://github.com/nemanjajoe/FocalTransNet.
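The detail-preserving intent of patch-merging downsampling can be illustrated with a plain 2x2 space-to-depth rearrangement, which halves spatial resolution without discarding any pixel (a generic analogue, not the paper's SPM module itself):

```python
import numpy as np

def patch_merge(x):
    """2x2 space-to-depth: fold each 2x2 spatial patch into the channel
    dimension, so downsampling loses no pixel values."""
    h, w, c = x.shape
    x = x.reshape(h // 2, 2, w // 2, 2, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // 2, w // 2, 4 * c)

x = np.arange(16).reshape(4, 4, 1)
y = patch_merge(x)
assert y.shape == (2, 2, 4)                        # half resolution, 4x channels
assert sorted(y.reshape(-1)) == list(range(16))    # every pixel survives
```

Contrast this with strided pooling, which throws away three of every four values; detail retention is exactly the property the SPM module is built around.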

Automated rating of Fazekas scale in fluid-attenuated inversion recovery MRI for ischemic stroke or transient ischemic attack using machine learning.

Jeon ET, Kim SM, Jung JM

pubmed logopapers · Sep 1 2025
White matter hyperintensities (WMH) are commonly assessed using the Fazekas scale, a subjective visual grading system. Despite the emergence of deep learning models for automatic WMH grading, their application in stroke patients remains limited. This study aimed to develop and validate an automatic segmentation and grading model for WMH in stroke patients, utilizing spatial-probabilistic methods. We developed a two-step deep learning pipeline to predict Fazekas scale scores from T2-weighted FLAIR images. First, WMH segmentation was performed using a residual neural network based on the U-Net architecture. Then, Fazekas scale grading was carried out using a 3D convolutional neural network trained on the segmented WMH probability volumes. A total of 471 stroke patients from three different sources were included in the analysis. The performance metrics included area under the precision-recall curve (AUPRC), Dice similarity coefficient, and absolute error for WMH volume prediction. In addition, agreement analysis and quadratic weighted kappa were calculated to assess the accuracy of the Fazekas scale predictions. The WMH segmentation model achieved an AUPRC of 0.81 (95% CI, 0.55-0.95) and a Dice similarity coefficient of 0.73 (95% CI, 0.49-0.87) in the internal test set. The mean absolute error between the true and predicted WMH volumes was 3.1 ml (95% CI, 0.0 ml-15.9 ml), with no significant variation across Fazekas scale categories. The agreement analysis demonstrated strong concordance, with an R-squared value of 0.91, a concordance correlation coefficient of 0.96, and a systematic difference of 0.33 ml in the internal test set, and 0.94, 0.97, and 0.40 ml, respectively, in the external validation set. In predicting Fazekas scores, the 3D convolutional neural network achieved quadratic weighted kappa values of 0.951 for regression tasks and 0.956 for classification tasks in the internal test set, and 0.898 and 0.956, respectively, in the external validation set. 
The proposed deep learning pipeline demonstrated robust performance in automatic WMH segmentation and Fazekas scale grading from FLAIR images in stroke patients. This approach offers a reliable and efficient tool for evaluating WMH burden, which may assist in predicting future vascular events.
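Quadratic weighted kappa, the agreement statistic reported for the ordinal Fazekas predictions, follows a standard formula and is easy to reproduce (toy ratings invented for illustration):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted kappa for ordinal ratings in 0..n_classes-1.
    Disagreements are penalized by the squared distance between grades."""
    O = np.zeros((n_classes, n_classes))          # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    i, j = np.meshgrid(np.arange(n_classes), np.arange(n_classes), indexing="ij")
    W = (i - j) ** 2 / (n_classes - 1) ** 2       # quadratic penalty weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # chance agreement
    return 1.0 - (W * O).sum() / (W * E).sum()

y_true = [0, 1, 2, 3, 2, 1]
assert quadratic_weighted_kappa(y_true, y_true, 4) == 1.0       # perfect agreement
assert quadratic_weighted_kappa(y_true, [1, 2, 3, 3, 2, 1], 4) < 1.0
```

Because the penalty grows quadratically with grade distance, confusing Fazekas 1 with 3 costs far more than confusing 1 with 2, which suits ordinal scales.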

Pulmonary Biomechanics in COPD: Imaging Techniques and Clinical Applications.

Aguilera SM, Chaudhary MFA, Gerard SE, Reinhardt JM, Bodduluri S

pubmed logopapers · Sep 1 2025
The respiratory system depends on complex biomechanical processes to enable gas exchange. The mechanical properties of the lung parenchyma, airways, vasculature, and surrounding structures play an essential role in overall ventilation efficacy. These complex biomechanical processes, however, are significantly altered in chronic obstructive pulmonary disease (COPD) due to emphysematous destruction of lung parenchyma, chronic airway inflammation, and small airway obstruction. Recent advancements in computed tomography (CT) and magnetic resonance imaging (MRI) acquisition techniques, combined with sophisticated image post-processing algorithms and deep neural network integration, have enabled comprehensive quantitative assessment of lung structure, tissue deformation, and lung function at the tissue level. These methods have led to better phenotyping and therapeutic strategies and have refined our understanding of the pathological processes that compromise pulmonary function in COPD. In this review, we discuss recent developments in imaging and image processing methods for studying pulmonary biomechanics, with specific focus on clinical applications for COPD, including the assessment of regional ventilation, planning of endobronchial valve treatment, prediction of disease onset and progression, sizing of lungs for transplantation, and guiding mechanical ventilation. These advanced image-based biomechanical measurements, when combined with clinical expertise, play a critical role in disease management and personalized therapeutic interventions for patients with COPD.

Multidisciplinary Consensus Prostate Contours on Magnetic Resonance Imaging: Educational Atlas and Reference Standard for Artificial Intelligence Benchmarking.

Song Y, Dornisch AM, Dess RT, Margolis DJA, Weinberg EP, Barrett T, Cornell M, Fan RE, Harisinghani M, Kamran SC, Lee JH, Li CX, Liss MA, Rusu M, Santos J, Sonn GA, Vidic I, Woolen SA, Dale AM, Seibert TM

pubmed logopapers · Sep 1 2025
Evaluation of artificial intelligence (AI) algorithms for prostate segmentation is challenging because ground truth is lacking. We aimed to: (1) create a reference standard data set with precise prostate contours by expert consensus, and (2) evaluate various AI tools against this standard. We obtained prostate magnetic resonance imaging cases from six institutions from the Qualitative Prostate Imaging Consortium. A panel of 4 experts (2 genitourinary radiologists and 2 prostate radiation oncologists) meticulously developed consensus prostate segmentations on axial T<sub>2</sub>-weighted series. We evaluated the performance of 6 AI tools (3 commercially available and 3 academic) using Dice scores, distance from reference contour, and volume error. The panel achieved consensus prostate segmentation on each slice of all 68 patient cases included in the reference data set. We present 2 patient examples to serve as contouring guides. Depending on the AI tool, median Dice scores (across patients) ranged from 0.80 to 0.94 for whole prostate segmentation. For a typical (median) patient, AI tools had a mean error over the prostate surface ranging from 1.3 to 2.4 mm. They maximally deviated 3.0 to 9.4 mm outside the prostate and 3.0 to 8.5 mm inside the prostate for a typical patient. Error in prostate volume measurement for a typical patient ranged from 4.3% to 31.4%. We established an expert consensus benchmark for prostate segmentation. The best-performing AI tools have typical accuracy greater than that reported for radiation oncologists using computed tomography scans (the most common clinical approach for radiation therapy planning). Physician review remains essential to detect occasional major errors.
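Of the three reported metrics, volume error is the simplest to reproduce; a minimal sketch (percent error against the reference volume, with invented values):

```python
def volume_error_pct(pred_ml, ref_ml):
    """Absolute prostate-volume error as a percentage of the reference
    (consensus) volume."""
    return abs(pred_ml - ref_ml) / ref_ml * 100.0

# e.g. an AI contour measuring 37.5 mL against a 50 mL consensus volume
assert volume_error_pct(37.5, 50.0) == 25.0
```

The abstract's reported range of 4.3% to 31.4% for a typical patient shows how widely tools can diverge on this single scalar even when Dice scores look respectable.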

DeepNuParc: A novel deep clustering framework for fine-scale parcellation of brain nuclei using diffusion MRI tractography.

He H, Zhu C, Zhang L, Liu Y, Xu X, Chen Y, Zekelman L, Rushmore J, Rathi Y, Makris N, O'Donnell LJ, Zhang F

pubmed logopapers · Sep 1 2025
Brain nuclei are clusters of anatomically distinct neurons that serve as important hubs for processing and relaying information in various neural circuits. Fine-scale parcellation of the brain nuclei is vital for a comprehensive understanding of their anatomico-functional correlations. Diffusion MRI tractography is an advanced imaging technique that can estimate the brain's white matter structural connectivity to potentially reveal the topography of the nuclei of interest for studying their subdivisions. In this work, we present a deep clustering pipeline, namely DeepNuParc, to perform automated, fine-scale parcellation of brain nuclei using diffusion MRI tractography. First, we incorporate a newly proposed deep learning approach to enable accurate segmentation of the nuclei of interest directly on the dMRI data. Next, we design a novel streamline clustering-based structural connectivity feature for a robust representation of voxels within the nuclei. Finally, we improve the popular joint dimensionality reduction and k-means clustering approach to enable nuclei parcellation at a finer scale. We demonstrate DeepNuParc on two important brain structures, i.e. the amygdala and the thalamus, that are known to have multiple anatomically and functionally distinct nucleus subdivisions. Experimental results show that DeepNuParc enables consistent parcellation of the nuclei into multiple parcels across multiple subjects and achieves good correspondence with the widely used coarse-scale atlases. Our code is available at https://github.com/HarlandZZC/deep_nuclei_parcellation.
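The final parcellation step builds on k-means clustering of per-voxel connectivity features; a generic sketch of that step (the feature matrix here is a toy stand-in, not the paper's streamline-based features):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means over a (voxels x features) matrix: assign each voxel
    to its nearest center, then move each center to its cluster mean."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# two well-separated groups of voxel features -> two clean parcels
X = np.vstack([np.zeros((5, 3)), np.ones((5, 3)) * 10])
labels = kmeans(X, k=2)
assert len(set(labels[:5])) == 1 and len(set(labels[5:])) == 1
assert labels[0] != labels[5]
```

DeepNuParc's contribution is in what `X` contains (streamline-clustering connectivity features) and in the joint dimensionality reduction applied before this step, rather than in the clustering loop itself.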

MSA2-Net: Utilizing Self-Adaptive Convolution Module to Extract Multi-Scale Information in Medical Image Segmentation

Chao Deng, Xiaosen Li, Xiao Qin

arxiv logopreprint · Sep 1 2025
The nnUNet segmentation framework adeptly adjusts most hyperparameters in training scripts automatically, but it overlooks the tuning of internal hyperparameters within the segmentation network itself, which constrains the model's ability to generalize. Addressing this limitation, this study presents a novel Self-Adaptive Convolution Module that dynamically adjusts the size of the convolution kernels depending on the unique fingerprints of different datasets. This adjustment enables the MSA2-Net, when equipped with this module, to proficiently capture both global and local features within the feature maps. The Self-Adaptive Convolution Module is strategically integrated into two key components of the MSA2-Net: the Multi-Scale Convolution Bridge and the Multi-Scale Amalgamation Decoder. In the MSConvBridge, the module enhances the ability to refine outputs from various stages of the CSWin Transformer during the skip connections, effectively eliminating redundant data that could potentially impair the decoder's performance. Simultaneously, the MSADecoder, utilizing the module, excels in capturing detailed information of organs varying in size during the decoding phase. This capability ensures that the decoder's output closely reproduces the intricate details within the feature maps, thus yielding highly accurate segmentation images. MSA2-Net, bolstered by this advanced architecture, has demonstrated exceptional performance, achieving Dice coefficient scores of 86.49%, 92.56%, 93.37%, and 92.98% on the Synapse, ACDC, Kvasir, and Skin Lesion Segmentation (ISIC2017) datasets, respectively. This underscores MSA2-Net's robustness and precision in medical image segmentation tasks across various datasets.
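The idea of sizing kernels from a dataset fingerprint can be sketched with a deliberately simple rule; everything here (the spacing-based fingerprint, the cut-offs, the function name) is a hypothetical stand-in for the module's actual adjustment, which the abstract does not specify:

```python
import numpy as np

def adaptive_kernel_size(spacings_mm, k_min=3, k_max=7):
    """Pick an odd convolution kernel size from a toy dataset fingerprint
    (median voxel spacing): finer spacing -> larger kernel to keep a
    comparable physical receptive field. Illustrative rule only."""
    median_spacing = float(np.median(spacings_mm))
    if median_spacing < 1.0:
        return k_max
    if median_spacing < 2.0:
        return 5
    return k_min

assert adaptive_kernel_size([0.5, 0.6, 0.7]) == 7
assert adaptive_kernel_size([1.2, 1.5, 1.8]) == 5
assert adaptive_kernel_size([3.0, 3.5, 4.0]) == 3
```

The point of such a rule is that it runs once per dataset, mirroring how nnUNet derives its own plan from dataset statistics before training.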
