
Combination of 2D and 3D nnU-Net for ground glass opacity segmentation in CT images of Post-COVID-19 patients.

Nguyen QH, Hoang DA, Pham HV

PubMed · Jun 20 2025
The COVID-19 pandemic has had a significant impact on global health, underscoring the need for effective management of post-recovery symptoms. In this context, ground glass opacity (GGO) in lung computed tomography (CT) scans is a critical indicator for early intervention. Recent challenges have sought to refine and benchmark GGO segmentation techniques on lung CT images of patients recovering from COVID-19. Although many challenge entries use the nnU-Net architecture, its generic configuration does not fully address the characteristics of GGO regions, such as infected-area delineation, irregular shapes, and fuzzy boundaries. This work advances the nnU-Net framework to accurately segment GGO in lung CT scans of post-COVID-19 patients. We propose a two-stage segmentation approach that combines 2D and 3D nnU-Net models for lung and opacity segmentation and incorporates an attention mechanism. The combined models improve automatic segmentation accuracy when different loss functions are used during training. In experiments, the proposed model's DSC ranks fifth among the compared methods, and its sensitivity is second highest, indicating a higher true segmentation rate than most competing methods. The proposed method achieved a Hausdorff95 of 54.566, surface Dice of 0.7193, sensitivity of 0.7528, and specificity of 0.7749. Compared with state-of-the-art methods, it improves segmentation of infected areas, and the combined 2D/3D model has been deployed in a real-world case study, demonstrating the capacity to detect lung lesions comprehensively. The boundary loss function also yields more precise segmentation on low-resolution images, and segmenting the lung region first reduces the volume of images to be processed and the training cost.
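
As a concrete illustration of the boundary-loss idea mentioned above, the sketch below combines a Dice term with the distance-map boundary penalty of Kervadec et al.; this is a common formulation, not necessarily the authors' exact loss, and all names (`signed_distance_map`, `combined_loss`, the weight `alpha`) are illustrative.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(gt_mask: np.ndarray) -> np.ndarray:
    """Signed distance to the mask boundary: negative inside, positive outside."""
    inside = distance_transform_edt(gt_mask)
    outside = distance_transform_edt(1 - gt_mask)
    return outside - inside

def combined_loss(probs: torch.Tensor, gt_mask: np.ndarray, alpha: float = 0.01):
    """Dice loss plus a distance-weighted boundary penalty (illustrative weighting)."""
    gt = torch.from_numpy(gt_mask.astype(np.float32))
    dist = torch.from_numpy(signed_distance_map(gt_mask).astype(np.float32))
    inter = (probs * gt).sum()
    dice_loss = 1 - 2 * inter / (probs.sum() + gt.sum() + 1e-6)
    boundary_loss = (probs * dist).mean()  # penalizes probability mass far outside the GT
    return dice_loss + alpha * boundary_loss
```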

Three-dimensional U-Net with transfer learning improves automated whole brain delineation from MRI brain scans of rats, mice, and monkeys.

Porter VA, Hobson BA, D'Almeida AJ, Bales KL, Lein PJ, Chaudhari AJ

PubMed · Jun 20 2025
Automated whole-brain delineation (WBD) techniques often struggle to generalize across pre-clinical studies due to variations in animal models, magnetic resonance imaging (MRI) scanners, and tissue contrasts. We developed a 3D U-Net neural network for WBD pre-trained on organophosphate intoxication (OPI) rat brain MRI scans. We used transfer learning (TL) to adapt this OPI-pretrained network to other animal models: a rat model of Alzheimer's disease (AD), a mouse model of tetramethylenedisulfotetramine (TETS) intoxication, and a titi monkey model of social bonding. We assessed the OPI-pretrained 3D U-Net across animal models under three conditions: (1) direct application to each dataset; (2) transfer learning; and (3) training disease-specific U-Net models. For each condition, training dataset size (TDS) was optimized, and output WBDs were compared to manual segmentations for accuracy. Reported values are median [min-max]. The OPI-pretrained 3D U-Net (TDS = 100) achieved the best accuracy on the OPI test dataset, with a Dice coefficient (DC) of 0.987 [0.977-0.992] and Hausdorff distance (HD) of 0.86 [0.55-1.27] mm. TL improved generalization across all models (AD, TDS = 40: DC = 0.987 [0.977-0.992], HD = 0.72 [0.54-1.00] mm; TETS, TDS = 10: DC = 0.992 [0.984-0.993], HD = 0.40 [0.31-0.50] mm; monkey, TDS = 8: DC = 0.977 [0.968-0.979], HD = 3.03 [2.19-3.91] mm), with performance comparable to disease-specific networks. The OPI-pretrained 3D U-Net with TL thus matched disease-specific networks while requiring less training data (TDS ≤ 40 scans) across all models. Future work will focus on developing a multi-region delineation pipeline for pre-clinical brain MRI data, using the proposed WBD as an initial step.
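
A minimal sketch of the TL step, with MONAI's 3D U-Net standing in for the paper's network (the authors' exact architecture, checkpoint format, and training settings are not given here); `opi_pretrained.pt` and `target_loader` are placeholders.

```python
import torch
from monai.networks.nets import UNet

# Stand-in 3D U-Net; load weights pretrained on the OPI rat dataset.
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2))
model.load_state_dict(torch.load("opi_pretrained.pt", map_location="cpu"))

# Fine-tune all layers at a low learning rate on the small target dataset
# (e.g., TDS <= 40 scans); masks are integer class labels per voxel.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
model.train()
for epoch in range(50):
    for mri, mask in target_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(mri), mask)
        loss.backward()
        optimizer.step()
```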

Robust Radiomic Signatures of Intervertebral Disc Degeneration from MRI.

McSweeney T, Tiulpin A, Kowlagi N, Määttä J, Karppinen J, Saarakkala S

PubMed · Jun 20 2025
A retrospective analysis. The aim of this study was to identify a robust radiomic signature from deep learning segmentations for intervertebral disc (IVD) degeneration classification. Low back pain (LBP) is the most common musculoskeletal symptom worldwide, and IVD degeneration is an important contributing factor. To improve the quantitative phenotyping of IVD degeneration from T2-weighted magnetic resonance imaging (MRI) and better understand its relationship with LBP, multiple shape and intensity features have been investigated. IVD radiomics have been less studied but could reveal sub-visual imaging characteristics of IVD degeneration. We used data from Northern Finland Birth Cohort 1966 members who underwent lumbar spine T2-weighted MRI scans at age 45-47 (n=1397). We used a deep learning model to segment the lumbar spine IVDs, extracted 737 radiomic features, and calculated the IVD height index and peak signal intensity difference. Intraclass correlation coefficients across image and mask perturbations were calculated to identify robust features. Sparse partial least squares discriminant analysis was used to train a Pfirrmann grade classification model. The radiomics model had a balanced accuracy of 76.7% (73.1-80.3%) and Cohen's kappa of 0.70 (0.67-0.74), compared to 66.0% (62.0-69.9%) and 0.55 (0.51-0.59) for an IVD height index and peak signal intensity model. 2D sphericity and interquartile range emerged as radiomics-based features that were robust and highly correlated with Pfirrmann grade (Spearman's correlation coefficients of -0.72 and -0.77, respectively). Based on our findings, these radiomic signatures could serve as alternatives to the conventional indices, representing a significant advance in the automated quantitative phenotyping of IVD degeneration from standard-of-care MRI.
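
The robustness filter reduces to computing, per feature, an ICC across perturbed copies of each disc and keeping features above a cutoff; a hedged sketch using pingouin follows (the 0.75 threshold, ICC variant, and column names are assumptions, not the paper's settings).

```python
import pandas as pd
import pingouin as pg

def robust_features(df: pd.DataFrame, threshold: float = 0.75) -> list[str]:
    """df: long format with 'disc_id', 'perturbation', and one column per feature."""
    keep = []
    for feat in df.columns.drop(["disc_id", "perturbation"]):
        icc = pg.intraclass_corr(data=df, targets="disc_id",
                                 raters="perturbation", ratings=feat)
        # ICC2: two-way random effects, absolute agreement, single measurement
        icc2 = icc.loc[icc["Type"] == "ICC2", "ICC"].item()
        if icc2 >= threshold:
            keep.append(feat)
    return keep
```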

Artificial intelligence-based tumor size measurement on mammography: agreement with pathology and comparison with human readers' assessments across multiple imaging modalities.

Kwon MR, Kim SH, Park GE, Mun HS, Kang BJ, Kim YT, Yoon I

PubMed · Jun 20 2025
To evaluate the agreement between artificial intelligence (AI)-based tumor size measurements of breast cancer and final pathology, and to compare these results with those of other imaging modalities. This retrospective study included 925 women (mean age, 55.3 ± 11.6 years) with 936 breast cancers who underwent digital mammography, breast ultrasound, and magnetic resonance imaging before breast cancer surgery. AI-based tumor size measurement was performed on post-processed mammographic images, outlining areas with AI abnormality scores of 10%, 50%, and 90%. Absolute agreement between AI-based tumor sizes, imaging modalities, and histopathology was assessed using intraclass correlation coefficient (ICC) analysis. Concordant and discordant cases between AI measurements and histopathologic examination were compared. Tumor size at an abnormality score of 50% showed the highest agreement with histopathologic examination (ICC = 0.54, 95% confidence interval [CI]: 0.49-0.59), comparable to mammography (ICC = 0.54, 95% CI: 0.48-0.60, p = 0.40). For ductal carcinoma in situ and human epidermal growth factor receptor 2-positive cancers, AI showed higher agreement than mammography (ICC = 0.76, 95% CI: 0.67-0.84 and ICC = 0.73, 95% CI: 0.52-0.85). Overall, 52.0% (487/936) of cases were discordant; discordance was more common in younger patients with dense breasts, multifocal malignancies, lower abnormality scores, and differing imaging characteristics. AI-based tumor size measurements at an abnormality score of 50% showed moderate agreement with histopathology but were discordant in more than half of the cases. While comparable to mammography, these limitations emphasize the need for further refinement and research.
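
One plausible reading of the size measurement is to threshold the AI abnormality map at a given score, keep the largest connected component, and take its longest diameter; the sketch below assumes this, and the pixel spacing and use of the major axis length are illustrative choices, not the study's protocol.

```python
import numpy as np
from skimage.measure import label, regionprops

def tumor_size_mm(score_map: np.ndarray, threshold: float = 0.5,
                  pixel_spacing_mm: float = 0.07) -> float:
    """Longest diameter (mm) of the largest region with score >= threshold."""
    labeled = label(score_map >= threshold)
    regions = regionprops(labeled)
    if not regions:
        return 0.0
    largest = max(regions, key=lambda r: r.area)
    return largest.major_axis_length * pixel_spacing_mm
```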

Radiological data processing system: lifecycle management and annotation.

Bobrovskaya T, Vasilev Y, Vladzymyrskyy A, Omelyanskaya O, Kosov P, Krylova E, Ponomarenko A, Burtsev T, Savkina E, Kodenko M, Kasimov S, Medvedev K, Kovalchuk A, Zinchenko V, Rumyantsev D, Kazarinova V, Semenov S, Arzamasov K

PubMed · Jun 20 2025
To develop a platform for automated processing of radiological datasets that operates independently of medical information systems and maintains datasets throughout their lifecycle, from data retrieval to annotation and presentation. The platform employs a modular structure in which modules can operate independently or in combination, with each module sequentially processing the output of the preceding one. It incorporates a local database containing textual study protocols, a radiology information system (RIS), and storage for labeled studies and reports, and is equipped with local permanent and temporary file storage for dataset processing. The platform's modules support data search, extraction, anonymization, annotation, generation of annotated files, and standardized documentation of datasets. The platform provides a comprehensive workflow for radiological dataset management and is currently operational at the Center for Diagnostics and Telemedicine. Future development will focus on expanding platform functionality.
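
The chained-module design reduces to each module consuming the previous module's output; a toy sketch, with module names invented for illustration:

```python
from typing import Any, Callable

Module = Callable[[Any], Any]

def run_pipeline(modules: list[Module], data: Any) -> Any:
    """Run modules in sequence; each consumes the preceding module's output."""
    for module in modules:
        data = module(data)
    return data

# e.g., run_pipeline([search, extract, anonymize, annotate, document], query)
```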

Impact of ablation on regional strain from 4D computed tomography in the left atrium.

Mehringer N, Severance L, Park A, Ho G, McVeigh E

PubMed · Jun 20 2025
Ablation for atrial fibrillation delivers therapeutic energy to arrhythmogenic substrate in the left atrium (LA) myocardium, resulting in scar tissue. Although global LA function typically improves after ablation, the injured tissue is stiffer and non-contractile, and the local functional impact of ablation has not been thoroughly investigated. This study retrospectively analyzed the LA mechanics of 15 subjects who received four-dimensional computed tomography (4DCT) scans before and after ablation for atrial fibrillation. LA volumes were automatically segmented at every frame by a trained neural network and converted into surface meshes. Local endocardial strain was computed at a resolution of 2 mm from the deforming meshes. The LA endocardial surface was automatically divided into five walls, and further into 24 sub-segments using the left atrial positioning system. Intraoperative notes gathered during the ablation procedure indicated which regions received ablative treatment. At an average of 18 months after ablation, strain decreased by 16.3% in the septal wall and by 18.3% in the posterior wall. In subjects imaged in sinus rhythm both before and after the procedure, ablation reduced regional strain by 15.3% (p = 0.012). Post-ablation strain maps showed spatial patterns of reduced strain that matched the ablation pattern. This study demonstrates the capability of 4DCT to capture high-resolution changes in left atrial strain in response to tissue damage and explores quantification of regionally reduced LA function due to scar tissue.
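
Strain from deforming surface meshes can be approximated from edge lengths: with fixed connectivity across frames, each edge's strain relative to a reference frame is (L - L0) / L0. The sketch below is a simplified stand-in for the paper's 2 mm-resolution method; the array layouts are assumptions.

```python
import numpy as np

def edge_strain(verts_ref: np.ndarray, verts_t: np.ndarray,
                edges: np.ndarray) -> np.ndarray:
    """verts_*: (N, 3) vertex positions; edges: (E, 2) vertex-index pairs."""
    l0 = np.linalg.norm(verts_ref[edges[:, 0]] - verts_ref[edges[:, 1]], axis=1)
    lt = np.linalg.norm(verts_t[edges[:, 0]] - verts_t[edges[:, 1]], axis=1)
    return (lt - l0) / l0  # positive = stretch, negative = contraction
```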

Automatic Multi-Task Segmentation and Vulnerability Assessment of Carotid Plaque on Contrast-Enhanced Ultrasound Images and Videos via Deep Learning.

Hu B, Zhang H, Jia C, Chen K, Tang X, He D, Zhang L, Gu S, Chen J, Zhang J, Wu R, Chen SL

PubMed · Jun 20 2025
Intraplaque neovascularization (IPN) within carotid plaque is a crucial indicator of plaque vulnerability. Contrast-enhanced ultrasound (CEUS) is a valuable tool for assessing IPN by evaluating the location and quantity of microbubbles within the carotid plaque, but this task is typically performed by experienced radiologists. Here we propose a deep learning-based multi-task model for automatic segmentation and IPN grade classification of carotid plaque on CEUS images and videos, and compare its performance with that of radiologists. To mirror the clinical practice of radiologists, who often use dynamic CEUS videos to track microbubble flow and identify IPN, we also develop a workflow for plaque vulnerability assessment using CEUS videos. Our multi-task model outperformed individually trained segmentation and classification models in IPN grade classification on CEUS images, achieving a segmentation Dice coefficient of 84.64% and a classification accuracy of 81.67%. It also surpassed junior and mid-level radiologists, providing more accurate IPN grading of carotid plaque on CEUS images. On CEUS videos, the model achieved a classification accuracy of 80.00% for IPN grading. Overall, our multi-task model delivers automatic, accurate, objective, and efficient IPN grading in both CEUS images and videos, holding significant promise for enhancing the clinical diagnosis of plaque vulnerability associated with IPN in CEUS evaluations.
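
A schematic of the multi-task layout, with a shared encoder feeding a segmentation decoder and an IPN-grade classification head; layer sizes are arbitrary and the paper's actual architecture is not specified here.

```python
import torch.nn as nn

class MultiTaskCEUS(nn.Module):
    def __init__(self, n_grades: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(                     # plaque mask logits
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )
        self.cls_head = nn.Sequential(                     # IPN grade logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_grades),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)
```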

Segmentation of clinical imagery for improved epidural stimulation to address spinal cord injury

Matelsky, J. K., Sharma, P., Johnson, E. C., Wang, S., Boakye, M., Angeli, C., Forrest, G. F., Harkema, S. J., Tenore, F.

medRxiv preprint · Jun 20 2025
Spinal cord injury (SCI) can severely impair motor and autonomic function, with long-term consequences for quality of life. Epidural stimulation has emerged as a promising intervention, offering partial recovery by activating neural circuits below the injury. To make this therapy effective in practice, precise placement of stimulation electrodes is essential -- and that requires accurate segmentation of spinal cord structures in MRI data. We present a protocol for manual segmentation tailored to SCI anatomy and evaluate a deep learning approach using a U-Net architecture to automate this segmentation process. Our approach yields accurate, efficient segmentations that identify potential electrode placement sites with high fidelity. Preliminary results suggest that this framework can accelerate SCI MRI analysis and improve planning for epidural stimulation, helping bridge the gap between advanced neurotechnologies and real-world clinical application through faster surgeries and more accurate electrode placement.
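
A hedged sketch of how such a U-Net might be applied slice-by-slice to produce a 3D cord mask (the preprint may train on 2D slices or full volumes; `model`, the axis convention, and the single-channel sigmoid output are assumptions):

```python
import numpy as np
import torch

@torch.no_grad()
def segment_volume(model: torch.nn.Module, volume: np.ndarray) -> np.ndarray:
    """volume: (Z, H, W) float MRI array; returns a binary (Z, H, W) mask."""
    model.eval()
    mask = np.zeros_like(volume, dtype=np.uint8)
    for z in range(volume.shape[0]):
        slc = torch.from_numpy(volume[z]).float()[None, None]  # (1, 1, H, W)
        prob = torch.sigmoid(model(slc))[0, 0]
        mask[z] = (prob > 0.5).numpy().astype(np.uint8)
    return mask
```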

An Open-Source Generalizable Deep Learning Framework for Automated Corneal Segmentation in Anterior Segment Optical Coherence Tomography Imaging

Kandakji, L., Liu, S., Balal, S., Moghul, I., Allan, B., Tuft, S., Gore, D., Pontikos, N.

medRxiv preprint · Jun 20 2025
Purpose: To develop a deep learning model - Cornea nnU-Net Extractor (CUNEX) - for full-thickness corneal segmentation of anterior segment optical coherence tomography (AS-OCT) images and evaluate its utility in artificial intelligence (AI) research.

Methods: We trained and evaluated CUNEX using nnU-Net on 600 AS-OCT images (CSO MS-39) from 300 patients: 100 normal, 100 keratoconus (KC), and 100 Fuchs endothelial corneal dystrophy (FECD) eyes. To assess generalizability, we externally validated CUNEX on 1,168 AS-OCT images from an infectious keratitis dataset acquired on a different device (Casia SS-1000). We benchmarked CUNEX against two recent models, CorneaNet and ScLNet. We then applied CUNEX to our dataset of 194,599 scans from 37,499 patients as preprocessing for a classification model, evaluating whether segmentation improves AI prediction of age, sex, and disease staging (KC and FECD).

Results: CUNEX achieved Dice similarity coefficient (DSC) and intersection over union (IoU) scores ranging from 94-95% and 90-99%, respectively, across healthy, KC, and FECD eyes. This was similar to ScLNet (within 3%) but better than CorneaNet (8-35% lower). On external validation, CUNEX maintained high performance (DSC 83%; IoU 71%) while ScLNet (DSC 14%; IoU 8%) and CorneaNet (DSC 16%; IoU 9%) failed to generalize. Unexpectedly, segmentation minimally impacted classification accuracy except for sex prediction, where accuracy dropped from 81% to 68%, suggesting sex-related features may lie outside the cornea.

Conclusion: CUNEX delivers the first open-source generalizable corneal segmentation model using the latest framework, supporting its use in clinical analysis and AI workflows across diseases and imaging platforms. It is available at https://github.com/lkandakji/CUNEX.
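
The DSC and IoU figures above are simple overlap ratios between predicted and reference binary masks; a minimal NumPy version:

```python
import numpy as np

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)
```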

Development and validation of an AI-driven radiomics model using non-enhanced CT for automated severity grading in chronic pancreatitis.

Chen C, Zhou J, Mo S, Li J, Fang X, Liu F, Wang T, Wang L, Lu J, Shao C, Bian Y

PubMed · Jun 19 2025
To develop and validate the chronic pancreatitis CT severity model (CATS), an artificial intelligence (AI)-based tool leveraging automated 3D segmentation and radiomics analysis of non-enhanced CT scans for objective severity stratification in chronic pancreatitis (CP). This retrospective study encompassed patients with recurrent acute pancreatitis (RAP) and CP from June 2016 to May 2020. A 3D convolutional neural network segmented non-enhanced CT scans, extracting 1843 radiomic features to calculate the radiomics score (Rad-score). CATS was formulated using multivariable logistic regression and validated in a subsequent cohort from June 2020 to April 2023. Overall, 2054 patients with RAP and CP were included in the training (n = 927), validation (n = 616), and external test (n = 511) sets. CP grade I and II patients accounted for 300 (14.61%) and 1754 (85.39%), respectively. The Rad-score significantly correlated with the acinus-to-stroma ratio (p = 0.023; OR, -2.44). The CATS model demonstrated high discriminatory performance in differentiating CP severity grades, achieving an area under the curve (AUC) of 0.96 (95% CI: 0.94-0.98) in the validation cohort and 0.88 (95% CI: 0.81-0.90) in the test cohort. CATS-predicted grades correlated with exocrine insufficiency (all p < 0.05) and showed significant prognostic differences (all p < 0.05). CATS outperformed radiologists in detecting calcifications, identifying all minute calcifications missed by radiologists. CATS, developed from non-enhanced CT with AI, accurately predicts CP severity, reflects disease morphology, and forecasts short- to medium-term prognosis, offering a significant advancement in CP management.

Question: Existing CP severity assessments rely on semi-quantitative CT evaluations and multi-modality imaging, leading to inconsistency and inaccuracy in early diagnosis and prognosis prediction.

Findings: The AI-driven CATS model, using non-enhanced CT, achieved high accuracy in grading CP severity and correlated with histopathological fibrosis markers.

Clinical relevance: CATS provides a cost-effective, widely accessible tool for precise CP severity stratification, enabling early intervention, personalized management, and improved outcomes without contrast agents or invasive biopsies.
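
A simplified stand-in for the CATS construction step: multivariable logistic regression over radiomic features, evaluated by AUC. Feature extraction, Rad-score weighting, and variable selection are omitted; `X` and `y` are placeholders for the radiomics matrix and the binary grade I/II labels.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X: (n_patients, n_features) radiomics matrix; y: 0/1 CP grade labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Test AUC = {auc:.2f}")
```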