Page 259 of 370 (3697 results)

A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.

Liu Q, Jiang J, Wu K, Zhang Y, Sun N, Luo J, Ba T, Lv A, Liu C, Yin Y, Yang Z, Xu H

PubMed · Jun 27, 2025
To develop a deep learning model based on Residual Networks (ResNet) for the automated and accurate identification of contrast phases in abdominal CT images. A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for model development, and an independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), portal venous phase (PVP), and delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, or DP, then further classifying AP images into EAP or LAP. Model performance and strategy comparison were evaluated. In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP). The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy. Automated and accurate identification of contrast phases in abdominal CT images provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data. Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.
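The two-step decision logic described in this abstract is easy to sketch. The stub stage functions below are placeholders for the trained ResNet classifiers, which are not public, so everything here is illustrative:

```python
# Hypothetical sketch of the two-step strategy: a first-stage classifier
# separates AP / PVP / DP, and a second-stage classifier refines AP into
# EAP / LAP. The stub "models" stand in for the trained ResNets.

def stage1(features):
    # Placeholder for the ResNet that labels AP, PVP, or DP.
    return features["coarse"]

def stage2(features):
    # Placeholder for the ResNet that splits AP into EAP or LAP.
    return features["fine"]

def two_step_classify(features):
    """Return one of EAP, LAP, PVP, DP using the two-step strategy."""
    coarse = stage1(features)
    if coarse == "AP":
        return stage2(features)  # refine the arterial phase
    return coarse
```

The one-step strategy the paper compares against would instead ask a single classifier to emit all four labels directly.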

A multi-view CNN model to predict resolving of new lung nodules on follow-up low-dose chest CT.

Wang J, Zhang X, Tang W, van Tuinen M, Vliegenthart R, van Ooijen P

PubMed · Jun 27, 2025
New, intermediate-sized nodules in lung cancer screening undergo follow-up CT, but some of these will resolve. We evaluated the performance of a multi-view convolutional neural network (CNN) in distinguishing resolving and non-resolving new, intermediate-sized lung nodules. This retrospective study utilized data on 344 intermediate-sized nodules (50-500 mm³) in 250 participants from the NELSON (Dutch-Belgian Randomized Lung Cancer Screening) trial. We implemented four-fold cross-validation for model training and testing. A multi-view CNN model was developed by combining three two-dimensional (2D) CNN models and one three-dimensional (3D) CNN model. We used 2D, 2.5D, and 3D models for comparison. The models' performance was evaluated using sensitivity, specificity, and area under the ROC curve (AUC). Specificity, indicating what percentage of non-resolving nodules requiring follow-up can be correctly predicted, was maximized. Among all nodules, 18.3% (63) were resolving. The multi-view CNN model achieved an AUC of 0.81, with a mean sensitivity of 0.63 (SD, 0.15) and a mean specificity of 0.93 (SD, 0.02). The model significantly improved performance compared to 2D, 2.5D, or 3D models (p < 0.05). Under the premise of specificity greater than 90% (meaning < 10% of non-resolving nodules are incorrectly identified as resolving), follow-up CT in 14% of individuals could be prevented. The multi-view CNN model achieved high specificity in discriminating new intermediate nodules that would need follow-up CT by identifying non-resolving nodules. After further validation and optimization, this model may assist with decision-making when new intermediate nodules are found in lung cancer screening. The multi-view CNN-based model has the potential to reduce unnecessary follow-up scans when new nodules are detected, aiding radiologists in making earlier, more informed decisions.
Predicting the resolution of new intermediate lung nodules in lung cancer screening CT is a challenge. Our multi-view CNN model showed an AUC of 0.81, a specificity of 0.93, and a sensitivity of 0.63 at the nodule level. The multi-view model demonstrated a significant improvement in AUC compared to the three 2D models, one 2.5D model, and one 3D model.
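The operating-point choice described above — maximizing sensitivity subject to a specificity floor of 90% — can be sketched as a simple threshold search over model scores. This is a generic illustration, not the authors' code:

```python
# Illustrative threshold search: keep specificity at or above a floor
# while maximizing sensitivity. Scores are resolving-probabilities per
# nodule; labels use 1 = resolving, 0 = non-resolving.

def pick_threshold(scores, labels, min_specificity=0.90):
    """Return (threshold, sensitivity, specificity) with the highest
    sensitivity among thresholds meeting the specificity floor, or
    None if no threshold qualifies."""
    best = None
    for t in sorted(set(scores)):
        pred = [1 if s >= t else 0 for s in scores]
        tp = sum(p and l for p, l in zip(pred, labels))
        tn = sum((not p) and (not l) for p, l in zip(pred, labels))
        fp = sum(p and (not l) for p, l in zip(pred, labels))
        fn = sum((not p) and l for p, l in zip(pred, labels))
        spec = tn / (tn + fp) if tn + fp else 0.0
        sens = tp / (tp + fn) if tp + fn else 0.0
        if spec >= min_specificity and (best is None or sens > best[1]):
            best = (t, sens, spec)
    return best
```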

Machine learning-based radiomic nomogram from unenhanced computed tomography and clinical data predicts bowel resection in incarcerated inguinal hernia.

Li DL, Zhu L, Liu SL, Wang ZB, Liu JN, Zhou XM, Hu JL, Liu RQ

PubMed · Jun 27, 2025
Early identification of bowel resection risks is crucial for patients with incarcerated inguinal hernia (IIH). However, the prompt detection of these risks remains a significant challenge. Advancements in radiomic feature extraction and machine learning algorithms have paved the way for innovative diagnostic approaches to assess IIH more effectively. To devise a sophisticated radiomic-clinical model to evaluate bowel resection risks in IIH patients, thereby enhancing clinical decision-making processes. This single-center retrospective study analyzed 214 IIH patients randomized into training (n = 161) and test (n = 53) sets (3:1). Radiologists segmented hernia sac-trapped bowel volumes of interest (VOIs) on computed tomography images. Radiomic features extracted from the VOIs generated Rad-scores, which were combined with clinical data to construct a nomogram. The nomogram's performance was evaluated against standalone clinical and radiomic models in both cohorts. A total of 1561 radiomic features were extracted from the VOIs. After dimensionality reduction, 13 radiomic features were used with eight machine learning algorithms to develop the radiomic model. The logistic regression algorithm was ultimately selected for its effectiveness, showing an area under the curve (AUC) of 0.828 [95% confidence interval (CI): 0.753-0.902] in the training set and 0.791 (95% CI: 0.668-0.915) in the test set. The comprehensive nomogram, incorporating clinical indicators, showcased strong predictive capabilities for assessing bowel resection risks in IIH patients, with AUCs of 0.864 (95% CI: 0.800-0.929) and 0.800 (95% CI: 0.669-0.931) for the training and test sets, respectively. Decision curve analysis revealed the integrated model's superior performance over standalone clinical and radiomic approaches. This innovative radiomic-clinical nomogram has proven effective in predicting bowel resection risks in IIH patients and has substantially aided clinical decision-making.
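How a radiomic nomogram combines a Rad-score with clinical indicators can be sketched with a logistic model. The feature names and coefficients below are invented for illustration; the study used 13 selected radiomic features and institution-fitted coefficients:

```python
import math

# Hedged sketch: a Rad-score is a weighted sum of selected radiomic
# features, and the nomogram is a logistic regression over that score
# plus clinical variables. All names and weights here are hypothetical.

def rad_score(features, weights):
    """Linear combination of the selected radiomic features."""
    return sum(weights[k] * features[k] for k in weights)

def nomogram_probability(rad, clinical, coef, intercept):
    """Predicted probability of bowel resection from the logistic model."""
    z = intercept + coef["rad"] * rad
    for name, value in clinical.items():
        z += coef[name] * value
    return 1.0 / (1.0 + math.exp(-z))
```

A higher Rad-score raises the predicted risk monotonically when its coefficient is positive, which is what the nomogram visualizes as point scales.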

White Box Modeling of Self-Determined Sequence Exercise Program Among Sarcopenic Older Adults: Uncovering a Novel Strategy Overcoming Decline of Skeletal Muscle Area.

Wei M, He S, Meng D, Lv Z, Guo H, Yang G, Wang Z

PubMed · Jun 27, 2025
Resistance exercise, Taichi exercise, and the hybrid exercise program consisting of the two aforementioned methods have been demonstrated to increase the skeletal muscle mass of older individuals with sarcopenia. However, the exercise sequence has not been comprehensively investigated. Therefore, we designed a self-determined sequence exercise program, incorporating resistance exercises, Taichi, and the hybrid exercise program, to overcome the decline of skeletal muscle area and reverse sarcopenia in older individuals. Ninety-one older patients with sarcopenia between the ages of 60 and 75 completed this 24-week, three-stage randomized controlled trial: the self-determined sequence exercise program group (n = 31), the resistance training group (n = 30), and the control group (n = 30). We used quantitative computed tomography to measure the effects of the different intervention protocols on participants' skeletal muscle mass. Participants' demographic variables were analyzed using one-way analysis of variance and chi-square tests, and experimental data were examined using repeated-measures analysis of variance. Furthermore, we utilized a Markov model to explain the effectiveness of the exercise programs across the three-stage intervention and explainable artificial intelligence to predict whether the intervention programs can reverse sarcopenia. Repeated-measures analysis of variance indicated statistically significant Group × Time interactions in L3 skeletal muscle density, L3 skeletal muscle area, muscle fat infiltration, handgrip strength, and relative skeletal muscle mass index. The stacking model exhibited the best accuracy (84.5%) and the best F1-score (68.8%) compared to other algorithms. In the self-determined sequence exercise program group, strength training contributed most to the reversal of sarcopenia.
A self-determined sequence exercise program can improve skeletal muscle area among sarcopenic older people. Based on our stacking model, we can accurately predict whether sarcopenia in older people can be reversed. The trial was registered at ClinicalTrials.gov (TRN: NCT05694117). Our findings indicate that such tailored exercise interventions can substantially benefit sarcopenic patients, and our stacking model provides an accurate predictive tool for assessing the reversibility of sarcopenia in older adults. This approach not only enhances individual health outcomes but also informs future development of targeted exercise programs to mitigate age-related muscle decline.
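The Markov-model idea in the abstract above — tracking how the probability of remaining sarcopenic evolves across the three intervention stages — amounts to repeatedly applying a transition matrix. The states and transition probabilities below are invented for illustration, not the study's estimates:

```python
# Toy two-state Markov chain over the three-stage intervention:
# at each stage a "sarcopenic" participant reverses with some
# probability, and reversal is treated as absorbing here.

def step(dist, transition):
    """One Markov step: new_dist[j] = sum_i dist[i] * T[i][j]."""
    states = list(dist)
    return {
        j: sum(dist[i] * transition[i][j] for i in states)
        for j in states
    }

def run_stages(initial, transition, n_stages=3):
    """Apply the transition matrix once per intervention stage."""
    dist = dict(initial)
    for _ in range(n_stages):
        dist = step(dist, transition)
    return dist
```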

Regional Cortical Thinning and Area Reduction Are Associated with Cognitive Impairment in Hemodialysis Patients.

Chen HJ, Qiu J, Qi Y, Guo Y, Zhang Z, Qin H, Wu F, Chen F

PubMed · Jun 27, 2025
Magnetic resonance imaging (MRI) has shown that patients with end-stage renal disease have decreased gray matter volume and density. However, the cortical area and thickness in patients on hemodialysis are uncertain, and the relationship between patients' cognition and cortical alterations remains unclear. Thirty-six hemodialysis patients and 25 age- and sex-matched healthy controls were enrolled in this study and underwent brain MRI scans and neuropsychological assessments. According to the Desikan-Killiany atlas, the brain is divided into 68 regions. Using FreeSurfer software, we analyzed the differences in cortical area and thickness of each region between groups. Machine learning-based classification was also used to differentiate hemodialysis patients from healthy individuals. The patients exhibited decreased cortical thickness in the frontal and temporal regions, including the left bankssts, left lingual gyrus, left pars triangularis, bilateral superior temporal gyrus, and right pars opercularis, and decreased cortical area in the left rostral middle frontal gyrus, left superior frontal gyrus, right fusiform gyrus, right pars orbitalis, and right superior frontal gyrus. Decreased cortical thickness was positively associated with poorer scores on the neuropsychological tests and increased uric acid and urea levels. The cortical thickness pattern allowed differentiation of the patients from the controls with 96.7% accuracy (97.5% sensitivity, 95.0% specificity, 97.5% precision, and AUC: 0.983) in the support vector machine analysis. Patients on hemodialysis exhibited decreased cortical area and thickness, which was associated with poorer cognition and uremic toxins.
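The figures reported for the SVM (accuracy, sensitivity, specificity, precision) all derive from one confusion matrix; a minimal sketch of that arithmetic, independent of the classifier itself:

```python
# Standard binary-classification metrics from confusion-matrix counts:
# tp = true positives, tn = true negatives,
# fp = false positives, fn = false negatives.

def metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "precision": tp / (tp + fp),     # positive predictive value
    }
```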

Association of Covert Cerebrovascular Disease With Falls Requiring Medical Attention.

Clancy Ú, Puttock EJ, Chen W, Whiteley W, Vickery EM, Leung LY, Luetmer PH, Kallmes DF, Fu S, Zheng C, Liu H, Kent DM

PubMed · Jun 27, 2025
The impact of covert cerebrovascular disease on falls in the general population is not well known. Here, we determine the time to a first fall following incidentally detected covert cerebrovascular disease during a clinical neuroimaging episode. This longitudinal cohort study assessed computed tomography (CT) and magnetic resonance imaging from 2009 to 2019 of patients aged >50 years registered with Kaiser Permanente Southern California, a healthcare organization combining health plan coverage with coordinated medical services, excluding those with prior stroke/dementia. We extracted evidence of incidental covert brain infarcts (CBI) and white matter hyperintensities/hypoattenuation (WMH) from imaging reports using natural language processing. We examined associations of CBI and WMH with falls requiring medical attention, using Cox proportional hazards regression models with adjustment for 12 variables including age, sex, ethnicity, multimorbidity, polypharmacy, and incontinence. We assessed 241 050 patients, mean age 64.9 (SD, 10.42) years, 61.3% female, detecting covert cerebrovascular disease in 31.1% over a mean follow-up duration of 3.04 years. A recorded fall occurred in 21.2% (51 239/241 050) during follow-up. On CT, the single-fall incidence rate per 1000 person-years (p-y) was highest in individuals with both CBI and WMH (129.3 falls/1000 p-y [95% CI, 123.4-135.5]), followed by WMH (109.9 falls/1000 p-y [108.0-111.9]). On magnetic resonance imaging, the incidence rate was highest with both CBI and WMH (76.3 falls/1000 p-y [95% CI, 69.7-83.2]), followed by CBI (71.4 falls/1000 p-y [95% CI, 65.9-77.2]). The adjusted hazard ratio for a single index fall in individuals with CBI on CT was 1.13 (95% CI, 1.09-1.17), versus 1.17 (95% CI, 1.08-1.27) on magnetic resonance imaging. On CT, the risk for a single index fall increased incrementally for mild (1.37 [95% CI, 1.32-1.43]), moderate (1.57 [95% CI, 1.48-1.67]), and severe WMH (1.57 [95% CI, 1.45-1.70]).
On magnetic resonance imaging, index fall risk similarly increased with increasing WMH severity: mild (1.11 [95% CI, 1.07-1.17]), moderate (1.21 [95% CI, 1.13-1.28]), and severe WMH (1.34 [95% CI, 1.22-1.46]). In a large population with neuroimaging, CBI and WMH are independently associated with greater risks of an index fall. Increasing severities of WMH are associated incrementally with fall risk across imaging modalities.
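The incidence rates quoted in this abstract are falls per 1000 person-years; a sketch of that arithmetic, with an approximate log-scale Poisson confidence interval. The counts below are invented for illustration, not the study's:

```python
import math

# Crude incidence rate and an approximate 95% CI. The CI assumes the
# event count is Poisson-distributed and uses the usual log-scale
# normal approximation (standard error of log(rate) = 1/sqrt(events)).

def incidence_rate_per_1000py(events, person_years):
    """Events per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

def rate_ci(events, person_years, z=1.96):
    """Return (rate, lower, upper) per 1000 person-years."""
    rate = incidence_rate_per_1000py(events, person_years)
    half_width = z / math.sqrt(events)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)
```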

Early prediction of adverse outcomes in liver cirrhosis using a CT-based multimodal deep learning model.

Xie N, Liang Y, Luo Z, Hu J, Ge R, Wan X, Wang C, Zou G, Guo F, Jiang Y

PubMed · Jun 27, 2025
Early-stage cirrhosis frequently presents without symptoms, making timely identification of high-risk patients challenging. We aimed to develop a deep learning-based triple-modal fusion liver cirrhosis network (TMF-LCNet) for the prediction of adverse outcomes, offering a promising tool to enhance early risk assessment and improve clinical management strategies. This retrospective study included 243 patients with early-stage cirrhosis across two centers. Adverse outcomes were defined as the development of severe complications such as ascites, hepatic encephalopathy, and variceal bleeding. TMF-LCNet was developed by integrating three types of data: non-contrast abdominal CT images, radiomic features extracted from the liver and spleen, and clinical text detailing laboratory parameters and adipose tissue composition measurements. TMF-LCNet was compared with conventional methods on the same dataset, and single-modality versions of TMF-LCNet were tested to determine the impact of each data type. Model effectiveness was measured using the area under the receiver operating characteristic curve (AUC) for discrimination, calibration curves for model fit, and decision curve analysis (DCA) for clinical utility. TMF-LCNet demonstrated superior predictive performance compared to conventional image-based, radiomics-based, and multimodal methods, achieving an AUC of 0.797 in the training cohort (n = 184) and 0.747 in the external test cohort (n = 59). Only TMF-LCNet exhibited robust model calibration in both cohorts. Of the three data types, the imaging modality contributed the most, as the image-only version of TMF-LCNet achieved performance closest to the complete version (AUC = 0.723 and 0.716, respectively; p > 0.05). This was followed by the text modality, with radiomics contributing the least, a pattern consistent with the clinical utility trends observed in DCA.
TMF-LCNet represents an accurate and robust tool for predicting adverse outcomes in early-stage cirrhosis by integrating multiple data types. It holds potential for early identification of high-risk patients, guiding timely interventions, and ultimately improving patient prognosis.
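A triple-modal late-fusion head in the spirit of TMF-LCNet can be sketched as concatenating one embedding per modality and applying a linear output layer. Dimensions and weights here are illustrative; the real model is a trained deep network whose fusion details the abstract does not specify:

```python
# Hypothetical late-fusion sketch: one embedding per modality
# (CT image, radiomics, clinical text) concatenated into a single
# feature vector, then a linear layer producing one risk logit.

def fuse(image_emb, radiomic_emb, text_emb):
    """Concatenate per-modality embeddings into one feature vector."""
    return list(image_emb) + list(radiomic_emb) + list(text_emb)

def linear_head(features, weights, bias):
    """Single logit for the adverse-outcome prediction."""
    return bias + sum(w * x for w, x in zip(weights, features))
```

Testing "single-modality versions", as the authors did, corresponds to feeding only one of the three embeddings (with the others zeroed or the head retrained).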

Automated Sella-Turcica Annotation and Mesh Alignment of 3D Stereophotographs for Craniosynostosis Patients Using a PCA-FFNN Based Approach.

Bielevelt F, Chargi N, van Aalst J, Nienhuijs M, Maal T, Delye H, de Jong G

PubMed · Jun 27, 2025
Craniosynostosis, characterized by the premature fusion of cranial sutures, can lead to significant neurological and developmental complications, necessitating early diagnosis and precise treatment. Traditional cranial morphologic assessment has relied on CT scans, which expose infants to ionizing radiation. Recently, 3D stereophotogrammetry has emerged as a noninvasive alternative, but accurately aligning 3D photographs within standardized reference frames, such as the Sella-turcica-Nasion (S-N) frame, remains a challenge. This study proposes a novel method for predicting the Sella turcica (ST) coordinate from 3D cranial surface models using Principal Component Analysis (PCA) combined with a Feedforward Neural Network (FFNN). The accuracy of this method is compared with the conventional Computed Cranial Focal Point (CCFP) method, which has limitations, especially in cases of asymmetric cranial deformations like plagiocephaly. A data set of 153 CT scans, including 68 craniosynostosis subjects, was used to train and test the PCA-FFNN model. The results demonstrate that the PCA-FFNN approach outperforms CCFP, achieving significantly lower deviations in ST coordinate predictions (3.61 vs. 8.38 mm, P<0.001), particularly along the y-axes and z-axes. In addition, mesh realignment within the S-N reference frame showed improved accuracy with the PCA-FFNN method, evidenced by lower mean deviations and reduced dispersion in distance maps. These findings highlight the potential of the PCA-FFNN approach to provide a more reliable, noninvasive solution for cranial assessment, improving craniosynostosis follow-up and enhancing clinical outcomes.
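The PCA-FFNN pipeline — project a cranial surface descriptor onto principal components, then map the component scores to a 3D Sella turcica coordinate — can be sketched as two linear maps. The components and weights below are invented; in the study both are learned from the 153-scan dataset, and the FFNN is a trained nonlinear network rather than the single linear layer shown here:

```python
# Hypothetical two-stage sketch: PCA projection followed by one linear
# output layer mapping scores to an (x, y, z) coordinate.

def pca_project(x, mean, components):
    """Scores of x on each principal component (rows of `components`)."""
    centered = [xi - mi for xi, mi in zip(x, mean)]
    return [sum(c * v for c, v in zip(comp, centered)) for comp in components]

def ffnn_predict(scores, weight_rows, bias):
    """Map PCA scores to a 3D coordinate, one weight row per axis."""
    return [
        b + sum(w * s for w, s in zip(row, scores))
        for row, b in zip(weight_rows, bias)
    ]
```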

Advanced glaucoma disease segmentation and classification with grey wolf optimized U-Net++ and capsule networks.

Govindharaj I, Deva Priya W, Soujanya KLS, Senthilkumar KP, Shantha Shalini K, Ravichandran S

PubMed · Jun 27, 2025
Early detection of glaucoma is vital for preserving vision, as the disease remains one of the leading causes of blindness worldwide. Current glaucoma screening relies on expert interpretation of complex, time-consuming procedures, which delays both diagnosis and intervention. This research presents an automated glaucoma diagnostic system that combines an optimized segmentation stage with a classification stage. The proposed segmentation approach implements an enhanced version of U-Net++ with dynamic parameter control provided by Grey Wolf Optimization (GWO) to segment optic disc and cup regions in retinal fundus images. Through GWO, the algorithm applies wolf-pack hunting strategies to adjust parameters dynamically, enabling it to capture diverse textural patterns within images. The system uses a capsule network (CapsNet) for classification because it preserves spatial organization, allowing glaucoma-related patterns to be detected precisely. The developed system achieves an accuracy of 95.1% in segmentation and classification tasks, outperforming typical approaches. The automation improves both clinical diagnostic speed and diagnostic precision. Its high detection accuracy and reliability make it a practical early-stage glaucoma diagnostic tool and a scalable healthcare deployment solution. The aim was to develop an advanced automated glaucoma diagnostic system by integrating an optimized U-Net++ segmentation model with a Capsule Network (CapsNet) classifier, enhanced through the Grey Wolf Optimization Algorithm (GWOA), for precise segmentation of optic disc and cup regions and accurate glaucoma classification from retinal fundus images. This study proposes a two-phase computer-assisted diagnosis (CAD) framework.
In the segmentation phase, an enhanced U-Net++ model, optimized by GWOA, is employed to accurately delineate the optic disc and cup regions in fundus images. The optimization dynamically tunes hyperparameters based on grey wolf hunting behavior for improved segmentation precision. In the classification phase, a CapsNet architecture is used to maintain spatial hierarchies and effectively classify images as glaucomatous or normal based on segmented outputs. The performance of the proposed model was validated using the ORIGA retinal fundus image dataset, and evaluated against conventional approaches. The proposed GWOA-UNet++ and CapsNet framework achieved a segmentation and classification accuracy of 95.1%, outperforming existing benchmark models such as MTA-CS, ResFPN-Net, DAGCN, MRSNet and AGCT. The model demonstrated robustness against image irregularities, including variations in optic disc size and fundus image quality, and showed superior performance across accuracy, sensitivity, specificity, precision, and F1-score metrics. The developed automated glaucoma detection system exhibits enhanced diagnostic accuracy, efficiency, and reliability, offering significant potential for early-stage glaucoma detection and clinical decision support. Future work will involve large-scale multi-ethnic dataset validation, integration with clinical workflows, and deployment as a mobile or cloud-based screening tool.
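The grey-wolf search itself is easy to sketch: the three best candidates (alpha, beta, delta) steer the rest of the pack toward promising regions while an exploration coefficient decays. The toy quadratic objective in the test is a stand-in for the actual segmentation-quality objective, which this sketch does not implement:

```python
import random

# Toy Grey Wolf Optimizer (illustrative, not the paper's code):
# minimize f over the box [-bound, bound]^dim.

def gwo_minimize(f, dim=2, n_wolves=8, iters=40, bound=5.0, seed=0):
    rng = random.Random(seed)
    wolves = [[rng.uniform(-bound, bound) for _ in range(dim)]
              for _ in range(n_wolves)]
    for it in range(iters):
        wolves.sort(key=f)
        # Freeze the three leaders for this iteration.
        alpha, beta, delta = (list(w) for w in wolves[:3])
        a = 2.0 - 2.0 * it / iters  # exploration coefficient decays to 0
        for w in wolves:
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    pos += leader[d] - A * abs(C * leader[d] - w[d])
                # Average the three pulls and clamp to the search box.
                w[d] = max(-bound, min(bound, pos / 3.0))
    return min(wolves, key=f)
```

In the paper's setting, each candidate position would encode U-Net++ hyperparameters and f would evaluate segmentation quality on a validation set, which is far more expensive than the toy objective here.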

3D Auto-segmentation of pancreas cancer and surrounding anatomical structures for surgical planning.

Rhu J, Oh N, Choi GS, Kim JM, Choi SY, Lee JE, Lee J, Jeong WK, Min JH

PubMed · Jun 27, 2025
This multicenter study aimed to develop a deep learning-based autosegmentation model for pancreatic cancer and surrounding anatomical structures using computed tomography (CT) to enhance surgical planning. We included patients with pancreatic cancer who underwent pancreatic surgery at three tertiary referral hospitals. A hierarchical Swin Transformer V2 model was implemented to segment the pancreas, pancreatic cancers, and peripancreatic structures from preoperative contrast-enhanced CT scans. Data were divided into training and internal validation sets at a 3:1 ratio (from one tertiary institution), with a separately prepared external validation set (from two separate institutions). Segmentation performance was quantitatively assessed using the Dice similarity coefficient (DSC) and qualitatively evaluated (complete vs partial vs absent). A total of 275 patients (51.6% male, mean age 65.8 ± 9.5 years) were included (176 in the training group, 59 in the internal validation group, and 40 in the external validation group). No significant differences in baseline characteristics were observed between the groups. The model achieved an overall mean DSC of 75.4 ± 6.0 and 75.6 ± 4.8 in the internal and external validation groups, respectively. It showed particularly high accuracy in the pancreas parenchyma (84.8 ± 5.3 and 86.1 ± 4.1) and lower accuracy in pancreatic cancer (57.0 ± 28.7 and 54.5 ± 23.5). The DSC scores for pancreatic cancer tended to increase with larger tumor sizes. Moreover, the qualitative assessments revealed high accuracy in the superior mesenteric artery (complete segmentation, 87.5%-100%), portal and superior mesenteric vein (97.5%-100%), and pancreas parenchyma (83.1%-87.5%), but lower accuracy in cancers (62.7%-65.0%). The deep learning-based autosegmentation model for 3D visualization of pancreatic cancer and peripancreatic structures showed robust performance. Further improvement will enable many promising applications in clinical research.
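The Dice similarity coefficient (DSC) used to score these segmentations is straightforward to compute on binary masks; a minimal sketch:

```python
# DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks,
# here given as flat iterables of 0/1 values of equal length.

def dice(pred, truth):
    pred, truth = list(pred), list(truth)
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / total if total else 1.0
```

The abstract reports DSC on a 0-100 scale (e.g., 84.8 for pancreas parenchyma), i.e., this quantity multiplied by 100.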