
Do patients with renal calculi exhibit viscerosomatic reflexes as evident on CT imaging?

Haughton DR, Gupta AK, Nasir BR, Kania AM

pubmed · Oct 10, 2025
Experimental evidence supporting the existence of the viscerosomatic reflex highlights an involvement of multiple vertebral levels when renal pathology is present. Further exploration of this reflex, particularly in the context of nephrolithiasis, could offer valuable insights for osteopathic treatments related to this pathology. Open-source machine learning datasets provide a valuable source of imaging data for investigating osteopathic phenomena, including the viscerosomatic reflex. This study aimed to compare the rotation of vertebrae at levels associated with the viscerosomatic reflex in renal pathology in patients with nephrolithiasis vs. those without kidney stones. A total of 210 unenhanced computed tomography (CT) scans were examined from an open-source dataset designed for kidney and kidney stone segmentation. Among these, 166 scans were excluded due to pathologies that could affect analysis (osteophytes, renal masses, etc.). The 44 scans included in the analysis encompassed 292 relevant vertebrae. Of those, 15 scans were of patients with kidney stones in the right kidney, 13 in the left kidney, 7 bilaterally, and 11 without kidney stones. These scans included vertebral levels from T5-L5, with the majority falling within T10-L5. An open-source algorithm was employed to segment individual vertebrae, generating models that maintained their orientation in three-dimensional (3D) space. A self-coded 3D Slicer module utilizing vertebral symmetry for rotation detection was then applied. Two-way analysis of variance (ANOVA) testing was conducted to assess differences in vertebral rotation across the four kidney stone location categories (left-sided, right-sided, bilateral, or none) and vertebral levels (T10-L4). Subsequently, the two-way ANOVA was narrowed down to various combinations of three vertebral levels (T10-L4) to identify the most significant levels. We observed a statistically significant difference in average vertebral rotation (p=0.0038) dependent on kidney stone location. Post-hoc analysis showed an average difference in rotation of -1.38° leftward between scans that contained left kidney stones compared to no kidney stones (p=0.027), as well as an average difference of -1.72° leftward in the scans containing right kidney stones compared to no kidney stones (p=0.0037). The average differences in rotation between the remaining stone location combinations were not statistically significant. Narrowed analysis of three-vertebral-level combinations showed a single statistically significant combination (T10, T12, and L4) out of a total of 35 combinations (p=0.028). A subsequent post-hoc procedure showed that angular rotation at these levels had the only statistically significant contribution to the difference between scans containing right kidney stones and no kidney stones (p=0.046). This study observed a statistically significant difference in the rotation of vertebrae at the levels associated with the viscerosomatic reflex between patients with unilateral kidney stones and those without kidney stones. The vertebral levels most significantly associated with this finding, particularly in right kidney stones, were T10, T12, and L4.
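A minimal sketch of the kind of two-way ANOVA described above, written with statsmodels; the file and column names (vertebral_rotation.csv, rotation_deg, stone_location, vertebral_level) are hypothetical placeholders, not the study's actual data layout.

```python
# Two-way ANOVA of vertebral rotation by kidney stone location and vertebral
# level, as a sketch of the analysis described above. Names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("vertebral_rotation.csv")  # one row per vertebra (hypothetical)

# Main effects for stone location and vertebral level plus their interaction.
model = ols("rotation_deg ~ C(stone_location) * C(vertebral_level)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```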

Artificial intelligence-based method for automatic assessment of the renal function of each kidney using plain computed tomography (CT) scans.

Guo R, Xia W, Xu F, Qian Y, Han Q, Geng D, Gao X, Wang Y

pubmed · Oct 9, 2025
Separate renal function assessment is important in clinical decision-making. Single-photon emission computed tomography is commonly used for this assessment, although it relies on radioactive tracers and is tedious and costly. This study aimed to automatically assess separate renal function using plain CT images and artificial intelligence methods, including deep learning-based automatic segmentation and radiomics modeling. We performed a retrospective study on 281 patients with nephrarctia or hydronephrosis from two centers (training set: 159 patients from Center I; test set: 122 patients from Center II). The renal parenchyma and hydronephrosis regions in plain CT images were automatically segmented using deep learning-based U-Net transformers (UNETR). Radiomic features were extracted from the two regions and used to build a radiomic signature using ElasticNet, then further combined with clinical characteristics using multivariable logistic regression to obtain an integrated model. The automatic segmentation was evaluated using the Dice similarity coefficient (DSC). The mean DSC of automatic kidney segmentation based on UNETR was 0.894 and 0.881 in the training and test sets, respectively. The average times of automatic and manual segmentation were 3.4 s/case and 1477.9 s/case, respectively. The AUC of the radiomic signature was 0.778 in the training set and 0.801 in the test set. The AUC of the integrated model was 0.792 and 0.825 in the training and test sets, respectively. It is feasible to assess the renal function of each kidney separately using plain CT and AI methods. Our method can minimize radiation risk, improve diagnostic efficiency, and reduce costs.
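A minimal sketch of the radiomics modeling step described above, assuming an ElasticNet-penalized logistic regression for the radiomic signature and a multivariable logistic regression that adds clinical covariates; the file name, feature prefix, outcome column (impaired_function), and clinical variables are hypothetical and not taken from the paper.

```python
# Radiomic signature via ElasticNet-penalized logistic regression, then an
# integrated model that adds clinical covariates. Names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

train = pd.read_csv("radiomics_train.csv")                 # hypothetical file
feature_cols = [c for c in train.columns if c.startswith("feat_")]
y = train["impaired_function"]                             # hypothetical outcome label

signature = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.5, max_iter=5000)
signature.fit(train[feature_cols], y)
train["rad_score"] = signature.predict_proba(train[feature_cols])[:, 1]

clinical_cols = ["age", "sex"]                             # hypothetical clinical factors
integrated = LogisticRegression(max_iter=1000)
integrated.fit(train[["rad_score"] + clinical_cols], y)
pred = integrated.predict_proba(train[["rad_score"] + clinical_cols])[:, 1]
print("Training AUC:", roc_auc_score(y, pred))
```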

Random Window Augmentations for Deep Learning Robustness in CT and Liver Tumor Segmentation

Eirik A. Østmo, Kristoffer K. Wickstrøm, Keyur Radiya, Michael C. Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen

arxiv preprint · Oct 9, 2025
Contrast-enhanced computed tomography (CT) is important for diagnosis and treatment planning for various medical conditions. Deep learning (DL)-based segmentation models may enable automated medical image analysis for detecting and delineating tumors in CT images, thereby reducing clinicians' workload. Achieving generalization capabilities in limited-data domains, such as radiology, requires modern DL models to be trained with image augmentation. However, naively applying augmentation methods developed for natural images to CT scans often disregards the nature of the CT modality, where intensities are measured in Hounsfield units (HU) and have important physical meaning. This paper challenges the use of such intensity augmentations for CT imaging and shows that they may lead to artifacts and poor generalization. To mitigate this, we propose a CT-specific augmentation technique, called Random windowing, that exploits the available HU distribution of intensities in CT images. Random windowing encourages robustness to contrast enhancement and significantly increases model performance on challenging images with poor contrast or timing. We perform ablations and analyses of our method on multiple datasets, and compare against and outperform state-of-the-art alternatives, focusing on the challenge of liver tumor segmentation.
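A minimal sketch of a random HU-windowing augmentation in the spirit described above, assuming the input is a Hounsfield-unit-valued NumPy volume; the window level/width ranges are illustrative placeholders, not the authors' settings.

```python
# Randomly sample a CT window (level/width) in HU, clip, and rescale to [0, 1].
# Ranges below are illustrative, not the paper's configuration.
import numpy as np

def random_window(hu_image: np.ndarray,
                  level_range=(30.0, 150.0),
                  width_range=(150.0, 400.0),
                  rng=None) -> np.ndarray:
    if rng is None:
        rng = np.random.default_rng()
    level = rng.uniform(*level_range)
    width = rng.uniform(*width_range)
    lo, hi = level - width / 2.0, level + width / 2.0
    windowed = np.clip(hu_image, lo, hi)
    return (windowed - lo) / (hi - lo)
```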

MRI-derived quantification of hepatic vessel-to-volume ratios in chronic liver disease using a deep learning approach

Alexander Herold, Daniel Sobotka, Lucian Beer, Nina Bastati, Sarah Poetter-Lang, Michael Weber, Thomas Reiberger, Mattias Mandorfer, Georg Semmler, Benedikt Simbrunner, Barbara D. Wichtmann, Sami A. Ba-Ssalamah, Michael Trauner, Ahmed Ba-Ssalamah, Georg Langs

arxiv preprint · Oct 9, 2025
Background: We aimed to quantify hepatic vessel volumes across chronic liver disease stages and healthy controls using deep learning-based magnetic resonance imaging (MRI) analysis, and to assess correlations with biomarkers of liver (dys)function and fibrosis/portal hypertension. Methods: We retrospectively assessed healthy controls and patients with non-advanced and advanced chronic liver disease (ACLD) using a 3D U-Net model for hepatic vessel segmentation on portal venous phase gadoxetic acid-enhanced 3-T MRI. Total (TVVR), hepatic (HVVR), and intrahepatic portal vein-to-volume ratios (PVVR) were compared between groups and correlated with markers of liver (dys)function (albumin-bilirubin [ALBI] and model for end-stage liver disease-sodium [MELD-Na] scores) and of fibrosis/portal hypertension (Fibrosis-4 [FIB-4] score, liver stiffness measurement [LSM], hepatic venous pressure gradient [HVPG], platelet count [PLT], and spleen volume). Results: We included 197 subjects, aged 54.9 ± 13.8 years (mean ± standard deviation), 111 males (56.3%): 35 healthy controls, 44 non-ACLD, and 118 ACLD patients. TVVR and HVVR were highest in controls (3.9; 2.1), intermediate in non-ACLD (2.8; 1.7), and lowest in ACLD patients (2.3; 1.0) (p ≤ 0.001). PVVR was reduced in both non-ACLD and ACLD patients (both 1.2) compared to controls (1.7) (p ≤ 0.001), but showed no difference between CLD groups (p = 0.999). HVVR correlated significantly and inversely with FIB-4, ALBI, MELD-Na, LSM, and spleen volume (ρ ranging from -0.27 to -0.40), and directly with PLT (ρ = 0.36). TVVR and PVVR showed similar but weaker correlations. Conclusions: Deep learning-based hepatic vessel volumetry demonstrated differences between healthy livers and chronic liver disease stages and showed correlations with established markers of disease severity.
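A minimal sketch of a vessel-to-volume ratio computed from segmentation masks, assuming binary NumPy masks for a vessel class and the whole liver plus known voxel spacing; the exact ratio definition and normalization used in the paper may differ.

```python
# Vessel-to-volume ratio from binary masks: vessel volume divided by liver
# volume. The percentage scaling is an assumption for readability.
import numpy as np

def vessel_to_volume_ratio(vessel_mask: np.ndarray,
                           liver_mask: np.ndarray,
                           spacing_mm=(1.0, 1.0, 1.0)) -> float:
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0    # mm^3 per voxel -> mL
    vessel_ml = vessel_mask.astype(bool).sum() * voxel_ml
    liver_ml = liver_mask.astype(bool).sum() * voxel_ml
    return 100.0 * vessel_ml / liver_ml               # expressed as a percentage (assumption)
```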

Ultra-Low-Dose Liver CT With Artificial Intelligence Iterative Reconstruction.

Wang S, Meng T, Peng L, Zeng Q

pubmed · Oct 9, 2025
To investigate the potential feasibility of ultra-low-dose (ULD) liver CT with artificial intelligence iterative reconstruction (AIIR). Sixty-five patients who underwent triphasic contrast-enhanced liver CT were prospectively enrolled. Low tube voltage (80/100 kV) and tube current (35 to 78 mAs) were set in both the portal venous phase (PVP) and delayed phase (DP). For each phase, a ULD acquisition (1.11 to 2.50 mGy) was obtained, followed immediately by a routine-dose (RD) acquisition (11.71 to 19.73 mGy). RD images were reconstructed with a hybrid iterative reconstruction algorithm (RD-HIR), while ULD images were reconstructed with both HIR (ULD-HIR) and AIIR (ULD-AIIR). The noise power spectrum (NPS) noise magnitude, average NPS spatial frequency, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were calculated for quantitative assessment. Qualitative assessment was performed by 2 radiologists who independently scored the images for diagnostic acceptance. In addition, the radiologists identified focal lesions and characterized noncystic lesions as benign or malignant with both RD and ULD liver CT. Among the enrolled patients (mean age: 58.6±12.9 y, 35 men), 234 lesions with a mean size of 1.27±1.56 cm were identified. In both phases, ULD-AIIR showed NPS noise magnitude comparable to RD-HIR (all P>0.017) and lower than ULD-HIR (all P<0.001). Average NPS spatial frequency, SNR, and CNR were highest with ULD-AIIR, followed by RD-HIR and ULD-HIR (all P<0.001). ULD-AIIR showed diagnostic acceptance scores comparable to RD-HIR, while ULD-HIR failed to meet the diagnostic acceptance requirements. RD-HIR and ULD-AIIR achieved comparable detection rates (99.6% vs. 99.1%) and areas under the receiver operating characteristic (ROC) curve (AUC) in classifying benign (n=46) and malignant (n=58) noncystic lesions (0.98 vs. 0.97, P=0.3). With AIIR, it is potentially feasible to achieve ULD liver CT (60% dose reduction) while preserving image and diagnostic quality.
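A minimal sketch of ROI-based SNR and one common CNR definition, assuming pixel arrays extracted from lesion/liver and background ROIs; the study's exact ROI placement and CNR formula are not specified here and may differ.

```python
# ROI-based SNR and a common CNR definition (difference of means over pooled
# noise). The exact formulas used in the study may differ.
import numpy as np

def snr(roi: np.ndarray) -> float:
    return float(roi.mean() / roi.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    pooled_noise = np.sqrt((roi_a.std() ** 2 + roi_b.std() ** 2) / 2.0)
    return float(abs(roi_a.mean() - roi_b.mean()) / pooled_noise)
```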

Non-invasive prediction of central lymph node metastasis in papillary thyroid microcarcinoma with machine learning-based CT radiomics: a multicenter study.

Cheng F, Lin G, Chen W, Chen Y, Zhou R, Yang J, Zhou B, Chen M, Ji J

pubmed · Oct 9, 2025
This study aimed to develop and validate a machine learning-based computed tomography (CT) radiomics method to preoperatively predict the presence of central lymph node metastasis (CLNM) in patients with papillary thyroid microcarcinoma (PTMC). A total of 921 patients with histopathologically proven PTMC from three medical centers were included in this retrospective study and divided into training, internal validation, external test 1, and external test 2 sets. Radiomics features of thyroid tumors were extracted from CT images and selected for dimensionality reduction. Five machine learning classifiers were applied, and the best classifier was selected to calculate radiomics scores (rad-scores). The rad-scores and clinical factors were then combined to construct a nomogram model. Across the four sets, 35.18% (324/921) of patients were CLNM+. The XGBoost classifier showed the best performance, with the highest average area under the curve (AUC) of 0.756 in the validation set. The nomogram model incorporating XGBoost-based rad-scores with age and sex showed better performance than the clinical model in the training [AUC: 0.847 (0.809-0.879) vs. 0.706 (0.660-0.748)], internal validation [AUC: 0.773 (0.682-0.847) vs. 0.671 (0.575-0.758)], external test 1 [AUC: 0.807 (0.757-0.852) vs. 0.639 (0.580-0.695)], and external test 2 [AUC: 0.746 (0.645-0.830) vs. 0.608 (0.502-0.707)] sets. Furthermore, the nomogram showed better clinical benefit than the clinical and radiomics models. The nomogram model based on the XGBoost classifier exhibited favorable performance and provides a potential approach for the non-invasive diagnosis of CLNM in patients with PTMC. This study thus offers a non-invasive, easy-to-use surrogate for accurate preoperative evaluation of CLNM status.
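A minimal sketch of the modeling pattern described above: an XGBoost classifier trained on radiomics features yields a rad-score, which is then combined with age and sex in a logistic regression standing in for the nomogram; file and column names are hypothetical.

```python
# Rad-score from an XGBoost classifier, then combined with age and sex in a
# logistic regression (a stand-in for the nomogram). Names are hypothetical.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

train = pd.read_csv("ptmc_train.csv")                    # hypothetical file
rad_cols = [c for c in train.columns if c.startswith("rad_")]
y = train["clnm"]                                        # hypothetical label column

xgb = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
xgb.fit(train[rad_cols], y)
train["rad_score"] = xgb.predict_proba(train[rad_cols])[:, 1]

nomogram = LogisticRegression(max_iter=1000)
nomogram.fit(train[["rad_score", "age", "sex"]], y)
pred = nomogram.predict_proba(train[["rad_score", "age", "sex"]])[:, 1]
print("Training AUC:", roc_auc_score(y, pred))
```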

Automatic segmentation of male pelvic floor soft tissue structures for anatomical simulation and morphological assessment in lower rectal cancer surgery.

Aisu Y, Okada T, Itatani Y, Masuo A, Tani R, Fujimoto K, Kido A, Sawada A, Sakai Y, Obama K

pubmed · Oct 8, 2025
Pelvic anatomy is a complex network of organs that varies between individuals. Understanding the anatomy of individual patients is crucial for precise rectal cancer surgery. Therefore, technology that allows visualization of the anatomy before surgery is needed. This study aims to develop an auto-segmentation model of pelvic structures using AI technology and to evaluate the accuracy of the model for preoperative anatomical understanding. Data were collected from 63 male patients who underwent 3D MRI during preoperative examination for colorectal and urogenital diseases between November 2015 and July 2019 and from 11 healthy male volunteers. Eleven organs and tissues were segmented. The model was developed using a threefold cross-validation process with a total of 59 cases as development data. Accuracy was evaluated on separately prepared test data using the Dice similarity coefficient (DSC), true positive rate (TPR), and positive predictive value (PPV), comparing AI-segmented data with manually segmented data. The highest values of DSC, TPR, and PPV were 0.927, 0.909, and 0.948, respectively, for the internal anal sphincter (including the rectum). In contrast, the lowest values were 0.384, 0.772, and 0.263, respectively, for the superficial transverse perineal muscle. While there were differences among organs, the overall quality of automatic segmentation was maintained in our model, suggesting that the morphological characteristics of the organs may influence accuracy. We developed an auto-segmentation model that can independently delineate soft-tissue structures in the male pelvis using 3D T2-weighted MRI, providing valuable assistance to doctors in understanding pelvic anatomy.
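A minimal sketch of the overlap metrics named above (DSC, TPR, PPV) for a single structure, assuming binary NumPy masks of equal shape; per-organ evaluation would simply loop over the eleven label values.

```python
# DSC, TPR (sensitivity), and PPV (precision) between a predicted and a
# manually segmented binary mask.
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    dsc = 2.0 * tp / (pred.sum() + truth.sum() + eps)
    tpr = tp / (truth.sum() + eps)
    ppv = tp / (pred.sum() + eps)
    return dsc, tpr, ppv
```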

Improving Artifact Robustness for CT Deep Learning Models Without Labeled Artifact Images via Domain Adaptation

Justin Cheung, Samuel Savine, Calvin Nguyen, Lin Lu, Alhassan S. Yasin

arxiv preprint · Oct 8, 2025
Deep learning models which perform well on images from their training distribution can degrade substantially when applied to new distributions. If a CT scanner introduces a new artifact not present in the training labels, the model may misclassify the images. Although modern CT scanners include design features which mitigate these artifacts, unanticipated or difficult-to-mitigate artifacts can still appear in practice. The direct solution of labeling images from this new distribution can be costly. As a more accessible alternative, this study evaluates domain adaptation as an approach for training models that maintain classification performance despite new artifacts, even without corresponding labels. We simulate ring artifacts from detector gain error in sinogram space and evaluate domain adversarial neural networks (DANN) against baseline and augmentation-based approaches on the OrganAMNIST abdominal CT dataset. Our results demonstrate that baseline models trained only on clean images fail to generalize to images with ring artifacts, and traditional augmentation with other distortion types provides no improvement on unseen artifact domains. In contrast, the DANN approach successfully maintains high classification accuracy on ring artifact images using only unlabeled artifact data during training, demonstrating the viability of domain adaptation for artifact robustness. The domain-adapted model achieved classification performance on ring artifact test data comparable to models explicitly trained with labeled artifact images, while also showing unexpected generalization to uniform noise. These findings provide empirical evidence that domain adaptation can effectively address distribution shift in medical imaging without requiring expensive expert labeling of new artifact distributions, suggesting promise for deployment in clinical settings where novel artifacts may emerge.
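A minimal PyTorch sketch of the gradient reversal layer at the core of a DANN-style domain-adversarial setup like the one described above; the feature dimension, classifier head, and lambda value are illustrative, not the paper's exact architecture or training loop.

```python
# Gradient reversal layer plus a small domain-classifier head, the central
# mechanism of DANN. Dimensions and the lambda value are illustrative.
import torch
from torch import nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) gradients so the feature extractor learns
        # domain-invariant representations.
        return -ctx.lambda_ * grad_output, None

class DomainClassifier(nn.Module):
    def __init__(self, feat_dim: int = 128, lambda_: float = 1.0):
        super().__init__()
        self.lambda_ = lambda_
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(GradientReversal.apply(features, self.lambda_))
```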

InfoOOD: information bottleneck optimization for post hoc medical image out-of-distribution detection.

Schott B, Klanecek Z, Santoro-Fernandes V, Tie X, Salgado-Maldonado SI, Deatsch A, Jeraj R

pubmed · Oct 8, 2025
Deep learning models are prone to failure when inferring on out-of-distribution (OOD) data, i.e., data whose features fundamentally differ from those in the training set. Existing OOD measures often lack sensitivity to the subtle image variations encountered within clinical settings. In this work, we investigate a post hoc, information-based approach to OOD detection, termed InfoOOD, which iteratively quantifies the amount of embedded feature information that can be shared between the training data and test data without degrading the model output. Approach. Abdominal CT images from patients with metastatic liver lesions were used. A 3D U-Net was trained to segment liver organs and lesions using N=157 images. Physics-based artifacts (low dose, sparse view angles, and ring artifacts) were simulated on a separate set of N=40 test images at three intensity magnitudes. Segmentation performance and the ability of the InfoOOD measure to detect the artifact-induced OOD data were evaluated. An additional N=131 test images were used to assess the correlation between the InfoOOD measure and segmentation model performance metrics. In all evaluations, InfoOOD was compared with established embedded feature-based and reconstruction-based OOD detection methods. Results. Artifact simulation significantly degraded segmentation model performance across all artifact types and magnitudes (p<0.001), with model performance worsening as artifact magnitude increased. The InfoOOD measure consistently outperformed the embedded feature-based measures in detecting OOD data (e.g., AUC=0.93 vs. AUC=0.57 for the strong ring artifact) and surpassed the reconstruction-based measure across weak-magnitude artifacts (e.g., AUC=0.75 vs. AUC=0.61 for the weak sparse-view artifact). The InfoOOD measure also achieved stronger negative correlations with segmentation performance metrics (e.g., ρ=-0.52 vs. ρ≥-0.11 for the lesion sensitivity metric). In both assessments, the performance of the InfoOOD measure increased considerably with the number of information bottleneck optimization iterations. Significance. This work introduces and validates a novel, highly sensitive, and clinically relevant information-theoretic approach to medical image OOD detection, supporting the safe deployment of deep learning models in clinical settings.
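A minimal sketch of how OOD measures such as those compared above are typically scored: detection AUC from scores on clean vs. artifact images, and a Spearman correlation against a per-case segmentation metric; the score arrays are caller-supplied placeholders, not the InfoOOD computation itself.

```python
# Generic evaluation helpers for an OOD measure: detection AUC and Spearman
# correlation with a per-case segmentation metric. Inputs are placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

def ood_detection_auc(scores_clean: np.ndarray, scores_artifact: np.ndarray) -> float:
    labels = np.concatenate([np.zeros(len(scores_clean)), np.ones(len(scores_artifact))])
    scores = np.concatenate([scores_clean, scores_artifact])
    return roc_auc_score(labels, scores)

def ood_vs_performance(ood_scores: np.ndarray, dice_scores: np.ndarray):
    rho, p_value = spearmanr(ood_scores, dice_scores)
    return rho, p_value
```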

Clinical evaluation of medical image synthesis: a case study in wireless capsule endoscopy.

Gatoula P, Diamantis DE, Koulaouzidis A, Carretero C, Chetcuti-Zammit S, Valdivia PC, González-Suárez B, Mussetto A, Plevris J, Robertson A, Rosa B, Toth E, Iakovidis DK

pubmed · Oct 8, 2025
Synthetic Data Generation (SDG) based on Artificial Intelligence (AI) can transform the way clinical medicine is delivered by overcoming privacy barriers that currently render clinical data sharing difficult. This is key to accelerating the development of digital tools that contribute to enhanced patient safety. Such tools include robust data-driven clinical decision support systems and example-based digital training tools that enable healthcare professionals to improve their diagnostic performance for enhanced patient safety. This study focuses on the clinical evaluation of medical SDG, with a proof-of-concept investigation on diagnosing Inflammatory Bowel Disease (IBD) using Wireless Capsule Endoscopy (WCE) images. Its scientific contributions include (a) a novel protocol for the systematic Clinical Evaluation of Medical Image Synthesis (CEMIS); (b) a novel variational autoencoder-based model, named TIDE-II, which enhances its predecessor, TIDE (This Intestine Does not Exist), for the generation of high-resolution synthetic WCE images; and (c) a comprehensive evaluation of the synthetic images by 10 international WCE specialists using the CEMIS protocol, in terms of image quality, diversity, and realism, as well as their utility for clinical decision-making. The results show that TIDE-II generates clinically plausible, highly realistic WCE images with improved quality compared to relevant state-of-the-art generative models. In conclusion, CEMIS can serve as a reference for future research on medical image-generation techniques, while adapting or extending the TIDE-II architecture to other imaging domains appears promising.
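A minimal PyTorch sketch of the reparameterization step at the heart of a variational autoencoder such as TIDE-II; this is a generic VAE fragment with illustrative dimensions, not the TIDE-II architecture itself.

```python
# Generic VAE latent head: predict mu/log-variance, sample with the
# reparameterization trick, and return the KL term. Dimensions are illustrative.
import torch
from torch import nn

class VAELatentHead(nn.Module):
    def __init__(self, feat_dim: int = 256, latent_dim: int = 64):
        super().__init__()
        self.to_mu = nn.Linear(feat_dim, latent_dim)
        self.to_logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, features: torch.Tensor):
        mu, logvar = self.to_mu(features), self.to_logvar(features)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)          # reparameterization trick
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl
```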