Development and validation of a CT-based radiomics machine learning model for differentiating immune-related interstitial pneumonia.

Luo T, Guo J, Xi J, Luo X, Fu Z, Chen W, Huang D, Chen K, Xiao Q, Wei S, Wang Y, Du H, Liu L, Cai S, Dong H

PubMed · May 27, 2025
Immune checkpoint inhibitor-related interstitial pneumonia (CIP) poses a diagnostic challenge due to its radiographic similarity to other pneumonias. We developed a non-invasive model using CT imaging to differentiate CIP from other pneumonias (OTP). We analyzed patients who developed CIP or OTP after immunotherapy at five medical centers between 2020 and 2023 and randomly divided them into training and validation cohorts at a 7:3 ratio. A radiomics model was developed using random forest analysis. A new model was then built by combining independent risk factors for CIP. The models were evaluated using ROC, calibration, and decision curve analysis. A total of 238 patients with pneumonia following immunotherapy were included, with 116 CIP and 122 OTP. After random allocation, the training cohort included 166 patients and the validation cohort 72 patients. A radiomics model composed of 11 radiomic features was established using the random forest method, with an AUC of 0.833 for the training cohort and 0.821 for the validation cohort. Univariate and multivariate logistic regression analysis revealed significant differences in smoking history, radiotherapy history, and radiomics score between CIP and OTP (p < 0.05). A new model was constructed based on these three factors and a nomogram was drawn. This model showed good calibration and net benefit in both the training and validation cohorts, with AUCs of 0.872 and 0.860, respectively. Using the random forest method of machine learning, we successfully constructed a CT-based radiomics differential diagnostic model for CIP that can accurately, non-invasively, and rapidly provide clinicians with etiological support for pneumonia diagnosis.
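
A minimal sketch of the kind of pipeline the abstract describes (not the authors' code): a random forest trained on a radiomic feature matrix with a 7:3 split and AUC evaluation. The feature array, labels, and hyperparameters below are placeholders.

```python
# Sketch only: random forest on radiomic features with a 7:3 split and AUC evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(238, 11))      # placeholder for 11 selected radiomic features
y = rng.integers(0, 2, size=238)    # placeholder CIP (1) / OTP (0) labels

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

print("training AUC:", roc_auc_score(y_tr, rf.predict_proba(X_tr)[:, 1]))
print("validation AUC:", roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1]))
```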

A Left Atrial Positioning System to Enable Follow-Up and Cohort Studies.

Mehringer NJ, McVeigh ER

PubMed · May 27, 2025
We present a new algorithm to automatically convert 3-dimensional left atrium surface meshes into a standard 2-dimensional space: a Left Atrial Positioning System (LAPS). Forty-five contrast-enhanced 4-dimensional computed tomography datasets were collected from 30 subjects. The left atrium volume was segmented using a trained neural network and converted into a surface mesh. LAPS coordinates were calculated on each mesh by computing lines of longitude and latitude on the surface of the mesh with reference to the center of the posterior wall and the mitral valve. LAPS accuracy was evaluated with one-way transfer of coordinates from a template mesh to a synthetic ground truth, which was created by registering the template mesh and pre-calculated LAPS coordinates to a target mesh. The Euclidean distance error was measured between each test node and its ground-truth location. The median point transfer error was 2.13 mm between follow-up scans of the same subject (n = 15) and 3.99 mm between different subjects (n = 30). The left atrium was divided into 24 anatomic regions and represented on a 2D square diagram. The Left Atrial Positioning System is fully automatic, accurate, robust to anatomic variation, and offers flexible visualization for mapping data in the left atrium. This provides a framework for comparing regional LA surface data values in both follow-up and cohort studies.
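
A minimal sketch (assumed data layout, not the published pipeline) of the accuracy metric the abstract reports: the median Euclidean point-transfer error between transferred LAPS nodes and their ground-truth positions, both given as (n_nodes, 3) coordinate arrays in millimetres.

```python
# Sketch only: median Euclidean point-transfer error between node coordinate arrays.
import numpy as np

def median_transfer_error(transferred_xyz: np.ndarray, ground_truth_xyz: np.ndarray) -> float:
    errors = np.linalg.norm(transferred_xyz - ground_truth_xyz, axis=1)
    return float(np.median(errors))

# Example with placeholder coordinates
rng = np.random.default_rng(1)
gt = rng.normal(scale=20.0, size=(5000, 3))
pred = gt + rng.normal(scale=2.0, size=gt.shape)
print("median point transfer error (mm):", median_transfer_error(pred, gt))
```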

Modeling Brain Aging with Explainable Triamese ViT: Towards Deeper Insights into Autism Disorder.

Zhang Z, Aggarwal V, Angelov P, Jiang R

PubMed · May 27, 2025
Machine learning, particularly through advanced imaging techniques such as three-dimensional Magnetic Resonance Imaging (MRI), has significantly improved medical diagnostics. This is especially critical for diagnosing complex conditions like Alzheimer's disease. Our study introduces Triamese-ViT, an innovative tri-structure of Vision Transformers (ViTs) with a built-in interpretability function: its structure-aware explainability allows the identification and visualization of key features or regions contributing to the prediction, and it integrates information from three perspectives to enhance brain age estimation. This method not only increases accuracy but also improves interoperability with existing techniques. When evaluated, Triamese-ViT demonstrated superior performance and produced insightful attention maps. We applied these attention maps to the analysis of natural aging and the diagnosis of Autism Spectrum Disorder (ASD). The results aligned with those from occlusion analysis, identifying the Cingulum, Rolandic Operculum, Thalamus, and Vermis as important regions in normal aging, and highlighting the Thalamus and Caudate Nucleus as key regions for ASD diagnosis.

ToPoMesh: accurate 3D surface reconstruction from CT volumetric data via topology modification.

Chen J, Zhu Q, Xie B, Li T

PubMed · May 27, 2025
Traditional computed tomography (CT) methods for 3D reconstruction face resolution limitations and require time-consuming post-processing workflows. While deep learning techniques improve the accuracy of segmentation, traditional voxel-based segmentation and surface reconstruction pipelines tend to introduce artifacts such as disconnected regions, topological inconsistencies, and stepped distortions. To overcome these challenges, we propose ToPoMesh, an end-to-end deep learning framework for direct reconstruction of high-fidelity surface meshes from CT volume data. Our approach introduces three core innovations: (1) accurate local and global shape modeling by preserving and enhancing local feature information through residual connectivity and self-attention mechanisms in graph convolutional networks; (2) an adaptive variant density (Avd) mesh de-pooling strategy that dynamically optimizes the vertex distribution; (3) a topology modification module that iteratively prunes erroneous surfaces and smooths boundaries via variable regularity terms to obtain finer mesh surfaces. Experiments on the LiTS, MSD pancreas tumor, MSD hippocampus, and MSD spleen datasets demonstrate that ToPoMesh outperforms state-of-the-art methods. Quantitative evaluations demonstrate a 57.4% reduction in Chamfer distance (liver) and a 0.47% improvement in F-score compared to end-to-end 3D reconstruction methods, while qualitative results confirm enhanced fidelity for thin structures and complex anatomical topologies versus segmentation frameworks. Importantly, our method eliminates the need for manual post-processing, enables 3D meshes to be reconstructed directly from images, and can provide precise guidance for surgical planning and diagnosis.
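
A minimal sketch (not the ToPoMesh implementation) of the symmetric Chamfer distance, the mesh-accuracy metric the abstract reports, computed between two surface point sets with a k-d tree nearest-neighbour query. Point counts and coordinates are placeholders.

```python
# Sketch only: symmetric Chamfer distance between two surface point sets.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    # Average nearest-neighbour distance in both directions, then sum
    d_ab = cKDTree(points_b).query(points_a)[0].mean()
    d_ba = cKDTree(points_a).query(points_b)[0].mean()
    return float(d_ab + d_ba)

rng = np.random.default_rng(2)
reconstructed = rng.normal(size=(2000, 3))                       # placeholder mesh vertices
reference = reconstructed + rng.normal(scale=0.05, size=(2000, 3))
print("Chamfer distance:", chamfer_distance(reconstructed, reference))
```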

Dual-energy CT combined with histogram parameters in the assessment of perineural invasion in colorectal cancer.

Wang Y, Tan H, Li S, Long C, Zhou B, Wang Z, Cao Y

PubMed · May 27, 2025
The purpose of this study is to evaluate the predictive value of dual-energy CT (DECT) combined with histogram parameters and a clinical prediction model for perineural invasion (PNI) in colorectal cancer (CRC). We retrospectively analyzed clinical and imaging data from 173 CRC patients who underwent preoperative DECT-enhanced scanning at two centers. Data from Qinghai University Affiliated Hospital (n = 120) were randomly divided into training and validation sets, while data from Lanzhou University Second Hospital (n = 53) served as the external validation set. Regions of interest (ROIs) were delineated to extract spectral and histogram parameters, and multivariate logistic regression identified optimal predictors. Six machine learning models were constructed: support vector machine (SVM), decision tree (DT), random forest (RF), logistic regression (LR), k-nearest neighbors (KNN), and extreme gradient boosting (XGBoost). Model performance and clinical utility were assessed using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Four independent predictive factors were identified through multivariate analysis: entropy, CT40keV, CEA, and skewness. Among the six classifiers, the RF model demonstrated the best performance in the training set (AUC = 0.918, 95% CI: 0.862-0.969). In the validation set, RF outperformed the other models (AUC = 0.885, 95% CI: 0.772-0.972). Notably, in the external validation set, the XGBoost model achieved the highest performance (AUC = 0.823, 95% CI: 0.672-0.945). A dual-energy CT model combined with histogram parameters and clinical predictors can be effectively used for preoperative, noninvasive assessment of perineural invasion in colorectal cancer.
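
A minimal sketch (assumed feature set, not the study code) of a multi-classifier comparison by validation AUC, in the spirit of the six-model comparison described above. XGBoost would require the separate xgboost package, so a scikit-learn gradient-boosting stand-in is used; the features and labels are placeholders.

```python
# Sketch only: compare several classifiers on a held-out split by AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 4))      # placeholders for entropy, CT40keV, CEA, skewness
y = rng.integers(0, 2, size=120)   # placeholder PNI labels
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "GB (XGBoost stand-in)": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```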

Development of an Open-Source Algorithm for Automated Segmentation in Clinician-Led Paranasal Sinus Radiologic Research.

Darbari Kaul R, Zhong W, Liu S, Azemi G, Liang K, Zou E, Sacks PL, Thiel C, Campbell RG, Kalish L, Sacks R, Di Ieva A, Harvey RJ

PubMed · May 27, 2025
Artificial intelligence (AI) research needs to be clinician-led; however, the required computational expertise typically lies outside clinicians' skill set. Collaborations exist but are often commercially driven. Free and open-source computational algorithms and software expertise are required for meaningful clinically driven AI medical research. Deep learning algorithms automate the segmentation of regions of interest for analysis and clinical translation. Numerous studies have automatically segmented paranasal sinus computed tomography (CT) scans; however, openly accessible algorithms capturing the sinonasal cavity remain scarce. The purpose of this study was to validate and provide an open-source segmentation algorithm for paranasal sinus CTs for the otolaryngology research community. A cross-sectional comparative study was conducted with a deep learning algorithm, UNet++, modified for automatic segmentation of paranasal sinus CTs, and "ground-truth" manual segmentations. A dataset of 100 paranasal sinus scans was manually segmented, with an 80/20 training/testing split. The algorithm is available at https://github.com/rheadkaul/SinusSegment. Primary outcomes included the Dice similarity coefficient (DSC), Intersection over Union (IoU), Hausdorff distance (HD), sensitivity, specificity, and visual similarity grading. Twenty scans representing 7300 slices were assessed. The mean DSC was 0.87 and IoU 0.80, with HD 33.61 mm. The mean sensitivity was 83.98% and specificity 99.81%. The median visual similarity grading score was 3 (good). There were no statistically significant differences in outcomes between normal and diseased paranasal sinus CTs. Automatic segmentation of paranasal sinus CTs yields good results when compared with manual segmentation. This study provides an open-source segmentation algorithm as a foundation and gateway for more complex AI-based analysis of large datasets.
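
A minimal sketch (not the released algorithm) of the two primary overlap metrics reported above, the Dice similarity coefficient and Intersection over Union, computed from a pair of binary segmentation masks. The masks below are synthetic placeholders.

```python
# Sketch only: DSC and IoU for binary segmentation masks.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = (2.0 * intersection) / (pred.sum() + truth.sum() + eps)
    iou = intersection / (union + eps)
    return float(dice), float(iou)

# Example with placeholder masks (truth with ~5% of voxels flipped)
rng = np.random.default_rng(4)
truth = rng.random((64, 64, 64)) > 0.7
pred = np.logical_xor(truth, rng.random(truth.shape) > 0.95)
print("DSC, IoU:", dice_and_iou(pred, truth))
```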

Automatic assessment of lower limb deformities using high-resolution X-ray images.

Rostamian R, Panahi MS, Karimpour M, Nokiani AA, Khaledi RJ, Kashani HG

PubMed · May 27, 2025
Planning an osteotomy or arthroplasty surgery on a lower limb requires prior classification/identification of its deformities. The detection of skeletal landmarks and the calculation of the angles required to identify the deformities are traditionally done manually, with measurement accuracy relying considerably on the experience of the individual doing the measurements. We propose a novel, image pyramid-based approach to skeletal landmark detection. The proposed approach uses a Convolutional Neural Network (CNN) that receives the raw X-ray image as input and produces the coordinates of the landmarks. The landmark estimates are modified iteratively via the error feedback method to come closer to the target. Our clinically produced full-leg X-ray dataset is made publicly available and used to train and test the network. Angular quantities are calculated based on the detected landmarks. Angles are then classified as lower than normal, normal, or higher than normal according to predefined ranges for a normal condition. The performance of our approach is evaluated at several levels: landmark coordinate accuracy, angle measurement accuracy, and classification accuracy. The average absolute error (difference between automatically and manually determined coordinates) for landmarks was 0.79 ± 0.57 mm on test data, and the average absolute error (difference between automatically and manually calculated angles) for angles was 0.45 ± 0.42°. Results from multiple case studies involving high-resolution images show that the proposed approach outperforms previous deep learning-based approaches in terms of accuracy and computational cost. It also enables the automatic detection of lower limb misalignments in full-leg X-ray images.
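
A minimal sketch of the final classification step described above: assigning a measured angle to "lower than normal", "normal", or "higher than normal" against a predefined normal interval. The thresholds shown are illustrative only, not the paper's reference ranges.

```python
# Sketch only: classify an alignment angle against a predefined normal range.
def classify_angle(angle_deg: float, normal_range: tuple[float, float]) -> str:
    low, high = normal_range
    if angle_deg < low:
        return "lower than normal"
    if angle_deg > high:
        return "higher than normal"
    return "normal"

# Hypothetical example with an assumed normal range of 177-183 degrees
print(classify_angle(angle_deg=176.0, normal_range=(177.0, 183.0)))
```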

Multicentre evaluation of deep learning CT autosegmentation of the head and neck region for radiotherapy.

Pang EPP, Tan HQ, Wang F, Niemelä J, Bolard G, Ramadan S, Kiljunen T, Capala M, Petit S, Seppälä J, Vuolukka K, Kiitam I, Zolotuhhin D, Gershkevitsh E, Lehtiö K, Nikkinen J, Keyriläinen J, Mokka M, Chua MLK

PubMed · May 27, 2025
This multi-institutional study evaluated head-and-neck CT auto-segmentation software across seven institutions globally. Eleven lymph node levels and seven organ-at-risk contours were evaluated in a two-phase study design. Time savings were measured in both phases, and the inter-observer variability across the seven institutions was quantified in phase two. Overall time savings were 42% in phase one and 49% in phase two. Lymph node levels IA, IB, III, IVA, and IVB showed no significant time savings, with some centers reporting longer editing times than manual delineation. All the edited ROIs showed reduced inter-observer variability compared to manual segmentation. Our study shows that auto-segmentation plays a crucial role in harmonizing contouring practices globally. However, the clinical benefits of auto-segmentation software vary significantly across ROIs and between clinics. To maximize its potential, institution-specific commissioning is required to optimize the clinical benefits.

Machine learning decision support model construction for craniotomy approach of pineal region tumors based on MRI images.

Chen Z, Chen Y, Su Y, Jiang N, Wanggou S, Li X

PubMed · May 27, 2025
Pineal region tumors (PRTs) are rare, deep-seated brain tumors, and complete surgical resection is crucial for effective treatment. The choice of surgical approach is often challenging due to the low incidence and deep location. This study aims to combine machine learning and deep learning algorithms with pre-operative MRI images to build a model for recommending surgical approaches for PRTs, striving to encode clinical experience for practical reference and education. This retrospective study enrolled 173 patients radiologically diagnosed with PRTs at our hospital. Three traditional surgical approaches were recorded as prediction labels. Clinical and VASARI-related radiological features were selected to construct machine learning prediction models, and MRI images in axial, sagittal, and coronal orientations were used to build and evaluate deep learning craniotomy approach prediction models. Five machine learning methods were applied to construct predictive classifiers from the clinical and VASARI features, and all achieved area under the receiver operating characteristic (ROC) curve (AUC) values above 0.7. In addition, three deep learning algorithms (ResNet-50, EfficientNetV2-m, and ViT) were applied to the MRI images from the different orientations. EfficientNetV2-m achieved the highest AUC of 0.89, demonstrating high predictive performance. Class activation mapping revealed that the tumor itself and its surrounding structures are crucial areas for model decision-making. In this study, we used machine learning and deep learning to construct surgical approach recommendation models. Deep learning achieved high predictive performance and can provide an efficient, personalized decision support tool for selecting the surgical approach for PRTs. Trial registration: not applicable.
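
A minimal sketch (assumed setup, not the authors' pipeline) of the deep learning branch: adapting a ResNet-50 backbone to predict one of three craniotomy approaches from a 2D MRI slice. The input tensors, labels, and hyperparameters are placeholders; pretrained weights could be loaded via torchvision if desired.

```python
# Sketch only: three-class approach prediction with a ResNet-50 backbone.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)                       # pretrained weights could be loaded instead
model.fc = nn.Linear(model.fc.in_features, 3)        # three surgical approaches

# One illustrative training step on a placeholder batch of RGB-encoded MRI slices
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 3, (4,))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```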

Deep learning network enhances imaging quality of low-b-value diffusion-weighted imaging and improves lesion detection in prostate cancer.

Liu Z, Gu WJ, Wan FN, Chen ZZ, Kong YY, Liu XH, Ye DW, Dai B

PubMed · May 27, 2025
Diffusion-weighted imaging (DWI) with a higher b-value improves the detection rate of prostate cancer lesions. However, obtaining high b-value DWI requires more advanced hardware and software configurations. Here we use a novel deep learning network, NAFNet, to generate deep-learning-reconstructed (DLR1500) images from 800 b-value DWI that mimic 1500 b-value images, and we evaluate its performance and the resulting lesion detection improvements against whole-slide images (WSI). We enrolled 303 prostate cancer patients with both 800 and 1500 b-value DWI from Fudan University Shanghai Cancer Centre between 2017 and 2020 and assigned them to the training and validation sets in a 2:1 ratio. The testing set included 36 prostate cancer patients from an independent institute who had only preoperative DWI at the 800 b-value. Two senior radiology doctors and two junior radiology doctors read and delineated cancer lesions on the DLR1500, original 800 b-value, and 1500 b-value DWI images. WSI were used as the ground truth to assess the lesion detection improvement of DLR1500 images in the testing set. After training and generation, among junior radiology doctors, the diagnostic AUC based on DLR1500 images was not inferior to that based on 1500 b-value images (0.832 (0.788-0.876) vs. 0.821 (0.747-0.899), P = 0.824). The same phenomenon was observed among senior radiology doctors. Furthermore, in the testing set, DLR1500 images significantly enhanced junior radiology doctors' diagnostic performance compared with 800 b-value images (0.848 (0.758-0.938) vs. 0.752 (0.661-0.843), P = 0.043). DLR1500 DWI was comparable in quality to the original 1500 b-value images for both junior and senior radiology doctors. NAFNet-based DWI enhancement can significantly improve the image quality of 800 b-value DWI and thereby improve the accuracy of prostate cancer lesion detection for junior radiology doctors.
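
A minimal sketch (placeholder network, not NAFNet itself) of the image-to-image training setup the abstract implies: one optimization step mapping an 800 b-value DWI slice to a paired 1500 b-value target under an L1 loss. All tensors, the tiny stand-in model, and hyperparameters are assumptions for illustration.

```python
# Sketch only: one L1-loss training step for an 800 -> 1500 b-value enhancement model.
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):      # small stand-in for the NAFNet backbone
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyEnhancer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()

b800 = torch.randn(2, 1, 128, 128)    # placeholder 800 b-value slices
b1500 = torch.randn(2, 1, 128, 128)   # placeholder paired 1500 b-value targets

optimizer.zero_grad()
pred = model(b800)
loss = criterion(pred, b1500)
loss.backward()
optimizer.step()
print("L1 loss:", float(loss))
```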