
A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.

Liu Q, Jiang J, Wu K, Zhang Y, Sun N, Luo J, Ba T, Lv A, Liu C, Yin Y, Yang Z, Xu H

PubMed · Jun 27, 2025
To develop a deep learning model based on Residual Networks (ResNet) for the automated and accurate identification of contrast phases in abdominal CT images. A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for the model development, and another independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), portal venous phase (PVP), and delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, and DP, then further classifying AP images into EAP or LAP. Model performance and strategy comparison were evaluated. In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288; p < 0.001), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP). The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy. Automated and accurate identification of contrast phases in abdominal CT images provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data. Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.
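The two-step workflow described above lends itself to a compact sketch. Below is a minimal, hedged example assuming a torchvision ResNet-18 backbone, single-channel 224 × 224 inputs, and the class groupings named in the abstract; the authors' actual architecture, preprocessing, and training procedure are not given here, so every implementation detail is an illustrative assumption.

```python
# Hedged sketch of the two-step strategy (Strategy B): a coarse phase classifier
# followed by an arterial sub-phase classifier. Model choice and input handling
# are assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_classifier(num_classes: int) -> nn.Module:
    """ResNet-18 with a single-channel stem and a replaced final layer."""
    model = resnet18(weights=None)
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Step 1: arterial (AP) vs. portal venous (PVP) vs. delayed (DP)
stage1 = make_classifier(num_classes=3)
# Step 2: early (EAP) vs. late (LAP) arterial phase, applied only to AP predictions
stage2 = make_classifier(num_classes=2)

STAGE1_LABELS = ["AP", "PVP", "DP"]
STAGE2_LABELS = ["EAP", "LAP"]

@torch.no_grad()
def predict_phase(ct_slice: torch.Tensor) -> str:
    """ct_slice: (1, 1, H, W) preprocessed abdominal CT slice."""
    stage1.eval(); stage2.eval()
    coarse = STAGE1_LABELS[stage1(ct_slice).argmax(dim=1).item()]
    if coarse != "AP":
        return coarse
    return STAGE2_LABELS[stage2(ct_slice).argmax(dim=1).item()]

# Example call with a random tensor standing in for a normalized 224x224 slice
print(predict_phase(torch.randn(1, 1, 224, 224)))
```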

Leadership in radiology in the era of technological advancements and artificial intelligence.

Wichtmann BD, Paech D, Pianykh OS, Huang SY, Seltzer SE, Brink J, Fennessy FM

PubMed · Jun 27, 2025
Radiology has evolved from the pioneering days of X-ray imaging to a field rich in advanced technologies on the cusp of a transformative future driven by artificial intelligence (AI). As imaging workloads grow in volume and complexity, and economic as well as environmental pressures intensify, visionary leadership is needed to navigate the unprecedented challenges and opportunities ahead. Leveraging its strengths in automation, accuracy and objectivity, AI will profoundly impact all aspects of radiology practice, from workflow management to imaging, diagnostics, reporting and data-driven analytics, freeing radiologists to focus on value-driven tasks that improve patient care. However, successful AI integration requires strong leadership and robust governance structures to oversee algorithm evaluation, deployment, and ongoing maintenance, steering the transition from static to continuous learning systems. The vision of a "diagnostic cockpit" that integrates multidimensional data for quantitative precision diagnoses depends on visionary leadership that fosters innovation and interdisciplinary collaboration. Through administrative automation, precision medicine, and predictive analytics, AI can enhance operational efficiency, reduce administrative burden, and optimize resource allocation, leading to substantial cost reductions. Leaders need to understand not only the technical aspects but also the complex human, administrative, and organizational challenges of AI's implementation. Establishing sound governance and organizational frameworks will be essential to ensure ethical compliance and appropriate oversight of AI algorithms. As radiology advances toward this AI-driven future, leaders must cultivate an environment where technology enhances rather than replaces human skills, upholding an unwavering commitment to human-centered care. Their vision will define radiology's pioneering role in AI-enabled healthcare transformation. KEY POINTS: Question: Artificial intelligence (AI) will transform radiology, improving workflow efficiency, reducing administrative burden, and optimizing resource allocation to meet imaging workloads' increasing complexity and volume. Findings: Strong leadership and governance ensure ethical deployment of AI, steering the transition from static to continuous learning systems while fostering interdisciplinary innovation and collaboration. Clinical relevance: Visionary leaders must harness AI to enhance, rather than replace, the role of professionals in radiology, advancing human-centered care while pioneering healthcare transformation.

Deep Learning-Based Prediction of PET Amyloid Status Using MRI.

Kim D, Ottesen JA, Kumar A, Ho BC, Bismuth E, Young CB, Mormino E, Zaharchuk G

PubMed · Jun 27, 2025
Identifying amyloid-beta (Aβ)-positive patients is essential for Alzheimer's disease (AD) clinical trials and disease-modifying treatments but currently requires PET or cerebrospinal fluid sampling. Previous MRI-based deep learning models, using only T1-weighted (T1w) images, have shown moderate performance. Multi-contrast MRI and PET-based quantitative Aβ deposition were retrospectively obtained from three public datasets: ADNI, OASIS3, and A4. Aβ positivity was defined using each dataset's recommended centiloid threshold. Two EfficientNet models were trained to predict amyloid positivity: one using only T1w images and another incorporating both T1w and T2-FLAIR. Model performance was assessed using an internal held-out test set, evaluating AUC, accuracy, sensitivity, and specificity. External validation was conducted using an independent cohort from Stanford Alzheimer's Disease Research Center. DeLong's and McNemar's tests were used to compare AUC and accuracy, respectively. A total of 4,056 exams (mean [SD] age: 71.6 [6.3] years; 55% female; 55% amyloid-positive) were used for network development, and 149 exams were used for external testing (mean [SD] age: 72.1 [9.6] years; 58% female; 56% amyloid-positive). The multi-contrast model outperformed the T1w-only model (AUC: 0.61; accuracy: 0.59) in the internal held-out test set, with an AUC of 0.67 (95% CI: 0.65-0.70; P < 0.001) and an accuracy of 0.63 (95% CI: 0.62-0.65; P < 0.001). Among cognitive subgroups, the highest performance (AUC: 0.71) was observed in mild cognitive impairment. The multi-contrast model also demonstrated consistent performance in the external test set (AUC: 0.65, 95% CI: 0.60-0.71, P = 0.014; accuracy: 0.62, 95% CI: 0.58-0.65, P < 0.001). The use of multi-contrast MRI, specifically incorporating T2-FLAIR in addition to T1w images, significantly improved the predictive accuracy of PET-determined amyloid status from MRI scans using a deep learning approach. Aβ = amyloid-beta; AD = Alzheimer's disease; AUC = area under the receiver operating characteristic curve; CN = cognitively normal; MCI = mild cognitive impairment; T1w = T1-weighted; T2-FLAIR = T2-weighted fluid-attenuated inversion recovery; FBP = 18F-florbetapir; FBB = 18F-florbetaben; SUVR = standard uptake value ratio.
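A minimal sketch of the multi-contrast idea, assuming co-registered T1w and T2-FLAIR slices stacked as two input channels of a torchvision EfficientNet-B0 with a binary amyloid-status head; the variant, input handling, and head are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch: an EfficientNet whose stem accepts T1w + T2-FLAIR as two channels.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

model = efficientnet_b0(weights=None)

# Replace the stem convolution so it takes 2 channels (T1w, T2-FLAIR) instead of RGB.
old_stem = model.features[0][0]
model.features[0][0] = nn.Conv2d(
    2, old_stem.out_channels,
    kernel_size=old_stem.kernel_size, stride=old_stem.stride,
    padding=old_stem.padding, bias=False,
)
# Binary head: amyloid-positive vs. amyloid-negative.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

@torch.no_grad()
def amyloid_probability(t1w: torch.Tensor, flair: torch.Tensor) -> float:
    """t1w, flair: (H, W) co-registered, intensity-normalized slices."""
    x = torch.stack([t1w, flair]).unsqueeze(0)   # (1, 2, H, W)
    model.eval()
    return torch.softmax(model(x), dim=1)[0, 1].item()

# Example call on random stand-in data
print(amyloid_probability(torch.randn(224, 224), torch.randn(224, 224)))
```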

Photon-counting micro-CT scanner for deep learning-enabled small animal perfusion imaging.

Allphin AJ, Nadkarni R, Clark DP, Badea CT

PubMed · Jun 27, 2025
In this work, we introduce a benchtop, turntable photon-counting (PC) micro-CT scanner and highlight its application for dynamic small animal perfusion imaging. Approach: Built on recently published hardware, the system now features a CdTe-based photon-counting detector (PCD). We validated its static spectral PC micro-CT imaging using conventional phantoms and assessed dynamic performance with a custom flow-configurable dual-compartment perfusion phantom. The phantom was scanned under varied flow conditions during injections of a low molecular weight iodinated contrast agent. In vivo mouse studies with identical injection settings demonstrated potential applications. A pretrained denoising CNN processed large multi-energy, temporal datasets (20 timepoints × 4 energies × 3 spatial dimensions), reconstructed via weighted filtered back projection. A separate CNN, trained on simulated data, performed gamma variate-based 2D perfusion mapping, evaluated qualitatively in phantom and in vivo tests. Main Results: Full five-dimensional reconstructions were denoised using a CNN in ~3% of the time of iterative reconstruction, reducing noise in water at the highest energy threshold from 1206 HU to 86 HU. Decomposed iodine maps, which improved contrast-to-noise ratio from 16.4 (in the lowest energy CT images) to 29.4 (in the iodine maps), were used for perfusion analysis. The perfusion CNN outperformed pixelwise gamma variate fitting by ~33%, with a test set error of 0.04 vs. 0.06 in blood flow index (BFI) maps, and quantified linear BFI changes in the phantom with a coefficient of determination of 0.98. Significance: This work underscores the PC micro-CT scanner's utility for high-throughput small animal perfusion imaging, leveraging spectral PC micro-CT and iodine decomposition. It provides a versatile platform for preclinical vascular research and advanced, time-resolved studies of disease models and therapeutic interventions.
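For readers unfamiliar with the baseline the perfusion CNN is benchmarked against, below is a minimal sketch of pixelwise gamma-variate fitting, using SciPy on a synthetic 20-point time-attenuation curve; the parameterization and the synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: fit a gamma-variate bolus model to one voxel's iodine
# time-attenuation curve (the per-voxel baseline mentioned in the abstract).
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate bolus model; zero before the arrival time t0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

# 20 timepoints, matching the temporal dimension mentioned in the abstract.
t = np.linspace(0.0, 19.0, 20)
true = gamma_variate(t, A=5.0, t0=2.0, alpha=2.0, beta=1.5)
noisy = true + np.random.default_rng(0).normal(0.0, 0.3, t.shape)

# Fit the four parameters to one voxel's curve.
popt, _ = curve_fit(
    gamma_variate, t, noisy,
    p0=[1.0, 1.0, 1.0, 1.0],
    bounds=([0.0, 0.0, 0.0, 0.1], [np.inf, 10.0, 10.0, 10.0]),
)
A, t0, alpha, beta = popt
print(f"fitted A={A:.2f}, t0={t0:.2f}, alpha={alpha:.2f}, beta={beta:.2f}")
```

The perfusion CNN in the abstract replaces this per-voxel fit with a learned 2D mapping, which is where the reported ~33% improvement in blood flow index error comes from.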

Prospective quality control in chest radiography based on the reconstructed 3D human body.

Tan Y, Ye Z, Ye J, Hou Y, Li S, Liang Z, Li H, Tang J, Xia C, Li Z

PubMed · Jun 27, 2025
Chest radiography requires effective quality control (QC) to reduce high retake rates. However, existing QC measures are all retrospective and implemented after exposure, often necessitating retakes when image quality fails to meet standards and thereby increasing radiation exposure to patients. To address this issue, we proposed a 3D human body (3D-HB) reconstruction algorithm to realize prospective QC. Our objective was to investigate the feasibility of using the reconstructed 3D-HB for prospective QC in chest radiography and to evaluate its impact on retake rates. Approach: This prospective study included patients indicated for posteroanterior (PA) and lateral (LA) chest radiography in May 2024. A 3D-HB reconstruction algorithm integrating the SMPL-X model and the HybrIK-X algorithm was proposed to convert patients' 2D images into 3D-HBs. QC metrics regarding patient positioning and collimation were assessed using chest radiographs (reference standard) and 3D-HBs, with results compared using ICCs, linear regression, and receiver operating characteristic curves. For retake rate evaluation, a real-time 3D-HB visualization interface was developed and chest radiography was conducted in two four-week phases: the first without prospective QC and the second with prospective QC. Retake rates between the two phases were compared using chi-square tests. Main results: 324 participants were included (mean age, 42 ± 19 [SD] years; 145 men; 324 PA and 294 LA examinations). The ICCs for the clavicle and midaxillary line angles were 0.80 and 0.78, respectively. Linear regression showed good agreement for clavicle angles (R²: 0.655) and midaxillary line angles (R²: 0.616). In PA chest radiography, the AUCs of 3D-HBs were 0.89, 0.87, 0.91 and 0.92 for assessing scapula rotation, lateral tilt, centered positioning and central X-ray alignment, respectively, with 97% accuracy in collimation assessment. In LA chest radiography, the AUCs of 3D-HBs were 0.87, 0.84, 0.87 and 0.88 for assessing arms raised, chest rotation, centered positioning and central X-ray alignment, respectively, with 94% accuracy in collimation assessment. In retake rate evaluation, 3995 PA and 3295 LA chest radiographs were recorded. The implementation of prospective QC based on the 3D-HB reduced retake rates from 8.6% to 3.5% (PA) and from 19.6% to 4.9% (LA) (p < .001). Significance: The reconstructed 3D-HB is a feasible tool for prospective QC in chest radiography, providing real-time feedback on patient positioning and collimation before exposure. Prospective QC based on the reconstructed 3D-HB has the potential to reshape the future of radiography QC by significantly reducing retake rates and improving clinical standardization.
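A minimal sketch of the between-phase comparison: a chi-square test on a 2×2 table of retaken vs. accepted exposures before and after prospective QC. The counts below are hypothetical placeholders chosen only to give retake rates near the reported PA figures (8.6% vs. 3.5%); the study's actual per-phase exposure counts are not given here.

```python
# Hedged sketch: chi-square comparison of retake rates between the two phases.
from scipy.stats import chi2_contingency

# Rows: phase 1 (without prospective QC), phase 2 (with prospective QC)
# Columns: retaken, accepted -- hypothetical counts, not the study's data.
table = [
    [172, 1828],   # ~8.6% retake rate in a hypothetical 2000-exposure phase
    [70, 1930],    # ~3.5% retake rate in a hypothetical 2000-exposure phase
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, dof = {dof}")
```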

Deep learning for hydrocephalus prognosis: Advances, challenges, and future directions: A review.

Huang J, Shen N, Tan Y, Tang Y, Ding Z

PubMed · Jun 27, 2025
Diagnosis of hydrocephalus requires a careful review of the patient's history and a thorough neurological assessment. Traditionally, diagnosis has depended largely on physicians' professional judgment and clinical experience, but with the advance of precision medicine and individualized treatment, such experience-based methods no longer keep pace with clinical requirements. In response, the medical community has turned to data-driven intelligent diagnostic solutions. Building prognosis prediction models for hydrocephalus has therefore become a new focus, with deep learning-based prediction systems offering technical advantages for clinical diagnosis and treatment decisions. Over the past several years, deep learning algorithms have shown clear advantages in medical image analysis. Studies have reported that convolutional neural networks can diagnose hydrocephalus on magnetic resonance imaging with accuracy of up to 90%, with sensitivity and specificity exceeding those of traditional methods. As deep learning has spread across medical technology, its successful use in modeling hydrocephalus prognosis has likewise drawn broad attention from scholars. This review explores the application of deep learning to hydrocephalus diagnosis and prognosis, focusing on image-based, biochemical, and structured data models. Highlighting recent advances, challenges, and future directions, it emphasizes deep learning's potential to enhance personalized treatment and improve outcomes.

Regional Cortical Thinning and Area Reduction Are Associated with Cognitive Impairment in Hemodialysis Patients.

Chen HJ, Qiu J, Qi Y, Guo Y, Zhang Z, Qin H, Wu F, Chen F

PubMed · Jun 27, 2025
Magnetic resonance imaging (MRI) has shown that patients with end-stage renal disease have decreased gray matter volume and density. However, cortical area and thickness in patients on hemodialysis are uncertain, and the relationship between patients' cognition and cortical alterations remains unclear. Thirty-six hemodialysis patients and 25 age- and sex-matched healthy controls were enrolled in this study and underwent brain MRI scans and neuropsychological assessments. The cortex was parcellated into 68 regions according to the Desikan-Killiany atlas. Using FreeSurfer software, we analyzed the between-group differences in cortical area and thickness for each region. Machine learning-based classification was also used to differentiate hemodialysis patients from healthy individuals. The patients exhibited decreased cortical thickness in frontal and temporal regions, including the left banks of the superior temporal sulcus (bankssts), left lingual gyrus, left pars triangularis, bilateral superior temporal gyrus, and right pars opercularis, and decreased cortical area in the left rostral middle frontal gyrus, left superior frontal gyrus, right fusiform gyrus, right pars orbitalis, and right superior frontal gyrus. Decreased cortical thickness was associated with poorer scores on the neuropsychological tests and with increased uric acid and urea levels. The cortical thickness pattern differentiated the patients from the controls with 96.7% accuracy (97.5% sensitivity, 95.0% specificity, 97.5% precision, AUC: 0.983) in the support vector machine analysis. Patients on hemodialysis exhibited decreased cortical area and thickness, which were associated with poorer cognition and uremic toxins.
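A minimal sketch of the classification step, assuming a linear support vector machine on regional cortical thickness values (e.g., the 68 Desikan-Killiany regions FreeSurfer exports) with cross-validated probabilities; the synthetic feature matrix, kernel choice, and cross-validation scheme are illustrative assumptions, not the study's protocol.

```python
# Hedged sketch: SVM classification of hemodialysis patients vs. controls
# from per-region cortical thickness features.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_controls, n_regions = 36, 25, 68
X = rng.normal(size=(n_patients + n_controls, n_regions))   # thickness per region (synthetic)
y = np.array([1] * n_patients + [0] * n_controls)           # 1 = hemodialysis patient

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

print(f"accuracy = {accuracy_score(y, proba > 0.5):.3f}, AUC = {roc_auc_score(y, proba):.3f}")
```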

Machine learning-based radiomic nomogram from unenhanced computed tomography and clinical data predicts bowel resection in incarcerated inguinal hernia.

Li DL, Zhu L, Liu SL, Wang ZB, Liu JN, Zhou XM, Hu JL, Liu RQ

PubMed · Jun 27, 2025
Early identification of bowel resection risk is crucial for patients with incarcerated inguinal hernia (IIH). However, prompt detection of this risk remains a significant challenge. Advances in radiomic feature extraction and machine learning algorithms have paved the way for innovative diagnostic approaches to assess IIH more effectively. We aimed to devise a radiomic-clinical model to evaluate bowel resection risk in IIH patients and thereby enhance clinical decision-making. This single-center retrospective study analyzed 214 IIH patients randomized into training (n = 161) and test (n = 53) sets (3:1). Radiologists segmented hernia sac-trapped bowel volumes of interest (VOIs) on computed tomography images. Radiomic features extracted from the VOIs generated Rad-scores, which were combined with clinical data to construct a nomogram. The nomogram's performance was evaluated against standalone clinical and radiomic models in both cohorts. A total of 1561 radiomic features were extracted from the VOIs. After dimensionality reduction, 13 radiomic features were used with eight machine learning algorithms to develop the radiomic model. The logistic regression algorithm was ultimately selected for its effectiveness, showing an area under the curve (AUC) of 0.828 (95% confidence interval [CI]: 0.753-0.902) in the training set and 0.791 (95% CI: 0.668-0.915) in the test set. The comprehensive nomogram, incorporating clinical indicators, showed strong predictive capability for assessing bowel resection risk in IIH patients, with AUCs of 0.864 (95% CI: 0.800-0.929) and 0.800 (95% CI: 0.669-0.931) for the training and test sets, respectively. Decision curve analysis revealed the integrated model's superior performance over standalone clinical and radiomic approaches. This radiomic-clinical nomogram proved effective in predicting bowel resection risk in IIH patients and substantially aided clinical decision-making.
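A minimal sketch of the modeling pipeline: standardize a wide radiomic feature table, reduce it to 13 features (the count retained in the study; the reduction method shown here, univariate F-scores, is an illustrative stand-in), and fit logistic regression, the algorithm the authors ultimately selected. The synthetic features and labels are placeholders, not study data.

```python
# Hedged sketch: feature reduction + logistic regression on a radiomic table,
# evaluated by AUC on a held-out split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 214, 1561          # cohort size and feature count from the abstract
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)     # 1 = bowel resection (synthetic labels)

# 3:1 split mirroring the training/test partition described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=13),           # keep 13 features, as in the published model
    LogisticRegression(max_iter=1000),
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.3f}  (near 0.5 here, since the features are random noise)")
```

In the study, the output of such a radiomic model (the Rad-score) is then combined with clinical indicators to build the nomogram.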

Machine learning to identify hypoxic-ischemic brain injury on early head CT after pediatric cardiac arrest.

Kirschen MP, Li J, Elmer J, Manteghinejad A, Arefan D, Graham K, Morgan RW, Nadkarni V, Diaz-Arrastia R, Berg R, Topjian A, Vossough A, Wu S

PubMed · Jun 27, 2025
To train deep learning models to detect hypoxic-ischemic brain injury (HIBI) on early CT scans after pediatric out-of-hospital cardiac arrest (OHCA) and determine if models could identify HIBI that was not visually appreciable to a radiologist. This was a retrospective study of children who had a CT scan within 24 hours of OHCA compared to age-matched controls. We designed models to detect HIBI by discriminating CT images from OHCA cases and controls, and to predict death and unfavorable outcome (PCPC 4-6 at hospital discharge) among cases. Model performance was measured by AUC. We trained a second model to distinguish OHCA cases with radiologist-identified HIBI from controls without OHCA and tested the model on OHCA cases without radiologist-identified HIBI. We compared outcomes between OHCA cases with and without model-categorized HIBI. We analyzed 117 OHCA cases (age 3.1 [0.7-12.2] years); 43% died and 58% had unfavorable outcome. Median time from arrest to CT was 2.1 [1.0, 7.2] hours. Deep learning models discriminated OHCA cases from controls with a mean AUC of 0.87 ± 0.05. Among OHCA cases, mean AUCs for predicting death and unfavorable outcome were 0.79 ± 0.06 and 0.69 ± 0.06, respectively. Mean AUC was 0.98 ± 0.01 for discriminating between 44 OHCA cases with radiologist-identified HIBI and controls. Among 73 OHCA cases without radiologist-identified HIBI, the model identified 36% as having presumed HIBI; 31% of these died, compared with 17% of cases in which neither the radiologist nor the model identified HIBI (p = 0.174). Deep learning models can identify HIBI on early CT images after pediatric OHCA and detect some presumed HIBI not visually identified by a radiologist.
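A minimal sketch of the final outcome comparison: an exact test on mortality among cases the model flagged as having presumed HIBI versus cases flagged by neither the radiologist nor the model. The 2×2 counts are hypothetical reconstructions roughly consistent with the quoted proportions (about 31% vs. 17% mortality among 73 cases), not the study's exact table.

```python
# Hedged sketch: exact test on mortality by model-categorized HIBI status.
from scipy.stats import fisher_exact

# Rows: model-flagged presumed HIBI, not flagged by radiologist or model
# Columns: died, survived -- hypothetical counts, not the study's data.
table = [
    [8, 18],   # ~26 flagged cases, ~31% mortality
    [8, 39],   # ~47 unflagged cases, ~17% mortality
]
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
```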

White Box Modeling of Self-Determined Sequence Exercise Program Among Sarcopenic Older Adults: Uncovering a Novel Strategy Overcoming Decline of Skeletal Muscle Area.

Wei M, He S, Meng D, Lv Z, Guo H, Yang G, Wang Z

PubMed · Jun 27, 2025
Resistance exercise, Taichi exercise, and a hybrid program combining the two have been shown to increase skeletal muscle mass in older individuals with sarcopenia. However, the exercise sequence has not been comprehensively investigated. Therefore, we designed a self-determined sequence exercise program, incorporating resistance exercise, Taichi, and the hybrid program, to overcome the decline of skeletal muscle area and reverse sarcopenia in older individuals. Ninety-one older patients with sarcopenia aged 60 to 75 years completed this 24-week, three-stage randomized controlled trial, comprising the self-determined sequence exercise program group (n = 31), the resistance training group (n = 30), and the control group (n = 30). We used quantitative computed tomography to measure the effects of the different intervention protocols on participants' skeletal muscle mass. Participants' demographic variables were analyzed using one-way analysis of variance and chi-square tests, and experimental data were examined using repeated-measures analysis of variance. Furthermore, we used a Markov model to describe the effectiveness of the exercise programs across the three-stage intervention and explainable artificial intelligence to predict whether intervention programs can reverse sarcopenia. Repeated-measures analysis of variance indicated statistically significant Group × Time interactions for L3 skeletal muscle density, L3 skeletal muscle area, muscle fat infiltration, handgrip strength, and relative skeletal muscle mass index. The stacking model exhibited the best accuracy (84.5%) and the best F1-score (68.8%) among the compared algorithms. In the self-determined sequence exercise program group, strength training contributed most to the reversal of sarcopenia. A self-determined sequence exercise program can improve skeletal muscle area in older people with sarcopenia. Based on our stacking model, we can accurately predict whether sarcopenia in older people can be reversed. The trial was registered at ClinicalTrials.gov (TRN: NCT05694117). Our findings indicate that such tailored exercise interventions can substantially benefit sarcopenic patients, and our stacking model provides an accurate predictive tool for assessing the reversibility of sarcopenia in older adults. This approach not only enhances individual health outcomes but also informs future development of targeted exercise programs to mitigate age-related muscle decline.
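A minimal sketch of the stacking idea named above: base learners whose out-of-fold predictions feed a logistic-regression meta-learner, used to predict whether sarcopenia is reversed. The base learners, features, and labels below are illustrative assumptions; the abstract does not list the component models.

```python
# Hedged sketch: stacked ensemble predicting sarcopenia reversal from tabular features.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 91 participants; features such as L3 skeletal muscle area/density, handgrip strength, etc.
X = rng.normal(size=(91, 6))
y = rng.integers(0, 2, size=91)   # 1 = sarcopenia reversed at follow-up (synthetic labels)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
scores = cross_val_score(stack, X, y, cv=5, scoring="f1")
print(f"cross-validated F1: {scores.mean():.3f} ± {scores.std():.3f}")
```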