
Scout-Dose-TCM: Direct and Prospective Scout-Based Estimation of Personalized Organ Doses from Tube Current Modulated CT Exams

Maria Jose Medrano, Sen Wang, Liyan Sun, Abdullah-Al-Zubaer Imran, Jennie Cao, Grant Stevens, Justin Ruey Tse, Adam S. Wang

arXiv preprint · Jun 30, 2025
This study proposes Scout-Dose-TCM for direct, prospective estimation of organ-level doses under tube current modulation (TCM) and compares its performance to two established methods. We analyzed contrast-enhanced chest-abdomen-pelvis CT scans from 130 adults (120 kVp, TCM). Reference doses for six organs (lungs, kidneys, liver, pancreas, bladder, spleen) were calculated using MC-GPU and TotalSegmentator. Based on these, we trained Scout-Dose-TCM, a deep learning model that predicts organ doses corresponding to discrete cosine transform (DCT) basis functions, enabling real-time estimates for any TCM profile. The model combines a feature learning module, which extracts contextual information from the lateral and frontal scouts and the scan range, with a dose learning module that outputs DCT-based dose estimates. A customized loss function incorporated the DCT formulation during training. For comparison, we implemented size-specific dose estimation per AAPM TG 204 (Global CTDIvol) and its organ-level TCM-adapted version (Organ CTDIvol). A 5-fold cross-validation assessed generalizability by comparing mean absolute percentage dose errors and r-squared correlations with benchmark doses. Mean absolute percentage errors were 13% (Global CTDIvol), 9% (Organ CTDIvol), and 7% (Scout-Dose-TCM), with the bladder showing the largest discrepancies (15%, 13%, and 9%, respectively). Statistical tests confirmed that Scout-Dose-TCM significantly reduced errors versus Global CTDIvol across most organs and improved over Organ CTDIvol for the liver, bladder, and pancreas. It also achieved higher r-squared values, indicating stronger agreement with the Monte Carlo benchmarks. Scout-Dose-TCM outperformed Global CTDIvol and was comparable to or better than Organ CTDIvol, without requiring organ segmentations at inference, demonstrating its promise as a tool for prospective organ-level dose estimation in CT.
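The computational trick here is that organ dose is, to good approximation, linear in the tube current profile m(z), so a model that predicts the dose contribution of each orthonormal DCT basis function can score any TCM profile with a single dot product. The sketch below illustrates that identity with a made-up organ sensitivity profile standing in for the trained network's per-basis outputs; all names and values are illustrative, not from the paper.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
n = 256                                      # z-positions along the scan range
z = np.linspace(0.0, 1.0, n)
tcm = 120.0 + 60.0 * np.sin(2 * np.pi * z)   # hypothetical mA profile m(z)

# Stand-in for the network output: per-basis organ doses d_k. Here we derive
# them from a made-up organ sensitivity w(z) so the identity can be verified.
w = np.exp(-((z - 0.4) ** 2) / 0.01)         # hypothetical dose sensitivity
per_basis_dose = dct(w, type=2, norm="ortho")   # d_k = <w, phi_k>

# For any TCM profile: dose = sum_k c_k * d_k, where c_k are its DCT coeffs.
coeffs = dct(tcm, type=2, norm="ortho")      # c_k = <m, phi_k>
dose_dct = float(coeffs @ per_basis_dose)
dose_direct = float(w @ tcm)                 # reference dose <w, m>
assert np.isclose(dose_dct, dose_direct)     # orthonormality makes them agree
print(f"estimated organ dose: {dose_dct:.2f} (arbitrary units)")
```

Because the DCT with norm="ortho" is an orthogonal transform, inner products are preserved exactly, which is what lets the per-basis predictions generalize to arbitrary profiles in real time.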

Artificial Intelligence-assisted Pixel-level Lung (APL) Scoring for Fast and Accurate Quantification in Ultra-short Echo-time MRI

Bowen Xin, Rohan Hickey, Tamara Blake, Jin Jin, Claire E Wainwright, Thomas Benkert, Alto Stemmer, Peter Sly, David Coman, Jason Dowling

arXiv preprint · Jun 30, 2025
Lung magnetic resonance imaging (MRI) with ultrashort echo time (UTE) represents a recent breakthrough in lung structure imaging, providing image resolution and quality comparable to computed tomography (CT). Due to the absence of ionising radiation, MRI is often preferred over CT in paediatric diseases such as cystic fibrosis (CF), one of the most common genetic disorders in Caucasians. To assess structural lung damage in CF imaging, CT scoring systems provide valuable quantitative insights for disease diagnosis and progression. However, few quantitative scoring systems are available for structural lung MRI (e.g., UTE-MRI). To provide fast and accurate quantification in lung MRI, we investigated the feasibility of novel Artificial intelligence-assisted Pixel-level Lung (APL) scoring for CF. APL scoring consists of five stages: 1) image loading, 2) AI lung segmentation, 3) lung-bounded slice sampling, 4) pixel-level annotation, and 5) quantification and reporting. The results show that our APL scoring took 8.2 minutes per subject, more than twice as fast as the previous grid-level scoring. Additionally, our pixel-level scoring was statistically more accurate (p=0.021), while correlating strongly with grid-level scoring (R=0.973, p=5.85e-9). This tool has great potential to streamline the workflow of UTE lung MRI in clinical settings and to be extended to other structural lung MRI sequences (e.g., BLADE MRI) and other lung diseases (e.g., bronchopulmonary dysplasia).
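As a rough illustration of stages 3-5 of the APL pipeline, the sketch below samples slices within the lung-bounded range and reports abnormal lung pixels as a percentage of all lung pixels. The mask shapes, slice count, and scoring convention are assumptions for illustration; the paper's exact sampling and scoring rules may differ.

```python
import numpy as np

def apl_score(lung_mask: np.ndarray, abnormal_mask: np.ndarray,
              n_slices: int = 10) -> float:
    """Stages 3-5: sample slices within the lung-bounded range, then report
    abnormal lung pixels as a percentage of lung pixels. Masks are
    (slices, H, W) booleans; stage 2 (AI lung segmentation) and stage 4
    (pixel-level annotation) are assumed to have produced them."""
    zs = np.flatnonzero(lung_mask.any(axis=(1, 2)))          # lung-bounded z-range
    sampled = np.linspace(zs[0], zs[-1], n_slices).round().astype(int)
    lung = lung_mask[sampled]
    abnormal = abnormal_mask[sampled] & lung                 # restrict to lung
    return 100.0 * abnormal.sum() / lung.sum()

# toy volume: 40 slices of 64x64 with lung in the middle 20 slices
lung = np.zeros((40, 64, 64), dtype=bool)
lung[10:30, 16:48, 16:48] = True
abn = np.zeros_like(lung)
abn[12:18, 20:30, 20:30] = True
print(f"APL score: {apl_score(lung, abn):.1f}% of lung pixels abnormal")
```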

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C.-C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arXiv preprint · Jun 30, 2025
Prostate and zonal segmentation is a crucial step in the clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are based on the deep learning (DL) paradigm. However, deep neural networks are perceived as "black-box" solutions by physicians, making them less practical for deployment in the clinical setting. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability during feature extraction. GUSL also introduces a mechanism for attention on the prostate boundary, an error-prone region, by employing regression to refine the predictions through residue correction. In addition, a two-step pipeline is used to mitigate class imbalance, an issue inherent in medical imaging problems. In experiments on two publicly available datasets and one private dataset, covering both prostate gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline: its model is several times smaller and less complex than competing solutions. On all datasets, GUSL achieved a Dice Similarity Coefficient (DSC) greater than $0.9$ for gland segmentation. Considering also its lightweight model size and transparency in feature extraction, it offers a competitive and practical package for medical imaging applications.
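The coarse-to-fine residue-correction idea can be sketched with plain linear regressors: a coarse model predicts a soft mask, and a second regressor learns the residual only inside an uncertainty band standing in for the boundary region. This toy example is not GUSL itself (which has its own feature extraction and multi-layer scheme); the band threshold and synthetic features are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))              # per-voxel linear features (toy)
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)          # ground-truth voxel labels

# Coarse stage: one linear regressor predicts a soft segmentation map.
coarse = LinearRegression().fit(X, y)
pred = coarse.predict(X)

# Fine stage: voxels with uncertain predictions stand in for the boundary
# band; a second regressor learns the residue y - pred only there.
band = np.abs(pred - 0.5) < 0.25
fine = LinearRegression().fit(X[band], y[band] - pred[band])
refined = pred.copy()
refined[band] += fine.predict(X[band])

before = np.mean((pred > 0.5) == y.astype(bool))
after = np.mean((refined > 0.5) == y.astype(bool))
print(f"voxel accuracy: coarse {before:.3f} -> refined {after:.3f}")
```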

Deep learning for automated, motion-resolved tumor segmentation in radiotherapy.

Sarkar S, Teo PT, Abazeed ME

PubMed · Jun 30, 2025
Accurate tumor delineation is foundational to radiotherapy. In the era of deep learning, the automation of this labor-intensive and variation-prone process is increasingly tractable. We developed a deep neural network model to segment gross tumor volumes (GTVs) in the lung and propagate them across 4D CT images to generate an internal target volume (ITV), capturing tumor motion during respiration. Using a multicenter cohort-based registry from 9 clinics across 2 health systems, we trained a 3D UNet model (iSeg) on pre-treatment CT images and corresponding GTV masks (n = 739, 5-fold cross-validation) and validated it on two independent cohorts (n = 161; n = 102). The internal cohort achieved a median Dice similarity coefficient (DSC) of 0.73 [IQR: 0.62-0.80], with comparable performance in the external cohorts (DSC = 0.70 [0.52-0.78] and 0.71 [0.59-0.79]), indicating multi-site generalizability. iSeg matched human inter-observer variability and was robust to image quality and tumor motion (DSC = 0.77 [0.68-0.86]). Machine-generated ITVs were significantly smaller than physician-delineated contours (p < 0.0001), indicating more precise delineation. Notably, a higher false-positive voxel rate (regions segmented by the machine but not the human) was associated with increased local failure (HR: 1.01 per voxel, p = 0.03), suggesting the clinical relevance of these discordant regions. These results mark a leap in automated target volume segmentation and suggest that machine delineation can enhance the accuracy, reproducibility, and efficiency of this core task in radiotherapy.
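A common way to form an ITV from motion-resolved segmentations is the voxelwise union of the per-phase GTV masks, and the false-positive voxels discussed above are simply machine-positive, human-negative voxels. The snippet below sketches both under that assumption, with a toy moving sphere standing in for real 4D CT masks.

```python
import numpy as np

def itv_from_phases(phase_masks: list) -> np.ndarray:
    """Union of the propagated GTV masks across respiratory phases."""
    itv = np.zeros_like(phase_masks[0], dtype=bool)
    for m in phase_masks:
        itv |= m.astype(bool)
    return itv

def false_positive_voxels(machine: np.ndarray, human: np.ndarray) -> int:
    """Voxels segmented by the model but not by the physician."""
    return int((machine.astype(bool) & ~human.astype(bool)).sum())

# toy 4D CT: a small spherical GTV drifting along z across 10 phases
grid = np.indices((32, 32, 32))
phases = []
for t in range(10):
    c = np.array([16, 16, 12 + t % 4])               # respiratory motion
    phases.append(((grid - c[:, None, None, None]) ** 2).sum(axis=0) < 25)
itv = itv_from_phases(phases)
print("ITV voxels:", itv.sum(), "| phase-0 GTV voxels:", phases[0].sum())
```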

Advancements in Herpes Zoster Diagnosis, Treatment, and Management: Systematic Review of Artificial Intelligence Applications.

Wu D, Liu N, Ma R, Wu P

PubMed · Jun 30, 2025
The application of artificial intelligence (AI) in medicine has garnered significant attention in recent years, offering new possibilities for improving patient care across various domains. For herpes zoster, a viral infection caused by the reactivation of the varicella-zoster virus, AI technologies have shown remarkable potential in enhancing disease diagnosis, treatment, and management. This study aims to investigate the current research status in the use of AI for herpes zoster, offering a comprehensive synthesis of existing advancements. A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Three databases (Web of Science Core Collection, PubMed, and IEEE) were searched on November 17, 2023, to identify relevant studies on AI applications in herpes zoster research. Inclusion criteria were as follows: (1) research articles, (2) published in English, (3) involving actual AI applications, and (4) focusing on herpes zoster. Exclusion criteria comprised nonresearch articles, non-English papers, and studies only mentioning AI without application. Two independent clinicians screened the studies, with a third senior clinician resolving disagreements. In total, 26 articles were included. Data were extracted on AI task types; algorithms; data sources; data types; and clinical applications in diagnosis, treatment, and management. Trend analysis revealed increasing annual interest in AI applications for herpes zoster. Hospital-derived data were the primary source (15/26, 57.7%), followed by public databases (6/26, 23.1%) and internet data (5/26, 19.2%). Medical images (9/26, 34.6%) and electronic medical records (7/26, 26.9%) were the most commonly used data types. Classification tasks (85.2%) dominated AI applications, with neural networks, particularly multilayer perceptrons and convolutional neural networks, being the most frequently used algorithms. AI applications were analyzed across three domains: (1) diagnosis, where mobile deep neural networks, convolutional neural network ensemble models, and mixed-scale attention-based models have improved diagnostic accuracy and efficiency; (2) treatment, where machine learning models, such as deep autoencoders combined with functional magnetic resonance imaging, electroencephalography, and clinical data, have enhanced treatment outcome predictions; and (3) management, where AI has facilitated case identification, epidemiological research, health care burden assessment, and risk factor exploration for postherpetic neuralgia and other complications. Overall, this study provides a comprehensive overview of AI applications in herpes zoster from clinical, data, and algorithmic perspectives, offering valuable insights for future research in this rapidly evolving field. AI has significantly advanced herpes zoster research by enhancing diagnostic accuracy, predicting treatment outcomes, and optimizing disease management. However, several limitations exist, including potential omissions from excluding databases such as Embase and Scopus, language bias due to the inclusion of only English publications, and the risk of subjective bias in study selection. Broader studies and continuous updates are needed to fully capture the scope of AI applications in herpes zoster in the future.

Radiation Dose Reduction and Image Quality Improvement of UHR CT of the Neck by Novel Deep-learning Image Reconstruction.

Messerle DA, Grauhan NF, Leukert L, Dapper AK, Paul RH, Kronfeld A, Al-Nawas B, Krüger M, Brockmann MA, Othman AE, Altmann S

PubMed · Jun 30, 2025
We evaluated a dedicated dose-reduced UHR-CT protocol for head and neck imaging, combined with a novel deep learning reconstruction algorithm, to assess its impact on image quality and radiation exposure. We retrospectively analyzed ninety-eight consecutive patients examined using a new body-weight-adapted protocol. Images were reconstructed using adaptive iterative dose reduction (AIDR) and the Advanced intelligent Clear-IQ Engine with an already established (DL-1) and a newly implemented (DL-2) reconstruction algorithm. An additional thirty patients were scanned without body-weight-adapted dose reduction (DL-1-SD). Three readers rated subjective image quality and the assessability of several anatomic regions. For objective image quality, signal-to-noise and contrast-to-noise ratios were calculated for the temporalis and masseteric muscles and the floor of the mouth. Radiation dose was evaluated by comparing computed tomography dose index (CTDIvol) values. The deep learning-based reconstruction algorithms significantly improved subjective image quality (diagnostic acceptability: DL-1 vs AIDR, OR 25.16 [6.30; 38.85], p < 0.001; DL-2 vs AIDR, OR 720.15 [410.14; >999.99], p < 0.001). Although higher doses (DL-1-SD) resulted in significantly enhanced image quality, DL-2 demonstrated significant superiority over all other techniques across all defined parameters (p < 0.001). Similar results were observed for objective image quality, e.g., image noise (DL-1 vs AIDR, OR 19.0 [11.56; 31.24], p < 0.001; DL-2 vs AIDR, OR >999.9 [825.81; >999.99], p < 0.001). Using weight-adapted kV reduction, very low radiation doses could be achieved (CTDIvol: 7.4 ± 4.2 mGy). AI-based reconstruction algorithms in ultra-high-resolution head and neck imaging provide excellent image quality while achieving very low radiation exposure.
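One conventional way to compute the reported objective metrics is SNR as ROI mean over ROI standard deviation, and CNR as the absolute difference of two tissue ROI means normalized by the standard deviation of a noise ROI; the study's exact definitions may differ. A minimal sketch with synthetic ROI samples:

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio of one ROI: mean signal over its std dev."""
    return float(roi.mean() / roi.std(ddof=1))

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissues, normalized by the
    standard deviation of a background/noise ROI."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise.std(ddof=1))

rng = np.random.default_rng(1)
temporalis = rng.normal(70.0, 8.0, 500)    # hypothetical HU samples per ROI
masseter = rng.normal(55.0, 8.0, 500)
background = rng.normal(0.0, 6.0, 500)
print(f"SNR temporalis: {snr(temporalis):.1f}")
print(f"CNR temporalis vs masseter: {cnr(temporalis, masseter, background):.1f}")
```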

Cognition-Eye-Brain Connection in Alzheimer's Disease Spectrum Revealed by Multimodal Imaging.

Shi Y, Shen T, Yan S, Liang J, Wei T, Huang Y, Gao R, Zheng N, Ci R, Zhang M, Tang X, Qin Y, Zhu W

PubMed · Jun 29, 2025
The connection between cognition, eye, and brain remains inconclusive in Alzheimer's disease (AD) spectrum disorders. To explore the relationship between cognitive function, retinal biometrics, and brain alterations across the AD spectrum, we conducted a prospective study of healthy controls (HC, n = 16) and participants with subjective cognitive decline (SCD, n = 35), mild cognitive impairment (MCI, n = 18), and AD (n = 7), using 3-T, 3D T1-weighted Brain Volume (BRAVO) imaging and resting-state functional MRI (fMRI). In all subgroups, cortical thickness was measured from BRAVO and segmented using the Desikan-Killiany-Tourville (DKT) atlas. The fractional amplitude of low-frequency fluctuations (FALFF) and regional homogeneity (ReHo) were measured in fMRI using voxel-based analysis. The eye was imaged by optical coherence tomography angiography (OCTA), with the deep learning model FARGO segmenting the foveal avascular zone (FAZ) and retinal vessels. FAZ area and perimeter, retinal blood vessel curvature (RBVC), and thicknesses of the retinal nerve fiber layer (RNFL) and ganglion cell layer-inner plexiform layer (GCL-IPL) were calculated. Cognition-eye-brain associations were compared across the HC group and each AD spectrum stage using multivariable linear regression analysis. Statistical significance was set at p < 0.05 with FWE correction for fMRI and at p < 1/62 (Bonferroni-corrected) for structural analyses. Reductions of FALFF in temporal regions, especially the left superior temporal gyrus (STG) in MCI patients, were significantly linked to decreased RNFL thickness and increased FAZ area. In AD patients, reduced ReHo values in occipital regions, especially the right middle occipital gyrus (MOG), were significantly associated with an enlarged FAZ area. The SCD group showed widespread cortical thickening significantly associated with all aforementioned retinal biometrics, with notable thickening in the right fusiform gyrus (FG) and right parahippocampal gyrus (PHG) correlating with reduced GCL-IPL thickness. Brain function and structure may be associated with cognition and retinal biometrics across the AD spectrum; specifically, cognition-eye-brain connections may already be present in SCD. Evidence Level: 2. Technical Efficacy: Stage 3.
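The multivariable linear regression linking a brain measure to retinal biometrics while adjusting for covariates can be sketched as an ordinary least-squares fit. The data below are synthetic and the covariate set (age only) is an assumption for illustration, not the study's actual model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 76                                     # cohort-sized synthetic sample
age = rng.normal(68, 7, n)
rnfl = rng.normal(95, 10, n)               # RNFL thickness, um (synthetic)
faz_area = rng.normal(0.35, 0.08, n)       # FAZ area, mm^2 (synthetic)
# synthetic FALFF with a positive RNFL effect and negative FAZ-area effect
falff = (0.9 + 0.004 * rnfl - 0.5 * faz_area - 0.002 * age
         + rng.normal(0, 0.05, n))

# multivariable OLS: FALFF ~ RNFL + FAZ area + age
X = sm.add_constant(np.column_stack([rnfl, faz_area, age]))
fit = sm.OLS(falff, X).fit()
print(fit.params)                          # const, RNFL, FAZ area, age
print(fit.pvalues)
```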

MedRegion-CT: Region-Focused Multimodal LLM for Comprehensive 3D CT Report Generation

Sunggu Kyung, Jinyoung Seo, Hyunseok Lim, Dongyeong Kim, Hyungbin Park, Jimin Sung, Jihyun Kim, Wooyoung Jo, Yoojin Nam, Namkug Kim

arXiv preprint · Jun 29, 2025
The recent release of RadGenome-Chest CT has significantly advanced CT-based report generation. However, existing methods primarily focus on global features, making it challenging to capture region-specific details, which may cause certain abnormalities to go unnoticed. To address this, we propose MedRegion-CT, a region-focused Multi-Modal Large Language Model (MLLM) framework, featuring three key innovations. First, we introduce Region Representative ($R^2$) Token Pooling, which utilizes a 2D-wise pretrained vision model to efficiently extract 3D CT features. This approach generates global tokens representing overall slice features and region tokens highlighting target areas, enabling the MLLM to process comprehensive information effectively. Second, a universal segmentation model generates pseudo-masks, which are then processed by a mask encoder to extract region-centric features. This allows the MLLM to focus on clinically relevant regions, using six predefined region masks. Third, we leverage segmentation results to extract patient-specific attributions, including organ size, diameter, and locations. These are converted into text prompts, enriching the MLLM's understanding of patient-specific contexts. To ensure rigorous evaluation, we conducted benchmark experiments on report generation using the RadGenome-Chest CT. MedRegion-CT achieved state-of-the-art performance, outperforming existing methods in natural language generation quality and clinical relevance while maintaining interpretability. The code for our framework is publicly available.
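Masked average pooling is a straightforward way to obtain region tokens from pseudo-masks: average the feature vectors of voxels inside each region, alongside one global token per slice. The sketch below uses random features and masks; the paper's Region Representative ($R^2$) Token Pooling and mask encoder are more elaborate, so treat this only as the underlying idea.

```python
import numpy as np

def region_tokens(feats: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """Masked average pooling: one token per region.
    feats: (D, H, W, C) per-slice features; masks: (R, D, H, W) booleans."""
    tokens = np.zeros((masks.shape[0], feats.shape[-1]), dtype=feats.dtype)
    for r, m in enumerate(masks):
        if m.any():                        # empty mask -> zero token
            tokens[r] = feats[m].mean(axis=0)
    return tokens

def global_tokens(feats: np.ndarray) -> np.ndarray:
    """One token per slice: average over the in-plane dimensions."""
    return feats.mean(axis=(1, 2))         # (D, C)

rng = np.random.default_rng(3)
feats = rng.normal(size=(24, 16, 16, 64)).astype(np.float32)
masks = rng.random((6, 24, 16, 16)) > 0.9  # six region pseudo-masks (toy)
print(global_tokens(feats).shape, region_tokens(feats, masks).shape)
# (24, 64) (6, 64)
```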

Revealing the Infiltration: Prognostic Value of Automated Segmentation of Non-Contrast-Enhancing Tumor in Glioblastoma

Gomez-Mahiques, M., Lopez-Mateu, C., Gil-Terron, F. J., Montosa-i-Mico, V., Svensson, S. F., Mendoza Mireles, E. E., Vik-Mo, E. O., Emblem, K., Balana, C., Puig, J., Garcia-Gomez, J. M., Fuster-Garcia, E.

medRxiv preprint · Jun 28, 2025
Background: Precise delineation of non-contrast-enhancing tumor (nCET) in glioblastoma (GB) is critical for maximal safe resection, yet routine imaging cannot reliably separate infiltrative tumor from vasogenic edema. The aim of this study was to develop and validate an automated method to identify nCET and assess its prognostic value.

Methods: Pre-operative T2-weighted and FLAIR MRI from 940 patients with newly diagnosed GB in four multicenter cohorts were analyzed. A deep-learning model segmented enhancing tumor, edema, and necrosis; a non-local spatially varying finite mixture model then isolated edema subregions containing nCET. The ratio of nCET to total edema volume, the Diffuse Infiltration Index (DII), was calculated. Associations between DII and overall survival (OS) were examined with Kaplan-Meier curves and multivariable Cox regression.

Results: The algorithm distinguished nCET from vasogenic edema in 97.5% of patients, showing a mean signal-intensity gap >5%. Higher DII stratified patients with shorter OS. In the NCT03439332 cohort, DII above the optimal threshold doubled the hazard of death (hazard ratio 2.09, 95% confidence interval 1.34-3.25; p = 0.0012) and reduced median survival by 122 days. Significant, though smaller, effects were confirmed in GLIOCAT & BraTS (hazard ratio 1.31; p = 0.022), OUS (hazard ratio 1.28; p = 0.007), and in pooled analysis (hazard ratio 1.28; p = 0.0003). DII remained an independent predictor after adjustment for age, extent of resection, and MGMT methylation.

Conclusions: We present a reproducible, server-hosted tool for automated nCET delineation and DII biomarker extraction that enables robust, independent prognostic stratification. It promises to guide supramaximal surgical planning and personalized neuro-oncology research and care.

Key Points:
- A robust automated MRI tool segments non-contrast-enhancing tumor (nCET) in glioblastoma.
- The Diffuse Infiltration Index is introduced and validated as a prognostic biomarker.
- nCET mapping enables RANO-defined supramaximal resection for personalized surgery.

Importance of the Study: This study underscores the clinical importance of accurately delineating non-contrast-enhancing tumor (nCET) regions in glioblastoma (GB) using standard MRI. Despite their lack of contrast enhancement, nCET areas often harbor infiltrative tumor cells critical for disease progression and recurrence. By integrating deep learning segmentation with a non-local finite mixture model, we developed a reproducible, automated methodology for nCET delineation and introduced the Diffuse Infiltration Index (DII), a novel imaging biomarker. Higher DII values were independently associated with reduced overall survival across large, heterogeneous cohorts. These findings highlight the prognostic relevance of imaging-defined infiltration patterns and support the use of nCET segmentation in clinical decision-making. Importantly, this methodology aligns with and operationalizes recent RANO criteria on supramaximal resection, offering a practical, image-based tool to improve surgical planning. In doing so, our work advances efforts toward more personalized neuro-oncological care, potentially improving outcomes while minimizing functional compromise.
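Once the segmentation is in hand, the DII itself is a simple volume ratio: nCET volume over total edema volume. The label codes below and the assumption that nCET voxels count toward the edema total are illustrative, not the authors' conventions.

```python
import numpy as np

# Hypothetical label codes for the combined segmentation output.
EDEMA, NCET = 2, 3                         # nCET treated as a subregion of edema

def diffuse_infiltration_index(labels: np.ndarray,
                               voxel_volume_mm3: float = 1.0) -> float:
    """DII = nCET volume / total edema volume (nCET counted within edema)."""
    ncet = (labels == NCET).sum() * voxel_volume_mm3
    edema = np.isin(labels, (EDEMA, NCET)).sum() * voxel_volume_mm3
    return float(ncet / edema) if edema else float("nan")

# toy label map: an edema block with an infiltrative nCET core
labels = np.zeros((64, 64, 64), dtype=np.uint8)
labels[20:44, 20:44, 20:44] = EDEMA
labels[28:36, 28:36, 28:36] = NCET
print(f"DII = {diffuse_infiltration_index(labels):.3f}")
# survival stratification then splits patients at a threshold on DII
```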

Developing ultrasound-based machine learning models for accurate differentiation between sclerosing adenosis and invasive ductal carcinoma.

Liu G, Yang N, Qu Y, Chen G, Wen G, Li G, Deng L, Mai Y

PubMed · Jun 28, 2025
This study aimed to develop a machine learning model using breast ultrasound images to improve the non-invasive differential diagnosis between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC). A total of 2046 ultrasound images from 772 SA and IDC patients were collected, regions of interest (ROIs) were delineated, and features were extracted. The dataset was split into training and test cohorts, and feature selection was performed using correlation coefficients and Recursive Feature Elimination. Ten classifiers with grid search and 5-fold cross-validation were applied during model training. The receiver operating characteristic (ROC) curve and Youden index were used for model evaluation, and SHapley Additive exPlanations (SHAP) was employed for model interpretation. Another 224 ROIs from 84 patients at other hospitals were used for external validation. For the ROI-level model, XGBoost with 18 features achieved an area under the curve (AUC) of 0.9758 (0.9654-0.9847) in the test cohort and 0.9906 (0.9805-0.9973) in the validation cohort. For the patient-level model, logistic regression with 9 features achieved an AUC of 0.9653 (0.9402-0.9859) in the test cohort and 0.9846 (0.9615-0.9978) in the validation cohort. The feature "Original shape Major Axis Length" was identified as the most important, with higher values indicating a greater likelihood of IDC; feature contributions for specific ROIs were visualized as well. We developed explainable, ultrasound-based machine learning models with high performance for differentiating SA from IDC, offering a potential non-invasive tool for improved differential diagnosis. Question: Accurately distinguishing between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC) in a non-invasive manner has been a diagnostic challenge. Findings: Explainable, ultrasound-based machine learning models with high performance were developed for differentiating SA from IDC and validated well in an external validation cohort. Critical relevance: These models provide non-invasive tools to reduce misdiagnoses of SA and improve early detection of IDC.
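The patient-level modeling recipe described here (feature selection by RFE, grid-searched logistic regression with 9 features, 5-fold cross-validation) maps naturally onto a scikit-learn pipeline. The sketch below substitutes synthetic features for the radiomics matrix and omits the correlation pre-filter and SHAP step; treat it as an outline of the workflow, not the authors' code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the radiomics feature matrix (patients x features).
X, y = make_classification(n_samples=772, n_features=60, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("rfe", RFE(LogisticRegression(max_iter=1000), n_features_to_select=9)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# grid search with 5-fold CV, mirroring the training protocol in the abstract
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]},
                    cv=5, scoring="roc_auc")
grid.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1])
print(f"best C = {grid.best_params_['clf__C']}, test AUC = {auc:.3f}")
```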