
Scout-Dose-TCM: Direct and Prospective Scout-Based Estimation of Personalized Organ Doses from Tube Current Modulated CT Exams

Maria Jose Medrano, Sen Wang, Liyan Sun, Abdullah-Al-Zubaer Imran, Jennie Cao, Grant Stevens, Justin Ruey Tse, Adam S. Wang

arXiv preprint · Jun 30, 2025
This study proposes Scout-Dose-TCM for direct, prospective estimation of organ-level doses under tube current modulation (TCM) and compares its performance to two established methods. We analyzed contrast-enhanced chest-abdomen-pelvis CT scans from 130 adults (120 kVp, TCM). Reference doses for six organs (lungs, kidneys, liver, pancreas, bladder, spleen) were calculated using MC-GPU and TotalSegmentator. Based on these, we trained Scout-Dose-TCM, a deep learning model that predicts organ doses corresponding to discrete cosine transform (DCT) basis functions, enabling real-time estimates for any TCM profile. The model combines a feature learning module, which extracts contextual information from the lateral and frontal scouts and the scan range, with a dose learning module that outputs DCT-based dose estimates. A customized loss function incorporated the DCT formulation during training. For comparison, we implemented size-specific dose estimation per AAPM TG 204 (Global CTDIvol) and its organ-level TCM-adapted version (Organ CTDIvol). A 5-fold cross-validation assessed generalizability by comparing mean absolute percentage dose errors and r-squared correlations with benchmark doses. Average absolute percentage errors were 13% (Global CTDIvol), 9% (Organ CTDIvol), and 7% (Scout-Dose-TCM), with the bladder showing the largest discrepancies (15%, 13%, and 9%, respectively). Statistical tests confirmed that Scout-Dose-TCM significantly reduced errors vs. Global CTDIvol across most organs and improved over Organ CTDIvol for the liver, bladder, and pancreas. It also achieved higher r-squared values, indicating stronger agreement with the Monte Carlo benchmarks. Scout-Dose-TCM outperformed Global CTDIvol and was comparable to or better than Organ CTDIvol, without requiring organ segmentations at inference, demonstrating its promise as a tool for prospective organ-level dose estimation in CT.
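
Because dose is linear in tube current, predicting organ doses for a fixed set of DCT basis functions lets the model handle any TCM profile at inference: project the profile onto the basis and weight the per-basis doses by the resulting coefficients. A minimal sketch of that reconstruction step, with a hypothetical basis_doses table standing in for the network's per-basis predictions (not the authors' code):

```python
import numpy as np
from scipy.fft import dct

# Hypothetical per-basis organ doses predicted by the network from the scouts:
# basis_doses[k, o] = dose to organ o when the tube current follows DCT basis k.
n_basis = 16
organs = ["lungs", "kidneys", "liver", "pancreas", "bladder", "spleen"]
rng = np.random.default_rng(0)
basis_doses = rng.uniform(0.1, 2.0, size=(n_basis, len(organs)))  # stand-in values

def organ_doses_for_tcm(tcm_profile: np.ndarray) -> np.ndarray:
    """Estimate organ doses for an arbitrary TCM (mA) profile.

    Dose scales linearly with tube current, so projecting the profile onto
    the first n_basis DCT components and weighting the per-basis organ doses
    by those coefficients yields the total organ-dose estimate.
    """
    coeffs = dct(tcm_profile, norm="ortho")[:n_basis]  # DCT coefficients of the profile
    return coeffs @ basis_doses                        # (organs,) dose estimates

# Example: a sinusoidal TCM profile over 200 table positions.
z = np.linspace(0, 1, 200)
profile = 150 + 50 * np.sin(2 * np.pi * 3 * z)         # mA along the scan range
print(dict(zip(organs, organ_doses_for_tcm(profile).round(3))))
```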

Artificial Intelligence-assisted Pixel-level Lung (APL) Scoring for Fast and Accurate Quantification in Ultra-short Echo-time MRI

Bowen Xin, Rohan Hickey, Tamara Blake, Jin Jin, Claire E Wainwright, Thomas Benkert, Alto Stemmer, Peter Sly, David Coman, Jason Dowling

arXiv preprint · Jun 30, 2025
Lung magnetic resonance imaging (MRI) with ultrashort echo-time (UTE) represents a recent breakthrough in lung structure imaging, providing image resolution and quality comparable to computed tomography (CT). Due to the absence of ionising radiation, MRI is often preferred over CT in paediatric diseases such as cystic fibrosis (CF), one of the most common genetic disorders in Caucasians. To assess structural lung damage in CF imaging, CT scoring systems provide valuable quantitative insights for disease diagnosis and progression. However, few quantitative scoring systems are available for structural lung MRI (e.g., UTE-MRI). To provide fast and accurate quantification in lung MRI, we investigated the feasibility of a novel Artificial intelligence-assisted Pixel-level Lung (APL) scoring for CF. APL scoring consists of five stages: 1) image loading, 2) AI lung segmentation, 3) lung-bounded slice sampling, 4) pixel-level annotation, and 5) quantification and reporting. The results show that our APL scoring took 8.2 minutes per subject, more than twice as fast as the previous grid-level scoring. Additionally, our pixel-level scoring was statistically more accurate (p=0.021), while strongly correlating with grid-level scoring (R=0.973, p=5.85e-9). This tool has great potential to streamline the workflow of UTE lung MRI in clinical settings, and could be extended to other structural lung MRI sequences (e.g., BLADE MRI) and to other lung diseases (e.g., bronchopulmonary dysplasia).
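
Stages 3-5 amount to sampling lung-containing slices and scoring the fraction of segmented lung annotated as abnormal. A minimal sketch with hypothetical boolean mask inputs (the published scoring definition may weight abnormality categories differently):

```python
import numpy as np

def sample_lung_bounded_slices(lung_mask: np.ndarray, n: int = 10) -> np.ndarray:
    """Stage 3: evenly sample n slice indices restricted to slices containing lung.

    lung_mask: boolean array of shape (slices, H, W) from the AI segmentation stage.
    """
    has_lung = np.flatnonzero(lung_mask.any(axis=(1, 2)))
    idx = np.linspace(0, len(has_lung) - 1, n).round().astype(int)
    return has_lung[idx]

def apl_score(lung_mask: np.ndarray, abnormal_mask: np.ndarray) -> float:
    """Stage 5: pixel-level score as the percentage of lung pixels annotated abnormal.

    abnormal_mask: boolean array of the same shape from the pixel-level
    annotation stage (hypothetical input for this sketch).
    """
    abnormal_in_lung = abnormal_mask & lung_mask
    return 100.0 * abnormal_in_lung.sum() / max(lung_mask.sum(), 1)
```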

MedSAM-CA: A CNN-Augmented ViT with Attention-Enhanced Multi-Scale Fusion for Medical Image Segmentation

Peiting Tian, Xi Chen, Haixia Bi, Fan Li

arXiv preprint · Jun 30, 2025
Medical image segmentation plays a crucial role in clinical diagnosis and treatment planning, where accurate boundary delineation is essential for precise lesion localization, organ identification, and quantitative assessment. In recent years, deep learning-based methods have significantly advanced segmentation accuracy. However, two major challenges remain. First, the performance of these methods heavily relies on large-scale annotated datasets, which are often difficult to obtain in medical scenarios due to privacy concerns and high annotation costs. Second, clinically challenging scenarios, such as low contrast in certain imaging modalities and blurry lesion boundaries caused by malignancy, still pose obstacles to precise segmentation. To address these challenges, we propose MedSAM-CA, an architecture-level fine-tuning approach that mitigates reliance on extensive manual annotations by adapting the pretrained foundation model, Medical Segment Anything (MedSAM). MedSAM-CA introduces two key components: the Convolutional Attention-Enhanced Boundary Refinement Network (CBR-Net) and the Attention-Enhanced Feature Fusion Block (Atte-FFB). CBR-Net operates in parallel with the MedSAM encoder to recover boundary information potentially overlooked by long-range attention mechanisms, leveraging hierarchical convolutional processing. Atte-FFB, embedded in the MedSAM decoder, fuses multi-level fine-grained features from skip connections in CBR-Net with global representations upsampled within the decoder to enhance boundary delineation accuracy. Experiments on publicly available datasets covering dermoscopy, CT, and MRI imaging modalities validate the effectiveness of MedSAM-CA. On the dermoscopy dataset, MedSAM-CA achieves 94.43% Dice with only 2% of the full training data, reaching 97.25% of full-data training performance and demonstrating strong effectiveness in low-resource clinical settings.
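
The Atte-FFB fuses fine-grained skip features from CBR-Net with global decoder features upsampled to the same resolution. The block below is an illustrative stand-in in PyTorch, assuming SE-style channel attention; the paper's exact design may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionBlock(nn.Module):
    """Illustrative stand-in for an Atte-FFB-style block: fuse a fine-grained
    CNN skip feature with a coarser decoder feature via channel attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.gate = nn.Sequential(          # SE-style channel attention (assumption)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, skip: torch.Tensor, decoder: torch.Tensor) -> torch.Tensor:
        decoder = F.interpolate(decoder, size=skip.shape[-2:], mode="bilinear",
                                align_corners=False)   # upsample global features
        fused = self.proj(torch.cat([skip, decoder], dim=1))
        return fused * self.gate(fused)                # re-weight channels

x_skip = torch.randn(1, 64, 128, 128)  # fine-grained CBR-Net-style feature
x_dec = torch.randn(1, 64, 32, 32)     # coarser decoder feature
print(AttentionFusionBlock(64)(x_skip, x_dec).shape)  # torch.Size([1, 64, 128, 128])
```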

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C. -C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arXiv preprint · Jun 30, 2025
Prostate and zonal segmentation is a crucial step in the clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are typically based on the deep learning (DL) paradigm. However, deep neural networks are perceived as "black-box" solutions by physicians, making them less practical for deployment in the clinical setting. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability. GUSL also introduces a mechanism for attention on the prostate boundaries, an error-prone region, by employing regression to refine the predictions through residue correction. In addition, a two-step pipeline is used to mitigate the class imbalance inherent in medical imaging problems. In experiments on two publicly available datasets and one private dataset, covering both prostate gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline: its model is several times smaller and less complex than competing solutions. On all datasets, GUSL achieved a Dice Similarity Coefficient (DSC) greater than $0.9$ for gland segmentation. Considering also its lightweight model size and transparency in feature extraction, it offers a competitive and practical package for medical imaging applications.
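
The residue-correction mechanism, a second regressor fit only on the error-prone boundary region to predict the residual of a coarse prediction, can be sketched in a few lines. A toy example on synthetic per-voxel features, with sklearn's Ridge standing in for GUSL's linear regressors (not the authors' code):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32))                  # synthetic per-voxel features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # proxy "inside prostate" label

# Coarse stage: regress a per-voxel probability from the features.
coarse = Ridge(alpha=1.0).fit(X, y)
p = coarse.predict(X)

# Refinement stage: fit a second regressor on the uncertain boundary region
# to predict the residual (true label minus coarse prediction).
boundary = np.abs(p - 0.5) < 0.2
refine = Ridge(alpha=1.0).fit(X[boundary], y[boundary] - p[boundary])

p_refined = p.copy()
p_refined[boundary] += refine.predict(X[boundary])  # residue correction
print("coarse acc:", ((p > 0.5) == y).mean(),
      "refined acc:", ((p_refined > 0.5) == y).mean())
```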

Deep learning for automated, motion-resolved tumor segmentation in radiotherapy.

Sarkar S, Teo PT, Abazeed ME

PubMed · Jun 30, 2025
Accurate tumor delineation is foundational to radiotherapy. In the era of deep learning, the automation of this labor-intensive and variation-prone process is increasingly tractable. We developed a deep neural network model to segment gross tumor volumes (GTVs) in the lung and propagate them across 4D CT images to generate an internal target volume (ITV), capturing tumor motion during respiration. Using a multicenter cohort-based registry from 9 clinics across 2 health systems, we trained a 3D UNet model (iSeg) on pre-treatment CT images and corresponding GTV masks (n = 739, 5-fold cross-validation) and validated it on two independent cohorts (n = 161; n = 102). The internal cohort achieved a median Dice similarity coefficient (DSC) of 0.73 [IQR: 0.62-0.80], with comparable performance in the external cohorts (DSC = 0.70 [0.52-0.78] and 0.71 [0.59-0.79]), demonstrating multi-site generalizability. iSeg matched human inter-observer variability and was robust to image quality and tumor motion (DSC = 0.77 [0.68-0.86]). Machine-generated ITVs were significantly smaller than physician-delineated contours (p < 0.0001), indicating more precise delineation. Notably, higher false-positive voxel rates (regions segmented by the machine but not the human) were associated with increased local failure (HR: 1.01 per voxel, p = 0.03), suggesting the clinical relevance of these discordant regions. These results mark a leap in automated target volume segmentation and suggest that machine delineation can enhance the accuracy, reproducibility, and efficiency of this core task in radiotherapy.
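
The ITV construction itself is simple to express: propagate the GTV across respiratory phases and take the union, then compare against the physician contour. A sketch under the standard 4D CT convention (the authors' propagation scheme may differ in detail):

```python
import numpy as np

def itv_from_phases(gtv_masks: np.ndarray) -> np.ndarray:
    """Union of per-phase GTV masks (phases, Z, Y, X) gives a motion-encompassing
    ITV. This is the common 4D CT construction, assumed here as an illustration."""
    return gtv_masks.any(axis=0)

def false_positive_voxel_count(pred: np.ndarray, ref: np.ndarray) -> int:
    """Voxels segmented by the model but not the physician, i.e. the discordant
    quantity the study associates with local failure (HR reported per voxel)."""
    return int(np.logical_and(pred, np.logical_not(ref)).sum())

# Toy usage: 10 respiratory phases of a 64^3 volume with a static tumor;
# a tumor shifting across phases would enlarge the ITV relative to any GTV.
phases = np.zeros((10, 64, 64, 64), dtype=bool)
phases[:, 30:34, 30:34, 28:36] = True
print(itv_from_phases(phases).sum(), "ITV voxels")
```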

Advancements in Herpes Zoster Diagnosis, Treatment, and Management: Systematic Review of Artificial Intelligence Applications.

Wu D, Liu N, Ma R, Wu P

PubMed · Jun 30, 2025
The application of artificial intelligence (AI) in medicine has garnered significant attention in recent years, offering new possibilities for improving patient care across various domains. For herpes zoster, a viral infection caused by the reactivation of the varicella-zoster virus, AI technologies have shown remarkable potential in enhancing disease diagnosis, treatment, and management. This study aims to investigate the current state of research on AI for herpes zoster, offering a comprehensive synthesis of existing advancements. A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Three databases (Web of Science Core Collection, PubMed, and IEEE) were searched on November 17, 2023, to identify relevant studies on AI applications in herpes zoster research. Inclusion criteria were as follows: (1) research articles, (2) published in English, (3) involving actual AI applications, and (4) focusing on herpes zoster. Exclusion criteria comprised nonresearch articles, non-English papers, and studies only mentioning AI without application. Two independent clinicians screened the studies, with a third senior clinician resolving disagreements. In total, 26 articles were included. Data were extracted on AI task types; algorithms; data sources; data types; and clinical applications in diagnosis, treatment, and management. Trend analysis revealed increasing annual interest in AI applications for herpes zoster. Hospital-derived data were the primary source (15/26, 57.7%), followed by public databases (6/26, 23.1%) and internet data (5/26, 19.2%). Medical images (9/26, 34.6%) and electronic medical records (7/26, 26.9%) were the most commonly used data types. Classification tasks (85.2%) dominated AI applications, with neural networks, particularly multilayer perceptrons and convolutional neural networks, being the most frequently used algorithms. AI applications were analyzed across three domains: (1) diagnosis, where mobile deep neural networks, convolutional neural network ensemble models, and mixed-scale attention-based models have improved diagnostic accuracy and efficiency; (2) treatment, where machine learning models, such as deep autoencoders combined with functional magnetic resonance imaging, electroencephalography, and clinical data, have enhanced treatment outcome predictions; and (3) management, where AI has facilitated case identification, epidemiological research, health care burden assessment, and risk factor exploration for postherpetic neuralgia and other complications. Overall, this study provides a comprehensive overview of AI applications in herpes zoster from clinical, data, and algorithmic perspectives, offering valuable insights for future research in this rapidly evolving field. AI has significantly advanced herpes zoster research by enhancing diagnostic accuracy, predicting treatment outcomes, and optimizing disease management. However, several limitations exist, including potential omissions from excluding databases such as Embase and Scopus, language bias due to the inclusion of only English publications, and the risk of subjective bias in study selection. Broader studies and continuous updates will be needed to fully capture the scope of AI applications in herpes zoster.

Radiation Dose Reduction and Image Quality Improvement of UHR CT of the Neck by Novel Deep-learning Image Reconstruction.

Messerle DA, Grauhan NF, Leukert L, Dapper AK, Paul RH, Kronfeld A, Al-Nawas B, Krüger M, Brockmann MA, Othman AE, Altmann S

PubMed · Jun 30, 2025
We evaluated a dedicated dose-reduced UHR-CT protocol for head and neck imaging, combined with a novel deep learning reconstruction algorithm, to assess its impact on image quality and radiation exposure. We retrospectively analyzed 98 consecutive patients examined using a new body-weight-adapted protocol. Images were reconstructed using adaptive iterative dose reduction (AIDR) and the Advanced intelligent Clear-IQ Engine, the latter with an already established (DL-1) and a newly implemented (DL-2) reconstruction algorithm. An additional 30 patients were scanned without body-weight-adapted dose reduction (DL-1-SD). Three readers rated subjective image quality, including overall quality and the assessability of several anatomic regions. For objective image quality, signal-to-noise and contrast-to-noise ratios were calculated for the temporalis muscle, the masseter muscle, and the floor of the mouth. Radiation dose was evaluated by comparing computed tomography dose index (CTDIvol) values. The deep learning-based reconstruction algorithms significantly improved subjective image quality (diagnostic acceptability: DL‑1 vs AIDR, OR of 25.16 [6.30;38.85], p < 0.001; DL‑2 vs AIDR, 720.15 [410.14;> 999.99], p < 0.001). Although higher doses (DL-1-SD) resulted in significantly enhanced image quality, DL‑2 demonstrated significant superiority over all other techniques across all defined parameters (p < 0.001). Similar results were obtained for objective image quality, e.g., image noise (DL‑1 vs AIDR, OR of 19.0 [11.56;31.24], p < 0.001; DL‑2 vs AIDR, > 999.9 [825.81;> 999.99], p < 0.001). Using weight-adapted kV reduction, very low radiation doses were achieved (CTDIvol: 7.4 ± 4.2 mGy). AI-based reconstruction algorithms in ultra-high-resolution head and neck imaging provide excellent image quality while achieving very low radiation exposure.
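
For the objective readouts, SNR and CNR over a region of interest are simple ratios of means to standard deviations. A minimal sketch under one common definition; the study's exact reference region and noise estimate are not specified here:

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio of an ROI: mean attenuation over its standard deviation."""
    return float(roi.mean() / roi.std())

def cnr(roi: np.ndarray, reference: np.ndarray) -> float:
    """Contrast-to-noise ratio between a tissue ROI (e.g., masseter muscle)
    and a reference ROI, normalized by the reference noise (one common convention)."""
    return float(abs(roi.mean() - reference.mean()) / reference.std())

# Toy usage on synthetic HU values for two ROIs.
rng = np.random.default_rng(0)
muscle = rng.normal(60, 8, size=500)     # hypothetical muscle HU samples
background = rng.normal(20, 10, size=500)
print(f"SNR={snr(muscle):.2f}  CNR={cnr(muscle, background):.2f}")
```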

Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies.

Umapathy L, Johnson PM, Dutt T, Tong A, Chopra S, Sodickson DK, Chandarana H

PubMed · Jun 30, 2025
Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists were confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa; n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations, and compared them with the performance of radiologists and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. On the two test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUC=0.73 [0.66, 0.79], 0.88 [0.85, 0.91]) compared with radiologists (equivocal, AUC=0.79 [0.75, 0.83]) and the clinical model (AUCs=0.69 [0.62, 0.75], 0.78 [0.74, 0.82]). In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies compared with the clinical model (26%, P<0.001), and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, on the all-PI-RADS cohort, the RL decision model avoided 27% of additional benign biopsies (138/231) compared with radiologists (33%, P<0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). Combining the clinical and RL decision models avoided further benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yields (0.52, 0.76) across the two test cohorts. Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa in the all-PI-RADS cohort. Such AI models can be integrated into clinical practice to supplement radiologists' reads and improve biopsy yield for equivocal decisions.
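
The decision-level metrics reported above (avoided benign biopsies, sensitivity, biopsy yield) follow directly from a thresholded model score. A small helper illustrating those definitions; the threshold choice and cohort handling are assumptions for this sketch, not the authors' protocol:

```python
import numpy as np

def biopsy_metrics(scores: np.ndarray, cspca: np.ndarray, threshold: float) -> dict:
    """Evaluate the decision rule "biopsy if model score >= threshold".

    scores: model probabilities of csPCa; cspca: 1 if biopsy showed Gleason >= 7.
    Returns the fraction of benign biopsies avoided, sensitivity to csPCa,
    and biopsy yield (csPCa cases per biopsy performed).
    """
    biopsy = scores >= threshold
    benign, cancer = cspca == 0, cspca == 1
    return {
        "avoided_benign": (benign & ~biopsy).sum() / max(benign.sum(), 1),
        "sensitivity": (biopsy & cancer).sum() / max(cancer.sum(), 1),
        "yield": (biopsy & cancer).sum() / max(biopsy.sum(), 1),
    }

# Toy usage on synthetic scores for a hypothetical cohort.
rng = np.random.default_rng(0)
cspca = rng.integers(0, 2, size=500)
scores = np.clip(0.5 * cspca + rng.normal(0.3, 0.2, size=500), 0, 1)
print(biopsy_metrics(scores, cspca, threshold=0.5))
```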

Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles.

Shen X, Huang H, Nichyporuk B, Arbel T

PubMed · Jun 30, 2025
Once deployed, medical image analysis methods are often faced with unexpected image corruptions and noise perturbations. These unknown covariate shifts present significant challenges to deep learning-based methods trained on "clean" images, often resulting in unreliable predictions and poorly calibrated confidence that hinder clinical applicability. While recent methods have been developed to address specific issues such as confidence calibration or adversarial robustness, no single framework effectively tackles all these challenges simultaneously. To bridge this gap, we propose LaDiNE, a novel ensemble learning method combining the robustness of Vision Transformers with diffusion-based generative models for improved reliability in medical image classification. Specifically, transformer encoder blocks are used as hierarchical feature extractors that learn invariant features from images for each ensemble member, resulting in features that are robust to input perturbations. In addition, diffusion models are used as flexible density estimators to estimate member densities conditioned on the invariant features, leading to improved modeling of complex data distributions while retaining properly calibrated confidence. Extensive experiments on tuberculosis chest X-ray and melanoma skin cancer datasets demonstrate that LaDiNE achieves superior performance compared with a wide range of state-of-the-art methods by simultaneously improving prediction accuracy and confidence calibration under unseen noise, adversarial perturbations, and resolution degradation.
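
At inference, an ensemble of this kind reduces to combining per-member class probabilities and reporting a confidence score alongside the prediction. The sketch below shows a uniformly weighted simplification with predictive entropy as the uncertainty score; LaDiNE itself additionally weights members via diffusion-based density estimates, which is omitted here:

```python
import numpy as np

def ensemble_predict(member_probs: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Combine per-member class probabilities (members, N, classes) into an
    ensemble prediction plus a predictive-entropy uncertainty score.

    Simplification: members are weighted uniformly; LaDiNE instead uses
    diffusion-based density estimates over invariant features.
    """
    p = member_probs.mean(axis=0)                    # average over ensemble members
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)  # higher entropy = less confident
    return p.argmax(axis=-1), entropy

# Toy usage: 4 members, 3 images, 2 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(2), size=(4, 3))
labels, uncertainty = ensemble_predict(probs)
print(labels, uncertainty.round(3))
```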

BIScreener: enhancing breast cancer ultrasound diagnosis through integrated deep learning with interpretability.

Chen Y, Wang P, Ouyang J, Tan M, Nie L, Zhang Y, Wang T

PubMed · Jun 30, 2025
Breast cancer is the leading cause of death among women worldwide, and early detection through the standardized BI-RADS framework helps physicians assess the risk of malignancy and guide appropriate diagnostic and treatment decisions. In this study, an interpretable deep learning model (BIScreener) was proposed for predicting BI-RADS classifications from breast ultrasound images, aiding in the accurate assessment of breast cancer risk and improving diagnostic efficiency. BIScreener uses stacked generalization of three pretrained convolutional neural networks to analyze ultrasound images obtained from two specific instruments (Mindray R5 and HITACHI) used at local hospitals. BIScreener achieved an overall classification accuracy of 90.0% and a ROC-AUC of 0.982 on the external test set for five BI-RADS categories, and an overall accuracy of 83.8% with a ROC-AUC of 0.967 for seven BI-RADS categories. In addition, the model improved the diagnostic accuracy of two radiologists by more than 8.1% for five BI-RADS categories and by more than 4.8% for seven BI-RADS categories, and reduced explanation time by more than 19.0%, demonstrating its potential to accelerate and improve the breast cancer diagnosis process.
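
Stacked generalization here means the three CNNs' class-probability outputs are concatenated and fed to a meta-learner that issues the final BI-RADS prediction. A toy sketch on synthetic probabilities, with a logistic-regression meta-learner as an assumed stand-in (not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, classes = 400, 5                                   # five BI-RADS categories
y = rng.integers(0, classes, size=n)                  # synthetic ground truth
# Stand-ins for the three pretrained CNNs' softmax outputs per image.
base_probs = [rng.dirichlet(np.ones(classes), size=n) for _ in range(3)]

# Stacking: concatenate base-model probabilities as meta-features,
# then fit a meta-learner that maps them to the final class.
meta_X = np.hstack(base_probs)                        # shape (n, 3 * classes)
meta = LogisticRegression(max_iter=1000).fit(meta_X, y)
print("meta-learner accuracy on toy data:", meta.score(meta_X, y))
```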