Page 155 of 6486473 results

Leon Suarez-Rodriguez, Roman Jacome, Romario Gualdron-Hurtado, Ana Mantilla-Dulcey, Henry Arguello

arXiv preprint · Sep 18, 2025
Sparse-view computed tomography (CT) reconstruction is fundamentally challenging due to undersampling, leading to an ill-posed inverse problem. Traditional iterative methods incorporate handcrafted or learned priors to regularize the solution but struggle to capture the complex structures present in medical images. In contrast, diffusion models (DMs) have recently emerged as powerful generative priors that can accurately model complex image distributions. In this work, we introduce Diffusion Consensus Equilibrium (DICE), a framework that integrates a two-agent consensus equilibrium into the sampling process of a DM. DICE alternates between: (i) a data-consistency agent, implemented through a proximal operator enforcing measurement consistency, and (ii) a prior agent, realized by a DM performing a clean image estimation at each sampling step. By balancing these two complementary agents iteratively, DICE effectively combines strong generative prior capabilities with measurement consistency. Experimental results show that DICE significantly outperforms state-of-the-art baselines in reconstructing high-quality CT images under uniform and non-uniform sparse-view settings of 15, 30, and 60 views (out of a total of 180), demonstrating both its effectiveness and robustness.
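The two-agent alternation DICE describes can be sketched in a toy form. The following is a minimal illustration under stated assumptions, not the paper's implementation: the diffusion-model prior is replaced by a simple neighbour-smoothing step, and the data-consistency agent is the closed-form proximal operator for a subsampling measurement on a 1D signal.

```python
# Toy sketch of a two-agent consensus-equilibrium iteration (assumption:
# 1D subsampled signal, smoothing stand-in for the diffusion prior).

def data_consistency(x, y, mask, rho):
    # Agent (i): prox of the measurement-fit term. At measured entries,
    # pull the estimate toward the measurement y; leave the rest unchanged.
    return [(xi + rho * yi) / (1 + rho) if m else xi
            for xi, yi, m in zip(x, y, mask)]

def toy_prior(x):
    # Agent (ii): neighbour averaging as a placeholder for the DM's
    # clean-image estimate at each sampling step.
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def dice_iteration(y, mask, steps=50, rho=1.0):
    # Initialize with the zero-filled measurements, then alternate agents.
    x = [yi if m else 0.0 for yi, m in zip(y, mask)]
    for _ in range(steps):
        x = data_consistency(x, y, mask, rho)
        x = toy_prior(x)
    return x
```

On a smooth undersampled signal, the alternation fills in the missing entries far better than zero-filling, which is the balancing act the framework formalizes.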

Tan Q, Kubicka F, Nickel D, Weiland E, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

PubMed paper · Sep 18, 2025
Deep learning-accelerated single-shot turbo-spin-echo techniques (DL-HASTE) enable single-breath-hold T2-weighted abdominal imaging. However, studies evaluating the image quality of DL-HASTE with and without fat saturation (FS) remain limited. This study aimed to prospectively evaluate the technical feasibility and image quality of abdominal DL-HASTE with and without FS at 3 Tesla. DL-HASTE of the upper abdomen was acquired with variable sequence parameters regarding FS, flip angle (FA) and field of view (FOV) in 10 healthy volunteers and 50 patients. DL-HASTE sequences were compared to clinical sequences (HASTE, HASTE-FS and T2-TSE-FS BLADE). Two radiologists independently assessed the sequences regarding scores of overall image quality, delineation of abdominal organs, artifacts and fat saturation using a Likert scale (range: 1-5). Breath-hold time of DL-HASTE and DL-HASTE-FS was 21 ± 2 s with fixed FA and 20 ± 2 s with variable FA (p < 0.001), with no overall image quality difference (p > 0.05). DL-HASTE required a 10% larger FOV than DL-HASTE-FS to avoid aliasing artifacts from subcutaneous fat. Both DL-HASTE and DL-HASTE-FS had significantly higher overall image quality scores than standard HASTE acquisitions (DL-HASTE vs. HASTE: 4.8 ± 0.40 vs. 4.1 ± 0.50; DL-HASTE-FS vs. HASTE-FS: 4.6 ± 0.50 vs. 3.6 ± 0.60; p < 0.001). Compared to the T2-TSE-FS BLADE, DL-HASTE-FS provided higher overall image quality (4.6 ± 0.50 vs. 4.3 ± 0.63, p = 0.011). DL-HASTE achieved significantly higher image quality (p = 0.006) and higher organ sharpness scores than DL-HASTE-FS (p < 0.001). Deep learning-accelerated HASTE with and without fat saturation were both feasible at 3 Tesla and showed improved image quality compared to conventional sequences. Trial registration: not applicable.

Rohlfsen C, Shannon K, Parsons AS

PubMed paper · Sep 18, 2025
Navigating uncertainty is fundamental to sound clinical decision-making. With the advent of artificial intelligence, mathematical approximations of disease states-expressed as entropy-offer a novel approach to quantify and communicate uncertainty. Although entropy is well established in fields like physics and computer science, its technical complexity has delayed its routine adoption in clinical reasoning. In this narrative review, we adhere to Shannon's definition of entropy from information processing theory and examine how it has been used in clinical decision-making over the last 15 years. Grounding our analysis in decision theory-which frames decisions in terms of states, acts, consequences, and preferences-we evaluated 20 studies that employed entropy. Our findings reveal that entropy is predominantly used to quantify uncertainty rather than directly guiding clinical actions. High-stakes fields such as oncology and radiology have led the way, using entropy to improve diagnostic accuracy and support risk assessment, while applications in neurology and hematology remain largely exploratory. Notably, no study has yet translated entropy into an operational, evidence-based decision-support framework. These results point to entropy's value as a quantitative tool in clinical reasoning, while also highlighting the need for prospective validation and the development of integrated clinical tools.
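Shannon's definition of entropy, which the review adopts, quantifies the uncertainty of a discrete distribution as H = -Σ p log₂ p. A minimal computation, with an illustrative clinical reading (probabilities over candidate diagnoses, not data from any reviewed study):

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)) over outcomes with nonzero probability,
    # measured in bits. Higher values mean more diagnostic uncertainty.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A coin-flip between two diagnoses carries 1 bit of uncertainty;
# four equally likely diagnoses carry 2 bits; certainty carries 0.
```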

Zhang S, Gharleghi R, Singh S, Shen C, Adikari D, Zhang M, Moses D, Vickers D, Sowmya A, Beier S

PubMed paper · Sep 18, 2025
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, with incidence rates continuing to rise. Automated coronary artery medical image segmentation can ultimately improve CAD management by enabling more advanced and efficient diagnostic assessments. Deep learning-based segmentation methods have shown significant promise and offered higher accuracy while reducing reliance on manual inputs. However, achieving consistent performance across diverse datasets remains a persistent challenge due to substantial variability in imaging protocols, equipment and patient-specific factors, such as signal intensities, anatomical differences and disease severity. This study investigates the influence of image quality and resolution, governed by vessel size and common disease characteristics that introduce artefacts, such as calcification, on coronary artery segmentation accuracy in computed tomography coronary angiography (CTCA). Two datasets were utilised for model training and validation, including the publicly available ASOCA dataset (40 cases) and a GeoCAD dataset (70 cases) with more cases of coronary disease. Coronary artery segmentations were generated using three deep learning frameworks/architectures: default U-Net, Swin-UNETR, and EfficientNet-LinkNet. The impact of various factors on model generalisation was evaluated, focusing on imaging characteristics (contrast-to-noise ratio, artery contrast enhancement, and edge sharpness) and the extent of calcification at both the coronary tree and individual vessel branch levels. The calcification ranges considered were 0 (no calcification), 1-99 (low), 100-399 (moderate), and > 400 (high). The findings demonstrated that image features, including artery contrast enhancement (r = 0.408, p < 0.001) and edge sharpness (r = 0.239, p = 0.046), were significantly correlated with improved segmentation performance in test cases. 
Regardless of severity, calcification had a negative impact on segmentation accuracy, with low calcification degrading segmentation accuracy the most (p < 0.05). This may be because smaller calcified lesions produce less distinct contrast against the bright lumen, making it harder for the model to accurately identify and segment these lesions. Additionally, in males, a larger diameter of the first obtuse marginal branch (OM1) (p = 0.036) was associated with improved segmentation performance for OM1. Similarly, in females, larger diameters of left main (LM) coronary artery (p = 0.008) and right coronary artery (RCA) (p < 0.001) were associated with better segmentation performance for LM and RCA, respectively. These findings emphasise the importance of accounting for imaging characteristics and anatomical variability when developing generalisable deep learning models for coronary artery segmentation. Unlike previous studies, which broadly acknowledge the role of image quality in segmentation, our work quantitatively demonstrates the extent to which contrast enhancement, edge sharpness, calcification and vessel diameter impact segmentation performance, offering a data-driven foundation for model adaptation strategies. Potential improvements include optimising pre-segmentation imaging (e.g. ensuring adequate edge sharpness in low-contrast regions) and developing algorithms to address vessel-specific challenges, such as improving segmentation of low-level calcifications and accurately identifying LM, RCA and OM1 of smaller diameters.
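The reported associations (e.g. r = 0.408 between artery contrast enhancement and segmentation performance) are Pearson correlation coefficients over paired per-case values. A minimal computation, shown on illustrative numbers rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance of the two series divided by the
    # product of their standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. pair each case's contrast-enhancement score with its Dice score
# and correlate; r near +1 means sharper cases segment better.
```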

Carretero-Gómez L, Wiesinger F, Fung M, Nunes B, Pedoia V, Majumdar S, Desai AD, Gatti A, Chaudhari A, Sánchez-Lacalle E, Malpica N, Padrón M

PubMed paper · Sep 18, 2025
Clinical adoption of T2 mapping is limited by poor reproducibility, lengthy examination times, and cumbersome image analysis. This study aimed to develop an accelerated deep learning (DL)-enhanced cartilage T2 mapping sequence (DL CartiGram), demonstrate its repeatability and reproducibility, and evaluate its accuracy compared to conventional T2 mapping using a semi-automatic pipeline. DL CartiGram was implemented using a modified 2D Multi-Echo Spin-Echo sequence at 3 T, incorporating parallel imaging and DL-based image reconstruction. Phantom tests were performed at two sites to obtain test-retest T2 maps, using single-echo spin-echo (SE) measurements as reference values. At one site, DL CartiGram and conventional T2 mapping were performed on 43 patients. T2 values were extracted from 52 patellar and femoral compartments using DL knee segmentation and the DOSMA framework. Repeatability and reproducibility were assessed using coefficients of variation (CV), Bland-Altman analysis, and concordance correlation coefficients (CCC). T2 differences were evaluated with Wilcoxon signed-rank tests, paired t-tests, and accuracy CV. Phantom tests showed intra-site repeatability with CVs ≤ 2.52% and T2 precision ≤ 1 ms. Inter-site reproducibility showed a CV of 2.74% and a CCC of 99% (CI 92-100%). Bland-Altman analysis showed a bias of 1.56 ms between sites (p = 0.03), likely due to temperature effects. In vivo, DL CartiGram reduced scan time by 40%, yielding accurate cartilage T2 measurements (CV = 0.97%) with no significant differences compared to conventional T2 mapping (p = 0.1). DL CartiGram significantly accelerates T2 mapping while maintaining excellent repeatability and reproducibility. Combined with the semi-automatic post-processing pipeline, it emerges as a promising tool for quantitative T2 cartilage biomarker assessment in clinical settings.
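The two agreement statistics used above have simple closed forms: the coefficient of variation is the standard deviation as a percentage of the mean, and Lin's concordance correlation coefficient penalizes both scatter and systematic offset between two measurement series. A minimal sketch on illustrative values:

```python
import statistics

def coefficient_of_variation(values):
    # CV (%) = sample standard deviation / mean * 100
    return statistics.stdev(values) / statistics.mean(values) * 100

def lin_ccc(xs, ys):
    # Lin's concordance correlation coefficient:
    # 2*cov / (var_x + var_y + (mean_x - mean_y)^2),
    # using population (1/n) variances as in Lin's original formulation.
    n = len(xs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, the CCC drops below 1 when one site reads systematically higher than the other, which is why it suits test-retest and inter-site comparisons.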

Shiyam Sundar LK, Gutschmayer S, Pires M, Ferrara D, Nguyen T, Abdelhafez YG, Spencer B, Cherry SR, Badawi RD, Kersting D, Fendler WP, Kim MS, Lassen ML, Hasbak P, Schmidt F, Linder P, Mu X, Jiang Z, Abenavoli EM, Sciagrà R, Frille A, Wirtz H, Hesse S, Sabri O, Bailey D, Chan D, Callahan J, Hicks RJ, Beyer T

PubMed paper · Sep 18, 2025
Combined PET/CT imaging provides critical insights into both anatomic and molecular processes, yet traditional single-tracer approaches limit multidimensional disease phenotyping; to address this, we developed the PET Unified Multitracer Alignment (PUMA) framework-an open-source, postprocessing tool that multiplexes serial PET/CT scans for comprehensive voxelwise tissue characterization. <b>Methods:</b> PUMA utilizes artificial intelligence-based CT segmentation from multiorgan objective segmentation to generate multilabel maps of 24 body regions, guiding a 2-step registration: affine alignment followed by symmetric diffeomorphic registration. Tracer images are then normalized and assigned to red-green-blue channels for simultaneous visualization of up to 3 tracers. The framework was evaluated on longitudinal PET/CT scans from 114 subjects across multiple centers and vendors. Rigid, affine, and deformable registration methods were compared for optimal coregistration. Performance was assessed using the Dice similarity coefficient for organ alignment and absolute percentage differences in organ intensity and tumor SUV<sub>mean</sub>. <b>Results:</b> Deformable registration consistently achieved superior alignment, with Dice similarity coefficient values exceeding 0.90 in 60% of organs while maintaining organ intensity differences below 3%; similarly, SUV<sub>mean</sub> differences for tumors were minimal at 1.6% ± 0.9%, confirming that PUMA preserves quantitative PET data while enabling robust spatial multiplexing. <b>Conclusion:</b> PUMA provides a vendor-independent solution for postacquisition multiplexing of serial PET/CT images, integrating complementary tracer data voxelwise into a composite image without modifying clinical protocols. This enhances multidimensional disease phenotyping and supports better diagnostic and therapeutic decisions using serial multitracer PET/CT imaging.
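The final multiplexing step, normalizing each coregistered tracer and assigning it to one red-green-blue channel, can be sketched as follows. This is an illustrative toy on flat 1D "volumes"; PUMA itself operates on full 3D images after deformable registration:

```python
def normalize(img):
    # Min-max normalize voxel intensities to [0, 1] so tracers with very
    # different uptake scales can share one display range.
    lo, hi = min(img), max(img)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in img]

def multiplex_rgb(tracer_a, tracer_b, tracer_c):
    # Assume the three tracer images are already coregistered voxelwise.
    # Each voxel becomes an (R, G, B) triple: one channel per tracer.
    return list(zip(normalize(tracer_a),
                    normalize(tracer_b),
                    normalize(tracer_c)))
```

A voxel bright in only one channel is avid for only that tracer; mixed colors flag regions where tracer distributions overlap, which is the phenotyping signal the composite is meant to expose.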

Ussi KK, Mtenga RB

PubMed paper · Sep 18, 2025
Magnetic resonance imaging (MRI) is a cornerstone of brain and spine diagnostics. Yet, access across Africa is limited by high installation costs, power requirements, and the need for specialized shielding and facilities. Low- and ultra-low-field (ULF) MRI systems operating below 0.3 T are emerging as a practical alternative to expand neuroimaging capacity in resource-constrained settings. However, it faces challenges that hinder its use in clinical settings. Technological advances that seek to tackle these challenges, such as permanent Halbach array magnets, portable scanner designs such as those successfully deployed in Uganda and Malawi, and deep learning methods including convolutional neural network electromagnetic interference cancellation and residual U-Net image reconstruction, have improved image quality and reduced noise, making ULF MRI increasingly viable. We review the state of low-field MRI technology, its application in point-of-care and rural contexts, and the specific limitations that remain, including reduced signal-to-noise ratio, larger voxel size requirements, and susceptibility to motion artifacts. Although not a replacement for high-field scanners in detecting subtle or small lesions, low-field MRI offers a promising pathway to broaden diagnostic imaging availability, support clinical decision-making, and advance equitable neuroimaging research in under-resourced regions. ABBREVIATIONS: CNN = Convolutional neural network; EMI = Electromagnetic interference; FID = Free induction decay; LMIC = Low- and middle-income countries; MRI = Magnetic resonance imaging; NCDs = Non-communicable diseases; RF = Radiofrequency pulse; SNR = Signal-to-noise ratio; TBI = Traumatic brain injury.

Jackson P, Buteau JP, McIntosh L, Sun Y, Kashyap R, Casanueva S, Ravi Kumar AS, Sandhu S, Azad AA, Alipour R, Saghebi J, Kong G, Jewell K, Eifer M, Bollampally N, Hofman MS

PubMed paper · Sep 18, 2025
Metastatic castration-resistant prostate cancer has a high rate of mortality with a limited number of effective treatments after hormone therapy. Radiopharmaceutical therapy with [<sup>177</sup>Lu]Lu-prostate-specific membrane antigen-617 (LuPSMA) is one treatment option; however, response varies and is partly predicted by PSMA expression and metabolic activity, assessed on [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL and [<sup>18</sup>F]FDG PET, respectively. Automated methods to measure these on PET imaging have previously yielded modest accuracy. Refining computational workflows and standardizing approaches may improve patient selection and prognostication for LuPSMA therapy. <b>Methods:</b> PET/CT and quantitative SPECT/CT images from an institutional cohort of patients staged for LuPSMA therapy were annotated for total disease burden. In total, 676 [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL PET, 390 [<sup>18</sup>F]FDG PET, and 477 LuPSMA SPECT images were used for development of automated workflow and tested on 56 cases with externally referred PET/CT staging. A segmentation framework, the Global Threshold Regional Consensus Network, was developed based on nnU-Net, with processing refinements to improve boundary definition and overall label accuracy. <b>Results:</b> Using the model to contour disease extent, the mean volumetric Dice similarity coefficient for [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL PET was 0.94, for [<sup>18</sup>F]FDG PET was 0.84, and for LuPSMA SPECT was 0.97. On external test cases, Dice accuracy was 0.95 and 0.84 on PSMA and FDG PET, respectively. The refined models yielded consistent improvements compared with nnU-Net, with an increase of 3%-5% in Dice accuracy and 10%-17% in surface agreement. 
Quantitative biomarkers were compared with a human-defined ground truth using the Pearson coefficient, with scores for [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL, [<sup>18</sup>F]FDG, and LuPSMA, respectively, of 0.98, 0.94, and 0.99 for disease volume; 0.98, 0.88, and 0.99 for SUV<sub>mean</sub>; 0.96, 0.91, and 0.99 for SUV<sub>max</sub>; and 0.97, 0.96, and 0.99 for volume intensity product. <b>Conclusion:</b> Delineation of disease extent and tracer avidity can be performed with a high degree of accuracy using automated deep learning methods. By incorporating threshold-based postprocessing, the tools can closely match the output of manual workflows. Pretrained models and scripts to adapt to institutional data are provided for open use.
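The volumetric Dice similarity coefficient reported throughout these segmentation results is twice the overlap between two masks divided by their summed sizes. A minimal binary-mask version (flattened voxel lists for illustration):

```python
def dice_coefficient(mask_a, mask_b):
    # DSC = 2 * |A ∩ B| / (|A| + |B|) over binary voxel masks.
    # Two empty masks are treated as perfect agreement by convention.
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0
```

A score of 0.94 on PSMA PET, as above, means the automated and manual contours overlap almost voxel for voxel; the metric is size-normalized, so it can be compared across lesions and organs of different volumes.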

Shahrbabaki Mofrad M, Ghafari A, Amiri Tehrani Zade A, Aghahosseini F, Ay M, Farzenefar S, Sheikhzadeh P

PubMed paper · Sep 18, 2025
This study aimed to evaluate the use of deep learning techniques to produce measured attenuation-corrected (MAC) images from non-attenuation-corrected (NAC) <sup>18</sup>F-FDG PET images, focusing on head and neck imaging. Materials and Methods: A Residual Network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) without pathology or artifacts. For validation during training and testing, 21 and 24 patient images without pathology or artifacts were used, and 12 images with pathologies were used for independent testing. Prediction accuracy was assessed using metrics such as RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images. Statistical significance between the contrast and SNR of reference and predicted images was assessed using a paired-sample t-test. Results: Two nuclear medicine physicians evaluated the predicted head and neck MAC images, finding them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively. For the pathological test group, the values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026, respectively. No significant differences were found in SNR and contrast between reference and test images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05). Conclusion: The deep learning network demonstrated the ability to directly generate head and neck MAC images that closely resembled the reference images. With additional training data, the model has the potential to be utilized in dedicated head and neck PET scanners without requiring computed tomography (CT) for attenuation correction.
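Two of the accuracy metrics above are directly related: PSNR = 10 log₁₀(range² / MSE), so the reported PSNR of roughly 44 dB corresponds to a very small per-voxel MSE. A minimal sketch (SSIM is omitted here because it requires windowed local statistics):

```python
import math

def mse(ref, pred):
    # Mean squared error between reference and predicted images
    # (flattened voxel lists of equal length).
    return sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref)

def psnr(ref, pred, data_range=1.0):
    # Peak signal-to-noise ratio in dB; data_range is the maximum
    # possible intensity span of the images (assumed normalized here).
    m = mse(ref, pred)
    return float('inf') if m == 0 else 10 * math.log10(data_range ** 2 / m)
```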

Liu J, Li S, Li M, Li G, Huang N, Shu B, Chen J, Zhu T, Huang H, Duan G

PubMed paper · Sep 18, 2025
Aspiration of gastric contents is a serious complication associated with anaesthesia. Accurate prediction of gastric volume may assist in risk stratification and help prevent aspiration. This study aimed to develop and validate machine learning models to predict gastric volume based on ultrasound and clinical features. This cross-sectional multicentre study was conducted at two hospitals and included adult patients undergoing gastroscopy under intravenous anaesthesia. Patients from Centre 1 were prospectively enrolled and randomly divided into a training set (Cohort A, n = 415) and an internal validation set (Cohort B, n = 179), while patients from Centre 2 were used as an external validation set (Cohort C, n = 199). The primary outcome was gastric volume, which was measured by endoscopic aspiration immediately following ultrasonographic examination. Least absolute shrinkage and selection operator (LASSO) regression was used for feature selection, and eight machine learning models were developed and evaluated using Bland-Altman analysis. The models' ability to predict medium-to-high and high gastric volumes was assessed. The top-performing models were externally validated, and their predictive performance was compared with the traditional Perlas model. Among the 793 enrolled patients, the number and proportion of patients with high gastric volume were as follows: 23 (5.5%) in the development cohort, 10 (5.6%) in the internal validation cohort, and 3 (1.5%) in the external validation cohort. Eight models were developed using age, cross-sectional area of gastric antrum in right lateral decubitus (RLD-CSA) position, and Perlas grade, with these variables selected through LASSO regression. In internal validation, Bland-Altman analysis showed that the Perlas model overestimated gastric volume (mean bias 23.5 mL), while the new models provided accurate estimates (mean bias -0.1 to 2.0 mL).
The models significantly improved prediction of medium-high gastric volume (area under the curve [AUC]: 0.74-0.77 vs. 0.63) and high gastric volume (AUC: 0.85-0.94 vs. 0.74). The best-performing adaptive boosting and linear regression models underwent external validation, with AUCs of 0.81 (95% confidence interval [CI], 0.74-0.89) and 0.80 (95% CI, 0.72-0.89) for medium-high and 0.96 (95% CI, 0.91-1) and 0.96 (95% CI, 0.89-1) for high gastric volume. We propose a novel machine learning-based predictive model that outperforms the Perlas model by incorporating the key features of age, RLD-CSA, and Perlas grade, enabling accurate prediction of gastric volume.
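The Bland-Altman bias quoted above (e.g. 23.5 mL for the Perlas model versus -0.1 to 2.0 mL for the new models) is simply the mean of paired measured-minus-predicted differences, with limits of agreement at ±1.96 standard deviations. A minimal sketch on illustrative values, not the study's data:

```python
import statistics

def bland_altman(measured, predicted):
    # Bias = mean of paired differences; limits of agreement are
    # bias +/- 1.96 * SD of those differences.
    diffs = [m - p for m, p in zip(measured, predicted)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A model that systematically overestimates volume shows up as a large positive or negative bias even when its correlation with the truth is high, which is why agreement analysis rather than correlation is used to judge these estimators.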