
Diffusion Transformer-based Universal Dose Denoising for Pencil Beam Scanning Proton Therapy

Yuzhen Ding, Jason Holmes, Hongying Feng, Martin Bues, Lisa A. McGee, Jean-Claude M. Rwigema, Nathan Y. Yu, Terence S. Sio, Sameer R. Keole, William W. Wong, Steven E. Schild, Jonathan B. Ashman, Sujay A. Vora, Daniel J. Ma, Samir H. Patel, Wei Liu

arXiv preprint · Jun 4, 2025
Purpose: Intensity-modulated proton therapy (IMPT) offers precise tumor coverage while sparing organs at risk (OARs) in head and neck (H&N) cancer. However, its sensitivity to anatomical changes requires frequent adaptation through online adaptive radiation therapy (oART), which depends on fast, accurate dose calculation via Monte Carlo (MC) simulations. Reducing particle count accelerates MC but degrades accuracy. To address this, denoising low-statistics MC dose maps is proposed to enable fast, high-quality dose generation. Methods: We developed a diffusion transformer-based denoising framework. IMPT plans and 3D CT images from 80 H&N patients were used to generate noisy and high-statistics dose maps using MCsquare (1 min and 10 min per plan, respectively). Data were standardized into uniform chunks with zero-padding, normalized, and transformed into quasi-Gaussian distributions. Testing was done on 10 H&N, 10 lung, 10 breast, and 10 prostate cancer cases, preprocessed identically. The model was trained with noisy dose maps and CT images as input and high-statistics dose maps as ground truth, using a combined loss of mean squared error (MSE), residual loss, and regional mean absolute error (MAE) focusing on the top and bottom 10% of dose voxels. Performance was assessed via MAE, 3D Gamma passing rate, and DVH indices. Results: The model achieved MAEs of 0.195 (H&N), 0.120 (lung), 0.172 (breast), and 0.376 Gy[RBE] (prostate). 3D Gamma passing rates exceeded 92% (3%/2mm) across all sites. DVH indices for clinical target volumes (CTVs) and OARs closely matched the ground truth. Conclusion: A diffusion transformer-based denoising framework was developed and, though trained only on H&N data, generalizes well across multiple disease sites.
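To make the composite training objective concrete, here is a minimal PyTorch sketch of a loss combining MSE, a residual term, and a regional MAE over the top and bottom 10% of dose voxels. The weights, the exact residual definition, and all names are assumptions; the abstract specifies only the three components.

```python
import torch

def combined_dose_loss(pred, target, region_frac=0.10,
                       w_mse=1.0, w_res=1.0, w_reg=1.0):
    """Sketch of the combined loss: MSE + residual loss + regional MAE.
    Weights and the residual definition are assumptions."""
    mse = torch.mean((pred - target) ** 2)
    res = torch.mean(torch.abs(pred - target))  # assumed residual term
    # Regional MAE over the highest and lowest 10% of ground-truth dose voxels.
    t, p = target.flatten(), pred.flatten()
    k = max(1, int(region_frac * t.numel()))
    idx = torch.cat([torch.topk(t, k).indices, torch.topk(-t, k).indices])
    reg = torch.mean(torch.abs(p[idx] - t[idx]))
    return w_mse * mse + w_res * res + w_reg * reg
```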

3D Quantification of Viral Transduction Efficiency in Living Human Retinal Organoids

Rogler, T. S., Salbaum, K. A., Brinkop, A. T., Sonntag, S. M., James, R., Shelton, E. R., Thielen, A., Rose, R., Babutzka, S., Klopstock, T., Michalakis, S., Serwane, F.

bioRxiv preprint · Jun 4, 2025
The development of therapeutics builds on testing their efficiency in vitro. To optimize gene therapies, for example, fluorescent reporters expressed by treated cells are typically utilized as readouts. Traditionally, their global fluorescence signal has been used as an estimate of transduction efficiency. However, analysis in individual cells within a living 3D tissue remains a challenge. Readout at the single-cell level can be realized via fluorescence-based flow cytometry, at the cost of tissue dissociation and loss of spatial information. Complementarily, spatial information is accessible via immunofluorescence of fixed samples. Both approaches impede time-dependent studies on the delivery of the vector to the cells. Here, quantitative 3D characterization of viral transduction efficiencies in living retinal organoids is introduced. The approach quantifies gene delivery efficiency in space and time, leveraging human retinal organoids, engineered adeno-associated virus (AAV) vectors, confocal live imaging, and deep learning-based image segmentation. The integration of these tools in an organoid imaging and analysis pipeline allows quantitative testing of future treatments and other gene delivery methods. It has the potential to guide the development of therapies in biomedical applications.
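As a rough sketch of the per-cell readout such a pipeline enables, the snippet below derives a transduction efficiency from a 3D segmentation label volume and a reporter-fluorescence volume. The mean-intensity thresholding rule and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transduction_efficiency(cell_labels, reporter, intensity_threshold):
    """Fraction of segmented cells whose mean reporter fluorescence exceeds
    a threshold. `cell_labels` is a 3D integer label image (0 = background)
    from a segmentation model; the threshold choice is an assumption."""
    ids = np.unique(cell_labels)
    ids = ids[ids != 0]
    if ids.size == 0:
        return float("nan")
    transduced = sum(reporter[cell_labels == i].mean() > intensity_threshold
                     for i in ids)
    return transduced / ids.size
```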

Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 2: Analysis of DALL-E 3.

Currie G, Hewis J, Hawk E, Rohren E

PubMed · Jun 4, 2025
Disparities in gender and ethnicity remain an issue across medicine and health science. Only 26%-35% of trainee radiologists are female, despite more than 50% of medical students being female. Similar gender disparities are evident across the medical imaging professions. Generative artificial intelligence text-to-image production could reinforce or amplify gender biases. Methods: In March 2024, DALL-E 3 was utilized via GPT-4 to generate a series of individual and group images of medical imaging professionals: radiologist, nuclear medicine physician, radiographer, nuclear medicine technologist, medical physicist, radiopharmacist, and medical imaging nurse. Multiple iterations of images were generated using a variety of prompts. Collectively, 120 images were produced for evaluation of 524 characters. All images were independently analyzed by 3 expert reviewers from medical imaging professions for apparent gender and skin tone. Results: Collectively (individual and group images), 57.4% (n = 301) of medical imaging professionals were depicted as male, 42.4% (n = 222) as female, and 91.2% (n = 478) as having a light skin tone. The male gender representation was 65% for radiologists, 62% for nuclear medicine physicians, 52% for radiographers, 56% for nuclear medicine technologists, 62% for medical physicists, 53% for radiopharmacists, and 26% for medical imaging nurses. For all professions, this overrepresents men compared with women. There was no representation of persons with a disability. Conclusion: This evaluation reveals a significant overrepresentation of the male gender associated with generative artificial intelligence text-to-image production using DALL-E 3 across the medical imaging professions. Generated images have a disproportionately high representation of white men, which is not representative of the diversity of the medical imaging professions.
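As a worked example of this kind of proportion analysis, the snippet below tests the reported male share (301 of 524 characters) against a 50% parity baseline with an exact binomial test; the choice of baseline is an assumption for illustration, not part of the study's methods.

```python
from scipy.stats import binomtest

n_total, n_male = 524, 301  # counts reported in the abstract
result = binomtest(n_male, n_total, p=0.5, alternative="two-sided")
print(f"male share: {n_male / n_total:.1%}, p = {result.pvalue:.2e}")
```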

Machine Learning to Automatically Differentiate Hypertrophic Cardiomyopathy, Cardiac Light Chain, and Cardiac Transthyretin Amyloidosis: A Multicenter CMR Study.

Weberling LD, Ochs A, Benovoy M, Aus dem Siepen F, Salatzki J, Giannitsis E, Duan C, Maresca K, Zhang Y, Möller J, Friedrich S, Schönland S, Meder B, Friedrich MG, Frey N, André F

PubMed · Jun 4, 2025
Cardiac amyloidosis is associated with poor outcomes and is caused by the interstitial deposition of misfolded proteins, typically ATTR (transthyretin) or AL (light chains). Although specific therapies for early disease stages exist, the diagnosis is often only established at an advanced stage. Cardiovascular magnetic resonance (CMR) is the gold standard for imaging suspected myocardial disease. However, differentiating cardiac amyloidosis from hypertrophic cardiomyopathy may be challenging, and a reliable method for an image-based classification of amyloidosis subtypes is lacking. This study sought to investigate a CMR machine learning (ML) algorithm to identify and distinguish cardiac amyloidosis. This retrospective, multicenter, multivendor feasibility study included consecutive patients diagnosed with hypertrophic cardiomyopathy or AL/ATTR amyloidosis and healthy volunteers. Standard clinical information, semiautomated CMR imaging data, and qualitative CMR features were used to train an ML algorithm. Four hundred participants (95 healthy, 94 hypertrophic cardiomyopathy, 95 AL, and 116 ATTR) from 56 institutions were included (269 men; age 58.5 [48.4-69.4] years). A 3-stage ML screening cascade sequentially differentiated healthy volunteers from patients, then hypertrophic cardiomyopathy from amyloidosis, and then AL from ATTR. The ML algorithm resulted in an accurate differentiation at each step (area under the curve, 1.0, 0.99, and 0.92, respectively). After reducing the included data to demographics and imaging data alone, the performance remained excellent (area under the curve, 0.99, 0.98, and 0.88, respectively), even after removing late gadolinium enhancement imaging data from the model (area under the curve, 1.0, 0.95, and 0.86, respectively). A trained ML model using semiautomated CMR imaging data and patient demographics can accurately identify cardiac amyloidosis and differentiate subtypes.
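A minimal sketch of such a 3-stage screening cascade, assuming a generic scikit-learn classifier at each stage and an illustrative label encoding (the abstract does not specify the model family):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

class ThreeStageCascade:
    """Illustrative screening cascade: healthy vs. patient, then HCM vs.
    amyloidosis, then AL vs. ATTR. Assumed label encoding on numpy arrays:
    0=healthy, 1=HCM, 2=AL, 3=ATTR."""

    def __init__(self):
        self.stages = [GradientBoostingClassifier() for _ in range(3)]

    def fit(self, X, y):
        self.stages[0].fit(X, y > 0)                # healthy vs. any disease
        self.stages[1].fit(X[y > 0], y[y > 0] > 1)  # HCM vs. amyloidosis
        self.stages[2].fit(X[y > 1], y[y > 1] > 2)  # AL vs. ATTR
        return self

    def predict(self, X):
        pred = np.zeros(len(X), dtype=int)
        sick = self.stages[0].predict(X).astype(bool)
        if sick.any():
            amy = np.zeros(len(X), dtype=bool)
            amy[sick] = self.stages[1].predict(X[sick]).astype(bool)
            pred[sick & ~amy] = 1                    # HCM
            if amy.any():
                attr = self.stages[2].predict(X[amy]).astype(bool)
                pred[np.flatnonzero(amy)[attr]] = 3   # ATTR
                pred[np.flatnonzero(amy)[~attr]] = 2  # AL
        return pred
```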

Synthetic multi-inversion time magnetic resonance images for visualization of subcortical structures

Savannah P. Hays, Lianrui Zuo, Anqi Feng, Yihao Liu, Blake E. Dewey, Jiachen Zhuo, Ellen M. Mowry, Scott D. Newsome, Jerry L. Prince, Aaron Carass

arXiv preprint · Jun 4, 2025
Purpose: Visualization of subcortical gray matter is essential in neuroscience and clinical practice, particularly for disease understanding and surgical planning. While multi-inversion time (multi-TI) T1-weighted (T1-w) magnetic resonance (MR) imaging improves visualization, it is rarely acquired in clinical settings. Approach: We present SyMTIC (Synthetic Multi-TI Contrasts), a deep learning method that generates synthetic multi-TI images using routinely acquired T1-w, T2-weighted (T2-w), and FLAIR images. Our approach combines image translation via deep neural networks with imaging physics to estimate longitudinal relaxation time (T1) and proton density (PD) maps. These maps are then used to compute multi-TI images with arbitrary inversion times. Results: SyMTIC was trained using paired MPRAGE and FGATIR images along with T2-w and FLAIR images. It accurately synthesized multi-TI images from standard clinical inputs, achieving image quality comparable to that of explicitly acquired multi-TI data. The synthetic images, especially for TI values between 400 and 800 ms, enhanced visualization of subcortical structures and improved segmentation of thalamic nuclei. Conclusion: SyMTIC enables robust generation of high-quality multi-TI images from routine MR contrasts. It generalizes well to varied clinical datasets, including those with missing FLAIR images or unknown acquisition parameters, offering a practical solution for improving brain MR image visualization and analysis.
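To make the physics step concrete: once T1 and PD maps are estimated, an image at any inversion time follows from the inversion-recovery signal equation. This sketch uses the idealized magnitude form S = PD · |1 − 2·exp(−TI/T1)|; the paper's actual model may include TR and readout terms that are omitted here, and all names are illustrative.

```python
import numpy as np

def synthesize_ti(pd_map, t1_map_ms, ti_ms):
    """Synthetic inversion-recovery image from PD and T1 maps using the
    idealized magnitude signal equation (TR/readout effects omitted)."""
    t1 = np.clip(t1_map_ms, 1e-3, None)  # guard against division by zero
    return pd_map * np.abs(1.0 - 2.0 * np.exp(-ti_ms / t1))

# e.g. sweep the 400-800 ms range highlighted above:
# images = [synthesize_ti(pd_map, t1_map, ti) for ti in range(400, 801, 100)]
```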

Rad-Path Correlation of Deep Learning Models for Prostate Cancer Detection on MRI

Verde, A. S. C., de Almeida, J. G., Mendes, F., Pereira, M., Lopes, R., Brito, M. J., Urbano, M., Correia, P. S., Gaivao, A. M., Firpo-Betancourt, A., Fonseca, J., Matos, C., Regge, D., Marias, K., Tsiknakis, M., ProCAncer-I Consortium, Conceicao, R. C., Papanikolaou, N.

medRxiv preprint · Jun 4, 2025
While Deep Learning (DL) models trained on Magnetic Resonance Imaging (MRI) have shown promise for prostate cancer detection, their lack of direct biological validation often undermines radiologists' trust and hinders clinical adoption. Radiologic-histopathologic (rad-path) correlation has the potential to validate MRI-based lesion detection using digital histopathology. This study uses automated and manually annotated digital histopathology slides as a standard of reference to evaluate the spatial extent of lesion annotations derived from both radiologist interpretations and DL models previously trained on prostate bi-parametric MRI (bp-MRI). A total of 117 histopathology slides were used as reference. Prospective patients with clinically significant prostate cancer underwent a bp-MRI examination before robotic radical prostatectomy, and each prostate specimen was sliced using a 3D-printed patient-specific mold to ensure a direct comparison between pre-operative imaging and histopathology slides. The histopathology slides and their corresponding T2-weighted MRI images were co-registered. We trained DL models for cancer detection on large retrospective datasets of T2-w MRI only, bp-MRI, and histopathology images, and performed inference in a prospective patient cohort. We evaluated the spatial overlap between lesions detected by the different models, and between detected lesions and the histopathological and radiological ground truth, using the Dice similarity coefficient (DSC). The DL models trained on digital histopathology tiles and MRI images demonstrated promising capabilities in lesion detection. A low overlap was observed between the lesion detection masks generated by the histopathology and bp-MRI models (DSC = 0.10); however, the overlap between radiologist annotations and the histopathology ground truth was comparably low (DSC = 0.08). A rad-path correlation pipeline was established in a prospective cohort of patients with prostate cancer undergoing surgery. The correlation between the rad-path DL models was low but comparable to the overlap between radiologist annotations and histopathology. While DL models show promise in prostate cancer detection, challenges remain in integrating MRI-based predictions with histopathological findings.
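The Dice similarity coefficient used for all of the overlap comparisons above is simple to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary lesion masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```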

Validation study comparing Artificial intelligence for fully automatic aortic aneurysms Segmentation and diameter Measurements On contrast and non-contrast enhanced computed Tomography (ASMOT).

Gatinot A, Caradu C, Stephan L, Foret T, Rinckenbach S

PubMed · Jun 4, 2025
Accurate aortic diameter measurements are essential for diagnosis, surveillance, and procedural planning in aortic disease. Semi-automatic methods remain widely used but require manual corrections, which can be time-consuming and operator-dependent. Artificial intelligence (AI)-driven fully automatic methods may offer improved efficiency and measurement accuracy. This study aimed to validate a fully automatic method against a semi-automatic approach using computed tomography angiography (CTA) and non-contrast CT scans. A monocentric retrospective comparative study was conducted on patients who underwent endovascular aortic repair (EVAR) for infrarenal, juxtarenal, or thoracic aneurysms, and a control group. Maximum aortic wall-to-wall diameters were measured before and after repair using fully automatic software (PRAEVAorta2®, Nurea, Bordeaux, France) and compared to measurements performed by two vascular surgeons using a semi-automatic approach on CTA and non-contrast CT scans. Correlation coefficients (Pearson's R) and absolute differences were calculated to assess agreement. A total of 120 CT scans (60 CTA and 60 non-contrast CT) were included, comprising 23 EVAR, 4 thoracic EVAR, 1 fenestrated EVAR, and 4 control cases. Strong correlations were observed between the fully automatic and semi-automatic measurements on both CTA and non-contrast CT. For CTA, correlation coefficients ranged from 0.94 to 0.96 (R² = 0.88-0.92), while for non-contrast CT, they ranged from 0.87 to 0.89 (R² = 0.76-0.79). Median absolute differences in aortic diameter measurements varied between 1.1 mm and 4.2 mm across the different anatomical locations. The fully automatic method demonstrated a significantly faster processing time, with a median execution time of 73 seconds (IQR: 57-91) compared to 700 seconds (IQR: 613-800) for the semi-automatic method (p < 0.001). The fully automatic method demonstrated strong agreement with semi-automatic measurements for both CTA and non-contrast CT, before and after endovascular repair, across different aortic locations, with significantly reduced analysis time. This method could improve workflow efficiency in clinical practice and research applications.
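A small sketch of the agreement statistics reported above, assuming paired per-scan diameters in millimeters; function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def agreement(auto_mm, manual_mm):
    """Pearson's R, R^2, and median absolute difference between fully
    automatic and semi-automatic diameter measurements (both in mm)."""
    auto = np.asarray(auto_mm, dtype=float)
    manual = np.asarray(manual_mm, dtype=float)
    r, _ = pearsonr(auto, manual)
    return {"R": r, "R2": r ** 2,
            "median_abs_diff_mm": float(np.median(np.abs(auto - manual)))}
```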

Artificial intelligence vs human expertise: A comparison of plantar fascia thickness measurements through MRI imaging.

Alyanak B, Çakar İ, Dede BT, Yıldızgören MT, Bağcıer F

PubMed · Jun 3, 2025
This study aims to evaluate the reliability of plantar fascia thickness measurements performed by ChatGPT-4 using magnetic resonance imaging (MRI) compared to those obtained by an experienced clinician. In this retrospective, single-center study, foot MRI images from the hospital archive were analysed. Plantar fascia thickness was measured under both blinded and non-blinded conditions by an experienced clinician and ChatGPT-4 at two separate time points. Measurement reliability was assessed using the intraclass correlation coefficient (ICC), mean absolute error (MAE), and mean relative error (MRE). A total of 41 participants (32 females, 9 males) were included. The average plantar fascia thickness measured by the clinician was 4.20 ± 0.80 mm and 4.25 ± 0.92 mm under blinded and non-blinded conditions, respectively, while ChatGPT-4's measurements were 6.47 ± 1.30 mm and 6.46 ± 1.31 mm, respectively. The human evaluator demonstrated excellent agreement (ICC = 0.983-0.989), whereas ChatGPT-4 exhibited low reliability (ICC = 0.391-0.432). In thin plantar fascia cases, ChatGPT-4's error rate was higher, with MAE = 2.70 mm and MRE = 77.17% under blinded conditions, and MAE = 2.91 mm and MRE = 87.02% under non-blinded conditions. ChatGPT-4 demonstrated lower reliability in plantar fascia thickness measurements compared to an experienced clinician, with increased error rates for thin structures. These findings highlight the limitations of AI-based models in medical image analysis and emphasize the need for further refinement before clinical implementation.
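For reference, the MAE and MRE quoted above can be computed from paired per-case measurements as below (the ICC is a separate two-way-model computation, e.g. via pingouin's intraclass_corr); all names are illustrative.

```python
import numpy as np

def mae_mre(measured_mm, reference_mm):
    """Mean absolute error (mm) and mean relative error (%) of one rater's
    thickness measurements against a reference rater."""
    m = np.asarray(measured_mm, dtype=float)
    r = np.asarray(reference_mm, dtype=float)
    err = np.abs(m - r)
    return float(err.mean()), float((err / r).mean() * 100.0)
```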

Ultra-High-Resolution Photon-Counting-Detector CT with a Dedicated Denoising Convolutional Neural Network for Enhanced Temporal Bone Imaging.

Chang S, Benson JC, Lane JI, Bruesewitz MR, Swicklik JR, Thorne JE, Koons EK, Carlson ML, McCollough CH, Leng S

PubMed · Jun 3, 2025
Ultra-high-resolution (UHR) photon-counting-detector (PCD) CT improves image resolution but increases noise, necessitating smoother reconstruction kernels that reduce resolution below the 0.125-mm maximum spatial resolution. To address this issue, a denoising convolutional neural network (CNN) was developed to reduce noise in images reconstructed with the sharpest available kernel while preserving resolution, for enhanced temporal bone visualization. With institutional review board approval, the CNN was trained on 6 patient cases of clinical temporal bone imaging (1885 images) and tested on 20 independent cases using a dual-source PCD-CT (NAEOTOM Alpha). Images were reconstructed using quantum iterative reconstruction at strength 3 (QIR3) with both a clinical routine kernel (Hr84) and the sharpest available head kernel (Hr96). The CNN was applied to images reconstructed with the Hr96 kernel at QIR1. For each case, three series of images (Hr84-QIR3, Hr96-QIR3, and Hr96-CNN) were randomized for review by 2 neuroradiologists, who assessed overall quality and delineated the modiolus, stapes footplate, and incudomallear joint. The CNN reduced noise by 80% compared with Hr96-QIR3 and by 50% relative to Hr84-QIR3, while maintaining high resolution. Compared with the conventional method at the same kernel (Hr96-QIR3), Hr96-CNN significantly decreased image noise (from 204.63 to 47.35 HU) and improved the structural similarity index (from 0.72 to 0.99). Hr96-CNN images ranked higher than Hr84-QIR3 and Hr96-QIR3 in overall quality (P < .001). Readers preferred Hr96-CNN for all 3 structures. The proposed CNN significantly reduced image noise in UHR PCD-CT, enabling use of the sharpest kernel. This combination greatly enhanced diagnostic image quality and anatomic visualization.
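As an indicative sketch of the quantitative comparison above: noise taken as the HU standard deviation within a uniform ROI (an assumed surrogate for the paper's noise metric) and SSIM computed with scikit-image against a reference series.

```python
import numpy as np
from skimage.metrics import structural_similarity

def noise_and_ssim(img_hu, reference_hu, roi_mask):
    """Noise as ROI standard deviation in HU (assumed metric) and SSIM
    against a reference reconstruction of the same slice."""
    noise = float(np.std(img_hu[roi_mask]))
    ssim = structural_similarity(
        img_hu, reference_hu,
        data_range=float(reference_hu.max() - reference_hu.min()))
    return noise, ssim
```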

Deep learning reveals pathology-confirmed neuroimaging signatures in Alzheimer's, vascular and Lewy body dementias.

Wang D, Honnorat N, Toledo JB, Li K, Charisis S, Rashid T, Benet Nirmala A, Brandigampala SR, Mojtabai M, Seshadri S, Habes M

PubMed · Jun 3, 2025
Concurrent neurodegenerative and vascular pathologies pose a diagnostic challenge in the clinical setting, with histopathology remaining the definitive modality for dementia-type diagnosis. To address this clinical challenge, we introduce a neuropathology-based, data-driven, multi-label deep-learning framework to identify and quantify in vivo biomarkers for Alzheimer's disease (AD), vascular dementia (VD) and Lewy body dementia (LBD) using antemortem T1-weighted MRI scans of 423 demented and 361 control participants from the National Alzheimer's Coordinating Center and Alzheimer's Disease Neuroimaging Initiative datasets. Based on the best-performing deep-learning model, explainable heat maps were extracted to visualize disease patterns, and the novel Deep Signature of Pathology Atrophy REcognition (DeepSPARE) indices were developed, where a higher DeepSPARE score indicates more brain alterations associated with that specific pathology. A substantial discrepancy between clinical and neuropathological diagnoses was observed in the demented patients: 71% had more than one pathology, but 67% were diagnosed clinically as AD only. Based on these neuropathological diagnoses and leveraging cross-validation principles, the deep-learning model achieved the best performance, with a balanced accuracy of 0.844, 0.839, and 0.623 for AD, VD, and LBD, respectively, and was used to generate the explainable deep-learning heat maps and DeepSPARE indices. The explainable deep-learning heat maps revealed distinct neuroimaging brain alteration patterns for each pathology: (i) the AD heat map highlighted bilateral hippocampal regions; (ii) the VD heat map emphasized white matter regions; and (iii) the LBD heat map exposed occipital alterations. The DeepSPARE indices were validated by examining their associations with cognitive testing and neuropathological and neuroimaging measures using linear mixed-effects models. The DeepSPARE-AD index was associated with the Mini-Mental State Examination, the Trail Making Test B, memory, hippocampal volume, Braak stages, Consortium to Establish a Registry for Alzheimer's Disease (CERAD) scores, and Thal phases [false-discovery rate (FDR)-adjusted P < 0.05]. The DeepSPARE-VD index was associated with white matter hyperintensity volume and cerebral amyloid angiopathy (FDR-adjusted P < 0.001), and the DeepSPARE-LBD index was associated with Lewy body stages (FDR-adjusted P < 0.05). The findings were replicated in an out-of-sample Alzheimer's Disease Neuroimaging Initiative dataset by testing associations with cognitive, imaging, plasma, and CSF measures. CSF and plasma tau phosphorylated at threonine-181 (pTau181) were significantly associated with DeepSPARE-AD in the AD and mild cognitive impairment amyloid-β-positive (AD/MCI Aβ+) group (FDR-adjusted P < 0.001), and CSF α-synuclein was associated solely with DeepSPARE-LBD (FDR-adjusted P = 0.036). Overall, these findings demonstrate the advantages of our innovative deep-learning framework in detecting antemortem neuroimaging signatures linked to different pathologies. The newly derived DeepSPARE indices are precise, pathology-sensitive, single-valued, non-invasive neuroimaging metrics, bridging traditional, widely available in vivo T1 imaging with histopathology.
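The repeated "FDR-adjusted P" values above reflect multiple-comparison correction across the association tests; a minimal sketch of the standard Benjamini-Hochberg procedure with statsmodels (illustrative p-values, not the paper's):

```python
from statsmodels.stats.multitest import multipletests

raw_p = [0.001, 0.012, 0.030, 0.048, 0.20]  # illustrative raw p-values
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
print(list(zip(raw_p, p_adj.round(3), reject)))
```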