
Average Calibration Losses for Reliable Uncertainty in Medical Image Segmentation

Theodore Barfoot, Luis C. Garcia-Peraza-Herrera, Samet Akcay, Ben Glocker, Tom Vercauteren

arXiv preprint · Jun 4, 2025
Deep neural networks for medical image segmentation are often overconfident, compromising both reliability and clinical utility. In this work, we propose differentiable formulations of the marginal L1 Average Calibration Error (mL1-ACE) as an auxiliary loss that can be computed on a per-image basis. We compare hard- and soft-binning approaches to directly improve pixel-wise calibration. Our experiments on four datasets (ACDC, AMOS, KiTS, BraTS) demonstrate that incorporating mL1-ACE significantly reduces calibration errors, particularly the Average Calibration Error (ACE) and Maximum Calibration Error (MCE), while largely maintaining high Dice Similarity Coefficients (DSCs). We find that the soft-binned variant yields the greatest calibration improvements over the Dice plus cross-entropy baseline but often compromises segmentation performance, whereas the hard-binned variant preserves segmentation performance at the cost of weaker calibration gains. To gain further insight into calibration performance and its variability across an imaging dataset, we introduce dataset reliability histograms, an aggregation of per-image reliability diagrams. The resulting analysis highlights improved alignment between predicted confidences and true accuracies. Overall, our approach not only enhances the trustworthiness of segmentation predictions but also shows potential for safer integration of deep learning methods into clinical workflows. Our code is available at: https://github.com/cai4cai/Average-Calibration-Losses
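
For intuition, here is a minimal PyTorch sketch of a soft-binned L1 average calibration error in the spirit of the loss described above. It is not the authors' released implementation (see their repository for that); the bin count, temperature, and single-class handling are illustrative assumptions.

```python
import torch

def soft_binned_l1_ace(probs: torch.Tensor, targets: torch.Tensor,
                       n_bins: int = 20, temperature: float = 0.01) -> torch.Tensor:
    """Differentiable L1 average calibration error for one foreground class.

    probs:   (N,) predicted foreground probabilities for all pixels of an image.
    targets: (N,) binary ground-truth labels for the same pixels.
    """
    centers = torch.linspace(0.5 / n_bins, 1.0 - 0.5 / n_bins, n_bins,
                             device=probs.device)
    # Soft assignment of each pixel to every bin: a differentiable stand-in
    # for a hard histogram; a smaller temperature gives sharper assignments.
    weights = torch.softmax(-(probs[:, None] - centers[None, :]) ** 2 / temperature,
                            dim=1)
    bin_mass = weights.sum(dim=0) + 1e-8
    bin_conf = (weights * probs[:, None]).sum(dim=0) / bin_mass            # mean confidence per bin
    bin_acc = (weights * targets[:, None].float()).sum(dim=0) / bin_mass   # empirical accuracy per bin
    return (bin_conf - bin_acc).abs().mean()  # L1 average over bins
```

Used as an auxiliary term, the total objective would then look like `loss = dice_ce(pred, gt) + lam * soft_binned_l1_ace(probs, targets)`, where `dice_ce` and the weight `lam` are placeholders for the baseline loss and a tuning coefficient.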

Diffusion Transformer-based Universal Dose Denoising for Pencil Beam Scanning Proton Therapy

Yuzhen Ding, Jason Holmes, Hongying Feng, Martin Bues, Lisa A. McGee, Jean-Claude M. Rwigema, Nathan Y. Yu, Terence S. Sio, Sameer R. Keole, William W. Wong, Steven E. Schild, Jonathan B. Ashman, Sujay A. Vora, Daniel J. Ma, Samir H. Patel, Wei Liu

arXiv preprint · Jun 4, 2025
Purpose: Intensity-modulated proton therapy (IMPT) offers precise tumor coverage while sparing organs at risk (OARs) in head and neck (H&N) cancer. However, its sensitivity to anatomical changes requires frequent adaptation through online adaptive radiation therapy (oART), which depends on fast, accurate dose calculation via Monte Carlo (MC) simulations. Reducing the particle count accelerates MC simulation but degrades accuracy. To address this trade-off, we propose denoising low-statistics MC dose maps to enable fast, high-quality dose generation.

Methods: We developed a diffusion transformer-based denoising framework. IMPT plans and 3D CT images from 80 H&N patients were used to generate noisy and high-statistics dose maps with MCsquare (1 min and 10 min per plan, respectively). Data were standardized into uniform chunks with zero-padding, normalized, and transformed into quasi-Gaussian distributions. The model was trained with noisy dose maps and CT images as input and high-statistics dose maps as ground truth, using a combined loss of mean squared error (MSE), a residual loss, and a regional MAE focusing on the top/bottom 10% of dose voxels. Testing was performed on 10 H&N, 10 lung, 10 breast, and 10 prostate cancer cases, preprocessed identically. Performance was assessed via MAE, 3D gamma passing rate, and DVH indices.

Results: The model achieved MAEs of 0.195 (H&N), 0.120 (lung), 0.172 (breast), and 0.376 Gy[RBE] (prostate). 3D gamma passing rates exceeded 92% (3%/2 mm) across all sites. DVH indices for clinical target volumes (CTVs) and OARs closely matched the ground truth.

Conclusion: A diffusion transformer-based denoising framework was developed and, although trained only on H&N data, generalizes well across multiple disease sites.
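
As a rough illustration of the combined objective described in the Methods, the sketch below combines a global MSE, an L1 residual penalty (one plausible reading of the abstract's "residual loss"), and a regional MAE over the top and bottom 10% of ground-truth dose voxels. The weights and the exact form of the residual term are assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def combined_dose_loss(pred: torch.Tensor, target: torch.Tensor,
                       frac: float = 0.10,
                       w_mse: float = 1.0, w_res: float = 0.5,
                       w_reg: float = 0.5) -> torch.Tensor:
    # Global mean squared error over all dose voxels.
    mse = F.mse_loss(pred, target)
    # Assumed form of the "residual loss": an L1 penalty on the residual.
    res = (pred - target).abs().mean()
    # Regional MAE restricted to the top/bottom `frac` of ground-truth dose voxels.
    flat_t, flat_p = target.flatten(), pred.flatten()
    k = max(1, int(frac * flat_t.numel()))
    idx = torch.cat([flat_t.topk(k).indices,      # hottest voxels
                     (-flat_t).topk(k).indices])  # coldest voxels
    regional_mae = (flat_p[idx] - flat_t[idx]).abs().mean()
    return w_mse * mse + w_res * res + w_reg * regional_mae
```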

Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 2: Analysis of DALL-E 3.

Currie G, Hewis J, Hawk E, Rohren E

PubMed · Jun 4, 2025
Disparity in gender and ethnicity remains an issue across medicine and health science. Only 26%-35% of trainee radiologists are female, despite more than 50% of medical students being female. Similar gender disparities are evident across the medical imaging professions. Generative artificial intelligence text-to-image production could reinforce or amplify these biases.

Methods: In March 2024, DALL-E 3 was utilized via GPT-4 to generate a series of individual and group images of medical imaging professionals: radiologist, nuclear medicine physician, radiographer, nuclear medicine technologist, medical physicist, radiopharmacist, and medical imaging nurse. Multiple iterations of images were generated using a variety of prompts. Collectively, 120 images were produced for evaluation of 524 characters. All images were independently analyzed for apparent gender and skin tone by 3 expert reviewers from the medical imaging professions.

Results: Across individual and group images, 57.4% (n = 301) of depicted medical imaging professionals were male, 42.4% (n = 222) were female, and 91.2% (n = 478) had a light skin tone. Male representation was 65% for radiologists, 62% for nuclear medicine physicians, 52% for radiographers, 56% for nuclear medicine technologists, 62% for medical physicists, 53% for radiopharmacists, and 26% for medical imaging nurses. For all professions, this overrepresents men relative to the actual workforce. There was no representation of persons with a disability.

Conclusion: This evaluation reveals a significant overrepresentation of men in generative artificial intelligence text-to-image output using DALL-E 3 across the medical imaging professions. Generated images show a disproportionately high representation of white men, which is not representative of the diversity of the medical imaging professions.

Digital removal of dermal denticle layer using geometric AI from 3D CT scans of shark craniofacial structures enhances anatomical precision.

Kim SW, Yuen AHL, Kim HW, Lee S, Lee SB, Lee YM, Jung WJ, Poon CTC, Park D, Kim S, Kim SG, Kang JW, Kwon J, Jo SJ, Giri SS, Park H, Seo JP, Kim DS, Kim BY, Park SC

PubMed · Jun 4, 2025
Craniofacial morphometrics in sharks provide crucial insights into evolutionary history, geographical variation, sexual dimorphism, and developmental patterns. However, the fragile cartilaginous nature of the shark craniofacial skeleton poses significant challenges for traditional specimen preparation, often resulting in damaged cranial landmarks and compromised measurement accuracy. While computed tomography (CT) offers a non-invasive alternative for anatomical observation, the high electron density of dermal denticles in sharks creates a unique challenge, obstructing clear visualization of internal structures in three-dimensional volume-rendered images (3DVRIs). This study presents an artificial intelligence (AI)-based solution using machine-learning algorithms to digitally remove the dermal denticle layer from CT scans of the shark craniofacial skeleton. We developed geometric AI-driven software (SKINPEELER) that selectively removes high-intensity voxels corresponding to the dermal denticle layer while preserving the underlying anatomical structures. We evaluated this approach using CT scans from 20 sharks (16 Carcharhinus brachyurus, 2 Alopias vulpinus, 1 Sphyrna lewini, and 1 Prionace glauca), applying the software to the Digital Imaging and Communications in Medicine (DICOM) images. The processed scans were reconstructed using bone reconstruction algorithms to enable precise craniofacial measurements. We assessed the accuracy of our method by comparing measurements from the processed 3DVRIs with traditional manual measurements. The AI-assisted approach demonstrated high accuracy (86.16-98.52%) relative to manual measurements. We also evaluated reproducibility and repeatability using intraclass correlation coefficients (ICCs), finding high reproducibility (ICC: 0.456-0.998) and repeatability (ICC: 0.985-1.000 for operator 1 and 0.882-0.999 for operator 2). Our results indicate that this AI-enhanced digital denticle removal technique, combined with 3D CT reconstruction, provides a reliable and non-destructive alternative to traditional specimen preparation for investigating shark craniofacial morphology. This approach enhances measurement precision while preserving specimen integrity, potentially advancing shark research in evolutionary studies, conservation efforts, and anatomical investigations.
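
To make the voxel-removal idea concrete, here is a deliberately simple NumPy/SciPy sketch that suppresses high-intensity voxels confined to a thin surface shell. This is not the SKINPEELER software, and the intensity thresholds, shell thickness, and morphological approach are all assumed values for illustration.

```python
import numpy as np
from scipy import ndimage

def peel_surface_denticles(vol: np.ndarray,
                           body_thr: float = -300.0,   # assumed specimen/air threshold
                           dense_thr: float = 1500.0,  # assumed denticle intensity threshold
                           shell_vox: int = 3) -> np.ndarray:
    """Suppress high-intensity voxels within a thin shell at the specimen surface,
    leaving deeper high-intensity (mineralized cartilage) structures intact."""
    body = vol > body_thr                             # rough specimen mask
    core = ndimage.binary_erosion(body, iterations=shell_vox)
    shell = body & ~core                              # thin outer layer where denticles sit
    cleaned = vol.copy()
    cleaned[shell & (vol > dense_thr)] = body_thr     # "peel" dense voxels in the shell only
    return cleaned
```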

Latent space reconstruction for missing data problems in CT.

Kabelac A, Eulig E, Maier J, Hammermann M, Knaup M, Kachelrieß M

PubMed · Jun 4, 2025
The reconstruction of a computed tomography (CT) image can be compromised by artifacts that, in many cases, reduce the diagnostic value of the image. These artifacts often result from missing or corrupt regions in the projection data, for example due to truncation, metal implants, or limited-angle acquisitions. In this work, we introduce a novel deep learning-based framework, latent space reconstruction (LSR), which enables correction of various types of artifacts arising from missing or corrupted data. First, we train a generative neural network on uncorrupted CT images. After training, we iteratively search the latent space of this network for the point that best matches the compromised projection data we measured. Once an optimal point is found, the forward projection of the generated CT image can be used to inpaint the corrupted or incomplete regions of the measured raw data. We used LSR to correct truncation and metal artifacts. For truncation artifact correction, images corrected by LSR show effective artifact suppression within the field of measurement (FOM), alongside a substantial high-quality extension of the FOM compared with other methods. For metal artifact correction, images corrected by LSR demonstrate effective artifact reduction, providing a clearer view of the surrounding tissues and anatomical details. These results indicate that LSR is effective in correcting metal and truncation artifacts. Furthermore, the versatility of LSR allows its application to various other types of artifacts resulting from missing or corrupt data.
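
A minimal sketch of the LSR loop, under assumed interfaces: `generator(z)` maps a latent vector to a CT image, `forward_project(img)` computes its projections, and `mask` marks the trustworthy (non-corrupted) bins of the measured data. All three names and the optimizer settings are placeholders, not the paper's API.

```python
import torch

def lsr_reconstruct(generator, forward_project, y_measured, mask,
                    z_dim=512, steps=500, lr=1e-2):
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(z)
        # Fit the generated projections to the measurement only where it is trusted.
        loss = ((forward_project(img) - y_measured)[mask] ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        img = generator(z)
        # Inpaint corrupted/missing bins of the raw data with generated projections.
        y_completed = torch.where(mask, y_measured, forward_project(img))
    return img, y_completed
```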

AI-powered segmentation of bifid mandibular canals using CBCT.

Gumussoy I, Demirezer K, Duman SB, Haylaz E, Bayrakdar IS, Celik O, Syed AZ

PubMed · Jun 4, 2025
Accurate segmentation of the mandibular and bifid canals is crucial in dental implant planning to ensure safe implant placement, third molar extraction, and other surgical interventions. The objective of this study is to develop and validate an innovative artificial intelligence tool for the efficient and accurate segmentation of the mandibular and bifid canals on CBCT. CBCT data were screened to identify patients with clearly visible bifid canal variations, and their DICOM files were extracted. These DICOM files were imported into the open-source 3D Slicer software, where the mandibular and bifid canals were annotated. The annotated data, along with the raw DICOM files, were processed with the nnU-Net v2 training pipeline by the CranioCatch AI software team; 69 anonymized CBCT volumes in DICOM format were converted to the NIfTI file format. The method accurately predicted the voxels associated with the mandibular canal, achieving an intersection of over 50% in nearly all samples. The accuracy, Dice score, precision, and recall for the mandibular/bifid canal were 0.99/0.99, 0.82/0.46, 0.85/0.70, and 0.80/0.42, respectively. Although the bifid canal segmentation did not reach the expected level of success, the findings indicate that the proposed method is promising and has the potential to serve as a supplementary tool for mandibular canal segmentation. Given the importance of accurately evaluating the mandibular canal before surgery, artificial intelligence could reduce the burden on practitioners by automating the complicated and time-consuming process of tracing and segmenting this structure. The ability to distinguish bifid canals with artificial intelligence will also help prevent neurovascular complications before and after surgery.
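
For reference, the overlap metrics reported above can be computed per volume as in this short NumPy sketch; these are the standard definitions, not code from the study.

```python
import numpy as np

def overlap_scores(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> dict:
    """Voxel-wise accuracy, Dice, precision, and recall for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # true positives
    fp = np.sum(pred & ~gt)   # false positives
    fn = np.sum(~pred & gt)   # false negatives
    tn = np.sum(~pred & ~gt)  # true negatives
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }
```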

Deep learning based rapid X-ray fluorescence signal extraction and image reconstruction for preclinical benchtop X-ray fluorescence computed tomography applications.

Kaphle A, Jayarathna S, Cho SH

PubMed · Jun 4, 2025
Recent research advances have resulted in an experimental benchtop X-ray fluorescence computed tomography (XFCT) system that likely meets the imaging dose and scan time constraints for benchtop XFCT imaging of live mice injected with gold nanoparticles (GNPs). For routine in vivo benchtop XFCT imaging, however, additional challenges must be addressed, most notably the need for rapid, near-real-time X-ray fluorescence (XRF) signal extraction and XFCT image reconstruction. Here we propose a novel end-to-end deep learning (DL) framework that integrates a one-dimensional convolutional neural network (1D CNN) for rapid XRF signal extraction with a U-Net model for XFCT image reconstruction. We trained the models using a comprehensive dataset including experimentally acquired and augmented XRF/scatter photon spectra from various GNP concentrations and imaging scenarios, including phantom and synthetic mouse models. The DL framework demonstrated exceptional performance in both tasks. The 1D CNN achieved a high coefficient of determination (R² > 0.9885) and a low mean absolute error (MAE < 0.6248) in XRF signal extraction. The U-Net model achieved an average structural similarity index measure (SSIM) of 0.9791 and a peak signal-to-noise ratio (PSNR) of 39.11 in XFCT image reconstruction, closely matching the ground-truth images. Notably, the DL approach reduced the total post-processing time per slice from approximately 6 min with the conventional approach to just 1.25 s.
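
The 1D CNN stage might look something like the sketch below: a small convolutional regressor that maps a measured XRF-plus-scatter spectrum to the extracted XRF component. The layer widths, kernel sizes, and spectrum length are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class XRFExtractor1D(nn.Module):
    """Toy 1D CNN: input and output are (batch, 1, n_channels) photon spectra."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=7, padding=3),
        )

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:
        return self.net(spectrum)  # predicted XRF signal per energy bin

# Example: extract XRF signals from a batch of eight 1024-bin spectra.
model = XRFExtractor1D()
xrf = model(torch.randn(8, 1, 1024))  # -> (8, 1, 1024)
```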

Recent Advances in Medical Image Classification

Loan Dao, Ngoc Quoc Ly

arXiv preprint · Jun 4, 2025
Medical image classification is crucial for diagnosis and treatment and has benefited significantly from advances in artificial intelligence. This paper reviews recent progress in the field, focusing on three levels of solutions: basic, specific, and applied. It highlights advances in traditional methods using deep learning models such as Convolutional Neural Networks and Vision Transformers, as well as state-of-the-art approaches based on Vision Language Models. These models address the issue of limited labeled data and enhance and explain predictive results through Explainable Artificial Intelligence.

Co-Evidential Fusion with Information Volume for Medical Image Segmentation

Yuanpeng He, Lijian Li, Tianxiang Zhan, Chi-Man Pun, Wenpin Jiao, Zhi Jin

arXiv preprint · Jun 3, 2025
Although existing semi-supervised image segmentation methods achieve good performance, they cannot effectively exploit multiple sources of voxel-level uncertainty for targeted learning. We therefore propose two main improvements. First, we introduce a novel pignistic co-evidential fusion strategy using generalized evidential deep learning, which extends traditional Dempster-Shafer (D-S) evidence theory, to obtain a more precise uncertainty measure for each voxel in medical samples. This helps the model learn from mixed labeled information and establish semantic associations between labeled and unlabeled data. Second, we introduce the concept of the information volume of a mass function (IVUM) to evaluate the constructed evidence, implementing two evidential learning schemes. One optimizes evidential deep learning by combining the information volume of the mass function with the original uncertainty measures. The other integrates the learning pattern based on the co-evidential fusion strategy, using IVUM to design a new optimization objective. Experiments on four datasets demonstrate the competitive performance of our method.
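
For background, the classical Dempster-Shafer combination that the co-evidential strategy builds on can be written, for a two-class frame {fg, bg} with an explicit uncertainty mass on the whole frame, as in this sketch; the paper's pignistic co-evidential fusion and IVUM objective are more elaborate than this.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for two mass functions over {fg, bg}, with masses on
    'fg', 'bg', and 'unc' (the whole frame). Each dict must sum to 1."""
    conflict = m1["fg"] * m2["bg"] + m1["bg"] * m2["fg"]
    k = 1.0 - conflict  # normalization; assumes the sources are not in total conflict
    return {
        "fg": (m1["fg"] * m2["fg"] + m1["fg"] * m2["unc"] + m1["unc"] * m2["fg"]) / k,
        "bg": (m1["bg"] * m2["bg"] + m1["bg"] * m2["unc"] + m1["unc"] * m2["bg"]) / k,
        "unc": (m1["unc"] * m2["unc"]) / k,
    }

# Example: fuse two voxel-level opinions from different evidence sources.
fused = dempster_combine({"fg": 0.6, "bg": 0.1, "unc": 0.3},
                         {"fg": 0.5, "bg": 0.2, "unc": 0.3})
```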

Lymph node ultrasound in lymphoproliferative disorders: clinical characteristics and applications.

Tavarozzi R, Lombardi A, Scarano F, Staiano L, Trattelli G, Farro M, Castellino A, Coppola C

PubMed · Jun 3, 2025
Superficial lymph node (LN) enlargement is a common ultrasonographic finding and can be associated with a broad spectrum of conditions, from benign reactive hyperplasia to malignant lymphoproliferative disorders (LPDs). LPDs, which include various hematologic malignancies affecting lymphoid tissue, present with diverse immune-morphological and clinical features, making differentiation from other malignant causes of lymphadenopathy challenging. Radiologic assessment is crucial in characterizing lymphadenopathy, with ultrasonography serving as a noninvasive and widely available imaging modality. High-resolution ultrasound allows the evaluation of key features such as LN size, shape, border definition, echogenicity, and the presence of abnormal cortical thickening, loss of the fatty hilum, or altered vascular patterns, which aid in distinguishing benign from malignant processes. This review aims to describe the ultrasonographic characteristics of lymphadenopathy, offering essential diagnostic insights to differentiate malignant disorders, particularly LPDs. We will discuss standard ultrasound techniques, including grayscale imaging and Doppler ultrasound, and explore more advanced methods such as contrast-enhanced ultrasound (CEUS), elastography, and artificial intelligence-assisted imaging, which are gaining prominence in LN evaluation. By highlighting these imaging modalities, we aim to enhance the diagnostic accuracy of ultrasonography in lymphadenopathy assessment and improve early detection of LPDs and other malignant conditions.
