
Adapting foundation models for rapid clinical response: intracerebral hemorrhage segmentation in emergency settings.

Gerbasi A, Mazzacane F, Ferrari F, Del Bello B, Cavallini A, Bellazzi R, Quaglini S

PubMed | Aug 3, 2025
Intracerebral hemorrhage (ICH) is a medical emergency that demands rapid and accurate diagnosis for optimal patient management. Segmentation of hemorrhagic lesions on CT scans is a necessary first step for acquiring quantitative imaging data that are becoming increasingly useful in the clinical setting. However, traditional manual segmentation is time-consuming and prone to inter-rater variability, creating a need for automated solutions. This study introduces a novel approach combining advanced deep learning models to segment extensive and morphologically variable ICH lesions in non-contrast CT scans. We propose a two-step methodology that begins with a user-defined loose bounding box around the lesion, followed by a fine-tuned YOLOv8-S object detection model that generates precise, slice-specific bounding boxes. These bounding boxes are then used to prompt the Medical Segment Anything Model for accurate lesion segmentation. Our pipeline achieves high segmentation accuracy with minimal supervision, demonstrating strong potential as a practical alternative to task-specific models. We evaluated the model on a dataset of 252 CT scans, demonstrating high segmentation accuracy and robustness. Finally, the resulting segmentation tool is integrated into a user-friendly web application prototype, offering clinicians a simple interface for lesion identification and radiomic quantification.
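A minimal sketch of how such a two-step prompting pipeline could be wired together, assuming a fine-tuned YOLOv8-S checkpoint and MedSAM-compatible SAM weights (the checkpoint names, ROI-cropping convention, and slice preprocessing are illustrative assumptions, not the authors' implementation):

```python
# Two-step sketch: YOLOv8-S proposes per-slice boxes inside a user-drawn loose
# ROI, and each box prompts a SAM-style model for the final lesion mask.
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("yolov8s_ich_finetuned.pt")                        # assumed fine-tuned weights
sam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")   # assumed MedSAM checkpoint
predictor = SamPredictor(sam)

def segment_slice(ct_slice_rgb: np.ndarray, loose_box: tuple) -> np.ndarray:
    """Return a binary lesion mask for one CT slice given as (H, W, 3) uint8."""
    x0, y0, x1, y1 = loose_box
    det = detector.predict(ct_slice_rgb[y0:y1, x0:x1], verbose=False)[0]
    mask = np.zeros(ct_slice_rgb.shape[:2], dtype=bool)
    if len(det.boxes) == 0:
        return mask
    predictor.set_image(ct_slice_rgb)
    for bx in det.boxes.xyxy.cpu().numpy():
        # shift detector boxes from crop coordinates back to the full image
        box = np.array([bx[0] + x0, bx[1] + y0, bx[2] + x0, bx[3] + y0])
        m, _, _ = predictor.predict(box=box, multimask_output=False)
        mask |= m[0].astype(bool)
    return mask
```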

Medical Image De-Identification Resources: Synthetic DICOM Data and Tools for Validation

Michael W. Rutherford, Tracy Nolan, Linmin Pei, Ulrike Wagner, Qinyan Pan, Phillip Farmer, Kirk Smith, Benjamin Kopchick, Laura Opsahl-Ong, Granger Sutton, David Clunie, Keyvan Farahani, Fred Prior

arXiv preprint | Aug 3, 2025
Medical imaging research increasingly depends on large-scale data sharing to promote reproducibility and train Artificial Intelligence (AI) models. Ensuring patient privacy remains a significant challenge for open-access data sharing. Digital Imaging and Communications in Medicine (DICOM), the global standard data format for medical imaging, encodes both essential clinical metadata and extensive protected health information (PHI) and personally identifiable information (PII). Effective de-identification must remove identifiers, preserve scientific utility, and maintain DICOM validity. Tools exist to perform de-identification, but few assess its effectiveness, and most rely on subjective reviews, limiting reproducibility and regulatory confidence. To address this gap, we developed an openly accessible DICOM dataset infused with synthetic PHI/PII and an evaluation framework for benchmarking image de-identification workflows. The Medical Image de-identification (MIDI) dataset was built using publicly available de-identified data from The Cancer Imaging Archive (TCIA). It includes 538 subjects (216 for validation, 322 for testing), 605 studies, 708 series, and 53,581 DICOM image instances. These span multiple vendors, imaging modalities, and cancer types. Synthetic PHI and PII were embedded into structured data elements, plain text data elements, and pixel data to simulate real-world identity leaks encountered by TCIA curation teams. Accompanying evaluation tools include a Python script, answer keys (known truth), and mapping files that enable automated comparison of curated data against expected transformations. The framework is aligned with the HIPAA Privacy Rule "Safe Harbor" method, DICOM PS3.15 Confidentiality Profiles, and TCIA best practices. It supports objective, standards-driven evaluation of de-identification workflows, promoting safer and more consistent medical image sharing.
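As an illustration of how such answer keys could drive automated validation, the sketch below compares curated DICOM headers against expected post-de-identification values with pydicom. The answer-key columns (sop_uid, keyword, expected) and the assumption that UID remapping has already been resolved via the mapping files are hypothetical, not the actual MIDI file layout:

```python
# Compare curated DICOM metadata against a known-truth answer key (hypothetical CSV layout).
import csv
import pydicom
from pathlib import Path

def check_curated(curated_dir: str, answer_key_csv: str) -> list:
    """Return a list of mismatches between curated files and the answer key."""
    curated = {}
    for path in Path(curated_dir).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        curated[ds.SOPInstanceUID] = ds
    failures = []
    with open(answer_key_csv, newline="") as f:
        for row in csv.DictReader(f):            # assumed columns: sop_uid, keyword, expected
            ds = curated.get(row["sop_uid"])
            if ds is None:
                failures.append({**row, "found": "<missing instance>"})
                continue
            found = str(ds.get(row["keyword"], ""))
            if found != row["expected"]:
                failures.append({**row, "found": found})
    return failures
```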

LoRA-based methods on Unet for transfer learning in Subarachnoid Hematoma Segmentation

Cristian Minoccheri, Matthew Hodgman, Haoyuan Ma, Rameez Merchant, Emily Wittrup, Craig Williamson, Kayvan Najarian

arXiv preprint | Aug 3, 2025
Aneurysmal subarachnoid hemorrhage (SAH) is a life-threatening neurological emergency with mortality rates exceeding 30%. Transfer learning from related hematoma types represents a potentially valuable but underexplored approach. Although Unet architectures remain the gold standard for medical image segmentation due to their effectiveness on limited datasets, Low-Rank Adaptation (LoRA) methods for parameter-efficient transfer learning have been rarely applied to convolutional neural networks in medical imaging contexts. We implemented a Unet architecture pre-trained on computed tomography scans from 124 traumatic brain injury patients across multiple institutions, then fine-tuned on 30 aneurysmal SAH patients from the University of Michigan Health System using 3-fold cross-validation. We developed a novel CP-LoRA method based on tensor CP-decomposition and introduced DoRA variants (DoRA-C, convDoRA, CP-DoRA) that decompose weight matrices into magnitude and directional components. We compared these approaches against existing LoRA methods (LoRA-C, convLoRA) and standard fine-tuning strategies across different modules on a multi-view Unet model. LoRA-based methods consistently outperformed standard Unet fine-tuning. Performance varied by hemorrhage volume, with all methods showing improved accuracy for larger volumes. CP-LoRA achieved comparable performance to existing methods while using significantly fewer parameters. Over-parameterization with higher ranks consistently yielded better performance than strictly low-rank adaptations. This study demonstrates that transfer learning between hematoma types is feasible and that LoRA-based methods significantly outperform conventional Unet fine-tuning for aneurysmal SAH segmentation.
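The sketch below shows the generic idea behind conv-layer LoRA adapters of the kind compared in the paper: the pretrained convolution is frozen and a trainable rank-r bottleneck branch is added to its output. This corresponds to plain LoRA-C-style adaptation only; the paper's CP-LoRA and DoRA variants additionally factorize the kernel with a CP decomposition or split weights into magnitude and direction:

```python
# Minimal PyTorch sketch of a low-rank adapter wrapped around a frozen Conv2d.
import torch
import torch.nn as nn

class LoRAConv2d(nn.Module):
    def __init__(self, base: nn.Conv2d, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # freeze pretrained weights
            p.requires_grad = False
        # low-rank branch: k x k conv down to `rank` channels, then 1 x 1 up
        self.down = nn.Conv2d(base.in_channels, rank, base.kernel_size,
                              stride=base.stride, padding=base.padding, bias=False)
        self.up = nn.Conv2d(rank, base.out_channels, 1, bias=False)
        nn.init.zeros_(self.up.weight)         # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# Wrap selected Unet convolutions before fine-tuning on the SAH cohort, e.g.:
# conv = LoRAConv2d(pretrained_conv, rank=8)
```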

Less is More: AMBER-AFNO -- a New Benchmark for Lightweight 3D Medical Image Segmentation

Andrea Dosi, Semanto Mondal, Rajib Chandra Ghosh, Massimo Brescia, Giuseppe Longo

arXiv preprint | Aug 3, 2025
This work presents the results of a methodological transfer from remote sensing to healthcare, adapting AMBER -- a transformer-based model originally designed for multiband images, such as hyperspectral data -- to the task of 3D medical datacube segmentation. In this study, we use the AMBER architecture with Adaptive Fourier Neural Operators (AFNO) in place of the multi-head self-attention mechanism. While existing models rely on various forms of attention to capture global context, AMBER-AFNO achieves this through frequency-domain mixing, enabling a drastic reduction in model complexity. This design reduces the number of trainable parameters by over 80% compared to UNETR++, while maintaining a FLOPs count comparable to other state-of-the-art architectures. Model performance is evaluated on two benchmark 3D medical datasets -- ACDC and Synapse -- using standard metrics such as Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD), demonstrating that AMBER-AFNO achieves competitive or superior accuracy with significant gains in training efficiency, inference speed, and memory usage.
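A simplified sketch of the frequency-domain token mixing that replaces multi-head self-attention in an AFNO block is given below. It assumes 2D feature maps and a single complex MLP shared across all frequency modes; the actual AMBER-AFNO operates on 3D datacubes and uses block-diagonal weights:

```python
# Frequency-domain mixing: FFT, shared complex MLP per mode, soft-shrinkage, inverse FFT.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AFNOMixer2d(nn.Module):
    def __init__(self, channels: int, sparsity: float = 0.01):
        super().__init__()
        self.w1 = nn.Parameter(0.02 * torch.randn(2, channels, channels))
        self.w2 = nn.Parameter(0.02 * torch.randn(2, channels, channels))
        self.sparsity = sparsity

    @staticmethod
    def _cmatmul(t, w):
        # complex matmul, with complex weights stored as a (real, imag) pair
        return torch.complex(t.real @ w[0] - t.imag @ w[1],
                             t.real @ w[1] + t.imag @ w[0])

    def forward(self, x):                      # x: (B, C, H, W), real-valued
        z = torch.fft.rfft2(x, norm="ortho")   # (B, C, H, W//2 + 1), complex
        z = z.permute(0, 2, 3, 1)              # mix channels per frequency mode
        h = self._cmatmul(z, self.w1)
        h = torch.complex(F.relu(h.real), F.relu(h.imag))
        z = self._cmatmul(h, self.w2)
        # soft-shrinkage keeps only the strongest frequency responses
        z = torch.view_as_complex(F.softshrink(torch.view_as_real(z), lambd=self.sparsity))
        z = z.permute(0, 3, 1, 2)
        return torch.fft.irfft2(z, s=x.shape[-2:], norm="ortho")
```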

M$^3$AD: Multi-task Multi-gate Mixture of Experts for Alzheimer's Disease Diagnosis with Conversion Pattern Modeling

Yufeng Jiang, Hexiao Ding, Hongzhao Chen, Jing Lan, Xinzhi Teng, Gerald W. Y. Cheng, Zongxi Li, Haoran Xie, Jung Sun Yoo, Jing Cai

arXiv preprint | Aug 3, 2025
Alzheimer's disease (AD) progression follows a complex continuum from normal cognition (NC) through mild cognitive impairment (MCI) to dementia, yet most deep learning approaches oversimplify this into discrete classification tasks. This study introduces M$^3$AD, a novel multi-task multi-gate mixture of experts framework that jointly addresses diagnostic classification and cognitive transition modeling using structural MRI. We incorporate three key innovations: (1) an open-source T1-weighted sMRI preprocessing pipeline, (2) a unified learning framework capturing NC-MCI-AD transition patterns with demographic priors (age, gender, brain volume) for improved generalization, and (3) a customized multi-gate mixture of experts architecture enabling effective multi-task learning with structural MRI alone. The framework employs specialized expert networks for diagnosis-specific pathological patterns while shared experts model common structural features across the cognitive continuum. A two-stage training protocol combines SimMIM pretraining with multi-task fine-tuning for joint optimization. Comprehensive evaluation across six datasets comprising 12,037 T1-weighted sMRI scans demonstrates superior performance: 95.13% accuracy for three-class NC-MCI-AD classification and 99.15% for binary NC-AD classification, representing improvements of 4.69% and 0.55% over state-of-the-art approaches. The multi-task formulation simultaneously achieves 97.76% accuracy in predicting cognitive transition. Our framework outperforms existing methods using fewer modalities and offers a clinically practical solution for early intervention. Code: https://github.com/csyfjiang/M3AD.
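The sketch below illustrates the multi-gate mixture-of-experts pattern underlying the joint diagnosis and transition tasks: shared expert MLPs with one softmax gate and one head per task. Expert and gate sizes, and the two task heads, are illustrative; the paper additionally uses diagnosis-specific experts and demographic priors:

```python
# Multi-gate mixture of experts: shared experts, one gate and one head per task.
import torch
import torch.nn as nn

class MMoE(nn.Module):
    def __init__(self, in_dim, expert_dim, n_experts=4, n_classes=(3, 2)):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
            for _ in range(n_experts)])
        self.gates = nn.ModuleList([nn.Linear(in_dim, n_experts) for _ in n_classes])
        self.heads = nn.ModuleList([nn.Linear(expert_dim, c) for c in n_classes])

    def forward(self, x):                                        # x: (B, in_dim) image features
        e = torch.stack([exp(x) for exp in self.experts], dim=1)  # (B, E, D)
        outputs = []
        for gate, head in zip(self.gates, self.heads):
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)       # (B, E, 1) task-specific gate
            outputs.append(head((w * e).sum(dim=1)))               # task logits
        return outputs   # e.g. [NC-MCI-AD logits, transition logits]
```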

Joint Lossless Compression and Steganography for Medical Images via Large Language Models

Pengcheng Zheng, Xiaorong Pu, Kecheng Chen, Jiaxin Huang, Meng Yang, Bai Feng, Yazhou Ren, Jianan Jiang

arXiv preprint | Aug 3, 2025
Recently, large language models (LLMs) have driven promising progress in lossless image compression. However, directly adopting existing paradigms for medical images suffers from an unsatisfactory trade-off between compression performance and efficiency. Moreover, existing LLM-based compressors often overlook the security of the compression process, which is critical in modern medical scenarios. To this end, we propose a novel joint lossless compression and steganography framework. Inspired by bit plane slicing (BPS), we find it feasible to securely embed privacy messages into medical images in an invisible manner. Based on this insight, an adaptive modalities decomposition strategy is first devised to partition the entire image into two segments, providing global and local modalities for subsequent dual-path lossless compression. During this dual-path stage, we innovatively propose a segmented message steganography algorithm within the local modality path to ensure the security of the compression process. Coupled with the proposed anatomical priors-based low-rank adaptation (A-LoRA) fine-tuning strategy, extensive experimental results demonstrate the superiority of our proposed method in terms of compression ratios, efficiency, and security. The source code will be made publicly available.
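As a toy illustration of the bit-plane-slicing intuition, the snippet below hides a message in the least significant bit plane of an 8-bit image, where it is visually imperceptible. The paper's segmented steganography and dual-path lossless coding are far more involved; this only demonstrates the BPS idea:

```python
# Embed/extract a message in bit plane 0 of a uint8 image (row-major order).
import numpy as np

def embed_lsb(image: np.ndarray, message: bytes) -> np.ndarray:
    """Embed `message` into the least significant bit plane of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits    # overwrite bit plane 0
    return flat.reshape(image.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of hidden payload from the LSB plane."""
    bits = (stego.reshape(-1)[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()
```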

TopoImages: Incorporating Local Topology Encoding into Deep Learning Models for Medical Image Classification

Pengfei Gu, Hongxiao Wang, Yejia Zhang, Huimin Li, Chaoli Wang, Danny Chen

arXiv preprint | Aug 3, 2025
Topological structures in image data, such as connected components and loops, play a crucial role in understanding image content (e.g., biomedical objects). Despite remarkable successes of numerous image processing methods that rely on appearance information, these methods often lack sensitivity to topological structures when used in general deep learning (DL) frameworks. In this paper, we introduce a new general approach, called TopoImages (for Topology Images), which computes a new representation of input images by encoding local topology of patches. In TopoImages, we leverage persistent homology (PH) to encode geometric and topological features inherent in image patches. Our main objective is to capture topological information in local patches of an input image into a vectorized form. Specifically, we first compute persistence diagrams (PDs) of the patches, and then vectorize and arrange these PDs into long vectors for pixels of the patches. The resulting multi-channel image-form representation is called a TopoImage. TopoImages offers a new perspective for data analysis. To garner diverse and significant topological features in image data and ensure a more comprehensive and enriched representation, we further generate multiple TopoImages of the input image using various filtration functions, which we call multi-view TopoImages. The multi-view TopoImages are fused with the input image for DL-based classification, with considerable improvement. Our TopoImages approach is highly versatile and can be seamlessly integrated into common DL frameworks. Experiments on three public medical image classification datasets demonstrate noticeably improved accuracy over state-of-the-art methods.
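A self-contained sketch of the per-patch computation is shown below: 0-dimensional persistence (connected components) of a grayscale patch under a sublevel-set filtration, computed with a small union-find, followed by binning of the (birth, death) pairs into a fixed-length vector that would supply one pixel's channels in a TopoImage. It assumes patch intensities normalized to [0, 1] and ignores loops and the multiple filtration functions used in the paper:

```python
# H0 persistence of a patch via union-find over the sublevel-set filtration.
import numpy as np

def h0_persistence(patch: np.ndarray):
    """Return (birth, death) pairs for connected components of {patch <= t}."""
    h, w = patch.shape
    order = np.argsort(patch, axis=None)        # pixels in increasing intensity
    parent = -np.ones(h * w, dtype=int)         # -1 = not yet in the filtration
    birth = np.full(h * w, np.inf)
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for idx in order:
        parent[idx] = idx
        birth[idx] = patch.flat[idx]
        r, c = divmod(int(idx), w)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and parent[nr * w + nc] != -1:
                a, b = find(idx), find(nr * w + nc)
                if a != b:                       # younger component dies (elder rule)
                    young, old = (a, b) if birth[a] > birth[b] else (b, a)
                    pairs.append((birth[young], patch.flat[idx]))
                    parent[young] = old
    return pairs

def vectorize(pairs, n_bins=8, lo=0.0, hi=1.0):
    """Histogram of persistence values (death - birth) as a fixed-length vector."""
    pers = np.array([d - b for b, d in pairs]) if pairs else np.zeros(0)
    hist, _ = np.histogram(pers, bins=n_bins, range=(lo, hi))
    return hist.astype(np.float32)
```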

The dosimetric impacts of CT-based deep learning autocontouring algorithm for prostate cancer radiotherapy planning: dosimetric accuracy of DirectORGANS.

Dinç SÇ, Üçgül AN, Bora H, Şentürk E

PubMed | Aug 2, 2025
In this study, we aimed to dosimetrically evaluate the usability of a new-generation autocontouring algorithm (DirectORGANS) that automatically identifies and contours organs directly on the computed tomography (CT) simulator before prostate radiotherapy plans are created. CT images of 10 patients were used. The prostate, bladder, rectum, and femoral heads of the 10 patients were contoured automatically by the DirectORGANS algorithm at the CT simulator. On the same CT image sets, the same target volumes and organs at risk were contoured manually by an experienced physician using MRI images and served as the reference structures. Doses for the manually delineated and automatically generated contours of the target volume and organs at risk were obtained from the dose-volume histogram of the same plan. The conformity index (CI) and homogeneity index (HI) were calculated to evaluate the target volumes. For the organs at risk, V60, V65, and V70 were evaluated for the rectum; V65, V70, V75, and V80 for the bladder; and maximum doses for the femoral heads. The Mann-Whitney U test was used for statistical comparison with the SPSS statistical package (P < 0.05). Comparing the doses of the manual contours (MC) with the auto contours (AC), there was no significant difference for the organs at risk. However, there were statistically significant differences in HI and CI values due to differences in prostate contouring (P < 0.05). The study showed the need for clinicians to edit target volumes using MRI before treatment planning, but demonstrated that the organs at risk could be delineated safely without correction. The DirectORGANS algorithm is suitable for use in RT planning to minimize differences between physicians and to shorten this contouring step.
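For reference, the snippet below computes the kind of DVH-derived metrics compared in the study from per-voxel structure doses. The abstract does not state which CI/HI definitions were used, so the ICRU-83 homogeneity index and a simple Vx metric are shown as commonly used stand-ins (a conformity index would additionally require the prescription isodose volume):

```python
# DVH-style metrics from per-voxel dose samples of a structure (doses in Gy).
import numpy as np

def dose_percentile(doses: np.ndarray, pct: float) -> float:
    """D_pct: minimum dose received by the hottest `pct`% of the structure volume."""
    return float(np.percentile(doses, 100.0 - pct))

def homogeneity_index(target_doses: np.ndarray) -> float:
    """ICRU 83 homogeneity index: HI = (D2% - D98%) / D50%."""
    d2, d98, d50 = (dose_percentile(target_doses, p) for p in (2, 98, 50))
    return (d2 - d98) / d50

def v_dose(structure_doses: np.ndarray, threshold_gy: float) -> float:
    """Vx: fraction of the structure receiving at least `threshold_gy`."""
    return float(np.mean(structure_doses >= threshold_gy))

# e.g. rectum V60 for manual vs. auto contours:
# v60_manual = v_dose(rectum_doses_manual, 60.0)
# v60_auto = v_dose(rectum_doses_auto, 60.0)
```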

Transfer learning based deep architecture for lung cancer classification using CT image with pattern and entropy based feature set.

R N, C M V

PubMed | Aug 2, 2025
Early detection of lung cancer, which remains one of the leading causes of death worldwide, is important for improved prognosis, and CT scanning is an important diagnostic modality. Classifying lung cancer from CT scans is challenging because the disease presents highly variable features. A hybrid deep architecture, ILN-TL-DM, is presented in this paper for precise classification of lung cancer from CT scan images. Initially, an adaptive Gaussian filtering method is applied during pre-processing to eliminate noise and enhance the quality of the CT image. An Improved Attention-based ResU-Net (P-ResU-Net) model is then used during segmentation to accurately isolate the lung and tumor areas from the rest of the image. During feature extraction, various features are derived from the segmented images, such as the Local Gabor Transitional Pattern (LGTrP), Pyramid of Histograms of Oriented Gradients (PHOG), deep features, and improved entropy-based features, all intended to improve the representation of the tumor areas. Finally, classification exploits a hybrid deep learning architecture integrating an improved LeNet structure with Transfer Learning (ILN-TL) and a DeepMaxout (DM) structure. The outputs of both models are merged with a soft voting strategy, which produces the final classification result separating cancerous from non-cancerous tissue. The strategy greatly enhances the accuracy and robustness of lung cancer detection, showing how sophisticated neural network structures can be combined with feature engineering and ensemble methods to achieve better medical image classification. The ILN-TL-DM model consistently outperforms conventional methods, with greater accuracy (0.962), specificity (0.955) and NPV (0.964).
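A minimal sketch of the soft-voting step that merges the two branch outputs (the ILN-TL and DeepMaxout classifiers) is shown below. The equal weights, the two-class setting, and the label order are assumptions, since the abstract does not report them:

```python
# Soft voting: average class probabilities from both branches, then argmax.
import numpy as np

def soft_vote(probs_iln_tl: np.ndarray, probs_dm: np.ndarray,
              weights=(0.5, 0.5)) -> np.ndarray:
    """Combine (N, 2) probability arrays from the two branches into final labels."""
    combined = weights[0] * probs_iln_tl + weights[1] * probs_dm
    return combined.argmax(axis=1)      # assumed label order: 0 = non-cancerous, 1 = cancerous

# preds = soft_vote(iln_tl_model.predict_proba(x), deepmaxout_model.predict_proba(x))
```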

AI enhanced diagnostic accuracy and workload reduction in hepatocellular carcinoma screening.

Lu RF, She CY, He DN, Cheng MQ, Wang Y, Huang H, Lin YD, Lv JY, Qin S, Liu ZZ, Lu ZR, Ke WP, Li CQ, Xiao H, Xu ZF, Liu GJ, Yang H, Ren J, Wang HB, Lu MD, Huang QH, Chen LD, Wang W, Kuang M

PubMed | Aug 2, 2025
Hepatocellular carcinoma (HCC) ultrasound screening encounters challenges related to accuracy and radiologist workload. This retrospective, multicenter study assessed four artificial intelligence (AI)-enhanced strategies using 21,934 liver ultrasound images from 11,960 patients to improve HCC ultrasound screening accuracy and reduce radiologist workload. UniMatch was used for lesion detection and LivNet for classification, trained on 17,913 images. Among the strategies tested, Strategy 4, which combined AI for initial detection with radiologist evaluation of negative cases in both the detection and classification phases, outperformed the others. It not only matched the high sensitivity of the original algorithm (0.956 vs. 0.991) but also improved specificity (0.787 vs. 0.698), reduced radiologist workload by 54.5%, and decreased both recall and false-positive rates. This approach demonstrates a successful model of human-AI collaboration, enhancing clinical outcomes while mitigating unnecessary patient anxiety and system burden by minimizing recalls and false positives.
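A schematic sketch of Strategy 4 as described above, in which the AI reads every exam first and radiologists review only AI-negative cases at both the detection and classification stages. The function and attribute names (ai_detect, ai_classify, exam.id) are hypothetical placeholders for whatever interfaces a deployment would provide:

```python
# Triage sketch: AI-first reading with human review reserved for AI-negative cases.
def strategy4_triage(exams, ai_detect, ai_classify, radiologist_review):
    """Return final labels per exam and the number of human reads still required."""
    results, human_reads = {}, 0
    for exam in exams:
        detected = ai_detect(exam)
        if not detected:                      # AI-negative detection -> human check
            human_reads += 1
            detected = radiologist_review(exam, stage="detection")
        if not detected:
            results[exam.id] = "negative"
            continue
        suspicious = ai_classify(exam)
        if not suspicious:                    # AI-benign classification -> human check
            human_reads += 1
            suspicious = radiologist_review(exam, stage="classification")
        results[exam.id] = "HCC suspected" if suspicious else "benign"
    return results, human_reads
```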