
CT-Based 3D Super-Resolution Radiomics for the Differential Diagnosis of Brucella vs. Tuberculous Spondylitis using Deep Learning.

Wang K, Qi L, Li J, Zhang M, Du H

PubMed · Aug 4, 2025
This study aims to improve the accuracy of distinguishing Tuberculous Spondylitis (TBS) from Brucella Spondylitis (BS) by developing radiomics models using Deep Learning and CT images enhanced with Super-Resolution (SR). A total of 94 patients diagnosed with BS or TBS were randomly divided into training (n=65) and validation (n=29) groups in a 7:3 ratio. In the training set, there were 40 BS and 25 TBS patients, with a mean age of 58.34 ± 12.53 years. In the validation set, there were 17 BS and 12 TBS patients, with a mean age of 58.48 ± 12.29 years. Standard CT images were enhanced using SR, improving spatial resolution and image quality. The lesion regions of interest (ROIs) were manually segmented, and radiomics features were extracted. ResNet18 and ResNet34 were used for deep learning feature extraction and model training. Four multi-layer perceptron (MLP) models were developed: clinical, radiomics (Rad), deep learning (DL), and a combined model. Model performance was assessed using five-fold cross-validation, ROC analysis, and decision curve analysis (DCA). Key clinical and imaging features showed significant differences between TBS and BS (e.g., gender, p=0.0038; parrot-beak appearance, p<0.001; dead bone, p<0.001; deformities of the spinal posterior process, p=0.0044; psoas abscess, p<0.001). The combined model outperformed the others, achieving the highest AUC (0.952), with ResNet34 and SR-enhanced images further boosting performance. Sensitivity reached 0.909 and specificity 0.941. DCA confirmed clinical applicability. The integration of SR-enhanced CT imaging and deep learning radiomics appears to improve diagnostic differentiation between BS and TBS. The combined model, especially when using ResNet34 and GAN-based super-resolution, demonstrated better predictive performance. High-resolution imaging may facilitate better lesion delineation and more robust feature extraction. Nevertheless, further validation with larger, multicenter cohorts is needed to confirm generalizability and reduce potential bias from the retrospective design and imaging heterogeneity. This study suggests that integrating deep learning radiomics with super-resolution may improve the differentiation between TBS and BS compared to standard CT imaging. However, prospective multi-center studies are necessary to validate its clinical applicability.
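A minimal sketch of what the combined model's feature fusion could look like in PyTorch, assuming a ResNet34 backbone for deep features concatenated with handcrafted radiomics and clinical features ahead of a small MLP head; the feature counts and layer sizes are illustrative, not the authors' configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class CombinedClassifier(nn.Module):
    """Fuses deep, radiomics, and clinical features for BS vs. TBS."""
    def __init__(self, n_radiomics: int = 107, n_clinical: int = 5):
        super().__init__()
        backbone = resnet34(weights=None)
        backbone.fc = nn.Identity()          # expose the 512-d deep features
        self.backbone = backbone
        self.mlp = nn.Sequential(
            nn.Linear(512 + n_radiomics + n_clinical, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 2),               # two classes: BS, TBS
        )

    def forward(self, roi_img, radiomics, clinical):
        # roi_img: (B, 3, H, W) SR-enhanced ROI replicated to 3 channels
        deep = self.backbone(roi_img)        # (B, 512)
        fused = torch.cat([deep, radiomics, clinical], dim=1)
        return self.mlp(fused)
```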

Adapting foundation models for rapid clinical response: intracerebral hemorrhage segmentation in emergency settings.

Gerbasi A, Mazzacane F, Ferrari F, Del Bello B, Cavallini A, Bellazzi R, Quaglini S

PubMed · Aug 3, 2025
Intracerebral hemorrhage (ICH) is a medical emergency that demands rapid and accurate diagnosis for optimal patient management. Segmentation of hemorrhagic lesions on CT scans is a necessary first step for acquiring quantitative imaging data, which are becoming increasingly useful in the clinical setting. However, traditional manual segmentation is time-consuming and prone to inter-rater variability, creating a need for automated solutions. This study introduces a novel approach combining advanced deep learning models to segment extensive and morphologically variable ICH lesions in non-contrast CT scans. We propose a two-step methodology that begins with a user-defined loose bounding box around the lesion, followed by a fine-tuned YOLOv8-S object detection model to generate precise, slice-specific bounding boxes. These bounding boxes are then used to prompt the Medical Segment Anything Model (MedSAM) for accurate lesion segmentation. Our pipeline achieves high segmentation accuracy with minimal supervision, demonstrating strong potential as a practical alternative to task-specific models. We evaluated the model on a dataset of 252 CT scans, demonstrating high performance in segmentation accuracy and robustness. Finally, the resulting segmentation tool is integrated into a user-friendly web application prototype, offering clinicians a simple interface for lesion identification and radiomic quantification.
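A minimal sketch of the two-step prompting pipeline, assuming the ultralytics YOLO API and a SAM-style predictor interface (MedSAM builds on Segment Anything); the checkpoint paths and single-slice workflow here are hypothetical:

```python
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("yolov8s_ich.pt")                                  # hypothetical fine-tuned weights
sam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")   # hypothetical MedSAM checkpoint
predictor = SamPredictor(sam)

def segment_slice(ct_slice_rgb: np.ndarray) -> np.ndarray:
    """Detect the lesion on one CT slice, then prompt SAM with the box."""
    det = detector(ct_slice_rgb, verbose=False)[0]
    if len(det.boxes) == 0:
        return np.zeros(ct_slice_rgb.shape[:2], dtype=bool)
    box = det.boxes.xyxy[0].cpu().numpy()          # highest-confidence box
    predictor.set_image(ct_slice_rgb)
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    return masks[0]                                # binary lesion mask
```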

LoRA-based methods on Unet for transfer learning in Subarachnoid Hematoma Segmentation

Cristian Minoccheri, Matthew Hodgman, Haoyuan Ma, Rameez Merchant, Emily Wittrup, Craig Williamson, Kayvan Najarian

arXiv preprint · Aug 3, 2025
Aneurysmal subarachnoid hemorrhage (SAH) is a life-threatening neurological emergency with mortality rates exceeding 30%. Transfer learning from related hematoma types represents a potentially valuable but underexplored approach. Although Unet architectures remain the gold standard for medical image segmentation due to their effectiveness on limited datasets, Low-Rank Adaptation (LoRA) methods for parameter-efficient transfer learning have been rarely applied to convolutional neural networks in medical imaging contexts. We implemented a Unet architecture pre-trained on computed tomography scans from 124 traumatic brain injury patients across multiple institutions, then fine-tuned on 30 aneurysmal SAH patients from the University of Michigan Health System using 3-fold cross-validation. We developed a novel CP-LoRA method based on tensor CP-decomposition and introduced DoRA variants (DoRA-C, convDoRA, CP-DoRA) that decompose weight matrices into magnitude and directional components. We compared these approaches against existing LoRA methods (LoRA-C, convLoRA) and standard fine-tuning strategies across different modules on a multi-view Unet model. LoRA-based methods consistently outperformed standard Unet fine-tuning. Performance varied by hemorrhage volume, with all methods showing improved accuracy for larger volumes. CP-LoRA achieved comparable performance to existing methods while using significantly fewer parameters. Over-parameterization with higher ranks consistently yielded better performance than strictly low-rank adaptations. This study demonstrates that transfer learning between hematoma types is feasible and that LoRA-based methods significantly outperform conventional Unet fine-tuning for aneurysmal SAH segmentation.
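A minimal sketch of the CP-LoRA idea for convolutions: the frozen pre-trained kernel receives a trainable rank-R update expressed as a CP decomposition over its four modes (out-channels, in-channels, kernel height, kernel width). This is an illustrative reconstruction of the concept, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPLoRAConv2d(nn.Module):
    def __init__(self, base: nn.Conv2d, rank: int = 8, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # freeze pre-trained weights
        o, i, kh, kw = base.weight.shape
        self.A = nn.Parameter(torch.randn(o, rank) * 0.01)
        self.B = nn.Parameter(torch.randn(i, rank) * 0.01)
        self.C = nn.Parameter(torch.zeros(kh, rank))   # zero-init so the update starts at 0
        self.D = nn.Parameter(torch.randn(kw, rank) * 0.01)
        self.scale = alpha / rank

    def forward(self, x):
        # delta_W[o,i,h,w] = sum_r A[o,r] * B[i,r] * C[h,r] * D[w,r]
        delta = torch.einsum("or,ir,hr,wr->oihw", self.A, self.B, self.C, self.D)
        w = self.base.weight + self.scale * delta
        return F.conv2d(x, w, self.base.bias, self.base.stride,
                        self.base.padding, self.base.dilation, self.base.groups)
```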

Less is More: AMBER-AFNO -- a New Benchmark for Lightweight 3D Medical Image Segmentation

Andrea Dosi, Semanto Mondal, Rajib Chandra Ghosh, Massimo Brescia, Giuseppe Longo

arXiv preprint · Aug 3, 2025
This work presents the results of a methodological transfer from remote sensing to healthcare, adapting AMBER -- a transformer-based model originally designed for multiband images, such as hyperspectral data -- to the task of 3D medical datacube segmentation. In this study, we use the AMBER architecture with Adaptive Fourier Neural Operators (AFNO) in place of the multi-head self-attention mechanism. While existing models rely on various forms of attention to capture global context, AMBER-AFNO achieves this through frequency-domain mixing, enabling a drastic reduction in model complexity. This design reduces the number of trainable parameters by over 80% compared to UNETR++, while maintaining a FLOPs count comparable to other state-of-the-art architectures. Model performance is evaluated on two benchmark 3D medical datasets -- ACDC and Synapse -- using standard metrics such as Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD), demonstrating that AMBER-AFNO achieves competitive or superior accuracy with significant gains in training efficiency, inference speed, and memory usage.
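A minimal sketch of frequency-domain token mixing in the AFNO spirit, with the block-diagonal MLP of the original simplified to a learned per-frequency complex scaling plus soft-thresholding; shapes and initialization are illustrative:

```python
import torch
import torch.nn as nn

class AFNOMixer(nn.Module):
    """Replaces self-attention: mix tokens in the Fourier domain."""
    def __init__(self, dim: int, n_tokens: int, lam: float = 0.01):
        super().__init__()
        n_freq = n_tokens // 2 + 1
        self.w = nn.Parameter(torch.randn(n_freq, dim, 2) * 0.02)  # complex weight
        self.shrink = nn.Softshrink(lam)                           # frequency sparsity

    def forward(self, x):                        # x: (batch, tokens, dim)
        xf = torch.fft.rfft(x, dim=1)            # mix along the token axis
        w = torch.view_as_complex(self.w)        # (n_freq, dim)
        xf = xf * w                              # per-frequency channel scaling
        xf = torch.view_as_complex(self.shrink(torch.view_as_real(xf)))
        return torch.fft.irfft(xf, n=x.shape[1], dim=1)
```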

The dosimetric impacts of CT-based deep learning autocontouring algorithm for prostate cancer radiotherapy planning: dosimetric accuracy of DirectORGANS.

Dinç SÇ, Üçgül AN, Bora H, Şentürk E

PubMed · Aug 2, 2025
In this study, we aimed to dosimetrically evaluate the usability of a new-generation autocontouring algorithm (DirectORGANS) that automatically identifies organs and contours them directly at the computed tomography (CT) simulator before prostate radiotherapy plans are created. CT images of 10 patients were used. The prostate, bladder, rectum, and femoral heads of each patient were automatically contoured with the DirectORGANS algorithm at the CT simulator. On the same CT image sets, the same target volumes and organs at risk were manually contoured by an experienced physician using MRI images and served as reference structures. Doses for the manually delineated contours and for the auto contours of the target volume and organs at risk were obtained from the dose-volume histogram of the same plan. The conformity index (CI) and homogeneity index (HI) were calculated to evaluate the target volumes. For critical organ structures, V60, V65, and V70 for the rectum; V65, V70, V75, and V80 for the bladder; and maximum doses for the femoral heads were evaluated. The Mann-Whitney U test was used for statistical comparison in SPSS (P < 0.05). Comparing the doses of the manual contours (MC) with those of the auto contours (AC), there was no significant difference for the organs at risk. However, there were statistically significant differences in HI and CI values due to differences in prostate contouring (P < 0.05). The study showed the need for clinicians to edit target volumes using MRI before treatment planning. However, it demonstrated that the automatically delineated organs at risk could be used safely without correction. The DirectORGANS algorithm is suitable for use in RT planning to minimize differences between physicians and shorten the duration of the contouring step.
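A minimal sketch of the dosimetric comparison, assuming common definitions of HI (ICRU Report 83) and CI (RTOG), since the paper's exact formulas are not given, with illustrative DVH values:

```python
from scipy.stats import mannwhitneyu

def homogeneity_index(d2, d50, d98):
    """HI = (D2% - D98%) / D50%  (ICRU 83-style definition)."""
    return (d2 - d98) / d50

def conformity_index(v_ref_isodose, v_target):
    """CI = volume covered by the reference isodose / target volume (RTOG-style)."""
    return v_ref_isodose / v_target

# e.g. rectum V70 (%) for 10 patients, manual vs. DirectORGANS contours
manual_v70 = [12.1, 9.8, 15.3, 11.0, 8.7, 13.9, 10.5, 12.8, 9.1, 14.2]   # illustrative values
auto_v70   = [11.8, 10.1, 15.0, 11.4, 8.9, 13.5, 10.9, 12.3, 9.4, 13.8]  # illustrative values
stat, p = mannwhitneyu(manual_v70, auto_v70)
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")   # p < 0.05 -> significant difference
```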

Transfer learning based deep architecture for lung cancer classification using CT image with pattern and entropy based feature set.

R N, C M V

PubMed · Aug 2, 2025
Early detection of lung cancer, which remains one of the leading causes of death worldwide, is important for improved prognosis, and CT scanning is a key diagnostic modality. Lung cancer classification from CT scans is challenging because the disease presents highly variable features. A hybrid deep architecture, ILN-TL-DM, is presented in this paper for precise classification of lung cancer from CT scan images. Initially, an Adaptive Gaussian filtering method is applied during pre-processing to eliminate noise and enhance the quality of the CT image. An Improved Attention-based ResU-Net (P-ResU-Net) model is then used for segmentation to accurately isolate the lung and tumor areas from the rest of the image. During feature extraction, various features are derived from the segmented images, including Local Gabor Transitional Pattern (LGTrP), Pyramid of Histograms of Oriented Gradients (PHOG), deep features, and improved entropy-based features, all intended to improve the representation of the tumor areas. Finally, classification exploits a hybrid deep learning architecture integrating an improved LeNet structure with Transfer Learning (ILN-TL) and a DeepMaxout (DM) structure. The two model outputs are merged with a soft voting strategy, producing the final classification that separates cancerous from non-cancerous tissue. The strategy greatly enhances the accuracy and robustness of lung cancer detection, showing how combining sophisticated neural network structures with feature engineering and ensemble methods can achieve better medical image classification. The ILN-TL-DM model consistently outperforms conventional methods, with greater accuracy (0.962), specificity (0.955), and NPV (0.964).
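A minimal sketch of the final soft-voting step, assuming equal branch weights: the softmax probabilities of the ILN-TL and DeepMaxout branches are averaged and the argmax taken:

```python
import torch
import torch.nn.functional as F

def soft_vote(logits_iln_tl: torch.Tensor, logits_dm: torch.Tensor) -> torch.Tensor:
    """Average the class probabilities of both branches, then pick a class."""
    p1 = F.softmax(logits_iln_tl, dim=1)
    p2 = F.softmax(logits_dm, dim=1)
    return ((p1 + p2) / 2).argmax(dim=1)     # 0 = non-cancerous, 1 = cancerous
```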

Deep learning-driven incidental detection of vertebral fractures in cancer patients: advancing diagnostic precision and clinical management.

Mniai EM, Laletin V, Tselikas L, Assi T, Bonnet B, Camez AO, Zemmouri A, Muller S, Moussa T, Chaibi Y, Kiewsky J, Quenet S, Avare C, Lassau N, Balleyguier C, Ayobi A, Ammari S

PubMed · Aug 2, 2025
Vertebral compression fractures (VCFs) are the most prevalent skeletal manifestations of osteoporosis in cancer patients. Yet, they are frequently missed or not reported in routine clinical radiology, adversely impacting patient outcomes and quality of life. This study evaluates the diagnostic performance of a deep-learning (DL)-based application and its potential to reduce the miss rate of incidental VCFs in a high-risk cancer population. We retrospectively analysed thoraco-abdomino-pelvic (TAP) CT scans from 1556 patients with stage IV cancer collected consecutively over a 4-month period (September-December 2023) in a tertiary cancer center. A DL-based application flagged cases positive for VCFs, which were subsequently reviewed by two expert radiologists for validation. Additionally, grade 3 fractures identified by the application were independently assessed by two expert interventional radiologists to determine their eligibility for vertebroplasty. Of the 1556 cases, 501 were flagged as positive for VCF by the application, with 436 confirmed as true positives by expert review, yielding a positive predictive value (PPV) of 87%. Common causes of false positives included sclerotic vertebral metastases, scoliosis, and vertebrae misidentification. Notably, 83.5% (364/436) of true positive VCFs were absent from radiology reports, indicating a substantial non-report rate in routine practice. Ten grade 3 fractures were overlooked or not reported by radiologists. Among them, 9 were deemed suitable for vertebroplasty by expert interventional radiologists. This study underscores the potential of DL-based applications to improve the detection of VCFs. The analyzed tool can assist radiologists in detecting more incidental vertebral fractures in adult cancer patients, optimising timely treatment and reducing associated morbidity and economic burden. Moreover, it might enhance patient access to interventional treatments such as vertebroplasty. These findings highlight the transformative role that DL can play in optimising clinical management and outcomes for osteoporosis-related VCFs in cancer patients.
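The headline figures follow directly from the reported counts; a quick arithmetic check:

```python
flagged, true_pos, unreported = 501, 436, 364
ppv = true_pos / flagged                   # 436 / 501 ≈ 0.870 -> PPV of 87%
non_report_rate = unreported / true_pos    # 364 / 436 ≈ 0.835 -> 83.5% absent from reports
print(f"PPV = {ppv:.1%}, non-report rate = {non_report_rate:.1%}")
```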

BEA-CACE: branch-endpoint-aware double-DQN for coronary artery centerline extraction in CT angiography images.

Zhang Y, Luo G, Wang W, Cao S, Dong S, Yu D, Wang X, Wang K

PubMed · Aug 1, 2025
In order to automate the centerline extraction of the coronary tree, three challenges must be addressed: tracking branches automatically, passing through plaques successfully, and detecting endpoints accurately. This study aims to develop a method that solves all three. We propose a branch-endpoint-aware coronary centerline extraction framework consisting of a deep reinforcement learning-based tracker and a 3D dilated CNN-based detector. The tracker is designed to predict the actions of an agent with the objective of tracking the centerline. The detector identifies bifurcation points and endpoints, assisting the tracker in tracking branches and terminating the tracking process automatically. The detector can also estimate the radius values of the coronary artery. The method achieves state-of-the-art performance in both centerline extraction and radius estimation. Furthermore, the method requires minimal user interaction to extract a coronary tree, a feature that surpasses other interactive methods. The method can track branches automatically, pass through plaques successfully, and detect endpoints accurately. Compared with other interactive methods that require multiple seeds, our method needs only one seed to extract the entire coronary tree.
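A minimal sketch of the double-DQN update underlying the tracker, where the online network selects the next tracking action and the target network evaluates it; the state/action encoding and replay-buffer format are assumptions about the setup, not the authors' exact design:

```python
import torch
import torch.nn.functional as F

def double_dqn_loss(online, target, batch, gamma: float = 0.99):
    # batch: (state, action, reward, next_state, done) tensors from a replay buffer;
    # states would be local image patches, actions discrete steps along the vessel.
    s, a, r, s_next, done = batch
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1)                              # online net selects...
        q_next = target(s_next).gather(1, a_star.unsqueeze(1)).squeeze(1)  # ...target net evaluates
        y = r + gamma * (1 - done.float()) * q_next   # detector's endpoint call sets done = 1
    return F.smooth_l1_loss(q, y)
```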

Light Convolutional Neural Network to Detect Chronic Obstructive Pulmonary Disease (COPDxNet): A Multicenter Model Development and External Validation Study.

Rabby ASA, Chaudhary MFA, Saha P, Sthanam V, Nakhmani A, Zhang C, Barr RG, Bon J, Cooper CB, Curtis JL, Hoffman EA, Paine R, Puliyakote AK, Schroeder JD, Sieren JC, Smith BM, Woodruff PG, Reinhardt JM, Bhatt SP, Bodduluri S

PubMed · Aug 1, 2025
Approximately 70% of adults with chronic obstructive pulmonary disease (COPD) remain undiagnosed. Opportunistic screening using chest computed tomography (CT) scans, commonly acquired in clinical practice, may be used to improve COPD detection through simple, clinically applicable deep-learning models. We developed a lightweight convolutional neural network (COPDxNet) that utilizes minimally processed chest CT scans to detect COPD. We analyzed 13,043 inspiratory chest CT scans from COPDGene participants (9,675 standard-dose and 3,368 low-dose scans), which we randomly split into training (70%) and test (30%) sets at the participant level so that no individual contributed to both sets. COPD was defined by post-bronchodilator FEV1/FVC < 0.70. We constructed a simple, four-block convolutional model that was trained on pooled data and validated on the held-out standard- and low-dose test sets. External validation was performed using standard-dose CT scans from 2,890 SPIROMICS participants and low-dose CT scans from 7,893 participants in the National Lung Screening Trial (NLST). We evaluated performance using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, Brier scores, and calibration curves. On COPDGene standard-dose CT scans, COPDxNet achieved an AUC of 0.92 (95% CI: 0.91 to 0.93), sensitivity of 80.2%, and specificity of 89.4%. On low-dose scans, AUC was 0.88 (95% CI: 0.86 to 0.90). When the COPDxNet model was applied to external validation datasets, it showed an AUC of 0.92 (95% CI: 0.91 to 0.93) in SPIROMICS and 0.82 (95% CI: 0.81 to 0.83) in NLST. The model was well-calibrated, with Brier scores of 0.11 for standard-dose and 0.13 for low-dose CT scans in COPDGene, 0.12 in SPIROMICS, and 0.17 in NLST. COPDxNet demonstrates high discriminative accuracy and generalizability for detecting COPD on standard- and low-dose chest CT scans, supporting its potential for clinical and screening applications across diverse populations.
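A minimal sketch of a four-block convolutional classifier in the spirit of COPDxNet; channel widths, 2D-slice input, and the pooled linear head are assumptions, since the paper's exact architecture is not reproduced here:

```python
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

copdxnet_sketch = nn.Sequential(
    conv_block(1, 16),       # minimally processed CT slice in, 1 channel
    conv_block(16, 32),
    conv_block(32, 64),
    conv_block(64, 128),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 1),       # sigmoid logit: P(COPD), i.e. FEV1/FVC < 0.70
)
```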

Establishing a Deep Learning Model That Integrates Pretreatment and Midtreatment Computed Tomography to Predict Treatment Response in Non-Small Cell Lung Cancer.

Chen X, Meng F, Zhang P, Wang L, Yao S, An C, Li H, Zhang D, Li H, Li J, Wang L, Liu Y

PubMed · Aug 1, 2025
Patients with identical stages or similar tumor volumes can vary significantly in their responses to radiation therapy (RT) due to individual characteristics, making personalized RT for non-small cell lung cancer (NSCLC) challenging. This study aimed to develop a deep learning model integrating pretreatment and midtreatment computed tomography (CT) to predict treatment response in NSCLC patients. We retrospectively collected data from 168 NSCLC patients across 3 hospitals. Data from Shanghai General Hospital (SGH, 35 patients) and Shanxi Cancer Hospital (SCH, 93 patients) were used for model training and internal validation, while data from Linfen Central Hospital (LCH, 40 patients) were used for external validation. Deep learning, radiomics, and clinical features were extracted to establish a varying-time-interval long short-term memory (LSTM) network for response prediction. Furthermore, we derived a model-deduced personalized dose escalation (DE) for patients predicted to have suboptimal gross tumor volume regression. The area under the receiver operating characteristic curve (AUC) and predicted absolute error were used to evaluate the predicted Response Evaluation Criteria in Solid Tumors classification and the proportion of gross tumor volume residual. DE was calculated as the biological equivalent dose using an α/β ratio of 10 Gy. The model using only pretreatment CT achieved the highest AUC of 0.762 and 0.687 in internal and external validation, respectively, whereas the model integrating both pretreatment and midtreatment CT achieved AUCs of 0.869 and 0.798, with predicted absolute errors of 0.137 and 0.185, respectively. We performed personalized DE for 29 patients. Their original biological equivalent dose was approximately 72 Gy, within the range of 71.6 Gy to 75 Gy. DE ranged from 77.7 to 120 Gy for the 29 patients, with 17 patients exceeding 100 Gy and 8 patients reaching the model's preset upper limit of 120 Gy. Combining pretreatment and midtreatment CT enhances prediction performance for RT response and offers a promising approach for personalized DE in NSCLC.
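A minimal sketch of the biological equivalent dose (BED) formula the DE step relies on, BED = n·d·(1 + d/(α/β)) with α/β = 10 Gy as stated above; the fractionation example is illustrative but reproduces the roughly 72 Gy baseline reported:

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    """Biologically equivalent dose in Gy for n fractions of d Gy each."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

print(bed(30, 2.0))   # 30 x 2 Gy -> 72.0 Gy, matching the ~72 Gy baseline
```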