Multi Source COVID-19 Detection via Kernel-Density-based Slice Sampling

Chia-Ming Lee, Bo-Cheng Qiu, Ting-Yao Chen, Ming-Han Sun, Fang-Ying Lin, Jung-Tse Tsai, I-An Tsai, Yu-Fan Lin, Chih-Chung Hsu

arXiv preprint · Jul 2, 2025
We present our solution for the Multi-Source COVID-19 Detection Challenge, which requires classifying chest CT scans from four distinct medical centers. To address multi-source variability, we employ the Spatial-Slice Feature Learning (SSFL) framework with Kernel-Density-based Slice Sampling (KDS). Our preprocessing pipeline combines lung region extraction, quality control, and adaptive slice sampling to select eight representative slices per scan. We compare EfficientNet and Swin Transformer architectures on the validation set. The EfficientNet model achieves an F1-score of 94.68%, compared to the Swin Transformer's 93.34%. The results demonstrate the effectiveness of our KDS-based pipeline on multi-source data and highlight the importance of dataset balance in multi-institutional medical imaging evaluation.
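
To make the sampling step concrete, here is a minimal Python sketch of kernel-density-based slice selection, assuming the per-slice lung area drives the density estimate (the paper's exact criterion may differ); `kds_select_slices` and its inputs are hypothetical names, not the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kds_select_slices(lung_areas, n_slices=8):
    """Pick representative slice indices via a KDE over lung content."""
    idx = np.arange(len(lung_areas), dtype=float)
    # Weight each slice index by its lung area so lung-dense regions dominate.
    kde = gaussian_kde(idx, weights=lung_areas / lung_areas.sum())
    density = kde(idx)
    cdf = np.cumsum(density) / density.sum()
    # Evenly spaced quantiles of the density spread the picks across the lung.
    targets = np.linspace(0.1, 0.9, n_slices)
    return np.array([int(np.argmin(np.abs(cdf - t))) for t in targets])

areas = np.random.rand(120)      # stand-in for per-slice lung areas
print(kds_select_slices(areas))  # eight representative slice indices
```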

BronchoGAN: Anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy

Ahmad Soliman, Ron Keuth, Marian Himstedt

arXiv preprint · Jul 2, 2025
The limited availability of bronchoscopy images makes image synthesis particularly interesting for training deep learning models. Robust image translation across different domains -- virtual bronchoscopy, phantom, as well as in-vivo and ex-vivo image data -- is pivotal for clinical applications. This paper proposes BronchoGAN, which integrates anatomical constraints into a conditional GAN for image-to-image translation. In particular, we force bronchial orifices to match across input and output images. We further propose using foundation-model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and yielding models with substantially less reliance on individual training datasets. Moreover, our intermediate depth-image representation makes it easy to construct paired image data for training. Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated to images mimicking realistic human airway appearance. We demonstrated that anatomical structures (i.e., bronchial orifices) are robustly preserved with our approach, as shown qualitatively and quantitatively by improved FID, SSIM, and Dice scores. Our anatomical constraints enabled an improvement in the Dice coefficient of up to 0.43 for synthetic images. Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN allows public CT scan data (virtual bronchoscopy) to be incorporated in order to generate large-scale bronchoscopy image datasets with realistic appearance, bridging the gap left by missing public bronchoscopy images.
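
As a rough illustration of the anatomical constraint, the sketch below adds a soft-Dice penalty between bronchial-orifice masks of the input and translated images to a standard conditional-GAN generator loss. The mask source (a frozen segmentation network) and the weight `lambda_anat` are assumptions, not BronchoGAN's published configuration.

```python
import torch
import torch.nn.functional as F

def soft_dice(a, b, eps=1e-6):
    """Soft Dice overlap between two probability masks."""
    inter = (a * b).sum()
    return (2 * inter + eps) / (a.sum() + b.sum() + eps)

def generator_loss(disc_fake, mask_in, mask_out, lambda_anat=10.0):
    # Standard cGAN adversarial term: fool the discriminator.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake, torch.ones_like(disc_fake))
    # Anatomical constraint: orifice masks must match across input and output
    # (masks assumed to come from a frozen segmentation model).
    anat = 1.0 - soft_dice(mask_in, mask_out)
    return adv + lambda_anat * anat
```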

Multichannel deep learning prediction of major pathological response after neoadjuvant immunochemotherapy in lung cancer: a multicenter diagnostic study.

Geng Z, Li K, Mei P, Gong Z, Yan R, Huang Y, Zhang C, Zhao B, Lu M, Yang R, Wu G, Ye G, Liao Y

PubMed · Jul 2, 2025
This study aimed to develop a pretreatment CT-based multichannel predictor integrating deep learning features encoded by Transformer models for the preoperative diagnosis of major pathological response (MPR) in non-small cell lung cancer (NSCLC) patients receiving neoadjuvant immunochemotherapy. This multicenter diagnostic study retrospectively included 332 NSCLC patients from four centers. Pretreatment computed tomography images were preprocessed and segmented into region-of-interest cubes for radiomics modeling. These cubes were cropped into four groups of 2-dimensional image modules. A GoogLeNet architecture was trained independently on each group within a multichannel framework, with gradient-weighted class activation mapping and SHapley Additive exPlanations values used for visualization. Deep learning features were extracted from the four image groups and fused using a Transformer fusion model. After model training, performance was evaluated via the area under the curve (AUC), sensitivity, specificity, F1 score, confusion matrices, calibration curves, decision curve analysis, integrated discrimination improvement, net reclassification improvement, and the DeLong test. The dataset was allocated into training (n = 172, Center 1), internal validation (n = 44, Center 1), and external test (n = 116, Centers 2-4) cohorts. Four optimal deep learning models and the best Transformer fusion model were developed. In the external test cohort, the traditional radiomics model exhibited an AUC of 0.736 [95% confidence interval (CI): 0.645-0.826]. The optimal deep learning image module achieved a superior AUC of 0.855 (95% CI: 0.777-0.934), and the fusion model, Transformer_GoogLeNet, further improved classification accuracy (AUC = 0.924, 95% CI: 0.875-0.973). Fusing multichannel deep learning features with a Transformer encoder can accurately diagnose whether NSCLC patients receiving neoadjuvant immunochemotherapy will achieve MPR. Our findings may support improved surgical planning and contribute to better treatment outcomes through more accurate preoperative assessment.
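
A minimal sketch of the fusion idea, assuming each of the four channels yields a 1024-dimensional GoogLeNet feature vector that a small Transformer encoder attends over; the layer counts and dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TransformerFusion(nn.Module):
    def __init__(self, feat_dim=1024, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                # feats: (batch, 4, feat_dim)
        fused = self.encoder(feats)          # self-attention across channels
        return self.head(fused.mean(dim=1))  # pool channels, then classify

# One feature vector per image group, e.g. from four GoogLeNet branches.
logits = TransformerFusion()(torch.randn(2, 4, 1024))
```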

Performance of two different artificial intelligence models in dental implant planning among four different implant planning software: a comparative study.

Roongruangsilp P, Narkbuakaew W, Khongkhunthian P

PubMed · Jul 2, 2025
The integration of artificial intelligence (AI) in dental implant planning has emerged as a transformative approach to enhance diagnostic accuracy and efficiency. This study evaluated the performance of two object detection models, Faster R-CNN and YOLOv7, in analyzing cross-sectional and panoramic images derived from DICOM files processed by four distinct dental imaging software platforms. The dataset consisted of 332 implant position images derived from DICOM files of 184 CBCT scans. Three hundred images were processed using DentiPlan Pro 3.7 software (NECTEC, NSTDA, Thailand) to develop the Faster R-CNN and YOLOv7 models for dental implant planning. For model testing, 32 additional implant position images, not included in the training set, were processed using four different software programs: DentiPlan Pro 3.7, DentiPlan Pro Plus 5.0 (DTP; NECTEC, NSTDA, Thailand), Implastation (ProDigiDent USA, USA), and Romexis 6.0 (Planmeca, Finland). Model performance was evaluated using detection rate, accuracy, precision, recall, F1 score, and the Jaccard Index (JI). Faster R-CNN achieved superior accuracy across imaging modalities, while YOLOv7 demonstrated higher detection rates, albeit with lower precision. The impact of image rendering algorithms on model performance underscores the need for standardized preprocessing pipelines. Although Faster R-CNN showed relatively higher performance metrics, statistical analysis revealed no significant differences between the models (p > 0.05). This study highlights the potential of AI-driven solutions in dental implant planning and advocates further research in this area. The absence of statistically significant differences suggests that either model can be used effectively, depending on whether accuracy or detection rate is prioritized. Furthermore, variations in image rendering algorithms across software platforms significantly influenced model outcomes; AI models for DICOM analysis should therefore rely on standardized image rendering to ensure consistent performance.
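
For reference, the Jaccard Index used to score detections reduces to box-level intersection-over-union; the short sketch below assumes the common (x1, y1, x2, y2) corner convention.

```python
def jaccard_index(box_a, box_b):
    """Box-level IoU for two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(jaccard_index((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.143
```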

Clinical value of the 70-kVp ultra-low-dose CT pulmonary angiography with deep learning image reconstruction.

Zhang Y, Wang L, Yuan D, Qi K, Zhang M, Zhang W, Gao J, Liu J

PubMed · Jul 2, 2025
This study assesses the feasibility of "double-low" (low radiation dose and low contrast media dose) CT pulmonary angiography (CTPA) based on deep-learning image reconstruction (DLIR) algorithms. One hundred consecutive patients (41 females; average age 60.9 years, range 18-90) were prospectively scanned on multi-detector CT systems. Fifty patients in the conventional-dose (CD) group underwent CTPA with a 100-kVp protocol using a traditional iterative reconstruction algorithm, and 50 patients in the low-dose (LD) group underwent CTPA with a 70-kVp DLIR protocol. Radiation and contrast agent doses were recorded and compared between groups, and objective parameters were measured and compared. Two radiologists separately rated overall image quality, artifacts, and image contrast on a 5-point scale, and the furthest visible branches were compared between groups. Compared to the CD group, the LD group reduced the dose-length product by 80.3% (p < 0.01) and the contrast media dose by 33.3%. CT values, SD values, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) showed no statistically significant differences between the LD and CD groups (all p > 0.05). Overall image quality scores were comparable between the LD and CD groups (p > 0.05), with good inter-reader agreement (k = 0.75). More peripheral pulmonary vessels could be assessed in the LD group than in the CD group. A 70-kVp protocol combined with DLIR reconstruction for CTPA can further reduce radiation and contrast agent doses while maintaining image quality and improving the visibility of distal pulmonary artery branches.
Question: Elevated radiation exposure and substantial doses of contrast media during CT pulmonary angiography (CTPA) increase patient risks.
Findings: The "double-low" CTPA protocol can reduce radiation doses by 80.3% and contrast doses by one-third while maintaining image quality.
Clinical relevance: With deep learning algorithms, CTPA images maintained excellent quality despite reduced radiation and contrast dosages, helping to reduce radiation exposure and kidney burden for patients. The "double-low" CTPA protocol, complemented by deep learning image reconstruction, prioritizes patient safety.
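
The objective metrics above follow standard definitions: SNR as an ROI's mean attenuation over its standard deviation, and CNR as the vessel-to-background attenuation difference over the background noise. A small sketch, where the choice of background (muscle) ROI as the noise reference is an assumption:

```python
import numpy as np

def snr(roi_hu):
    """Signal-to-noise ratio of one ROI (mean attenuation over its SD)."""
    return roi_hu.mean() / roi_hu.std()

def cnr(vessel_hu, background_hu):
    """Contrast-to-noise ratio against a background ROI's noise."""
    return (vessel_hu.mean() - background_hu.mean()) / background_hu.std()

vessel = np.random.normal(350, 20, 500)  # stand-in pulmonary artery HU values
muscle = np.random.normal(50, 15, 500)   # stand-in background muscle HU values
print(snr(vessel), cnr(vessel, muscle))
```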

Integrating CT radiomics and clinical features using machine learning to predict post-COVID pulmonary fibrosis.

Zhao Q, Li Y, Zhao C, Dong R, Tian J, Zhang Z, Huang L, Huang J, Yan J, Yang Z, Ruan J, Wang P, Yu L, Qu J, Zhou M

PubMed · Jul 2, 2025
The lack of reliable biomarkers for the early detection and risk stratification of post-COVID-19 pulmonary fibrosis (PCPF) underscores the urgent need for advanced predictive tools. This study aimed to develop a machine learning-based predictive model integrating quantitative CT (qCT) radiomics and clinical features to assess the risk of lung fibrosis in COVID-19 patients. A total of 204 patients with confirmed COVID-19 pneumonia were included. Of these, 93 patients were assigned to the development cohort (74 for training and 19 for internal validation), while 111 patients from three independent hospitals constituted the external validation cohort. Chest CT images were analyzed using qCT software, and clinical data and laboratory parameters were obtained from electronic health records. Least absolute shrinkage and selection operator (LASSO) regression with 5-fold cross-validation was used to select the most predictive features. Twelve machine learning algorithms were trained independently, and their performance was evaluated using receiver operating characteristic (ROC) curves, area under the curve (AUC) values, sensitivity, and specificity. Seventy-eight features were extracted and reduced to ten for model development. These included two qCT radiomics signatures: (1) whole lung_reticulation (%) interstitial lung disease (ILD) texture analysis, and (2) interstitial lung abnormality (ILA)_Num of lung zones ≥ 5%_whole lung_ILA. Among the 12 machine learning algorithms evaluated, the support vector machine (SVM) model demonstrated the best predictive performance, with AUCs of 0.836 (95% CI: 0.830-0.842) in the training cohort, 0.796 (95% CI: 0.777-0.816) in the internal validation cohort, and 0.797 (95% CI: 0.691-0.873) in the external validation cohort. The integration of CT radiomics, clinical, and laboratory variables using machine learning provides a robust tool for predicting pulmonary fibrosis progression in COVID-19 patients, facilitating early risk assessment and intervention.
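
A compact scikit-learn sketch of the described pipeline: LASSO feature selection with 5-fold cross-validation followed by an SVM classifier. The data here is synthetic, and the RBF kernel is an assumption rather than the paper's tuned configuration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(93, 78)       # 78 candidate qCT + clinical features
y = np.random.randint(0, 2, 93)   # fibrosis label (synthetic stand-in)

# LASSO with 5-fold CV shrinks uninformative coefficients to zero.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
keep = np.abs(lasso[-1].coef_) > 1e-6
if not keep.any():                # guard for this synthetic demo
    keep[:] = True

svm = SVC(kernel="rbf", probability=True).fit(X[:, keep], y)
print(keep.sum(), "features kept for the SVM")
```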

Ensemble methods and partially-supervised learning for accurate and robust automatic murine organ segmentation.

Daenen LHBA, de Bruijn J, Staut N, Verhaegen F

PubMed · Jul 2, 2025
Delineation of multiple organs in murine µCT images is crucial for preclinical studies but requires manual volumetric segmentation, a tedious and time-consuming process prone to inter-observer variability. Automatic deep learning-based segmentation can improve speed and reproducibility. While 2D and 3D deep learning models have been developed for anatomical segmentation, their generalization to external datasets has not been extensively investigated. Furthermore, ensemble learning, which combines the predictions of multiple 2D models, and partially-supervised learning (PSL), which enables training on partially-labeled datasets, have not been explored for preclinical purposes. This study demonstrates the first use of PSL frameworks in this setting and the superiority of 3D models in accuracy and generalizability to external datasets. Ensemble methods performed on par with or better than the best individual 2D network, but only 3D models consistently generalized to external datasets (Dice Similarity Coefficient (DSC) > 0.8). PSL frameworks showed promising results across various datasets and organs, though their generalization to external data can still be improved for some organs. This work highlights the superiority of 3D models over their 2D and ensemble counterparts in accuracy and generalizability for murine µCT image segmentation. Additionally, a promising PSL framework is presented for leveraging multiple datasets without complete annotations. Our model can increase time-efficiency and improve reproducibility in preclinical radiotherapy workflows by circumventing manual contouring bottlenecks. Moreover, the high segmentation accuracy of 3D models allows monitoring of multiple organs over time using repeated µCT imaging, potentially reducing the number of mice sacrificed in studies, in line with the 3R principle, specifically Reduction and Refinement.
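
As a rough sketch of the 2D ensembling idea, the snippet below averages per-slice predictions from three orthogonal slicing axes back into one 3D consensus mask; the stand-in models and the 0.5 threshold are assumptions, not the study's trained networks.

```python
import numpy as np

def ensemble_2d_predictions(volume, models, threshold=0.5):
    """Average per-axis 2D predictions into one 3D consensus mask."""
    probs = np.zeros_like(volume, dtype=float)
    for axis, model in zip((0, 1, 2), models):
        # Predict every slice along this axis, then restack into a volume.
        pred = np.stack([model(np.take(volume, i, axis=axis))
                         for i in range(volume.shape[axis])], axis=axis)
        probs += pred / len(models)
    return probs > threshold

toy = np.random.rand(16, 16, 16)
dummy = lambda sl: (sl > 0.5).astype(float)  # stand-in 2D "model"
print(ensemble_2d_predictions(toy, [dummy] * 3).mean())
```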

A federated learning-based privacy-preserving image processing framework for brain tumor detection from CT scans.

Al-Saleh A, Tejani GG, Mishra S, Sharma SK, Mousavirad SJ

PubMed · Jul 2, 2025
The detection of brain tumors is crucial in medical imaging, because accurate and early diagnosis can have a positive effect on patient outcomes. Because traditional deep learning models pool all training data centrally, they raise concerns about privacy, regulatory compliance, and the heterogeneous data held by different institutions. We introduce the anisotropic-residual capsule hybrid Gorilla Badger optimized network (Aniso-ResCapHGBO-Net) framework for detecting brain tumors in a privacy-preserving, decentralized system shared by many healthcare institutions. ResNet-50 and capsule networks are incorporated to achieve better feature extraction and preserve the spatial structure of images. The hybrid Gorilla Badger optimization algorithm (HGBOA) is applied to select the key features. Preprocessing techniques include anisotropic diffusion filtering, morphological operations, and mutual information-based image registration. Model updates are made secure and tamper-evident on a private Ethereum blockchain using an SHA-256 hashing scheme. The project is built using Python, TensorFlow, and PyTorch. The model achieves 99.07% accuracy, 98.54% precision, and 99.82% sensitivity on benchmark CT imaging of brain tumors, and also reduces both false negatives and false positives. The framework protects patients' data without decreasing the accuracy of brain tumor detection.
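
A minimal sketch of the privacy-preserving update flow described above: each site trains locally, the server averages weights (federated averaging), and every update is fingerprinted with SHA-256 for tamper evidence. The blockchain transaction itself is omitted, and all names here are illustrative.

```python
import hashlib
import numpy as np

def fedavg(client_weights):
    """Average per-layer weights from all participating institutions."""
    return [np.mean(layers, axis=0) for layers in zip(*client_weights)]

def update_digest(weights):
    """SHA-256 fingerprint of a model update, as logged for tamper evidence."""
    h = hashlib.sha256()
    for layer in weights:
        h.update(np.ascontiguousarray(layer).tobytes())
    return h.hexdigest()

site_a = [np.random.randn(4, 4), np.random.randn(4)]  # local update, site A
site_b = [np.random.randn(4, 4), np.random.randn(4)]  # local update, site B
global_w = fedavg([site_a, site_b])
print(update_digest(global_w)[:16])  # short digest for the audit trail
```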

SealPrint: The Anatomically Replicated Seal-and-Support Socket Abutment Technique, a Proof-of-Concept with 12-Month Follow-up.

Lahoud P, Castro A, Walter E, Jacobs W, De Greef A, Jacobs R

PubMed · Jul 2, 2025
This study investigated a novel technique for designing and manufacturing a sealing socket abutment (SSA) using artificial intelligence (AI)-driven tooth segmentation and 3D printing. A validated AI-powered module was used to segment the tooth to be replaced on the presurgical Cone Beam Computed Tomography (CBCT) scan. Following virtual surgical planning, the CBCT and intraoral scan (IOS) were imported into Mimics software. The AI-segmented tooth was aligned with the IOS, sliced horizontally at the temporary abutment's neck, and further trimmed 2 mm above the gingival margin to capture the emergence profile. A conical cut, 2 mm wider than the temporary abutment with a 5° taper, was applied for a passive fit. This process produced a custom sealing socket abutment, which was then 3D-printed. After atraumatic tooth extraction and immediate implant placement, the temporary abutment was positioned, followed by the SealPrint atop it. A flowable composite was used to fill the gap between the temporary abutment and the SealPrint; the whole structure seals the extraction socket, supports the interdental papilla by design, and protects the implant and the (bio)materials used. True to planning, the SealPrint fits passively on the temporary abutment. It provides an optimal seal over the entire surface of the extraction socket, preserving the emergence profile of the extracted tooth, protecting the dental implant, and stabilizing the graft material and blood clot. The SealPrint technique offers a reliable and fast solution for protecting and preserving the soft tissues, hard tissues, and emergence profile following immediate implant placement.