Page 249 of 3843834 results

FFLUNet: Feature Fused Lightweight UNet for brain tumor segmentation.

Kundu S, Dutta S, Mukhopadhyay J, Chakravorty N

pubmed · Jun 14 2025
Brain tumors, particularly glioblastoma multiforme, are considered among the most threatening tumors in neuro-oncology. Segmenting brain tumors is a crucial part of medical imaging, playing a key role in diagnosing conditions, planning treatments, and tracking patients' progress. This paper presents a novel lightweight deep convolutional neural network (CNN) model specifically designed for accurate and efficient brain tumor segmentation from magnetic resonance imaging (MRI) scans. Our model leverages a streamlined architecture that reduces computational complexity while maintaining high segmentation accuracy. We introduce several novel approaches, including optimized convolutional layers that capture both local and global features with minimal parameters. A layerwise adaptive weighting feature fusion technique enhances comprehensive feature representation. By incorporating shifted windowing, the model achieves better generalization across data variations. Dynamic weighting is introduced in skip connections, allowing backpropagation to determine the ideal balance between semantic and positional features. To evaluate our approach, we conducted experiments on publicly available MRI datasets and compared our model against state-of-the-art segmentation methods. Our lightweight model has an efficient architecture with 1.45 million parameters: 95% fewer than nnUNet (30.78M), 91% fewer than standard UNet (16.21M), and 85% fewer than a lightweight hybrid CNN-transformer network (Liu et al., 2024) (9.9M). Coupled with a 4.9× faster GPU inference time (0.904 ± 0.002 s vs. nnUNet's 4.416 ± 0.004 s), the design enables real-time deployment on resource-constrained devices while maintaining competitive segmentation accuracy. Code is available at: FFLUNet.
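The dynamically weighted skip connection described in the abstract can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: the function name, array shapes, and weight values are hypothetical, and the learnable weights are shown as a plain array that backpropagation would adjust during training.

```python
import numpy as np

def dynamic_skip_fusion(encoder_feat, decoder_feat, w):
    # Softmax over the two learnable weights keeps the blend normalized;
    # training would adjust w to trade off the encoder's positional detail
    # against the decoder's semantic context.
    a = np.exp(w - np.max(w))
    alpha = a / a.sum()
    return alpha[0] * encoder_feat + alpha[1] * decoder_feat

enc = np.ones((1, 8, 4, 4))    # encoder feature map (positional detail)
dec = np.zeros((1, 8, 4, 4))   # upsampled decoder feature map (semantic)
fused = dynamic_skip_fusion(enc, dec, np.array([0.0, 0.0]))
# equal weights -> each element is 0.5
```

With equal weights the fusion reduces to a plain average; the point of making the weights learnable is that the optimal mix can differ per decoder level.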

Hierarchical Deep Feature Fusion and Ensemble Learning for Enhanced Brain Tumor MRI Classification

Zahid Ullah, Jihie Kim

arxiv preprint · Jun 14 2025
Accurate brain tumor classification is crucial in medical imaging to ensure reliable diagnosis and effective treatment planning. This study introduces a novel double ensembling framework that synergistically combines pre-trained deep learning (DL) models for feature extraction with optimized machine learning (ML) classifiers for robust classification. The framework incorporates comprehensive preprocessing and data augmentation of brain magnetic resonance images (MRI), followed by deep feature extraction using transfer learning with pre-trained Vision Transformer (ViT) networks. The novelty lies in the dual-level ensembling strategy: feature-level ensembling, which integrates deep features from the top-performing ViT models, and classifier-level ensembling, which aggregates predictions from hyperparameter-optimized ML classifiers. Experiments on two public Kaggle MRI brain tumor datasets demonstrate that this approach significantly surpasses state-of-the-art methods, underscoring the importance of feature and classifier fusion. The proposed methodology also highlights the critical roles of hyperparameter optimization (HPO) and advanced preprocessing techniques in improving diagnostic accuracy and reliability, advancing the integration of DL and ML for clinically relevant medical image analysis.
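The two ensembling levels can be illustrated with a minimal numpy sketch. All array names and probability values here are invented for illustration; the paper uses ViT deep features and hyperparameter-optimized ML classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature-level ensembling: concatenate embeddings from two backbones.
feats_vit_a = rng.random((10, 128))   # stand-in for one ViT's deep features
feats_vit_b = rng.random((10, 256))   # stand-in for a second ViT's features
fused = np.concatenate([feats_vit_a, feats_vit_b], axis=1)  # shape (10, 384)

# Classifier-level ensembling: soft voting over class probabilities.
proba_clf1 = np.array([[0.7, 0.3], [0.2, 0.8]])  # hypothetical outputs
proba_clf2 = np.array([[0.6, 0.4], [0.4, 0.6]])
avg_proba = (proba_clf1 + proba_clf2) / 2
pred = avg_proba.argmax(axis=1)  # -> [0, 1]
```

Feature-level fusion widens the representation the downstream classifiers see; classifier-level soft voting then smooths over individual classifiers' errors.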

Automated quantification of T1 and T2 relaxation times in liver mpMRI using deep learning: a sequence-adaptive approach.

Zbinden L, Erb S, Catucci D, Doorenbos L, Hulbert L, Berzigotti A, Brönimann M, Ebner L, Christe A, Obmann VC, Sznitman R, Huber AT

pubmed · Jun 14 2025
To evaluate a deep learning sequence-adaptive liver multiparametric MRI (mpMRI) assessment with validation in different populations using total and segmental T1 and T2 relaxation time maps. A neural network was trained to label liver segmental parenchyma and its vessels on noncontrast T1-weighted gradient-echo Dixon in-phase acquisitions on 200 liver mpMRI examinations. Then, 120 unseen liver mpMRI examinations of patients with primary sclerosing cholangitis or healthy controls were assessed by coregistering the labels to noncontrast and contrast-enhanced T1 and T2 relaxation time maps for optimization and internal testing. The algorithm was externally tested in a segmental and total liver analysis of previously unseen 65 patients with biopsy-proven liver fibrosis and 25 healthy volunteers. Measured relaxation times were compared to manual measurements using intraclass correlation coefficient (ICC) and Wilcoxon test. Comparison of manual and deep learning-generated segmental areas on different T1 and T2 maps was excellent for segmental (ICC = 0.95 ± 0.1; p < 0.001) and total liver assessment (0.97 ± 0.02, p < 0.001). The resulting median of the differences between automated and manual measurements among all testing populations and liver segments was 1.8 ms for noncontrast T1 (median 835 versus 842 ms), 2.0 ms for contrast-enhanced T1 (median 518 versus 519 ms), and 0.3 ms for T2 (median 37 versus 37 ms). Automated quantification of liver mpMRI is highly effective across different patient populations, offering excellent reliability for total and segmental T1 and T2 maps. Its scalable, sequence-adaptive design could foster comprehensive clinical decision-making. The proposed automated, sequence-adaptive algorithm for total and segmental analysis of liver mpMRI may be co-registered to any combination of parametric sequences, enabling comprehensive quantitative analysis of liver mpMRI without sequence-specific training. 
A deep learning-based algorithm automatically quantified segmental T1 and T2 relaxation times in liver mpMRI. The two-step approach of segmentation and co-registration allowed assessment of arbitrary sequences. The algorithm demonstrated high reliability compared with manual reader quantification. No additional sequence-specific training is required to assess other parametric sequences. The DL algorithm has the potential to enhance individual liver phenotyping.
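The core of the two-step idea, applying one set of co-registered segment labels to any parametric map, reduces to a per-segment median. A minimal sketch with illustrative values (the function name and the numbers are not from the paper):

```python
import numpy as np

def segmental_medians(relax_map, label_map):
    # Median relaxation time per labeled liver segment; 0 = background.
    return {int(seg): float(np.median(relax_map[label_map == seg]))
            for seg in np.unique(label_map) if seg != 0}

t1 = np.array([[800., 850.],
               [900.,  40.]])        # toy T1 map in ms
labels = np.array([[1, 1],
                   [2, 0]])          # co-registered segment labels
medians = segmental_medians(t1, labels)  # {1: 825.0, 2: 900.0}
```

Because the labels live in the same space as every co-registered map, the same function applies unchanged to noncontrast T1, contrast-enhanced T1, or T2 maps.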

Beyond Benchmarks: Towards Robust Artificial Intelligence Bone Segmentation in Socio-Technical Systems

Xie, K., Gruber, L. J., Crampen, M., Li, Y., Ferreira, A., Tappeiner, E., Gillot, M., Schepers, J., Xu, J., Pankert, T., Beyer, M., Shahamiri, N., ten Brink, R., Dot, G., Weschke, C., van Nistelrooij, N., Verhelst, P.-J., Guo, Y., Xu, Z., Bienzeisler, J., Rashad, A., Flügge, T., Cotton, R., Vinayahalingam, S., Ilesan, R., Raith, S., Madsen, D., Seibold, C., Xi, T., Berge, S., Nebelung, S., Kodym, O., Sundqvist, O., Thieringer, F., Lamecker, H., Coppens, A., Potrusil, T., Kraeima, J., Witjes, M., Wu, G., Chen, X., Lambrechts, A., Cevidanes, L. H. S., Zachow, S., Hermans, A., Truhn, D., Alves,

medrxiv preprint · Jun 13 2025
Despite the advances in automated medical image segmentation, AI models still underperform in various clinical settings, challenging real-world integration. In this multicenter evaluation, we analyzed 20 state-of-the-art mandibular segmentation models across 19,218 segmentations of 1,000 clinically resampled CT/CBCT scans. We show that segmentation accuracy varies by up to 25% depending on socio-technical factors such as voxel size, bone orientation, and patient conditions such as osteosynthesis or pathology. Higher sharpness, isotropic smaller voxels, and neutral orientation significantly improved results, while metallic osteosynthesis and anatomical complexity led to significant degradation. Our findings challenge the common view of AI models as "plug-and-play" tools and suggest evidence-based optimization recommendations for both clinicians and developers. This will in turn boost the integration of AI segmentation tools in routine healthcare.

Impact of Deep Learning-Based Image Conversion on Fully Automated Coronary Artery Calcium Scoring Using Thin-Slice, Sharp-Kernel, Non-Gated, Low-Dose Chest CT Scans: A Multi-Center Study.

Kim C, Hong S, Choi H, Yoo WS, Kim JY, Chang S, Park CH, Hong SJ, Yang DH, Yong HS, van Assen M, De Cecco CN, Suh YJ

pubmed · Jun 13 2025
To evaluate the impact of deep learning-based image conversion on the accuracy of automated coronary artery calcium quantification using thin-slice, sharp-kernel, non-gated, low-dose chest computed tomography (LDCT) images collected from multiple institutions. A total of 225 pairs of LDCT and calcium scoring CT (CSCT) images scanned at 120 kVp and acquired from the same patient within a 6-month interval were retrospectively collected from four institutions. Image conversion was performed for LDCT images using proprietary software programs to simulate conventional CSCT. This process included 1) deep learning-based kernel conversion of low-dose, high-frequency, sharp kernels to simulate standard-dose, low-frequency kernels, and 2) thickness conversion using the raysum method to convert 1-mm or 1.25-mm thickness images to 3-mm thickness. Automated Agatston scoring was conducted on the LDCT scans before (LDCT-Org<sub>auto</sub>) and after the image conversion (LDCT-CONV<sub>auto</sub>). Manual scoring was performed on the CSCT images (CSCT<sub>manual</sub>) and used as a reference standard. The accuracy of automated Agatston scores and risk severity categorization based on the automated scoring on LDCT scans was analyzed against the reference standard using Bland-Altman analysis, the concordance correlation coefficient (CCC), and the weighted kappa (κ) statistic. LDCT-CONV<sub>auto</sub> demonstrated a smaller bias for the Agatston score, relative to CSCT<sub>manual</sub>, than LDCT-Org<sub>auto</sub> did (-3.45 vs. 206.7). LDCT-CONV<sub>auto</sub> showed a higher CCC than LDCT-Org<sub>auto</sub> (0.881 [95% confidence interval {CI}, 0.750-0.960] vs. 0.269 [95% CI, 0.129-0.430]). In terms of risk category assignment, LDCT-Org<sub>auto</sub> exhibited poor agreement with CSCT<sub>manual</sub> (weighted κ = 0.115 [95% CI, 0.082-0.154]), whereas LDCT-CONV<sub>auto</sub> achieved good agreement (weighted κ = 0.792 [95% CI, 0.731-0.847]).
Deep learning-based conversion of LDCT images originally obtained with thin slices and a sharp kernel can enhance the accuracy of automated coronary artery calcium score measurement using the images.
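Automated and manual scoring alike rest on the standard Agatston computation: pixels at or above 130 HU contribute their area, weighted by a density factor derived from the lesion's peak attenuation. A minimal per-slice sketch (function and argument names are illustrative, not from the study's software):

```python
import numpy as np

def agatston_slice_score(hu, pixel_area_mm2, lesion_mask):
    # Calcified pixels: >= 130 HU within the candidate lesion.
    cal = lesion_mask & (hu >= 130)
    if not cal.any():
        return 0.0
    peak = hu[cal].max()
    # Density weight from peak HU: 130-199 -> 1, 200-299 -> 2,
    # 300-399 -> 3, >= 400 -> 4.
    weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
    return float(cal.sum() * pixel_area_mm2 * weight)

hu = np.array([[150, 450],
               [100,  90]])
score = agatston_slice_score(hu, 0.25, np.ones_like(hu, dtype=bool))
# two calcified pixels, peak 450 HU -> weight 4 -> 2 * 0.25 * 4 = 2.0
```

This also makes clear why the conversions matter: sharp kernels and thin slices shift HU values and noise, which directly perturbs both the 130-HU threshold crossings and the peak-HU density weight.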

BraTS orchestrator: Democratizing and Disseminating state-of-the-art brain tumor image analysis

Florian Kofler, Marcel Rosier, Mehdi Astaraki, Ujjwal Baid, Hendrik Möller, Josef A. Buchner, Felix Steinbauer, Eva Oswald, Ezequiel de la Rosa, Ivan Ezhov, Constantin von See, Jan Kirschke, Anton Schmick, Sarthak Pati, Akis Linardos, Carla Pitarch, Sanyukta Adap, Jeffrey Rudie, Maria Correia de Verdier, Rachit Saluja, Evan Calabrese, Dominic LaBella, Mariam Aboian, Ahmed W. Moawad, Nazanin Maleki, Udunna Anazodo, Maruf Adewole, Marius George Linguraru, Anahita Fathi Kazerooni, Zhifan Jiang, Gian Marco Conte, Hongwei Li, Juan Eugenio Iglesias, Spyridon Bakas, Benedikt Wiestler, Marie Piraud, Bjoern Menze

arxiv preprint · Jun 13 2025
The Brain Tumor Segmentation (BraTS) cluster of challenges has significantly advanced brain tumor image analysis by providing large, curated datasets and addressing clinically relevant tasks. However, despite its success and popularity, algorithms and models developed through BraTS have seen limited adoption in both scientific and clinical communities. To accelerate their dissemination, we introduce BraTS orchestrator, an open-source Python package that provides seamless access to state-of-the-art segmentation and synthesis algorithms for diverse brain tumors from the BraTS challenge ecosystem. Available on GitHub (https://github.com/BrainLesion/BraTS), the package features intuitive tutorials designed for users with minimal programming experience, enabling both researchers and clinicians to easily deploy winning BraTS algorithms for inference. By abstracting the complexities of modern deep learning, BraTS orchestrator democratizes access to the specialized knowledge developed within the BraTS community, making these advances readily available to broader neuro-radiology and neuro-oncology audiences.

Uncovering ethical biases in publicly available fetal ultrasound datasets.

Fiorentino MC, Moccia S, Cosmo MD, Frontoni E, Giovanola B, Tiribelli S

pubmed · Jun 13 2025
We explore biases present in publicly available fetal ultrasound (US) imaging datasets, currently at the disposal of researchers to train deep learning (DL) algorithms for prenatal diagnostics. As DL increasingly permeates the field of medical imaging, the urgency to critically evaluate the fairness of benchmark public datasets used to train them grows. Our thorough investigation reveals a multifaceted bias problem, encompassing issues such as lack of demographic representativeness, limited diversity in clinical conditions depicted, and variability in US technology used across datasets. We argue that these biases may significantly influence DL model performance, which may lead to inequities in healthcare outcomes. To address these challenges, we recommend a multilayered approach. This includes promoting practices that ensure data inclusivity, such as diversifying data sources and populations, and refining model strategies to better account for population variances. These steps will enhance the trustworthiness of DL algorithms in fetal US analysis.

Long-term prognostic value of the CT-derived fractional flow reserve combined with atherosclerotic burden in patients with non-obstructive coronary artery disease.

Wang Z, Li Z, Xu T, Wang M, Xu L, Zeng Y

pubmed · Jun 13 2025
The long-term prognostic significance of the coronary computed tomography angiography (CCTA)-derived fractional flow reserve (CT-FFR) for non-obstructive coronary artery disease (CAD) is uncertain. We aimed to investigate the additional prognostic value of CT-FFR beyond CCTA-defined atherosclerotic burden for long-term outcomes. Consecutive patients with suspected stable CAD were candidates for this retrospective cohort study. Deep-learning-based vessel-specific CT-FFR was calculated. All patients enrolled were followed for at least 5 years. The primary outcome was major adverse cardiovascular events (MACE). Predictive abilities for MACE were compared among three models (model 1, constructed using clinical variables; model 2, model 1 + CCTA-derived atherosclerotic burden (Leiden risk score and segment involvement score); and model 3, model 2 + CT-FFR). A total of 1944 patients (median age, 59 (53-65) years; 53.0% men) were included. During a median follow-up time of 73.4 (71.2-79.7) months, 64 patients (3.3%) experienced MACE. In multivariate-adjusted Cox models, CT-FFR ≤ 0.80 (HR: 7.18; 95% CI: 4.25-12.12; p < 0.001) was a robust and independent predictor for MACE. The discriminant ability was higher in model 2 than in model 1 (C-index, 0.76 vs. 0.68; p = 0.001) and further improved with the addition of CT-FFR in model 3 (C-index, 0.83 vs. 0.76; p < 0.001). Integrated discrimination improvement (IDI) was 0.033 (p = 0.022) for model 2 beyond model 1. Of note, compared with model 2, model 3 also exhibited improved discrimination (IDI = 0.056; p < 0.001). In patients with non-obstructive CAD, CT-FFR provides robust and incremental prognostic information for predicting long-term outcomes. The combined model including CT-FFR and CCTA-defined atherosclerotic burden exhibits improved prediction abilities, which is helpful for risk stratification.
Question: The prognostic significance of CT-derived fractional flow reserve (CT-FFR) for long-term outcomes in non-obstructive coronary artery disease merits further investigation. Findings: Our data strongly emphasized the independent and additional predictive value of CT-FFR beyond coronary CTA-defined atherosclerotic burden and clinical risk factors. Clinical relevance: The new combined predictive model incorporating CT-FFR can be satisfactorily used for risk stratification of patients with non-obstructive coronary artery disease by identifying those who are truly suitable for subsequent high-intensity preventative therapies and extensive follow-up for prognostic reasons.
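The discrimination comparison in the abstract uses the C-index; for survival data it is the fraction of comparable patient pairs in which the higher-risk patient experiences the event first. A minimal sketch of Harrell's C (variable names and toy data are illustrative):

```python
def c_index(risk, time, event):
    # Comparable pair: subject i has the event and a shorter follow-up
    # than subject j; concordant if i also carries the higher risk score.
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable

# Perfectly ranked toy data: higher risk -> earlier event.
c = c_index(risk=[3, 2, 1], time=[1, 2, 3], event=[1, 1, 0])  # 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the step from 0.76 (model 2) to 0.83 (model 3) represents a substantive gain.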

Enhancing Privacy: The Utility of Stand-Alone Synthetic CT and MRI for Tumor and Bone Segmentation

André Ferreira, Kunpeng Xie, Caroline Wilpert, Gustavo Correia, Felix Barajas Ordonez, Tiago Gil Oliveira, Maike Bode, Robert Siepmann, Frank Hölzle, Rainer Röhrig, Jens Kleesiek, Daniel Truhn, Jan Egger, Victor Alves, Behrus Puladi

arxiv preprint · Jun 13 2025
AI requires extensive datasets, while medical data is subject to strict data protection. Anonymization is essential but poses a challenge for some regions, such as the head, where identifying structures overlap with regions of clinical interest. Synthetic data offers a potential solution, but studies often lack rigorous evaluation of realism and utility. We therefore investigate to what extent synthetic data can replace real data in segmentation tasks. We employed head and neck cancer CT scans and brain glioma MRI scans from two large datasets. Synthetic data were generated using generative adversarial networks and diffusion models. We evaluated the quality of the synthetic data using MAE, MS-SSIM, radiomics, and a Visual Turing Test (VTT) performed by 5 radiologists, and their usefulness in segmentation tasks using DSC. Radiomics indicates high fidelity of synthetic MRIs but falls short of producing highly realistic CT tissue, with correlation coefficients of 0.8784 and 0.5461 for MRI and CT tumors, respectively. DSC results indicate limited utility of synthetic data: tumor segmentation achieved DSC = 0.064 on CT and 0.834 on MRI, while bone segmentation achieved a mean DSC of 0.841. A relation between DSC and correlation is observed but is limited by the complexity of the task. VTT results show synthetic CTs' utility, but with limited educational applications. Synthetic data can be used independently for the segmentation task, although limited by the complexity of the structures to segment. Advancing generative models to better tolerate heterogeneous inputs and learn subtle details is essential for enhancing their realism and expanding their application potential.
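The segmentation utility metric here is the Dice similarity coefficient (DSC): twice the overlap between prediction and ground truth, divided by their combined size. A minimal sketch with toy masks:

```python
import numpy as np

def dice(pred, gt):
    # DSC = 2|A ∩ B| / (|A| + |B|) for binary masks;
    # convention: two empty masks count as a perfect match.
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt   = np.array([[1, 0], [0, 0]], dtype=bool)
d = dice(pred, gt)  # 2*1 / (2+1) = 0.666...
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which puts the reported 0.064 on CT tumors versus 0.834 on MRI tumors in context.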

Taming Stable Diffusion for Computed Tomography Blind Super-Resolution

Chunlei Li, Yilei Shi, Haoxi Hu, Jingliang Hu, Xiao Xiang Zhu, Lichao Mou

arxiv preprint · Jun 13 2025
High-resolution computed tomography (CT) imaging is essential for medical diagnosis but requires increased radiation exposure, creating a critical trade-off between image quality and patient safety. While deep learning methods have shown promise in CT super-resolution, they face challenges with complex degradations and limited medical training data. Meanwhile, large-scale pre-trained diffusion models, particularly Stable Diffusion, have demonstrated remarkable capabilities in synthesizing fine details across various vision tasks. Motivated by this, we propose a novel framework that adapts Stable Diffusion for CT blind super-resolution. We employ a practical degradation model to synthesize realistic low-quality images and leverage a pre-trained vision-language model to generate corresponding descriptions. Subsequently, we perform super-resolution using Stable Diffusion with a specialized controlling strategy, conditioned on both low-resolution inputs and the generated text descriptions. Extensive experiments show that our method outperforms existing approaches, demonstrating its potential for achieving high-quality CT imaging at reduced radiation doses. Our code will be made publicly available.
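The degradation model in the pipeline above synthesizes low-quality training inputs from high-resolution images. A toy stand-in is sketched below (block-average blur plus downsampling plus Gaussian noise); the parameters are invented, and the paper's actual degradation model is more elaborate than this.

```python
import numpy as np

def degrade(hr, scale=2, sigma=0.05, seed=0):
    # Block-average downsampling acts as blur + decimation in one step,
    # then additive Gaussian noise mimics low-dose acquisition artifacts.
    rng = np.random.default_rng(seed)
    h, w = hr.shape
    hr = hr[:h // scale * scale, :w // scale * scale]   # crop to a multiple
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return lr + rng.normal(0.0, sigma, lr.shape)

lr = degrade(np.zeros((8, 8)))   # shape (4, 4)
```

Training on (degraded, original) pairs like these lets a super-resolution model learn the inverse mapping without access to genuinely paired low-dose and standard-dose scans.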
