
Ji Y, Mei X, Tan R, Zhang W, Ma Y, Peng Y, Zhang Y

PubMed | May 30, 2025
Accurate vertebrae segmentation is crucial for modern surgical technologies, and deep learning networks provide valuable tools for this task. This study explores the application of advanced deep learning-based methods for segmenting vertebrae in computed tomography (CT) images of adolescent idiopathic scoliosis (AIS) patients. We collected a dataset of 31 samples from AIS patients, covering a wide range of spinal regions from cervical to lumbar vertebrae. High-resolution CT images were obtained for each sample, forming the basis of our segmentation analysis. We utilized 2 popular neural networks, U-Net and Attention U-Net, to segment the vertebrae in these CT images. Segmentation performance was rigorously evaluated using 2 key metrics: the Dice coefficient score, to measure overlap between segmented and ground truth regions, and the Hausdorff distance (HD), to assess boundary dissimilarity. Both networks performed well, with U-Net achieving an average Dice coefficient of 92.2 ± 2.4% and an HD of 9.80 ± 1.34 mm. Attention U-Net showed similar results, with a Dice coefficient of 92.3 ± 2.9% and an HD of 8.67 ± 3.38 mm. Despite the challenging anatomy of AIS, these results align with literature values reported for advanced 3D U-Nets on healthy spines. Although no significant overall difference was observed between the 2 networks (P > .05), Attention U-Net exhibited an improved Dice coefficient (91.5 ± 0.0% vs 88.8 ± 0.1%, P = .151) and a significantly better HD (9.04 ± 4.51 vs 13.60 ± 2.26 mm, P = .027) at critical scoliosis sites (the mid-thoracic region), suggesting enhanced suitability for complex anatomy. Our study indicates that U-Net neural networks are feasible and effective for automated vertebrae segmentation in AIS patients using clinical 3D CT images. Attention U-Net demonstrated improved performance at the thoracic levels, which are the primary sites of scoliosis, and may therefore be better suited to challenging anatomical regions.
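The two evaluation metrics used above are standard for segmentation work. As a rough illustration (not the authors' code), the sketch below computes the Dice coefficient and a symmetric Hausdorff distance for a pair of binary masks; the voxel-spacing handling and the exact HD variant are assumptions.

```python
# Minimal sketch of Dice coefficient and Hausdorff distance for binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Overlap between predicted and ground-truth masks (assumes non-empty masks)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance between mask point sets, in physical units."""
    p = np.argwhere(pred) * np.asarray(spacing)   # voxel indices -> mm coordinates
    g = np.argwhere(gt) * np.asarray(spacing)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```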

Kumar Mondal P, Jahan MK, Byeon H

PubMed | May 30, 2025
Breast cancer is a serious public health problem and one of the leading causes of cancer-related deaths in women worldwide. Early detection of the disease can significantly increase the chances of survival. However, manual analysis of mammography images is complex and time-consuming, which can lead to disagreements among experts. For this reason, automated diagnostic systems can play a significant role in increasing the accuracy and efficiency of diagnosis. In this study, we present an effective deep learning (DL) method that classifies mammography images into cancer and noncancer categories using a collected dataset. Our model is based on a pretrained Inception V3 architecture. First, we run 5-fold cross-validation tests on the fully trained and fine-tuned Inception V3 model. Next, we apply a combined method based on likelihood and mean, with which the fine-tuned Inception V3 model demonstrated superior classification performance. Our DL model achieved 99% accuracy and a 99% F1 score. In addition, interpretable AI techniques were used to enhance the transparency of the classification process. The fine-tuned Inception V3 model demonstrated the highest classification performance, confirming its effectiveness in automatic breast cancer detection. The experimental results clearly indicate that our proposed DL-based method is highly effective for breast cancer image classification, particularly as part of image-based diagnostic workflows. This study highlights the considerable potential of AI-based solutions, which can play a significant role in increasing the accuracy and reliability of breast cancer diagnosis.
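For orientation, a hedged sketch of the general setup described above: an ImageNet-pretrained Inception V3 backbone with a new binary head, evaluated with 5-fold cross-validation. The dataset arrays, epoch count, and head design are placeholders, and the likelihood/mean ensembling step is omitted.

```python
# Sketch: fine-tuned Inception V3 with 5-fold cross-validation (assumed hyperparameters).
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def build_model(input_shape=(299, 299, 3)):
    base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                             input_shape=input_shape, pooling="avg")
    base.trainable = True  # fine-tune the whole backbone
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def cross_validate(images: np.ndarray, labels: np.ndarray, n_splits=5) -> float:
    scores = []
    for train_idx, val_idx in StratifiedKFold(n_splits, shuffle=True, random_state=0).split(images, labels):
        model = build_model()
        model.fit(images[train_idx], labels[train_idx], epochs=10, batch_size=16, verbose=0)
        scores.append(model.evaluate(images[val_idx], labels[val_idx], verbose=0)[1])
    return float(np.mean(scores))  # mean validation accuracy across folds
```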

Dahdal J, Jukema RA, Maaniitty T, Nurmohamed NS, Raijmakers PG, Hoek R, Driessen RS, Twisk JWR, Bär S, Planken RN, van Royen N, Nijveldt R, Bax JJ, Saraste A, van Rosendael AR, Knaapen P, Knuuti J, Danad I

PubMed | May 30, 2025
To assess the prognostic utility of coronary artery calcium (CAC) scoring and coronary computed tomography angiography (CCTA)-derived quantitative plaque metrics for predicting adverse cardiovascular outcomes. The study enrolled 2404 patients with suspected coronary artery disease (CAD) but without a prior history of CAD. All participants underwent CAC scoring and CCTA, with plaque metrics quantified using an artificial intelligence (AI)-based tool (Cleerly, Inc). Percent atheroma volume (PAV) and non-calcified plaque volume percentage (NCPV%), reflecting total plaque burden and the proportion of non-calcified plaque volume normalized to vessel volume, respectively, were evaluated. The primary endpoint was a composite of all-cause mortality and non-fatal myocardial infarction (MI). Cox proportional hazards models, adjusted for clinical risk factors and early revascularization, were employed for analysis. During a median follow-up of 7.0 years, 208 patients (8.7%) experienced the primary endpoint, including 73 cases of MI (3%). The model incorporating PAV demonstrated superior discriminatory power for the composite endpoint (AUC = 0.729) compared to CAC scoring (AUC = 0.706, P = 0.016). In MI prediction, PAV (AUC = 0.791) significantly outperformed CAC (AUC = 0.699, P < 0.001), with NCPV% showing the highest prognostic accuracy (AUC = 0.814, P < 0.001). AI-driven assessment of coronary plaque burden enhances prognostic accuracy for future adverse cardiovascular events, highlighting the critical role of comprehensive plaque characterization in refining risk stratification strategies.
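A minimal sketch of the analysis pattern described above: a Cox proportional hazards model adjusted for covariates plus a simple AUC comparison of CAC versus PAV. Column names (`time`, `event`, `cac`, `pav`) are placeholders, and the DeLong test behind the reported P-values is not shown.

```python
# Sketch: Cox PH fit (lifelines) and AUC comparison (scikit-learn) on assumed columns.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

def fit_cox(df: pd.DataFrame) -> CoxPHFitter:
    """df holds follow-up time, event indicator, plaque metrics, and clinical risk factors."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")  # remaining columns act as covariates
    return cph

def compare_auc(df: pd.DataFrame) -> dict:
    """Discrimination of the composite endpoint by CAC score vs. percent atheroma volume."""
    return {"CAC": roc_auc_score(df["event"], df["cac"]),
            "PAV": roc_auc_score(df["event"], df["pav"])}
```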

Oncu E, Ciftci F

PubMed | May 30, 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide, emphasizing the critical need for accurate and early diagnostic solutions. This study introduces a novel multimodal artificial intelligence (AI) framework that integrates Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) to improve lung cancer classification and severity assessment. The CNN model, trained on 1019 preprocessed CT images, classified lung tissue into four histological categories (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal) with a weighted accuracy of 92%. Interpretability is enhanced using Gradient-weighted Class Activation Mapping (Grad-CAM), which highlights the salient image regions influencing the model's predictions. In parallel, an ANN trained on clinical data from 999 patients, spanning 24 key features such as demographic, symptomatic, and genetic factors, achieves 99% accuracy in predicting cancer severity (low, medium, high). SHapley Additive exPlanations (SHAP) are employed to provide both global and local interpretability of the ANN model, enabling transparent decision-making. Both models were rigorously validated using k-fold cross-validation to ensure robustness and reduce overfitting. This hybrid approach effectively combines spatial imaging data and structured clinical information, demonstrating strong predictive performance and offering an interpretable and comprehensive AI-based solution for lung cancer diagnosis and management.
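Grad-CAM, as used above, weights the final convolutional feature maps by the gradient of the class score. A hedged sketch follows; `model` is any Keras CNN classifier and `last_conv_name` is the name of its final convolutional layer, both assumptions rather than the authors' setup.

```python
# Sketch: Grad-CAM heatmap for one image and one target class.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_name, class_index):
    grad_model = tf.keras.Model(model.input,
                                [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])       # add batch dimension
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)             # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))             # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)      # keep positive evidence, normalize
    return cam.numpy()                                       # upsample to image size for overlay
```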

Islam Sumon MS, Chowdhury MEH, Bhuiyan EH, Rahman MS, Khan MM, Al-Hashimi I, Mushtak A, Zoghoul SB

PubMed | May 30, 2025
Digital pathology relies on the morphological architecture of prostate glands to recognize cancerous tissue. Prostate cancer (PCa) originates in the walnut-shaped prostate gland of the male reproductive system. Deep learning (DL) pipelines can assist in identifying these regions with advanced segmentation techniques that are effective in diagnosing and treating prostate diseases. This facilitates early detection, targeted biopsy, and accurate treatment planning, ensuring consistent, reproducible results while minimizing human error. Automated segmentation techniques trained on MRI datasets can also aid in monitoring disease progression and support clinical decision-making through patient-specific models for personalized medicine. In this study, we present multiclass segmentation models designed to localize the prostate gland and its zonal regions, specifically the peripheral zone (PZ), the transition zone (TZ), and the whole gland, by combining EfficientNetB4 encoders with Self-organized Operational Neural Network (Self-ONN)-based decoders. Traditional convolutional neural networks (CNNs) rely on linear neuron models, which limit their ability to capture the complex dynamics of biological neural systems. In contrast, Operational Neural Networks (ONNs), particularly Self-ONNs, address this limitation by incorporating nonlinear and adaptive operations at the neuron level. We evaluated various encoder-decoder configurations and identified that the combination of an EfficientNet-based encoder with a Self-ONN-based decoder yielded the best performance. To further enhance segmentation accuracy, we employed the STAPLE method to ensemble the top three performing models. Our approach was tested on the large-scale, recently updated PI-CAI Challenge dataset using 5-fold cross-validation, achieving Dice scores of 95.33% for the whole gland and 92.32% for the combined PZ and TZ regions. These advanced segmentation techniques significantly improve the quality of PCa diagnosis and treatment, contributing to better patient care and outcomes.
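The study ensembles its top three models with STAPLE. As a much simpler illustration of label fusion (not STAPLE itself, which additionally estimates per-rater reliability via expectation-maximization), the sketch below fuses multiclass label maps by per-voxel majority vote.

```python
# Sketch: per-voxel majority-vote fusion of several multiclass segmentations.
import numpy as np

def majority_vote(label_maps: list[np.ndarray], num_classes: int) -> np.ndarray:
    """label_maps: integer label volumes of identical shape, one per model."""
    stacked = np.stack(label_maps)                       # (models, *spatial)
    votes = np.zeros((num_classes,) + stacked.shape[1:], dtype=np.int32)
    for c in range(num_classes):
        votes[c] = (stacked == c).sum(axis=0)            # count models choosing class c
    return votes.argmax(axis=0)                          # most-voted class per voxel
```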

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed | May 30, 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
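For readers unfamiliar with the YOLO11 family, a hedged sketch of training and inference with the Ultralytics API follows; the dataset YAML, image size, epoch count, and file names are placeholders, not the study's configuration.

```python
# Sketch: train a YOLO11 detector and run inference on one ultrasound frame.
from ultralytics import YOLO

model = YOLO("yolo11s.pt")                        # small variant, reported above as the best trade-off
model.train(data="ious_tumors.yaml", epochs=100, imgsz=640)   # placeholder dataset config
results = model.predict(source="ious_frame.png", conf=0.25)   # per-frame inference
for box in results[0].boxes.xyxy:                 # predicted tumor bounding boxes (x1, y1, x2, y2)
    print(box.tolist())
```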

Jing Huang, Yongkang Zhao, Yuhan Li, Zhitao Dai, Cheng Chen, Qiying Lai

arXiv preprint | May 30, 2025
The U-shaped encoder-decoder architecture with skip connections has become a prevailing paradigm in medical image segmentation due to its simplicity and effectiveness. While many recent works aim to improve this framework by designing more powerful encoders and decoders, employing advanced convolutional neural networks (CNNs) for local feature extraction, Transformers or state space models (SSMs) such as Mamba for global context modeling, or hybrid combinations of both, these methods often struggle to fully utilize pretrained vision backbones (e.g., ResNet, ViT, VMamba) due to structural mismatches. To bridge this gap, we introduce ACM-UNet, a general-purpose segmentation framework that retains a simple UNet-like design while effectively incorporating pretrained CNNs and Mamba models through a lightweight adapter mechanism. This adapter resolves architectural incompatibilities and enables the model to harness the complementary strengths of CNNs and SSMs, namely fine-grained local detail extraction and long-range dependency modeling. Additionally, we propose a hierarchical multi-scale wavelet transform module in the decoder to enhance feature fusion and reconstruction fidelity. Extensive experiments on the Synapse and ACDC benchmarks demonstrate that ACM-UNet achieves state-of-the-art performance while remaining computationally efficient. Notably, it reaches an 85.12% Dice score and 13.89 mm HD95 on the Synapse dataset with 17.93G FLOPs, showcasing its effectiveness and scalability. Code is available at: https://github.com/zyklcode/ACM-UNet.
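To make the adapter idea concrete, here is a minimal PyTorch sketch of a lightweight module that projects and resizes features from a pretrained backbone stage so they fit a UNet-style skip connection. The channel sizes and adapter design are illustrative assumptions, not the authors' implementation (see their repository for that).

```python
# Sketch: a lightweight stage adapter reconciling channel and spatial mismatches.
import torch
import torch.nn as nn

class StageAdapter(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1),  # resolve channel mismatch
            nn.BatchNorm2d(out_channels),
            nn.GELU(),
        )

    def forward(self, x: torch.Tensor, target_hw: tuple[int, int]) -> torch.Tensor:
        x = self.proj(x)
        # resolve spatial mismatch between the backbone stage and the decoder resolution
        return nn.functional.interpolate(x, size=target_hw, mode="bilinear", align_corners=False)

# toy usage: adapt a 1024-channel backbone stage to a 256-channel skip at 64x64
adapter = StageAdapter(1024, 256)
skip = adapter(torch.randn(1, 1024, 32, 32), target_hw=(64, 64))
```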

Abdul-mojeed Olabisi Ilyas, Adeleke Maradesa, Jamal Banzi, Jianpan Huang, Henry K. F. Mak, Kannie W. Y. Chan

arXiv preprint | May 30, 2025
Medical imaging is critical for diagnostics, but clinical adoption of advanced AI-driven imaging faces challenges due to patient variability, image artifacts, and limited model generalization. While deep learning has transformed image analysis, 3D medical imaging still suffers from data scarcity and inconsistencies due to acquisition protocols, scanner differences, and patient motion. Traditional augmentation uses a single pipeline for all transformations, disregarding the unique traits of each augmentation and struggling with large data volumes. To address these challenges, we propose a Multi-encoder Augmentation-Aware Learning (MEAL) framework that leverages four distinct augmentation variants processed through dedicated encoders. Three fusion strategies, concatenation (CC), fusion layer (FL), and adaptive controller block (BD), are integrated to build multi-encoder models that combine augmentation-specific features before decoding. MEAL-BD uniquely preserves augmentation-aware representations, enabling robust, protocol-invariant feature learning. As demonstrated in a Computed Tomography (CT)-to-T1-weighted Magnetic Resonance Imaging (MRI) translation study, MEAL-BD consistently achieved the best performance on both unseen and predefined test data. For both geometrically transformed inputs (such as rotations and flips) and non-augmented inputs, MEAL-BD outperformed competing methods, achieving higher mean peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) scores. These results establish MEAL as a reliable framework for preserving structural fidelity and generalizing across clinically relevant variability. By reframing augmentation as a source of diverse, generalizable features, MEAL supports robust, protocol-invariant learning, advancing clinically reliable medical imaging solutions.
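A hedged PyTorch sketch of the simplest of the three fusion strategies named above, concatenation (CC): each augmentation variant passes through its own encoder and the feature maps are concatenated before decoding. Encoder and decoder definitions here are toy placeholders, not the paper's models.

```python
# Sketch: multi-encoder concatenation fusion over augmentation variants.
import torch
import torch.nn as nn

class MultiEncoderCC(nn.Module):
    def __init__(self, encoders: nn.ModuleList, decoder: nn.Module, feat_channels: int):
        super().__init__()
        self.encoders = encoders
        self.fuse = nn.Conv2d(feat_channels * len(encoders), feat_channels, kernel_size=1)
        self.decoder = decoder

    def forward(self, variants: list[torch.Tensor]) -> torch.Tensor:
        # one augmentation variant per dedicated encoder
        feats = [enc(x) for enc, x in zip(self.encoders, variants)]
        fused = self.fuse(torch.cat(feats, dim=1))   # concatenate along channels, then project
        return self.decoder(fused)

# toy usage: two tiny conv "encoders" and a 1x1-conv "decoder"
encs = nn.ModuleList([nn.Conv2d(1, 16, 3, padding=1) for _ in range(2)])
model = MultiEncoderCC(encs, nn.Conv2d(16, 1, 1), feat_channels=16)
out = model([torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)])
```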

Junyu Chen, Shuwen Wei, Yihao Liu, Aaron Carass, Yong Du

arXiv preprint | May 30, 2025
Recent advances in deep learning-based medical image registration have shown that training deep neural networks (DNNs) does not necessarily require medical images. Previous work showed that DNNs trained on randomly generated images with carefully designed noise and contrast properties can still generalize well to unseen medical data. Building on this insight, we propose using registration between random images as a proxy task for pretraining a foundation model for image registration. Empirical results show that our pretraining strategy improves registration accuracy, reduces the amount of domain-specific data needed to achieve competitive performance, and accelerates convergence during downstream training, thereby enhancing computational efficiency.
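A minimal sketch of the proxy-task idea: synthesize random images with controlled smoothness and contrast, warp one copy with a random smooth displacement field, and pretrain a registration network to recover that warp. The smoothing scales and deformation magnitude below are arbitrary illustrative choices, not the paper's settings.

```python
# Sketch: generate a random fixed/moving image pair with a known deformation field.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_image(shape=(96, 96, 96), smooth=4.0):
    img = gaussian_filter(np.random.rand(*shape), smooth)   # smooth random texture
    return (img - img.min()) / (img.max() - img.min())      # normalize contrast to [0, 1]

def random_pair(shape=(96, 96, 96), max_disp=5.0):
    fixed = random_image(shape)
    disp = np.stack([gaussian_filter(np.random.randn(*shape), 8.0) for _ in range(3)]) * max_disp
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
    moving = map_coordinates(fixed, grid + disp, order=1)    # warped copy becomes the moving image
    return fixed, moving, disp                               # images plus ground-truth field
```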

Junyu Chen, Shuwen Wei, Joel Honkamaa, Pekka Marttinen, Hang Zhang, Min Liu, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Benedikt Wiestler, Tim Hable, Jin Kim, Dan Ruan, Frederic Madesta, Thilo Sentker, Wiebke Heyer, Lianrui Zuo, Yuwei Dai, Jing Wu, Jerry L. Prince, Harrison Bai, Yong Du, Yihao Liu, Alessa Hering, Reuben Dorent, Lasse Hansen, Mattias P. Heinrich, Aaron Carass

arXiv preprint | May 30, 2025
Medical image challenges have played a transformative role in advancing the field, catalyzing algorithmic innovation and establishing new performance standards across diverse clinical applications. Image registration, a foundational task in neuroimaging pipelines, has similarly benefited from the Learn2Reg initiative. Building on this foundation, we introduce the Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge, a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. Distinct from prior challenges that leveraged anatomical label maps for supervision, LUMIR removes this dependency by providing over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling through self-supervision. In addition to evaluating performance on 590 held-out test subjects, LUMIR introduces a rigorous suite of zero-shot generalization tasks, spanning out-of-domain imaging modalities (e.g., FLAIR, T2-weighted, T2*-weighted), disease populations (e.g., Alzheimer's disease), acquisition protocols (e.g., 9.4T MRI), and species (e.g., macaque brains). A total of 1,158 subjects and over 4,000 image pairs were included for evaluation. Performance was assessed using both segmentation-based metrics (Dice coefficient, 95th percentile Hausdorff distance) and landmark-based registration accuracy (target registration error). Across both in-domain and zero-shot tasks, deep learning-based methods consistently achieved state-of-the-art accuracy while producing anatomically plausible deformation fields. The top-performing deep learning-based models demonstrated diffeomorphic properties and inverse consistency, outperforming several leading optimization-based methods, and showing strong robustness to most domain shifts, the exception being a drop in performance on out-of-domain contrasts.
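Of the metrics listed above, target registration error (TRE) is the landmark-based one; as a small illustration (with landmark arrays and voxel spacing as assumed inputs), it is the mean Euclidean distance between fixed-image landmarks and the corresponding moving-image landmarks mapped through the estimated deformation.

```python
# Sketch: target registration error between corresponding landmark sets.
import numpy as np

def target_registration_error(fixed_pts, warped_moving_pts, spacing=(1.0, 1.0, 1.0)) -> float:
    """Both point sets are (N, 3) arrays in voxel coordinates; spacing converts to mm."""
    diff = (np.asarray(fixed_pts) - np.asarray(warped_moving_pts)) * np.asarray(spacing)
    return float(np.linalg.norm(diff, axis=1).mean())        # mean landmark distance in mm
```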