
Ergün U, Çoban T, Kayadibi İ

PubMed · Oct 13, 2025
Breast cancer remains one of the leading causes of cancer-related deaths globally, affecting both women and men. This study aims to develop a novel deep learning (DL)-based architecture, the Breast Cancer Ensemble Convolutional Neural Network (BCECNN), to enhance the diagnostic accuracy and interpretability of breast cancer detection systems. The BCECNN architecture incorporates two ensemble learning (EL) structures: Triple Ensemble CNN (TECNN) and Quintuple Ensemble CNN (QECNN). These ensemble models integrate the predictions of multiple CNN architectures (AlexNet, VGG16, ResNet-18, EfficientNetB0, and XceptionNet) using a majority voting mechanism. These models were trained using transfer learning (TL) and evaluated on five distinct sub-datasets generated from the Artificial Intelligence Smart Solution Laboratory (AISSLab) dataset, which consists of 266 mammography images labeled and validated by radiologists. To improve transparency and interpretability, Explainable Artificial Intelligence (XAI) techniques, including Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME), were applied. Additionally, explainability was assessed through clinical evaluation by an experienced radiologist. Experimental results demonstrated that the TECNN model, comprising AlexNet, VGG16, and EfficientNetB0, achieved the highest accuracy of 98.75% on the AISSLab-v2 dataset. The integration of XAI methods substantially enhanced the interpretability of the model, enabling clinicians to better understand and validate the model's decision-making process. Clinical evaluation confirmed that the XAI outputs aligned well with expert assessments, underscoring the practical utility of the model in a diagnostic setting. The BCECNN model presents a promising solution for improving both the accuracy and interpretability of breast cancer diagnostic systems. Unlike many previous studies that rely on single architectures or large datasets, BCECNN leverages the strengths of an ensemble of CNN models and performs robustly even with limited data. It integrates advanced XAI techniques, such as Grad-CAM and LIME, to provide visual justifications for model decisions, enhancing clinical interpretability. Moreover, the model was validated using the AISSLab dataset, designed to reflect real-world diagnostic challenges. This combination of EL, interpretability, and robust performance on small yet clinically relevant data positions BCECNN as a novel and reliable decision support tool for AI-assisted breast cancer diagnostics.
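As a minimal sketch of the majority-voting mechanism described above: the snippet below combines the argmax predictions of several classifiers by taking the most frequent class per sample. The models, shapes, and values are illustrative stand-ins, not the paper's trained networks.

```python
# Minimal sketch of majority voting over three CNN classifiers (hypothetical
# logits; the paper's TECNN combines AlexNet, VGG16, and EfficientNetB0).
import torch

def majority_vote(logits_list: list[torch.Tensor]) -> torch.Tensor:
    """Combine per-model logits (each [batch, num_classes]) by majority
    vote over the argmax predictions."""
    preds = torch.stack([l.argmax(dim=1) for l in logits_list])  # [models, batch]
    return preds.mode(dim=0).values  # most frequent class per sample

# Example: three models' outputs for a batch of 2 images, 2 classes.
a = torch.tensor([[2.0, 0.1], [0.2, 1.5]])
b = torch.tensor([[1.2, 0.3], [1.1, 0.4]])
c = torch.tensor([[0.1, 0.9], [0.3, 2.2]])
print(majority_vote([a, b, c]))  # tensor([0, 1])
```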

Park SG, Moon S, Kim YJ, Kim KG

PubMed · Oct 13, 2025
Osteophytes and ossification of the posterior longitudinal ligament (OPLL) are major contributors to degenerative cervical myelopathy (DCM), a leading cause of spinal cord dysfunction in adults. Accurate assessment of these lesions is essential for surgical planning, particularly in patients considered for artificial disc replacement (ADR), where the extent of ossification critically influences surgical eligibility. This study evaluated the performance of YOLO (You Only Look Once)-based deep learning models for automated detection of osteophytes and OPLL in cervical computed tomography (CT) images. A total of 2691 sagittal cervical CT images were retrospectively analyzed using YOLOv5, YOLOv7, and YOLOv8 models. Detection performance was assessed using precision, recall, and mean average precision at intersection over union (IoU) thresholds of 0.5 (mAP@50) and 0.5-0.95 (mAP@50-95). Among the images, 79.5% (2137/2691) demonstrated co-occurrence of osteophytes and OPLL. YOLOv5 exhibited the highest performance, achieving a precision of 67.42%, recall of 68.36%, mAP@50 of 71.56%, and mAP@50-95 of 28.90%. Detection accuracy for osteophytes consistently outperformed that for OPLL, with statistically significant differences across mAP metrics (p < 0.05). These findings suggest that YOLO-based models demonstrate potential as objective, reproducible tools for automated lesion detection in cervical CT, supporting preoperative planning and ADR eligibility evaluation.
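The mAP@50 and mAP@50-95 figures above rest on the intersection-over-union overlap between predicted and ground-truth boxes. The sketch below shows that computation for axis-aligned boxes; the coordinates are illustrative, not study data.

```python
# Minimal sketch of the IoU computation underlying mAP@50 and mAP@50-95
# (boxes in [x1, y1, x2, y2] pixel format).
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted lesion box counts as a true positive at mAP@50 when its IoU
# with a ground-truth box reaches at least 0.5:
print(iou([10, 10, 50, 50], [15, 15, 55, 55]) >= 0.5)  # True (IoU ~ 0.62)
```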

Zhang Y, Hao X, Ma J

PubMed · Oct 13, 2025
Addressing the challenge of geometric feature modeling for heart failure subtype classification in cardiac magnetic resonance imaging (MRI), this study proposes an Ejection Fraction (EF) Gated 3D Capsule Network (EG3D-CapsNet). Traditional 3D convolutional neural networks (e.g., Res3DNet-50) achieve only 46.00% (±9.75%) classification accuracy on the Automated Cardiac Diagnosis Challenge (ACDC) dataset due to parameter redundancy and an inability to integrate clinical indicators. The proposed method overcomes this performance bottleneck through three core innovations: (1) a spatially decoupled dynamic routing mechanism that independently processes feature interactions at each anatomical location, enhancing local geometric modeling capability; (2) an EF-gated attention module that achieves fine-grained alignment of imaging features and clinical indicators through a learnable biomarker scaling strategy; (3) an orthogonal initialization scheme for capsule transformation matrices, improving routing stability in high-dimensional feature spaces. Experiments on 150 five-class cardiac MRI cases show that EG3D-CapsNet achieves an average accuracy of 65.33% (±6.53%) with a 91% reduction in parameters, representing an absolute improvement of 19.33 percentage points over the baseline model. Ablation studies confirm the EF gating mechanism contributes an 18.04% accuracy gain, and visualization analysis reveals a high correlation between capsule activation regions and myocardial pathological features. This method provides a new paradigm for cardiac imaging-assisted diagnosis that is high-precision, lightweight, and interpretable.
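A hedged sketch of what an EF-gated attention module could look like, following the paper's description of a learnable biomarker scaling: the scalar ejection fraction is mapped to a per-channel gate on the 3D imaging features. Class, tensor names, and shapes here are assumptions, not the authors' code.

```python
# Sketch of EF-gated channel attention: a learnable linear map turns the
# scalar ejection fraction into a sigmoid gate applied channel-wise.
import torch
import torch.nn as nn

class EFGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Linear(1, channels)  # learnable biomarker scaling

    def forward(self, feats: torch.Tensor, ef: torch.Tensor) -> torch.Tensor:
        # feats: [batch, channels, D, H, W]; ef: [batch, 1] ejection fraction
        gate = torch.sigmoid(self.scale(ef))          # [batch, channels]
        return feats * gate[:, :, None, None, None]   # channel-wise gating

feats = torch.randn(2, 16, 4, 8, 8)
ef = torch.tensor([[0.55], [0.30]])
print(EFGate(16)(feats, ef).shape)  # torch.Size([2, 16, 4, 8, 8])
```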

Angelone F, Franco A, Ponsiglione AM, Ricciardi C, Belfiore MP, Gatta G, Grassi R, Sansone M, Amato F

PubMed · Oct 13, 2025
Full-field digital mammography (FFDM) is the most common imaging technique for breast cancer screening programs. However, it is limited by noise from quantum effects, electronic sources, and X-ray scattering, which degrades image quality. Traditional denoising methods based on filters and transformations perform poorly due to the complex, tissue-dependent nature of the noise, while supervised deep learning methods require extensive, often unavailable datasets with paired noisy and noiseless images. Consequently, unsupervised denoising methods, which do not require clean images as ground truth, are gaining attention. However, their application to FFDM is poorly explored. This study investigates the use of Noise2Void (N2V), an unsupervised denoising approach adapted to digital mammography images for the first time. N2V employs blind-spot masking to remove noise without requiring noiseless images. The method was assessed on real clinical images and artificially noised images using two metrics: contrast-to-noise ratio (CNR) and structural similarity index (SSIM). A qualitative evaluation was also performed using a questionnaire administered to radiologists. The results show that both metrics improve on N2V-denoised images, at a level comparable with traditional methods. While its quantitative performance matches rather than exceeds that of traditional methods, N2V retains potential for clinical application as a flexible, annotation-free approach for retrospective, low-dose mammography imaging.
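The core of Noise2Void is the blind-spot masking the abstract refers to: selected pixels are replaced with values from random neighbours, and the training loss is computed only at those positions so the network cannot learn the identity mapping. A minimal sketch under simple assumptions (single-channel image, uniform random masking) follows; it is an illustration, not the study's implementation.

```python
# Sketch of Noise2Void-style blind-spot masking on a 2D image.
import numpy as np

def blind_spot_mask(img: np.ndarray, n_points: int, radius: int = 2, rng=None):
    rng = rng or np.random.default_rng()
    masked = img.copy()
    ys = rng.integers(radius, img.shape[0] - radius, n_points)
    xs = rng.integers(radius, img.shape[1] - radius, n_points)
    for y, x in zip(ys, xs):
        # Random neighbour offset; production N2V excludes the (0, 0) offset
        # so the blind-spot pixel never sees its own value.
        dy, dx = rng.integers(-radius, radius + 1, 2)
        masked[y, x] = img[y + dy, x + dx]  # replace blind-spot pixel
    return masked, (ys, xs)  # loss is evaluated only at (ys, xs)
```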

Zhang H, Zhang B, Shi G, Zhou Y, Zhou G, Zhang Z, Tian J

PubMed · Oct 13, 2025
Objective. Magnetic Particle Imaging (MPI) is a promising medical imaging technique that has been widely applied in preclinical stages. However, when expanding to human body scanning, cases often arise where superparamagnetic iron oxide nanoparticles (SPIOs) are located outside the field of view (FOV). In such cases, signal contributions from SPIOs outside the FOV generate boundary artifacts in the reconstructed images, compromising image accuracy. Therefore, restoring the affected images is crucial for the clinical translation of MPI technology. Existing methods, such as overlapping scanning trajectories or joint reconstruction, effectively mitigate boundary artifacts but may still require further improvements in real-time imaging capabilities. Approach. In this study, we exploit the spectral differences between SPIO signals inside and outside the FOV to design a dual-domain joint learning network for accurate restoration of MPI images. The network simultaneously takes as input both the affected images and their corresponding time-frequency maps. Through feature extraction and adaptive weighted fusion, the network improves its ability to restore the affected images. Main results. Our proposed Joint Frequency-Image Domain Network (JFI-Net) outperforms existing methods on the publicly available OpenMPI dataset and on simulation datasets. Additionally, the network was applied to an in-house handheld MPI system, improving its imaging accuracy for large-sized vessel phantoms. Ablation experiments confirm the effectiveness of the proposed feature extraction and feature fusion modules. Significance. This study presents an innovative solution to boundary artifacts in MPI, significantly enhancing its quantitative accuracy for clinical applications. The proposed JFI-Net offers an efficient image restoration method that can contribute to the application of MPI technology in clinical practice.
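The network's second input is a time-frequency map of the received signal. One common way to obtain such a map is a short-time Fourier transform, sketched below with illustrative signal parameters; the paper does not specify its exact transform, so treat this as an assumption.

```python
# Sketch: a magnitude spectrogram as a time-frequency input map.
import numpy as np
from scipy.signal import stft

fs = 2.5e6                          # sampling rate (illustrative)
t = np.arange(0, 1e-3, 1 / fs)      # 1 ms of signal
sig = np.sin(2 * np.pi * 25e3 * t)  # stand-in for an MPI receive-coil signal

f, tau, Z = stft(sig, fs=fs, nperseg=256)
tf_map = np.abs(Z)                  # magnitude spectrogram fed to the network
print(tf_map.shape)                 # (frequencies, time frames)
```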

Tel A, Bolognesi F, Michelutti L, Biglioli F, Robiony M

PubMed · Oct 13, 2025
Transformer-based large language models (LLMs), such as ChatGPT-4, are increasingly used to streamline clinical practice, and radiology reporting is a prominent application. However, their performance in interpreting complex anatomical regions from MRI data remains largely unexplored. This study investigates the capability of ChatGPT-4 to produce clinically reliable reports based on orbital MR images, applying a multimetric, quantitative evaluation framework in 25 patients with orbital lesions. Due to inherent limitations of the current version of GPT-4, the model was not given volumetric MR data but only key 2D images. For each case, ChatGPT-4 generated a free-text report, which was then compared to the corresponding ground-truth report authored by a board-certified radiologist. Evaluation included established NLP metrics (BLEU-4, ROUGE-L, BERTScore), clinical content recognition scores (RadGraph F1, CheXbert), and expert human judgment. Among the automated metrics, BERTScore demonstrated the highest language similarity, while RadGraph F1 best captured clinical entity recognition. Clinician assessment revealed moderate agreement with the LLM outputs, with performance decreasing in complex or infiltrative cases. The study highlights both the promise and current limitations of LLMs in radiology, particularly their inability to process volumetric data and maintain spatial consistency. These findings suggest that while LLMs may assist in structured reporting, effective integration into diagnostic imaging workflows will require coupling with advanced vision models capable of full 3D interpretation.
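For readers unfamiliar with the automated metrics, the sketch below computes BLEU-4 and ROUGE-L for a pair of toy report strings using the nltk and rouge-score packages (pip install nltk rouge-score); the text and scores are placeholders, not study data.

```python
# Sketch of two n-gram/overlap metrics used to compare generated reports
# against radiologist ground truth.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "well-defined intraconal orbital mass displacing the optic nerve"
candidate = "intraconal orbital mass displacing the optic nerve medially"

# BLEU-4: geometric mean of 1- to 4-gram precisions (smoothed for short text)
bleu4 = sentence_bleu([reference.split()], candidate.split(),
                      smoothing_function=SmoothingFunction().method1)

# ROUGE-L: longest-common-subsequence overlap
rougeL = rouge_scorer.RougeScorer(["rougeL"]).score(
    reference, candidate)["rougeL"].fmeasure

print(f"BLEU-4={bleu4:.3f}  ROUGE-L={rougeL:.3f}")
```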

Yuan X, Chen C, Shi Z, Liu W, Zhang X, Yang M, Zhu M, Yu J, Liu F, Li J, Zhang Y, Jiang H, Chen B, Lu J, Shao C, Bian Y

PubMed · Oct 13, 2025
Pancreatic cystic neoplasms (PCN) are important precursor lesions of pancreatic cancer, and their accurate characterization is critical for early cancer detection; yet current diagnostic methods lack accuracy and consistency. This multicenter study developed and validated an artificial intelligence (AI)-powered CT model (PCN-AI) for improved assessment. Using contrast-enhanced CT images from 1835 patients, PCN-AI extracted 63 quantitative features to classify PCN subtypes through four hierarchical tasks. A multi-reader, multi-case (MRMC) study demonstrated that AI assistance significantly improved radiologists' diagnostic accuracy (AUC: 0.786 to 0.845; p < 0.05) and reduced interpretation time by 23.7% (5.28 vs. 4.03 minutes/case). Radiologists accepted AI recommendations in 87.14% of cases. In a prospective real-world cohort, PCN-AI outperformed radiologist double-reading, providing actionable diagnostic benefit to 45.45% of patients (5/11) by correctly identifying missed malignant PCN cases, enabling timely intervention, and simultaneously reducing clinical workload by 39.3%. PCN-AI achieved robust performance across tasks (AUCs: 0.845-0.988), demonstrating its potential to enhance early detection, precision management, and diagnostic efficiency in clinical practice.
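The per-task AUCs reported above are standard areas under the ROC curve. A minimal sketch with illustrative labels and scores (not study data):

```python
# Sketch: AUC for one hypothetical binary sub-task, e.g. benign vs. malignant PCN.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                  # ground-truth labels
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]  # model probabilities
print(roc_auc_score(y_true, y_score))                # area under the ROC curve
```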

Ahmadi M, Chen H, Lin M, Biswas D, Doulgeris J, Tang Y, Engeberg ED, Hashemi J, Pires G, Vrionis FD

PubMed · Oct 13, 2025
Advancing our understanding of spinal biomechanics through Finite Element Analysis (FEA) is essential for clinical decision-making and biomechanical research. Traditional FEA workflows are hindered by manual segmentation and meshing, introducing inconsistencies, user variability, and lengthy processing times. This study presents a streamlined, patient-specific modeling methodology for the lumbar spine that fundamentally transforms the FEA preprocessing pipeline. By integrating deep learning-based segmentation with advanced computational tools such as the GIBBON library and FEBio, our approach minimizes manual intervention, accelerates model preparation, and enhances both accuracy and reproducibility. The proposed workflow enables precise extraction and meshing of key anatomical structures, including cortical and cancellous bone, intervertebral discs, ligaments, and cartilage, directly from clinical CT imaging data. Robust segmentation techniques ensure accurate identification and separation of these components, which are subsequently converted into high-resolution surface and volumetric meshes. To optimize model fidelity and computational efficiency, the pipeline incorporates geometric smoothing and adaptive mesh decimation. Ligament attachment is addressed through an innovative coordinate-based framework that leverages anatomical landmarks for automated placement and orientation, overcoming a major challenge in FEA preprocessing. The results demonstrate that the resulting subject-specific models reproduce physiological biomechanics with high fidelity. Range-of-motion and stress distribution outcomes closely match experimental data and established numerical models, confirming the pipeline's accuracy. Importantly, preparation time is reduced from days to just hours, delivering an efficient, reproducible workflow. By unifying segmentation, meshing, and ligament modeling in a single efficient framework, this study establishes a scalable platform for rapid, reliable, and anatomically accurate FEA of the lumbar spine, with significant implications for clinical diagnostics and preoperative planning.
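As an illustration of the segmentation-to-mesh step, the sketch below extracts a triangulated surface from a binary mask with scikit-image's marching cubes. The study's actual workflow uses the GIBBON library and FEBio, so this is only a stand-in for the concept.

```python
# Sketch: binary segmentation volume -> surface mesh via marching cubes.
import numpy as np
from skimage import measure

seg = np.zeros((64, 64, 64), dtype=np.uint8)
seg[16:48, 16:48, 16:48] = 1                 # toy "vertebra" mask

verts, faces, normals, values = measure.marching_cubes(seg, level=0.5)
print(verts.shape, faces.shape)              # surface vertices and triangles
```

A mesh like this would then be smoothed and decimated before volumetric meshing, as the abstract describes.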

Chen J, Liang Z, Lu X

PubMed · Oct 13, 2025
Medical image segmentation is a crucial technology for disease diagnosis and treatment planning. However, current approaches face challenges in capturing global semantic dependencies and integrating cross-layer features. While Convolutional Neural Networks (CNNs) excel at extracting local features, they struggle with long-range dependencies; Transformers effectively model global context but may compromise spatial details. To address these limitations, this paper proposes a novel hybrid CNN-Transformer architecture, the Dual Attention and Cross-layer Fusion Network (DCF-Net). Based on an encoder-decoder framework, DCF-Net introduces two key modules: the Channel-Adaptive Sparse Attention (CASA) module and the Synergistic Skip-connection and Cross-layer Fusion (SSCF) module. Specifically, CASA enhances semantic modeling by filtering critical features and focusing on anatomically important regions, while SSCF enables effective hierarchical feature fusion by bridging encoder-decoder representations. Extensive experiments on the Synapse, ACDC, and ISIC2017 datasets demonstrate that DCF-Net achieves state-of-the-art performance without pre-training. This work highlights the value of cross-layer fusion and attention mechanisms, providing a robust and generalizable solution for medical image segmentation tasks.
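A hedged sketch of the general skip-connection fusion pattern that modules such as SSCF build on: decoder features are upsampled to the encoder's resolution, concatenated, and reprojected. The real SSCF module is more elaborate; names and shapes here are assumptions.

```python
# Sketch of cross-layer skip fusion in an encoder-decoder segmentation net.
import torch
import torch.nn as nn

class SkipFusion(nn.Module):
    def __init__(self, enc_ch: int, dec_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.fuse = nn.Conv2d(enc_ch + dec_ch, out_ch,
                              kernel_size=3, padding=1)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor):
        dec_up = self.up(dec_feat)  # match encoder resolution
        return self.fuse(torch.cat([enc_feat, dec_up], dim=1))

enc = torch.randn(1, 64, 56, 56)
dec = torch.randn(1, 128, 28, 28)
print(SkipFusion(64, 128, 64)(enc, dec).shape)  # torch.Size([1, 64, 56, 56])
```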

Swiderska K, Blackie CA, Murakami D, Appenteng EO, Read ML, Maldonado-Codina C, Morgan PB

PubMed · Oct 13, 2025
Recent advances in artificial intelligence (AI) have enhanced the capabilities of meibography by enabling objective and quantitative assessment of meibomian gland structure. This review explores the clinical utility of AI-based meibography in the diagnosis and management of Meibomian Gland Dysfunction (MGD), with a focus on segmentation, morphological analysis, and disease staging. Developments in deep learning have enabled more precise gland feature extraction, including gland dropout, density, and tortuosity, supporting efforts towards standardised and reproducible clinical evaluation. Although not the focus of this review, insights from traditional image processing techniques are referenced to highlight potential areas of improvement in current AI models. Key issues such as limited modelling of regional gland variation, restricted dataset diversity, and lack of standardised image quality control are discussed. Although significant progress has been made, further work is needed to ensure AI-driven meibography tools are generalisable, interpretable, and suitable for broad clinical implementation.
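Of the morphological features mentioned, tortuosity has a particularly simple common definition: the ratio of a gland centreline's arc length to its chord length. A minimal sketch follows; the reviewed models may use variants of this measure.

```python
# Sketch: centreline tortuosity = arc length / chord length.
import numpy as np

def tortuosity(centerline: np.ndarray) -> float:
    """centerline: [n_points, 2] array of (x, y) pixel coordinates."""
    arc = np.sum(np.linalg.norm(np.diff(centerline, axis=0), axis=1))
    chord = np.linalg.norm(centerline[-1] - centerline[0])
    return arc / chord  # 1.0 for a straight gland, larger when curved

pts = np.array([[0, 0], [1, 2], [2, 0], [3, 2], [4, 0]], dtype=float)
print(tortuosity(pts))  # > 1 for this zig-zag centreline
```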