
ThreeF-Net: Fine-grained feature fusion network for breast ultrasound image segmentation.

Bian X, Liu J, Xu S, Liu W, Mei L, Xiao C, Yang F

PubMed · Jun 14, 2025
Convolutional neural networks (CNNs) have achieved remarkable success in breast ultrasound image segmentation, but they still face several challenges when dealing with breast lesions. Because CNNs are limited in modeling long-range dependencies, they often struggle with similar intensity distributions, irregular lesion shapes, and blurry boundaries, which lowers segmentation accuracy. To address these issues, we propose ThreeF-Net, a fine-grained feature fusion network that combines the strengths of CNNs and Transformers to capture local features and model long-range dependencies simultaneously, improving the accuracy and stability of segmentation. Specifically, we designed a Transformer-assisted Dual Encoder (TDE) architecture that integrates convolutional and self-attention modules for collaborative learning of local and global features. We also designed a Global Group Feature Extraction (GGFE) module, which effectively fuses the features learned by the CNN and Transformer branches, enhancing feature representation. To further improve performance, we introduced a Dynamic Fine-grained Convolution (DFC) module, which improves lesion boundary segmentation accuracy by dynamically adjusting convolution kernels and capturing multi-scale features. Comparative experiments against state-of-the-art segmentation methods on three public breast ultrasound datasets show that ThreeF-Net outperforms existing methods across multiple key evaluation metrics.
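For readers unfamiliar with dual-encoder designs, the sketch below illustrates the general CNN-plus-Transformer pattern the abstract describes: a convolutional branch for local features, a Transformer branch for long-range context, and a simple fusion layer before a segmentation head. It is a minimal PyTorch illustration, not the authors' TDE/GGFE/DFC implementation; all layer sizes and shapes are assumed.

```python
# Minimal sketch (not the authors' code) of a dual-encoder segmentation network:
# a CNN branch for local features, a Transformer branch for global context,
# and a 1x1-conv fusion layer standing in for the paper's GGFE/DFC modules.
import torch
import torch.nn as nn

class DualEncoderSeg(nn.Module):
    def __init__(self, channels=64, patch=8):
        super().__init__()
        # CNN branch: local texture/edge features
        self.cnn = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Transformer branch: global context over image patches
        self.embed = nn.Conv2d(1, channels, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.up = nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False)
        # Fusion + segmentation head
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.head = nn.Conv2d(channels, 1, 1)

    def forward(self, x):
        local_feat = self.cnn(x)                         # (B, C, H, W)
        tokens = self.embed(x)                           # (B, C, H/p, W/p)
        b, c, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)          # (B, N, C)
        global_feat = self.transformer(seq)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        global_feat = self.up(global_feat)               # back to (B, C, H, W)
        fused = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        return torch.sigmoid(self.head(fused))           # lesion probability map

mask = DualEncoderSeg()(torch.randn(1, 1, 128, 128))     # (1, 1, 128, 128)
```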

Hierarchical Deep Feature Fusion and Ensemble Learning for Enhanced Brain Tumor MRI Classification

Zahid Ullah, Jihie Kim

arXiv preprint · Jun 14, 2025
Accurate brain tumor classification is crucial in medical imaging to ensure reliable diagnosis and effective treatment planning. This study introduces a novel double ensembling framework that synergistically combines pre-trained deep learning (DL) models for feature extraction with optimized machine learning (ML) classifiers for robust classification. The framework incorporates comprehensive preprocessing and data augmentation of brain magnetic resonance images (MRI), followed by deep feature extraction using transfer learning with pre-trained Vision Transformer (ViT) networks. The novelty lies in the dual-level ensembling strategy: feature-level ensembling, which integrates deep features from the top-performing ViT models, and classifier-level ensembling, which aggregates predictions from hyperparameter-optimized ML classifiers. Experiments on two public Kaggle MRI brain tumor datasets demonstrate that this approach significantly surpasses state-of-the-art methods, underscoring the importance of feature and classifier fusion. The proposed methodology also highlights the critical roles of hyperparameter optimization (HPO) and advanced preprocessing techniques in improving diagnostic accuracy and reliability, advancing the integration of DL and ML for clinically relevant medical image analysis.
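The dual-level ensembling idea can be pictured with a short scikit-learn sketch: features from two ViT backbones are concatenated (feature-level ensembling) and several classifiers are combined by soft voting (classifier-level ensembling). The feature arrays, class count, and classifier settings below are placeholders, not the study's configuration.

```python
# Illustrative sketch of the two ensembling levels described above,
# using random placeholder arrays in place of real ViT features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Feature-level ensembling: concatenate deep features from two ViT backbones
feats_vit_a = np.random.rand(200, 768)   # stand-in for ViT backbone A embeddings
feats_vit_b = np.random.rand(200, 768)   # stand-in for ViT backbone B embeddings
X = np.concatenate([feats_vit_a, feats_vit_b], axis=1)
y = np.random.randint(0, 4, size=200)    # e.g. 4 tumor classes (assumed)

# Classifier-level ensembling: soft-vote over (ideally hyperparameter-tuned) classifiers
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(C=1.0, probability=True)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```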

Sex-estimation method for three-dimensional shapes of the skull and skull parts using machine learning.

Imaizumi K, Usui S, Nagata T, Hayakawa H, Shiotani S

PubMed · Jun 14, 2025
Sex estimation is an indispensable test for identifying skeletal remains in forensic anthropology. We developed a novel machine-learning method for estimating sex from the skull and several skull parts. A total of 240 skull shapes were obtained from postmortem computed tomography scans. The shapes of the whole skull, cranium, and mandible were simplified by wrapping them with a virtual elastic film and then transformed into homologous shape models. Homologous models of the cranium and mandible were segmented into six regions containing well-known sexually dimorphic areas. Shape data were reduced in dimensionality by principal component analysis (PCA) or partial least squares (PLS) regression. The PCA or PLS components were fed to a support vector machine (SVM), and sex-estimation accuracy was assessed. High accuracy was observed for the SVM after dimensionality reduction with PLS: accuracy exceeded 90% in two of the nine regions examined, whereas the SVM with PCA components did not reach 90% in any region. Virtual shapes created from very large and very small scores on the first PLS component closely resembled masculine and feminine models created by exaggerating the shape difference between the averaged male and female skulls. Such similarities were observed in all skull regions examined, particularly in sexually dimorphic areas. The models also achieved high accuracy on newly prepared skull shapes, suggesting that the method developed here may be applicable to actual casework.
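As a rough illustration of the PLS-then-SVM pipeline described above, the following sketch reduces synthetic shape vectors with partial least squares and classifies the scores with a linear SVM; the data, component count, and kernel are assumptions rather than the study's settings.

```python
# Hedged sketch of a PLS -> SVM sex-classification pipeline on synthetic
# stand-ins for 240 homologous skull-shape models flattened to vectors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(240, 3000)               # e.g. flattened vertex coordinates
y = np.random.randint(0, 2, 240)            # 0 = female, 1 = male
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Supervised dimensionality reduction with PLS, then an SVM on the PLS scores
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
svm = SVC(kernel="linear").fit(pls.transform(X_tr), y_tr)
print("held-out accuracy:", svm.score(pls.transform(X_te), y_te))
```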

The Machine Learning Models in Major Cardiovascular Adverse Events Prediction Based on Coronary Computed Tomography Angiography: Systematic Review.

Ma Y, Li M, Wu H

PubMed · Jun 13, 2025
Coronary computed tomography angiography (CCTA) has emerged as the first-line noninvasive imaging test for patients at high risk of coronary artery disease (CAD). Combined with machine learning (ML), it can provide stronger evidence for predicting major adverse cardiovascular events (MACEs). Radiomics provides informative multidimensional features that can help identify high-risk populations and improve the diagnostic performance of CCTA; however, its role in predicting MACEs remains highly debated. We evaluated the diagnostic value of ML models built on radiomic features extracted from CCTA for predicting MACEs, and compared the performance of different learning algorithms and models, to inform clinical recommendations for the diagnosis, treatment, and prognosis of MACEs. We comprehensively searched 5 online databases (Cochrane Library, Web of Science, Elsevier, CNKI, and PubMed) up to September 10, 2024, for original studies that used ML models in patients undergoing CCTA to predict MACEs and reported related clinical outcomes and endpoints. Risk of bias in the ML models was assessed with the Prediction Model Risk of Bias Assessment Tool, and the radiomics quality score (RQS) was used to evaluate the methodological quality of radiomics model development and validation. We also followed the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) guidelines to ensure transparency of the included ML models. Meta-analysis was performed using Meta-DiSc software (version 1.4), including the I² statistic and Cochran Q test, along with StataMP 17 (StataCorp) to assess heterogeneity and publication bias. Because of the high heterogeneity observed, subgroup analysis was conducted by model group. Ten studies were included, 5 (50%) of which distinguished between training and testing sets; the training sets comprised 17 models and the testing sets 26 models. The pooled area under the receiver operating characteristic curve (AUROC) for ML models predicting MACEs was 0.7879 in the training sets and 0.7981 in the testing sets. Logistic regression (LR), the most commonly used algorithm, achieved an AUROC of 0.8229 in the testing sets and 0.7983 in the training sets. Non-LR models yielded AUROCs of 0.7390 in the testing sets and 0.7648 in the training sets, while random forest (RF) models reached an AUROC of 0.8444 in the training sets. Limitations include the small number of studies, high heterogeneity, and the types of studies included. ML models predicted MACEs better than general models based on basic feature extraction and integration from CCTA. In particular, LR-based ML diagnostic models demonstrated significant clinical potential, especially when combined with clinical features, and warrant further validation in clinical trials. PROSPERO CRD42024596364; https://www.crd.york.ac.uk/PROSPERO/view/CRD42024596364.
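The heterogeneity statistics mentioned above (Cochran Q and I²) can be computed directly from per-study estimates. The sketch below uses hypothetical AUROCs and standard errors purely to show the arithmetic of inverse-variance pooling.

```python
# Worked sketch of fixed-effect pooling with Cochran's Q and I^2,
# applied to hypothetical per-study AUROC estimates and standard errors.
import numpy as np

auc = np.array([0.78, 0.82, 0.75, 0.85, 0.80])   # hypothetical study AUROCs
se  = np.array([0.03, 0.04, 0.05, 0.03, 0.04])   # hypothetical standard errors

w = 1.0 / se**2                                  # inverse-variance weights
pooled = np.sum(w * auc) / np.sum(w)             # fixed-effect pooled AUROC
Q = np.sum(w * (auc - pooled) ** 2)              # Cochran's Q
df = len(auc) - 1
I2 = max(0.0, (Q - df) / Q) * 100                # I^2 as a percentage

print(f"pooled AUROC = {pooled:.3f}, Q = {Q:.2f}, I^2 = {I2:.1f}%")
```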

Enhancing Free-hand 3D Photoacoustic and Ultrasound Reconstruction using Deep Learning.

Lee S, Kim S, Seo M, Park S, Imrus S, Ashok K, Lee D, Park C, Lee S, Kim J, Yoo JH, Kim M

PubMed · Jun 13, 2025
This study introduces a motion-based learning network with a global-local self-attention module (MoGLo-Net) to enhance 3D reconstruction in handheld photoacoustic and ultrasound (PAUS) imaging. Standard PAUS imaging is often limited by a narrow field of view (FoV) and the inability to effectively visualize complex 3D structures. The 3D freehand technique, which aligns sequential 2D images for 3D reconstruction, faces significant challenges in accurate motion estimation without relying on external positional sensors. MoGLo-Net addresses these limitations through an innovative adaptation of the self-attention mechanism, which effectively exploits the critical regions, such as fully-developed speckle areas or high-echogenic tissue regions within successive ultrasound images to accurately estimate the motion parameters. This facilitates the extraction of intricate features from individual frames. Additionally, we employ a patch-wise correlation operation to generate a correlation volume that is highly correlated with the scanning motion. A custom loss function was also developed to ensure robust learning with minimized bias, leveraging the characteristics of the motion parameters. Experimental evaluations demonstrated that MoGLo-Net surpasses current state-of-the-art methods in both quantitative and qualitative performance metrics. Furthermore, we expanded the application of 3D reconstruction technology beyond simple B-mode ultrasound volumes to incorporate Doppler ultrasound and photoacoustic imaging, enabling 3D visualization of vasculature. The source code for this study is publicly available at: https://github.com/pnu-amilab/US3D.
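The patch-wise correlation operation described above can be sketched as follows: for each spatial location, the feature vector from one frame is correlated with shifted features from the next frame over a small displacement window, producing a correlation volume. This is an illustrative PyTorch sketch, not MoGLo-Net's implementation; the displacement range and feature shapes are assumed.

```python
# Rough sketch of a patch-wise correlation volume between feature maps of
# two successive ultrasound frames, as used for motion estimation.
import torch
import torch.nn.functional as F

def correlation_volume(feat_a, feat_b, max_disp=3):
    """feat_a, feat_b: (B, C, H, W) features of consecutive frames."""
    b, c, h, w = feat_a.shape
    pad = F.pad(feat_b, [max_disp] * 4)           # pad H and W by max_disp
    vols = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = pad[:, :, dy:dy + h, dx:dx + w]
            # channel-averaged dot product per spatial location
            vols.append((feat_a * shifted).mean(dim=1, keepdim=True))
    return torch.cat(vols, dim=1)                 # (B, (2*max_disp+1)^2, H, W)

fa, fb = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(correlation_volume(fa, fb).shape)           # torch.Size([1, 49, 32, 32])
```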

High-Fidelity 3D Imaging of Dental Scenes Using Gaussian Splatting.

Jin CX, Li MX, Yu H, Gao Y, Guo YP, Xia GS, Huang C

PubMed · Jun 13, 2025
Three-dimensional visualization is increasingly used in dentistry for diagnostics, education, and treatment design. The accurate replication of geometry and color is crucial for these applications. Image-based rendering, which uses 2-dimensional photos to generate photo-realistic 3-dimensional representations, provides an affordable and practical option, aiding both regular and remote health care. This study explores an advanced novel view synthesis (NVS) method called Gaussian splatting (GS), a differentiable image-based rendering approach, to assess its feasibility for dental scene capturing. The rendering quality and resource usage were compared with representative NVS methods. In addition, the linear measurement trueness of extracted craniofacial meshes was evaluated against a commercial facial scanner and 3 smartphone facial scanning apps, while teeth meshes were assessed against 2 intraoral scanners and a desktop scanner. GS-based representation demonstrated superior rendering quality, achieving the highest visual quality, fastest rendering speed, and lowest resource usage. The craniofacial measurements showed similar trueness to commercial facial scanners. The dental measurements had larger deviations than intraoral and desktop scanners did, although all deviations remained within clinically acceptable limits. The GS-based representation shows great potential for developing a convenient and cost-effective method of capturing dental scenes, offering a balance between color fidelity and trueness suitable for clinical applications.
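The linear-measurement trueness evaluation amounts to comparing distances measured on the Gaussian-splatting mesh against a reference scanner and checking the deviations against a clinical tolerance. The sketch below uses invented measurement values and an assumed 1 mm tolerance purely to illustrate the comparison.

```python
# Simple illustrative trueness check against a reference scanner;
# the measurement values and the tolerance are hypothetical.
import numpy as np

reference_mm = np.array([34.2, 51.8, 28.4, 45.1])   # e.g. desktop-scanner distances
gs_mesh_mm   = np.array([34.5, 51.4, 28.9, 44.7])   # same distances on the GS mesh

deviation = np.abs(gs_mesh_mm - reference_mm)
print("mean absolute deviation (mm):", deviation.mean())
print("within 1.0 mm tolerance:", bool(np.all(deviation <= 1.0)))
```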

Radiomic Analysis of Molecular Magnetic Resonance Imaging of Aortic Atherosclerosis in Rabbits.

Lee H

PubMed · Jun 13, 2025
Atherosclerosis involves not only the narrowing of blood vessels and plaque accumulation but also changes in plaque composition and stability, all of which are critical for disease progression. Conventional imaging techniques such as magnetic resonance angiography (MRA) and digital subtraction angiography (DSA) primarily assess luminal narrowing and plaque size, but have limited capability in identifying plaque instability and inflammation within the vascular muscle wall. This study aimed to develop and evaluate a novel imaging approach using ligand-modified nanomagnetic contrast (lmNMC) nanoprobes in combination with molecular magnetic resonance imaging (mMRI) to visualize and quantify vascular inflammation and plaque characteristics in a rabbit model of atherosclerosis. A rabbit model of atherosclerosis was established and underwent mMRI before and after administration of lmNMC nanoprobes. Radiomic features were extracted from segmented images using discrete wavelet transform (DWT) to assess spatial frequency changes and gray-level co-occurrence matrix (GLCM) analysis to evaluate textural properties. Further radiomic analysis was performed using neural network-based regression and clustering, including the application of self-organizing maps (SOMs) to validate the consistency of radiomic pattern between training and testing data. Radiomic analysis revealed significant changes in spatial frequency between pre- and post-contrast images in both the horizontal and vertical directions. GLCM analysis showed an increase in contrast from 0.08463 to 0.1021 and a slight decrease in homogeneity from 0.9593 to 0.9540. Energy values declined from 0.2256 to 0.2019, while correlation increased marginally from 0.9659 to 0.9708. Neural network regression demonstrated strong convergence between target and output coordinates. Additionally, SOM clustering revealed consistent weight locations and neighbor distances across datasets, supporting the reliability of the radiomic validation. The integration of lmNMC nanoprobes with mMRI enables detailed visualization of atherosclerotic plaques and surrounding vascular inflammation in a preclinical model. This method shows promise for enhancing the characterization of unstable plaques and may facilitate early detection of high-risk atherosclerotic lesions, potentially improving diagnostic and therapeutic strategies.
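The GLCM texture features reported above (contrast, homogeneity, energy, correlation) can be extracted with scikit-image as sketched below; the input image is random and stands in for a segmented mMRI slice.

```python
# Sketch of GLCM texture-feature extraction with scikit-image;
# the input image is random and purely illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in mMRI slice
glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop)[0, 0])
```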

Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)

Comiter, C., Chen, X., Vaishnav, E. D., Kobayashi-Kirschvink, K. J., Ciapmricotti, M., Zhang, K., Murray, J., Monticolo, F., Qi, J., Tanaka, R., Brodowska, S. E., Li, B., Yang, Y., Rodig, S. J., Karatza, A., Quintanal Villalonga, A., Turner, M., Pfaff, K. L., Jane-Valbuena, J., Slyper, M., Waldman, J., Vigneau, S., Wu, J., Blosser, T. R., Segerstolpe, A., Abravanel, D., Wagle, N., Demehri, S., Zhuang, X., Rudin, C. M., Klughammer, J., Rozenblatt-Rosen, O., Stultz, C. M., Shu, J., Regev, A.

bioRxiv preprint · Jun 13, 2025
Tissue biology involves an intricate balance between cell-intrinsic processes and interactions between cells organized in specific spatial patterns, which can be respectively captured by single cell profiling methods, such as single cell RNA-seq (scRNA-seq) and spatial transcriptomics, and histology imaging data, such as Hematoxylin-and-Eosin (H&E) stains. While single cell profiles provide rich molecular information, they can be challenging to collect routinely in the clinic and either lack spatial resolution or high gene throughput. Conversely, histological H&E assays have been a cornerstone of tissue pathology for decades, but do not directly report on molecular details, although the observed structure they capture arises from molecules and cells. Here, we leverage vision transformers and adversarial deep learning to develop the Single Cell omics from Histology Analysis Framework (SCHAF), which generates a tissue sample's spatially-resolved whole transcriptome single cell omics dataset from its H&E histology image. We demonstrate SCHAF on a variety of tissues--including lung cancer, metastatic breast cancer, placentae, and whole mouse pups--training with matched samples analyzed by sc/snRNA-seq, H&E staining, and, when available, spatial transcriptomics. SCHAF generated appropriate single cell profiles from histology images in test data, related them spatially, and compared well to ground-truth scRNA-Seq, expert pathologist annotations, or direct spatial transcriptomic measurements, with some limitations. SCHAF opens the way to next-generation H&E analyses and an integrated understanding of cell and tissue biology in health and disease.
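At a very high level, the adversarial component can be pictured as a generator that maps an H&E patch embedding to a gene-expression vector and a discriminator that tries to distinguish generated profiles from measured ones. The PyTorch sketch below is a deliberately simplified stand-in for that idea, not SCHAF itself; all dimensions, architectures, and data are placeholders.

```python
# Highly simplified adversarial sketch (not SCHAF): generator maps patch
# embeddings to expression vectors; discriminator separates real from generated.
import torch
import torch.nn as nn

embed_dim, n_genes = 256, 2000

G = nn.Sequential(nn.Linear(embed_dim, 512), nn.ReLU(), nn.Linear(512, n_genes))
D = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    patch_emb = torch.randn(32, embed_dim)      # stand-in ViT patch embeddings
    real_expr = torch.randn(32, n_genes)        # stand-in scRNA-seq profiles

    # Discriminator update: real profiles -> 1, generated profiles -> 0
    fake_expr = G(patch_emb).detach()
    loss_d = bce(D(real_expr), torch.ones(32, 1)) + bce(D(fake_expr), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: try to fool the discriminator
    loss_g = bce(D(G(patch_emb)), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```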

Long-term prognostic value of the CT-derived fractional flow reserve combined with atherosclerotic burden in patients with non-obstructive coronary artery disease.

Wang Z, Li Z, Xu T, Wang M, Xu L, Zeng Y

PubMed · Jun 13, 2025
The long-term prognostic significance of the coronary computed tomography angiography (CCTA)-derived fractional flow reserve (CT-FFR) for non-obstructive coronary artery disease (CAD) is uncertain. We aimed to investigate the additional prognostic value of CT-FFR beyond CCTA-defined atherosclerotic burden for long-term outcomes. Consecutive patients with suspected stable CAD were candidates for this retrospective cohort study. Deep-learning-based vessel-specific CT-FFR was calculated. All patients enrolled were followed for at least 5 years. The primary outcome was major adverse cardiovascular events (MACE). Predictive abilities for MACE were compared among three models (model 1, constructed using clinical variables; model 2, model 1 + CCTA-derived atherosclerotic burden (Leiden risk score and segment involvement score); and model 3, model 2 + CT-FFR). A total of 1944 patients (median age, 59 (53-65) years; 53.0% men) were included. During a median follow-up time of 73.4 (71.2-79.7) months, 64 patients (3.3%) experienced MACE. In multivariate-adjusted Cox models, CT-FFR ≤ 0.80 (HR: 7.18; 95% CI: 4.25-12.12; p < 0.001) was a robust and independent predictor for MACE. The discriminant ability was higher in model 2 than in model 1 (C-index, 0.76 vs. 0.68; p = 0.001) and was further promoted by adding CT-FFR to model 3 (C-index, 0.83 vs. 0.76; p < 0.001). Integrated discrimination improvement (IDI) was 0.033 (p = 0.022) for model 2 beyond model 1. Of note, compared with model 2, model 3 also exhibited improved discrimination (IDI = 0.056; p < 0.001). In patients with non-obstructive CAD, CT-FFR provides robust and incremental prognostic information for predicting long-term outcomes. The combined model including CT-FFR and CCTA-defined atherosclerotic burden exhibits improved prediction abilities, which is helpful for risk stratification. Question Prognostic significance of the CT-fractional flow reserve (FFR) in non-obstructive coronary artery disease for long-term outcomes merits further investigation. Findings Our data strongly emphasized the independent and additional predictive value of CT-FFR beyond coronary CTA-defined atherosclerotic burden and clinical risk factors. Clinical relevance The new combined predictive model incorporating CT-FFR can be satisfactorily used for risk stratification of patients with non-obstructive coronary artery disease by identifying those who are truly suitable for subsequent high-intensity preventative therapies and extensive follow-up for prognostic reasons.
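The incremental-value comparison described above boils down to fitting nested Cox models with and without the CT-FFR variable and comparing their discrimination. The sketch below does this on synthetic data with the lifelines package; the covariates and values are assumptions, not the study data (on random data both C-indices will hover near 0.5).

```python
# Illustrative sketch of nested Cox models and C-index comparison using lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(59, 8, n),
    "leiden_score": rng.integers(0, 20, n),        # CCTA atherosclerotic burden
    "ctffr_low": rng.integers(0, 2, n),            # 1 if CT-FFR <= 0.80
    "time_months": rng.exponential(70, n),
    "mace": rng.integers(0, 2, n),
})

model2 = CoxPHFitter().fit(df.drop(columns="ctffr_low"),
                           duration_col="time_months", event_col="mace")
model3 = CoxPHFitter().fit(df, duration_col="time_months", event_col="mace")
print("C-index without CT-FFR:", round(model2.concordance_index_, 3))
print("C-index with CT-FFR:   ", round(model3.concordance_index_, 3))
```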

Quantitative and qualitative assessment of ultra-low-dose paranasal sinus CT using deep learning image reconstruction: a comparison with hybrid iterative reconstruction.

Otgonbaatar C, Lee D, Choi J, Jang H, Shim H, Ryoo I, Jung HN, Suh S

PubMed · Jun 13, 2025
This study aimed to evaluate the quantitative and qualitative performances of ultra-low-dose computed tomography (CT) with deep learning image reconstruction (DLR) compared with those of hybrid iterative reconstruction (IR) for preoperative paranasal sinus (PNS) imaging. This retrospective analysis included 132 patients who underwent non-contrast ultra-low-dose sinus CT (0.03 mSv). Images were reconstructed using hybrid IR and DLR. Objective image quality metrics, including image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), noise power spectrum (NPS), and no-reference perceptual image sharpness, were assessed. Two board-certified radiologists independently performed subjective image quality evaluations. The ultra-low-dose CT protocol achieved a low radiation dose (effective dose: 0.03 mSv). DLR showed significantly lower image noise (28.62 ± 4.83 Hounsfield units) compared to hybrid IR (140.70 ± 16.04, p < 0.001), with DLR yielding smoother and more uniform images. DLR demonstrated significantly improved SNR (22.47 ± 5.82 vs 9.14 ± 2.45, p < 0.001) and CNR (71.88 ± 14.03 vs 11.81 ± 1.50, p < 0.001). NPS analysis revealed that DLR reduced the noise magnitude and NPS peak values. Additionally, DLR demonstrated significantly sharper images (no-reference perceptual sharpness metric: 0.56 ± 0.04) compared to hybrid IR (0.36 ± 0.01). Radiologists rated DLR as superior in overall image quality, bone structure visualization, and diagnostic confidence compared to hybrid IR at ultra-low-dose CT. DLR significantly outperformed hybrid IR in ultra-low-dose PNS CT by reducing image noise, improving SNR and CNR, enhancing image sharpness, and maintaining critical anatomical visualization, demonstrating its potential for effective preoperative planning with minimal radiation exposure. Question Ultra-low-dose CT for paranasal sinuses is essential for patients requiring repeated scans and functional endoscopic sinus surgery (FESS) planning to reduce cumulative radiation exposure. Findings DLR outperformed hybrid IR in ultra-low-dose paranasal sinus CT. Clinical relevance Ultra-low-dose CT with DLR delivers sufficient image quality for detailed surgical planning, effectively minimizing unnecessary radiation exposure to enhance patient safety.
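The objective metrics reported above (image noise, SNR, CNR) are typically computed from region-of-interest statistics; exact definitions vary between studies. The sketch below shows one common convention on synthetic ROI values.

```python
# Minimal sketch of ROI-based noise, SNR, and CNR calculations;
# the ROI values here are synthetic and purely illustrative.
import numpy as np

def roi_stats(roi):
    return float(np.mean(roi)), float(np.std(roi))

tissue_roi = np.random.normal(40, 29, size=(30, 30))     # soft-tissue ROI (HU)
bone_roi   = np.random.normal(900, 29, size=(30, 30))    # bone ROI (HU)
air_roi    = np.random.normal(-1000, 29, size=(30, 30))  # background/air ROI (HU)

mean_t, _ = roi_stats(tissue_roi)
mean_b, _ = roi_stats(bone_roi)
_, sd_a = roi_stats(air_roi)

noise = sd_a                                  # image noise = SD of background ROI
snr = mean_t / noise                          # one common SNR convention
cnr = abs(mean_b - mean_t) / noise            # contrast-to-noise ratio
print(f"noise={noise:.1f} HU, SNR={snr:.1f}, CNR={cnr:.1f}")
```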