Page 21 of 225 · 2246 results

ResNet-Transformer deep learning model-aided detection of dens evaginatus.

Wang S, Liu J, Li S, He P, Zhou X, Zhao Z, Zheng L

pubmed | Jul 1 2025
Dens evaginatus is a dental morphological developmental anomaly. Failure to detect it may lead to tubercle fracture and pulpal/periapical disease, so early detection and intervention are important for preserving vital pulp. This study aimed to develop a deep learning model to assist dentists in the early diagnosis of dens evaginatus, thereby supporting early intervention and mitigating the risk of severe consequences. A deep learning model was developed using panoramic radiographs from 1410 patients aged 3-16 years, with high-quality annotations, to enable automatic detection of dens evaginatus. Model performance and its efficacy in aiding dentists were evaluated. The model demonstrated commendable sensitivity (0.8600) and specificity (0.9200) and outperformed dentists in detecting dens evaginatus, with an F1-score of 0.8866 versus their average F1-score of 0.8780, indicating that it could detect dens evaginatus with greater precision. Furthermore, with its support, young dentists paid closer attention to dens evaginatus in tooth germs and achieved improved diagnostic accuracy. Based on these results, integrating deep learning for dens evaginatus detection can augment dentists' proficiency in identifying such anomalies.
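The metrics reported for the detector (sensitivity, specificity, F1) all derive from confusion-matrix counts. A minimal sketch, using hypothetical counts chosen only to illustrate the formulas (they are not taken from the paper, though they happen to reproduce the reported values):

```python
# Sensitivity, specificity, and F1 from confusion-matrix counts.
# The counts below are hypothetical, for illustration only.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

tp, fp, fn, tn = 86, 8, 14, 92  # hypothetical confusion-matrix counts
print(round(sensitivity(tp, fn), 2))  # 0.86
print(round(specificity(tn, fp), 2))  # 0.92
print(round(f1(tp, fp, fn), 4))       # 0.8866
```

Note that an F1 of 0.8866 at sensitivity 0.86 implies a precision of roughly 0.915, which is why the abstract can say the model detected the anomaly "with greater precision."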

A Longitudinal Analysis of Pre- and Post-Operative Dysmorphology in Metopic Craniosynostosis.

Beiriger JW, Tao W, Irgebay Z, Smetona J, Dvoracek L, Kass NM, Dixon A, Zhang C, Mehta M, Whitaker R, Goldstein JA

pubmed | Jul 1 2025
The purpose of this study is to objectively quantify the degree of overcorrection in our current practice and to evaluate longitudinal morphological changes using CranioRate<sup>TM</sup>, a novel machine learning skull morphology assessment tool. Design: Retrospective cohort study across multiple time points. Setting: Tertiary care children's hospital. Patients: Patients with preoperative and postoperative CT scans who underwent fronto-orbital advancement (FOA) for metopic craniosynostosis. We evaluated preoperative, postoperative, and two-year follow-up skull morphology using CranioRate<sup>TM</sup> to generate a Metopic Severity Score (MSS), a measure of the degree of metopic dysmorphology, and a Cranial Morphology Deviation (CMD) score, a measure of deviation from normal skull morphology. Fifty-five patients were included; average age at surgery was 1.3 years. Sixteen patients underwent follow-up CT imaging at an average of 3.1 years. Preoperative MSS was 6.3 ± 2.5 (CMD 199.0 ± 39.1), immediate postoperative MSS was -2.0 ± 1.9 (CMD 208.0 ± 27.1), and longitudinal MSS was 1.3 ± 1.1 (CMD 179.8 ± 28.1). MSS approached normal (defined as MSS = 0) at two-year follow-up. There was a significant relationship between preoperative MSS and follow-up MSS (R<sup>2</sup> = 0.70). MSS quantifies overcorrection and normalization of head shape: patients with negative values were less "metopic" than normal postoperatively and approached 0 at two-year follow-up. CMD worsened postoperatively due to bony changes associated with surgical displacements following FOA. All patients had similar postoperative metopic dysmorphology, with no significant association with preoperative severity. More severe patients had worse longitudinal dysmorphology, reinforcing that regression toward the metopic shape is a postoperative risk that increases with preoperative severity.
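The R<sup>2</sup> = 0.70 relationship between preoperative and follow-up MSS comes from an ordinary least-squares fit. A minimal sketch of that computation in pure Python; the score pairs below are hypothetical stand-ins, not the study's data:

```python
# Ordinary least-squares R^2 between two score series, as used to relate
# preoperative MSS to follow-up MSS. Data points are hypothetical.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return 1 - ss_res / syy

pre_mss  = [3.1, 4.8, 6.0, 7.2, 9.5]   # hypothetical preoperative scores
post_mss = [0.4, 0.9, 1.2, 1.8, 2.6]   # hypothetical follow-up scores
print(round(r_squared(pre_mss, post_mss), 2))
```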

Embryonic cranial cartilage defects in the Fgfr3<sup>Y367C/+</sup> mouse model of achondroplasia.

Motch Perrine SM, Sapkota N, Kawasaki K, Zhang Y, Chen DZ, Kawasaki M, Durham EL, Heuzé Y, Legeai-Mallet L, Richtsmeier JT

pubmed | Jul 1 2025
Achondroplasia, the most common chondrodysplasia in humans, is caused by one of two gain-of-function mutations in the transmembrane domain of fibroblast growth factor receptor 3 (FGFR3), leading to constitutive activation of FGFR3 and subsequent growth plate cartilage and bone defects. Phenotypic features of achondroplasia include macrocephaly with frontal bossing, midface hypoplasia, disproportionate shortening of the extremities, brachydactyly with trident configuration of the hand, and bowed legs. The condition is defined primarily by its postnatal effects on bone and cartilage, and embryonic development of tissues in affected individuals is not well studied. Using the Fgfr3<sup>Y367C/+</sup> mouse model of achondroplasia, we investigated the developing chondrocranium and Meckel's cartilage (MC) at embryonic days (E)14.5 and E16.5. Sparse hand annotations of chondrocranial and MC cartilages visualized in phosphotungstic acid enhanced three-dimensional (3D) micro-computed tomography (microCT) images were used to train our automatic deep learning-based 3D segmentation model and produce 3D isosurfaces of the chondrocranium and MC. Using 3D coordinates of landmarks measured on the 3D isosurfaces, we quantified differences in the chondrocranium and MC of Fgfr3<sup>Y367C/+</sup> mice relative to those of their unaffected littermates. Statistically significant differences in morphology and growth of the chondrocranium and MC were found, indicating direct effects of this Fgfr3 mutation on embryonic cranial and pharyngeal cartilages, which in turn can secondarily affect cranial dermal bone development. Our results support the suggestion that early therapeutic intervention during cartilage formation may lessen the effects of this condition.
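Quantifying shape differences from 3D landmark coordinates typically reduces to comparing inter-landmark distances between specimens. A minimal sketch of that step, with hypothetical landmark names and coordinates (the study's actual landmark set and statistics are more involved):

```python
# Compare corresponding inter-landmark distances between two specimens
# (e.g., mutant vs. unaffected littermate). Coordinates are hypothetical.
from math import dist  # Euclidean distance, Python >= 3.8

def interlandmark_distances(landmarks):
    """All pairwise Euclidean distances between named 3D landmarks."""
    names = sorted(landmarks)
    return {(a, b): dist(landmarks[a], landmarks[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

mutant  = {"nasale": (0.0, 0.0, 0.0), "basion": (3.0, 4.0, 0.0)}
control = {"nasale": (0.0, 0.0, 0.0), "basion": (3.6, 4.8, 0.0)}
d_mut = interlandmark_distances(mutant)[("basion", "nasale")]
d_ctl = interlandmark_distances(control)[("basion", "nasale")]
print(d_mut, d_ctl)  # 5.0 6.0
```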

Automatic Multiclass Tissue Segmentation Using Deep Learning in Brain MR Images of Tumor Patients.

Kandpal A, Kumar P, Gupta RK, Singh A

pubmed | Jun 30 2025
Precise delineation of brain tissues, including lesions, in MR images is crucial for data analysis and for objectively assessing conditions such as neurological disorders and brain tumors. Existing methods for tissue segmentation often fall short for patients with lesions, particularly those with brain tumors. This study aimed to develop and evaluate a robust pipeline utilizing convolutional neural networks for rapid and automatic segmentation of whole-brain tissues, including tumor lesions. The proposed pipeline was developed using BraTS'21 data (1251 patients) and tested on local hospital data (100 patients). Ground truth masks for lesions as well as brain tissues were generated. Two convolutional neural networks based on the deep residual U-Net framework were trained for segmenting brain tissues and tumor lesions. The performance of the pipeline was evaluated on independent test data using the Dice similarity coefficient (DSC) and volume similarity (VS). The proposed pipeline achieved a mean DSC of 0.84 and a mean VS of 0.93 on the BraTS'21 test data set. On the local hospital test data set, it attained a mean DSC of 0.78 and a mean VS of 0.91. The proposed pipeline also generated satisfactory masks in cases where the SPM12 software performed inadequately. The proposed pipeline offers a reliable and automatic solution for segmenting brain tissues and tumor lesions in MR images. Its adaptability makes it a valuable tool for both research and clinical applications, potentially streamlining workflows and enhancing the precision of analyses in neurological and oncological studies.
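The two evaluation metrics above, DSC and VS, have simple closed forms over binary masks. A minimal sketch on flattened toy masks (illustrative only):

```python
# Dice similarity coefficient (DSC) and volume similarity (VS) on
# flattened binary masks. The toy masks are illustrative only.
def dice(a, b):
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def volume_similarity(a, b):
    va, vb = sum(a), sum(b)
    return 1 - abs(va - vb) / (va + vb)

pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice(pred, truth))               # 0.75
print(volume_similarity(pred, truth))  # 1.0
```

Note that VS only compares total volumes, so masks can have VS = 1.0 while overlapping imperfectly, which is why DSC is reported alongside it.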

Scout-Dose-TCM: Direct and Prospective Scout-Based Estimation of Personalized Organ Doses from Tube Current Modulated CT Exams

Maria Jose Medrano, Sen Wang, Liyan Sun, Abdullah-Al-Zubaer Imran, Jennie Cao, Grant Stevens, Justin Ruey Tse, Adam S. Wang

arxiv preprint | Jun 30 2025
This study proposes Scout-Dose-TCM for direct, prospective estimation of organ-level doses under tube current modulation (TCM) and compares its performance to two established methods. We analyzed contrast-enhanced chest-abdomen-pelvis CT scans from 130 adults (120 kVp, TCM). Reference doses for six organs (lungs, kidneys, liver, pancreas, bladder, spleen) were calculated using MC-GPU and TotalSegmentator. Based on these, we trained Scout-Dose-TCM, a deep learning model that predicts organ doses corresponding to discrete cosine transform (DCT) basis functions, enabling real-time estimates for any TCM profile. The model combines a feature learning module, which extracts contextual information from lateral and frontal scouts and the scan range, with a dose learning module that outputs DCT-based dose estimates. A customized loss function incorporated the DCT formulation during training. For comparison, we implemented size-specific dose estimation per AAPM TG 204 (Global CTDIvol) and its organ-level TCM-adapted version (Organ CTDIvol). A 5-fold cross-validation assessed generalizability by comparing mean absolute percentage dose errors and r-squared correlations with benchmark doses. Average absolute percentage errors were 13% (Global CTDIvol), 9% (Organ CTDIvol), and 7% (Scout-Dose-TCM), with the bladder showing the largest discrepancies (15%, 13%, and 9%, respectively). Statistical tests confirmed that Scout-Dose-TCM significantly reduced errors vs. Global CTDIvol across most organs and improved over Organ CTDIvol for the liver, bladder, and pancreas. It also achieved higher r-squared values, indicating stronger agreement with Monte Carlo benchmarks. Scout-Dose-TCM outperformed Global CTDIvol and was comparable to or better than Organ CTDIvol, without requiring organ segmentations at inference, demonstrating its promise as a tool for prospective organ-level dose estimation in CT.
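The key trick in the abstract is that organ dose is (approximately) linear in the tube-current profile: if a model predicts the dose contribution of each DCT basis function, the dose for any TCM profile is a dot product of those per-basis doses with the profile's DCT coefficients. A minimal sketch, assuming an orthonormal DCT-II and hypothetical per-basis doses standing in for model outputs (this is the general idea only, not the paper's implementation):

```python
# Dose for an arbitrary TCM profile from per-basis dose predictions.
# The per-basis doses are hypothetical stand-ins for model outputs.
from math import cos, pi

def dct2(x):
    """Orthonormal DCT-II coefficients of a sampled profile."""
    n = len(x)
    coeffs = []
    for k in range(n):
        c = sum(x[i] * cos(pi * (i + 0.5) * k / n) for i in range(n))
        scale = (1 / n) ** 0.5 if k == 0 else (2 / n) ** 0.5
        coeffs.append(scale * c)
    return coeffs

def organ_dose(tcm_profile, per_basis_dose):
    # Linearity: dose = sum_k (profile's k-th DCT coeff) * (dose of basis k)
    a = dct2(tcm_profile)
    return sum(ak * dk for ak, dk in zip(a, per_basis_dose))

profile = [180, 220, 260, 240, 200, 170]        # hypothetical mA values
basis_dose = [4.0, -0.6, 0.3, -0.1, 0.05, 0.0]  # hypothetical model output
print(round(organ_dose(profile, basis_dose), 2))
```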

Brain Tumor Detection through Thermal Imaging and MobileNET

Roham Maiti, Debasmita Bhoumik

arxiv preprint | Jun 30 2025
The brain plays a crucial role in regulating body functions and cognitive processes, and brain tumors pose significant risks to human health. Precise and prompt detection is a key factor in proper treatment and better patient outcomes. Traditional methods for detecting brain tumors, including biopsies, MRI, and CT scans, often face challenges due to their high costs and the need for specialized medical expertise. Recent developments in machine learning (ML) and deep learning (DL) have exhibited strong capabilities in automating the identification and categorization of brain tumors from medical images, especially MRI scans. However, many such models have limitations, including high computational demands, the need for large datasets, and long training times, which hinder their accessibility and efficiency. Our research uses the MobileNet model for efficient detection of these tumors. The novelty of this work lies in building an accurate tumor detection model that uses fewer computing resources and runs in less time, followed by efficient decision making through the use of image processing techniques for accurate results. The suggested method attained an average accuracy of 98.5%.
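MobileNet's low resource usage comes from replacing standard convolutions with depthwise separable ones. The standard parameter-count comparison for a single layer makes the saving concrete (a back-of-the-envelope sketch, not specific to this paper's network):

```python
# Parameter counts for one convolutional layer with a k x k kernel,
# c_in input channels, and c_out output channels.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise k x k per input channel, then 1x1 pointwise mixing
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 32, 64
print(standard_conv_params(k, c_in, c_out))        # 18432
print(depthwise_separable_params(k, c_in, c_out))  # 2336
# Reduction factor is 1/c_out + 1/k**2, roughly 8x fewer parameters here.
```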

Deep Learning-Based Semantic Segmentation for Real-Time Kidney Imaging and Measurements with Augmented Reality-Assisted Ultrasound

Gijs Luijten, Roberto Maria Scardigno, Lisle Faray de Paiva, Peter Hoyer, Jens Kleesiek, Domenico Buongiorno, Vitoantonio Bevilacqua, Jan Egger

arxiv preprint | Jun 30 2025
Ultrasound (US) is widely accessible and radiation-free but has a steep learning curve due to its dynamic nature and non-standard imaging planes. Additionally, the constant need to shift focus between the US screen and the patient poses a challenge. To address these issues, we integrate deep learning (DL)-based semantic segmentation for real-time (RT) automated kidney volumetric measurements, which are essential for clinical assessment but are traditionally time-consuming and prone to fatigue. This automation allows clinicians to concentrate on image interpretation rather than manual measurements. Complementing DL, augmented reality (AR) enhances the usability of US by projecting the display directly into the clinician's field of view, improving ergonomics and reducing the cognitive load associated with screen-to-patient transitions. Two AR-DL-assisted US pipelines on HoloLens-2 are proposed: one streams directly via the application programming interface for a wireless setup, while the other supports any US device with video output for broader accessibility. We evaluate RT feasibility and accuracy using the Open Kidney Dataset and open-source segmentation models (nnU-Net, Segmenter, YOLO with MedSAM and LiteMedSAM). Our open-source GitHub pipeline includes model implementations, measurement algorithms, and a Wi-Fi-based streaming solution, enhancing US training and diagnostics, especially in point-of-care settings.

Three-dimensional end-to-end deep learning for brain MRI analysis

Radhika Juglan, Marta Ligero, Zunamys I. Carrero, Asier Rabasco, Tim Lenz, Leo Misera, Gregory Patrick Veldhuizen, Paul Kuntke, Hagen H. Kitzler, Sven Nebelung, Daniel Truhn, Jakob Nikolas Kather

arxiv preprint | Jun 30 2025
Deep learning (DL) methods are increasingly outperforming classical approaches in brain imaging, yet their generalizability across diverse imaging cohorts remains inadequately assessed. As age and sex are key neurobiological markers in clinical neuroscience, influencing brain structure and disease risk, this study evaluates three existing three-dimensional architectures, namely the Simple Fully Connected Network (SFCN), DenseNet, and Shifted Window (Swin) Transformers, for age and sex prediction using T1-weighted MRI from four independent cohorts: UK Biobank (UKB, n=47,390), Dallas Lifespan Brain Study (DLBS, n=132), Parkinson's Progression Markers Initiative (PPMI, n=108 healthy controls), and Information eXtraction from Images (IXI, n=319). We found that SFCN consistently outperformed the more complex architectures, with an AUC of 1.00 [1.00-1.00] in UKB (internal test set) and 0.85-0.91 in external test sets for sex classification. For the age prediction task, SFCN demonstrated a mean absolute error (MAE) of 2.66 (r=0.89) in UKB and 4.98-5.81 (r=0.55-0.70) across external datasets. Pairwise DeLong and Wilcoxon signed-rank tests with Bonferroni corrections confirmed SFCN's superiority over the Swin Transformer across most cohorts (p<0.017, for three comparisons). Explainability analysis further demonstrates that model attention is regionally consistent across cohorts and specific to each task. Our findings reveal that simpler convolutional networks outperform denser and more complex attention-based DL architectures in brain image analysis by demonstrating better generalizability across different datasets.
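The age-prediction results above are reported as MAE and Pearson r between predicted and true ages. Both have short closed forms; a minimal sketch with hypothetical ages (not the study's data):

```python
# Mean absolute error (MAE) and Pearson correlation r for a regression
# task such as age prediction. The age values below are hypothetical.
def mae(pred, true):
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

true_age = [54, 61, 47, 70, 66]
pred_age = [56, 58, 50, 73, 64]
print(mae(pred_age, true_age))  # 2.6
print(round(pearson_r(pred_age, true_age), 3))
```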

MDPG: Multi-domain Diffusion Prior Guidance for MRI Reconstruction

Lingtong Zhang, Mengdie Song, Xiaohan Hao, Huayu Mai, Bensheng Qiu

arxiv preprint | Jun 30 2025
Magnetic Resonance Imaging (MRI) reconstruction is essential in medical diagnostics. Diffusion models (DMs), the latest class of generative models, have struggled to produce high-fidelity images when applied directly in the image domain due to their stochastic nature. Latent diffusion models (LDMs) yield compact yet detailed prior knowledge in latent domains, which can effectively guide the model toward learning the original data distribution. Inspired by this, we propose Multi-domain Diffusion Prior Guidance (MDPG), provided by pre-trained LDMs, to enhance data consistency in MRI reconstruction tasks. Specifically, we first construct a Visual-Mamba-based backbone, which enables efficient encoding and reconstruction of under-sampled images. Pre-trained LDMs are then integrated to provide conditional priors in both the latent and image domains. A novel Latent Guided Attention (LGA) mechanism is proposed for efficient fusion across multi-level latent domains. Simultaneously, to effectively utilize priors in both the k-space and image domains, under-sampled images are fused with generated fully-sampled images by the Dual-domain Fusion Branch (DFB) for self-adaptive guidance. Lastly, to further enhance data consistency, we propose a k-space regularization strategy based on the non-auto-calibration signal (NACS) set. Extensive experiments on two public MRI datasets demonstrate the effectiveness of the proposed methodology. The code is available at https://github.com/Zolento/MDPG.
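Data consistency in under-sampled MRI reconstruction generally means: wherever k-space was actually acquired, keep the measured values; elsewhere, keep the network's estimate. A 1-D toy sketch of that hard data-consistency step (the paper's NACS-based regularization is more involved; this only illustrates the underlying idea):

```python
# Hard data-consistency on a 1-D line of complex k-space samples.
def data_consistency(estimated_kspace, acquired_kspace, sampling_mask):
    return [acq if m else est
            for est, acq, m in zip(estimated_kspace, acquired_kspace,
                                   sampling_mask)]

est  = [1 + 1j, 2 + 0j, 0 + 3j, 4 - 1j]   # network estimate
acq  = [1 + 2j, 0j,     0j,     4 + 0j]   # zeros where not sampled
mask = [True,   False,  False,  True]
print(data_consistency(est, acq, mask))   # measured values win where sampled
```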

Artificial Intelligence-assisted Pixel-level Lung (APL) Scoring for Fast and Accurate Quantification in Ultra-short Echo-time MRI

Bowen Xin, Rohan Hickey, Tamara Blake, Jin Jin, Claire E Wainwright, Thomas Benkert, Alto Stemmer, Peter Sly, David Coman, Jason Dowling

arxiv preprint | Jun 30 2025
Lung magnetic resonance imaging (MRI) with ultrashort echo-time (UTE) represents a recent breakthrough in lung structure imaging, providing image resolution and quality comparable to computed tomography (CT). Due to the absence of ionising radiation, MRI is often preferred over CT in paediatric diseases such as cystic fibrosis (CF), one of the most common genetic disorders in Caucasians. To assess structural lung damage in CF imaging, CT scoring systems provide valuable quantitative insights for disease diagnosis and progression. However, few quantitative scoring systems are available for structural lung MRI (e.g., UTE-MRI). To provide fast and accurate quantification in lung MRI, we investigated the feasibility of novel Artificial intelligence-assisted Pixel-level Lung (APL) scoring for CF. APL scoring consists of 5 stages: 1) image loading, 2) AI lung segmentation, 3) lung-bounded slice sampling, 4) pixel-level annotation, and 5) quantification and reporting. The results show that our APL scoring took 8.2 minutes per subject, more than twice as fast as the previous grid-level scoring. Additionally, our pixel-level scoring was statistically more accurate (p=0.021), while strongly correlating with grid-level scoring (R=0.973, p=5.85e-9). This tool has great potential to streamline the workflow of UTE lung MRI in clinical settings, and can be extended to other structural lung MRI sequences (e.g., BLADE MRI) and to other lung diseases (e.g., bronchopulmonary dysplasia).
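The contrast between pixel-level and grid-level scoring can be sketched as two resolutions of the same idea: express the affected area as a fraction of the lung. The counts below are hypothetical, and the APL scoring criteria themselves are defined in the paper:

```python
# Pixel-level vs. grid-level quantification of affected lung area.
# Counts are hypothetical, for illustration only.
def pixel_level_score(abnormal_pixels, lung_pixels):
    return 100.0 * abnormal_pixels / lung_pixels

def grid_level_score(abnormal_cells, total_cells):
    return 100.0 * abnormal_cells / total_cells

print(round(pixel_level_score(1830, 52000), 2))  # finer-grained percentage
print(round(grid_level_score(4, 48), 2))         # coarser grid estimate
```

Because each grid cell is counted as wholly affected or not, the grid-level score quantizes the estimate, which is one intuition for why pixel-level scoring can be statistically more accurate while still correlating strongly with it.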
