
Machine learning and discriminant analysis model for predicting benign and malignant pulmonary nodules.

Li Z, Zhang W, Huang J, Lu L, Xie D, Zhang J, Liang J, Sui Y, Liu L, Zou J, Lin A, Yang L, Qiu F, Hu Z, Wu M, Deng Y, Zhang X, Lu J

pubmed logopapers · Jul 18 2025
Pulmonary nodules (PNs) are commonly considered an early manifestation of lung cancer. PNs that remain stable for more than two years, or whose pathological results rule out lung cancer, are considered benign PNs (BPNs), while PNs that follow the growth pattern of tumors, or whose pathological results indicate lung cancer, are considered malignant PNs (MPNs). Currently, more than 90% of PNs detected by screening tests are benign, with a false positive rate of up to 96.4%. While a range of predictive models have been developed for the identification of MPNs, challenges remain in distinguishing between BPNs and MPNs. We included a total of 5197 patients in this case-control study according to the preset exclusion criteria and sample size. Patients with BPNs (4735) and MPNs (2509) were randomly divided into training, validation, and test sets in a 7:1.5:1.5 ratio. Three widely used machine learning algorithms (Random Forest, Gradient Boosting Machine, and XGBoost) were applied to screen candidate features; the corresponding predictive models were then constructed using discriminant analysis, and the best-performing model was selected as the target model. The model was internally validated with 10-fold cross-validation and compared with the PKUPH and Block models. We collated information from chest CT examinations performed from 2018 to 2021 in a physical examination population and found that the detection rate of PNs was 21.57%, with an overall upward trend. The GMU_D model, constructed by discriminant analysis on machine learning-screened features, showed excellent discriminative performance (AUC = 0.866, 95% CI: 0.858-0.874) and higher accuracy than the PKUPH model (AUC = 0.559, 95% CI: 0.552-0.567) and the Block model (AUC = 0.823, 95% CI: 0.814-0.833). The cross-validation results likewise showed excellent performance (AUC = 0.866, 95% CI: 0.858-0.874).
The detection rate of PNs was 21.57% in a physical examination population undergoing chest CT. Based on this real-world study of PNs, an improved prediction tool was developed and validated that can accurately distinguish between BPNs and MPNs with excellent predictive performance and discrimination.
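The two-stage recipe the abstract describes (machine-learning feature screening followed by discriminant analysis, scored by AUC) can be sketched on synthetic data. This is a minimal illustration, not the GMU_D model: the screening statistic, the feature count, and all data below are assumptions, with a simple class-separation ranking standing in for the tree-based importances used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 10))                       # 10 synthetic candidate nodule features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n) > 0).astype(int)

# Stage 1: screen features by |class mean difference| / std, keep top 3
# (a stand-in for Random Forest / GBM / XGBoost importance screening).
score = np.abs(X[y == 1].mean(0) - X[y == 0].mean(0)) / X.std(0)
top = np.argsort(score)[::-1][:3]
Xs = X[:, top]

# Stage 2: Fisher's linear discriminant on the screened features.
mu1, mu0 = Xs[y == 1].mean(0), Xs[y == 0].mean(0)
Sw = np.cov(Xs[y == 1].T) + np.cov(Xs[y == 0].T)   # within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)                 # discriminant direction
proj = Xs @ w

# AUC via the rank-sum (Mann-Whitney) identity.
pos, neg = proj[y == 1], proj[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"AUC = {auc:.3f}")
```

The in-sample AUC here only shows the mechanics; the paper's 7:1.5:1.5 split and 10-fold cross-validation would be layered on top of this.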

Open-access ultrasonic diaphragm dataset and an automatic diaphragm measurement using deep learning network.

Li Z, Mao L, Jia F, Zhang S, Han C, Fu S, Zheng Y, Chu Y, Chen Z, Wang D, Duan H, Zheng Y

pubmed logopapers · Jul 18 2025
The assessment of diaphragm function is crucial for effective clinical management and the prevention of complications associated with diaphragmatic dysfunction. However, current measurement methodologies rely on manual techniques that are susceptible to human error. Research question: how does the performance of an automatic diaphragm measurement system based on a segmentation neural network, focusing on diaphragm thickness and excursion, compare with existing methodologies? The proposed system integrates segmentation and parameter measurement, leveraging a newly established ultrasound diaphragm dataset. This dataset comprises B-mode ultrasound images and videos for diaphragm thickness assessment, as well as M-mode images and videos for excursion measurement. We introduce a novel deep learning-based segmentation network, the Multi-ratio Dilated U-Net (MDRU-Net), to enable accurate diaphragm measurements; the system additionally incorporates a comprehensive implementation plan for automated measurement. Automatic measurement results were compared against manual assessments conducted by clinicians, revealing an average error of 8.12% in diaphragm thickening fraction measurements and only a 4.3% average relative error in diaphragm excursion measurements. These results indicate overall minor discrepancies and strong potential for clinical detection of diaphragmatic conditions. We also designed a user-friendly automatic measurement system for assessing diaphragm parameters, together with an accompanying method for measuring ultrasound-derived diaphragm parameters. In summary, we constructed a diaphragm ultrasound dataset of thickness and excursion, developed an automatic diaphragm segmentation algorithm based on the U-Net architecture, designed an automatic parameter measurement scheme, and conducted a comparative error analysis against manual measurements. Overall, the proposed diaphragm ultrasound segmentation algorithm demonstrated high segmentation performance and efficiency.
The automatic measurement scheme based on this algorithm exhibited high accuracy, eliminating subjective influence and enhancing the automation of diaphragm ultrasound parameter assessment, thereby providing new possibilities for diaphragm evaluation.
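The thickening fraction whose error is reported above is conventionally computed as the relative change in diaphragm thickness between end-inspiration and end-expiration. A minimal sketch with hypothetical thickness values (cm):

```python
def thickening_fraction(t_insp: float, t_exp: float) -> float:
    """Diaphragm thickening fraction from B-mode thickness measurements:
    (end-inspiratory - end-expiratory) / end-expiratory thickness."""
    return (t_insp - t_exp) / t_exp

def relative_error(measured: float, reference: float) -> float:
    """Relative error of an automatic measurement vs. a manual reference."""
    return abs(measured - reference) / reference

# Hypothetical automatic vs. manual thickness readings (cm).
tf_auto = thickening_fraction(0.42, 0.30)
tf_manual = thickening_fraction(0.41, 0.30)
print(f"TF (auto) = {tf_auto:.1%}, relative error = {relative_error(tf_auto, tf_manual):.1%}")
```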

Accuracy and Time Efficiency of Artificial Intelligence-Driven Tooth Segmentation on CBCT Images: A Validation Study Using Two Implant Planning Software Programs.

Ntovas P, Sirirattanagool P, Asavanamuang P, Jain S, Tavelli L, Revilla-León M, Galarraga-Vinueza ME

pubmed logopapers · Jul 18 2025
To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region. Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools, and the segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference, and the virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies. The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in lower RMS deviation than both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed between all investigated segmentation methods, both for the overall tooth area and each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared to manual segmentation (p < 0.05). AI-driven segmentation can generate reliable virtual 3D tooth models, with accuracy comparable to that of manual segmentation performed by experienced clinicians, while also significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.
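The volumetric discrepancy reported above is summarized as an RMS deviation between the segmented model and the reference surface scan. A simplified sketch (true surface comparisons use closest-point distances after registration; the coordinates below are made up):

```python
import math

def rms_deviation(points_a, points_b):
    """RMS of point-to-point distances between two corresponding point
    sets. Simplification: real surface comparisons compute closest-point
    distances between registered meshes, not fixed correspondences."""
    assert len(points_a) == len(points_b)
    sq = [sum((ax - bx) ** 2 for ax, bx in zip(a, b))
          for a, b in zip(points_a, points_b)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical reference-scan vs. segmented-model points (mm).
ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
seg = [(0.1, 0, 0), (1, 0.1, 0), (0, 1, 0.1)]
print(f"RMS = {rms_deviation(ref, seg):.3f} mm")
```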

AI-Driven segmentation and morphogeometric profiling of epicardial adipose tissue in type 2 diabetes.

Feng F, Hasaballa AI, Long T, Sun X, Fernandez J, Carlhäll CJ, Zhao J

pubmed logopapers · Jul 18 2025
Epicardial adipose tissue (EAT) is associated with cardiometabolic risk in type 2 diabetes (T2D), but its spatial distribution and structural alterations remain understudied. We aim to develop a shape-aware, AI-based method for automated segmentation and morphogeometric analysis of EAT in T2D. A total of 90 participants (45 with T2D and 45 age- and sex-matched controls) underwent cardiac 3D Dixon MRI, enrolled between 2014 and 2018 as part of a sub-study of the Swedish SCAPIS cohort. We developed EAT-Seg, a multi-modal deep learning model incorporating signed distance maps (SDMs) for shape-aware segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the average symmetric surface distance (ASSD). Statistical shape analysis combined with partial least squares discriminant analysis (PLS-DA) was applied to point cloud representations of EAT to capture latent spatial variations between groups. Morphogeometric features, including volume, 3D local thickness map, elongation, and fragmentation index, were extracted and correlated with PLS-DA latent variables using Pearson correlation. Features with high correlation were identified as key differentiators and evaluated using a Random Forest classifier. EAT-Seg achieved a DSC of 0.881, an HD95 of 3.213 mm, and an ASSD of 0.602 mm. Statistical shape analysis revealed spatial distribution differences in EAT between T2D and control groups. Morphogeometric feature analysis identified volume and thickness gradient-related features as key discriminators (r > 0.8, P < 0.05). Random Forest classification achieved an AUC of 0.703. This AI-based framework enables accurate segmentation of structurally complex EAT and reveals key morphogeometric differences associated with T2D, supporting its potential as a biomarker for cardiometabolic risk assessment.
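Of the three segmentation metrics used above, the Dice similarity coefficient is the simplest to state: twice the overlap divided by the total foreground of both masks. A minimal sketch on toy binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given here as flat 0/1 lists: 2*|A∩B| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

# Toy prediction vs. ground truth: 2 overlapping voxels, 3 foreground each.
pred = [1, 1, 1, 0, 0, 0]
gt   = [1, 1, 0, 1, 0, 0]
print(f"DSC = {dice(pred, gt):.3f}")  # 2*2 / (3+3)
```

HD95 and ASSD, by contrast, are surface-distance metrics and need mesh or boundary extraction, which is why segmentation papers typically report all three together.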

Divide and Conquer: A Large-Scale Dataset and Model for Left-Right Breast MRI Segmentation

Maximilian Rokuss, Benjamin Hamm, Yannick Kirchhoff, Klaus Maier-Hein

arxiv logopreprint · Jul 18 2025
We introduce the first publicly available breast MRI dataset with explicit left and right breast segmentation labels, encompassing more than 13,000 annotated cases. Alongside this dataset, we provide a robust deep-learning model trained for left-right breast segmentation. This work addresses a critical gap in breast MRI analysis and offers a valuable resource for the development of advanced tools in women's health. The dataset and trained model are publicly available at: www.github.com/MIC-DKFZ/BreastDivider

Localized FNO for Spatiotemporal Hemodynamic Upsampling in Aneurysm MRI

Kyriakos Flouris, Moritz Halter, Yolanne Y. R. Lee, Samuel Castonguay, Luuk Jacobs, Pietro Dirix, Jonathan Nestmann, Sebastian Kozerke, Ender Konukoglu

arxiv logopreprint · Jul 18 2025
Hemodynamic analysis is essential for predicting aneurysm rupture and guiding treatment. While magnetic resonance flow imaging enables time-resolved volumetric blood velocity measurements, its low spatiotemporal resolution and signal-to-noise ratio limit its diagnostic utility. To address this, we propose the Localized Fourier Neural Operator (LoFNO), a novel 3D architecture that enhances both spatial and temporal resolution with the ability to predict wall shear stress (WSS) directly from clinical imaging data. LoFNO integrates Laplacian eigenvectors as geometric priors for improved structural awareness on irregular, unseen geometries and employs an Enhanced Deep Super-Resolution Network (EDSR) layer for robust upsampling. By combining geometric priors with neural operator frameworks, LoFNO de-noises and spatiotemporally upsamples flow data, achieving superior velocity and WSS predictions compared to interpolation and alternative deep learning methods, enabling more precise cerebrovascular diagnostics.
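The Laplacian-eigenvector geometric priors mentioned above are the low-frequency eigenmodes of a mesh's graph Laplacian. A minimal sketch on a hypothetical 4-node path graph standing in for a vessel-surface mesh (the real method operates on much larger aneurysm geometries):

```python
import numpy as np

# Adjacency of a hypothetical 4-node path graph (toy stand-in for a mesh).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                        # combinatorial graph Laplacian

# eigh returns eigenvalues in ascending order: the low-frequency
# (smooth) modes come first and encode the coarse geometry.
eigvals, eigvecs = np.linalg.eigh(L)
priors = eigvecs[:, 1:3]         # skip the constant mode; keep 2 modes
print(np.round(eigvals, 3))      # spectrum of the path graph
```

Concatenating a few such eigenvector channels with the input fields gives the network structural awareness that transfers to unseen, irregular geometries, which is the role the abstract assigns to these priors.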

Detecting Fifth Metatarsal Fractures on Radiographs through the Lens of Smartphones: A FIXUS AI Algorithm

Taseh, A., Shah, A., Eftekhari, M., Flaherty, A., Ebrahimi, A., Jones, S., Nukala, V., Nazarian, A., Waryasz, G., Ashkani-Esfahani, S.

medrxiv logopreprint · Jul 18 2025
Background: Fifth metatarsal (5MT) fractures are common but challenging to diagnose, particularly with limited expertise or subtle fractures. Deep learning shows promise but faces limitations due to image quality requirements. This study develops a deep learning model to detect 5MT fractures from smartphone-captured radiograph images, enhancing the accessibility of diagnostic tools. Methods: A retrospective study included patients aged >18 with 5MT fractures (n=1240) and controls (n=1224). Radiographs (AP, oblique, lateral) from Electronic Health Records (EHR) were obtained and photographed using a smartphone, creating a new dataset (SP). Models using ResNet152V2 were trained on the EHR, SP, and combined datasets, then evaluated on a separate smartphone test dataset (SP-test). Results: On validation, the SP model achieved optimal performance (AUROC: 0.99). On the SP-test dataset, the EHR model's performance decreased (AUROC: 0.83), whereas the SP and combined models maintained high performance (AUROC: 0.99). Conclusions: Smartphone-specific deep learning models effectively detect 5MT fractures, suggesting their practical utility in resource-limited settings.
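The AUROC values reported above can be computed without any ML library via the Mann-Whitney identity: the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counting half. A sketch with hypothetical model scores:

```python
def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U statistic: fraction of
    (positive, negative) pairs ranked correctly; ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for fracture vs. control radiographs.
fracture = [0.9, 0.8, 0.75, 0.6]
control  = [0.4, 0.85, 0.55, 0.1]
print(f"AUROC = {auroc(fracture, control):.4f}")
```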

Converting T1-weighted MRI from 3T to 7T quality using deep learning

Malo Gicquel, Ruoyi Zhao, Anika Wuestefeld, Nicola Spotorno, Olof Strandberg, Kalle Åström, Yu Xiao, Laura EM Wisse, Danielle van Westen, Rik Ossenkoppele, Niklas Mattsson-Carlgren, David Berron, Oskar Hansson, Gabrielle Flood, Jacob Vogel

arxiv logopreprint · Jul 18 2025
Ultra-high resolution 7 tesla (7T) magnetic resonance imaging (MRI) provides detailed anatomical views, offering better signal-to-noise ratio, resolution, and tissue contrast than 3T MRI, though at the cost of accessibility. We present an advanced deep learning model for synthesizing 7T brain MRI from 3T brain MRI. Paired 7T and 3T T1-weighted images were acquired from 172 participants (124 cognitively unimpaired, 48 impaired) from the Swedish BioFINDER-2 study. To synthesize 7T MRI from 3T images, we trained two models: a specialized U-Net, and a U-Net integrated with a generative adversarial network (GAN U-Net). Our models outperformed two additional state-of-the-art 3T-to-7T models in image-based evaluation metrics. Four blinded MRI professionals judged our synthetic 7T images as comparable in detail to real 7T images, and superior in subjective visual quality to real 7T images, apparently due to the reduction of artifacts. Importantly, automated segmentations of the amygdalae of synthetic GAN U-Net 7T images were more similar to manually segmented amygdalae (n=20) than automated segmentations from the 3T images that were used to synthesize the 7T images. Finally, synthetic 7T images showed similar performance to real 3T images in downstream prediction of cognitive status using MRI derivatives (n=3,168). In all, we show that synthetic T1-weighted brain images approaching 7T quality can be generated from 3T images, which may improve image quality and segmentation without compromising performance in downstream tasks. Future directions, possible clinical use cases, and limitations are discussed.
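Image-based evaluation of synthetic-vs-real MRI commonly includes metrics such as PSNR (the abstract does not list the specific metrics used, so this is an assumption). A minimal PSNR sketch on made-up pixel values:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two images, given here as
    flat pixel lists: 10*log10(max_val^2 / MSE). Higher is better;
    identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

# Hypothetical real-7T vs. synthetic-7T pixel intensities.
real      = [100, 120, 130, 140]
synthetic = [102, 118, 131, 139]
print(f"PSNR = {psnr(real, synthetic):.1f} dB")
```

Pixel-wise metrics like this are exactly why blinded reader studies are still needed: a denoised synthetic image can score well while losing clinically relevant detail, or vice versa.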

A clinically relevant morpho-molecular classification of lung neuroendocrine tumours

Sexton-Oates, A., Mathian, E., Candeli, N., Lim, Y., Voegele, C., Di Genova, A., Mange, L., Li, Z., van Weert, T., Hillen, L. M., Blazquez-Encinas, R., Gonzalez-Perez, A., Morrison, M. L., Lauricella, E., Mangiante, L., Bonheme, L., Moonen, L., Absenger, G., Altmuller, J., Degletagne, C., Brustugun, O. T., Cahais, V., Centonze, G., Chabrier, A., Cuenin, C., Damiola, F., de Montpreville, V. T., Deleuze, J.-F., Dingemans, A.-M. C., Fadel, E., Gadot, N., Ghantous, A., Graziano, P., Hofman, P., Hofman, V., Ibanez-Costa, A., Lacomme, S., Lopez-Bigas, N., Lund-Iversen, M., Milione, M., Muscarella, L

medrxiv logopreprint · Jul 18 2025
Lung neuroendocrine tumours (NETs, also known as carcinoids) are rapidly rising in incidence worldwide but have unknown aetiology and limited therapeutic options beyond surgery. We conducted multi-omic analyses on over 300 lung NETs including whole-genome sequencing (WGS), transcriptome profiling, methylation arrays, spatial RNA sequencing, and spatial proteomics. The integration of multi-omic data provides definitive proof of the existence of four strikingly different molecular groups that vary in patient characteristics, genomic and transcriptomic profiles, microenvironment, and morphology, as much as distinct diseases. Among these, we identify a new molecular group, enriched for highly aggressive supra-carcinoids, that displays an immune-rich microenvironment linked to tumour-macrophage crosstalk, and we uncover an undifferentiated cell population within supra-carcinoids, explaining their molecular and behavioural link to high-grade lung neuroendocrine carcinomas. Deep learning models accurately identified the Ca A1, Ca A2, and Ca B groups based on morphology alone, outperforming current histological criteria. The characteristic tumour microenvironment of supra-carcinoids and the validation of a panel of immunohistochemistry markers for the other three molecular groups demonstrate that these groups can be accurately identified based solely on morphological features, facilitating their implementation in the clinical setting. Our proposed morpho-molecular classification highlights group-specific therapeutic opportunities, including DLL3, FGFR, TERT, and BRAF inhibitors. Overall, our findings unify previously proposed molecular classifications and refine the lung cancer map by revealing novel tumour types and potential treatments, with significant implications for prognosis and treatment decision-making.

Deep learning reconstruction for improving image quality of pediatric abdomen MRI using a 3D T1 fast spoiled gradient echo acquisition.

Zucker EJ, Milshteyn E, Machado-Rivas FA, Tsai LL, Roberts NT, Guidon A, Gee MS, Victoria T

pubmed logopapers · Jul 18 2025
Deep learning (DL) reconstructions have shown utility for improving image quality of abdominal MRI in adult patients, but a paucity of literature exists in children. To compare image quality between three-dimensional fast spoiled gradient echo (SPGR) abdominal MRI acquisitions reconstructed conventionally and using a prototype method based on a commercial DL algorithm in a pediatric cohort. Pediatric patients (age < 18 years) who underwent abdominal MRI from 10/2023-3/2024, including gadolinium-enhanced accelerated 3D SPGR 2-point Dixon acquisitions (LAVA-Flex, GE HealthCare), were identified. Images were retrospectively generated using a prototype reconstruction method leveraging a commercial deep learning algorithm (AIR™ Recon DL, GE HealthCare) with the 75% noise reduction setting. For each case/reconstruction, three radiologists independently scored DL and non-DL image quality (overall and of selected structures) on a 5-point Likert scale (1 = nondiagnostic, 5 = excellent) and indicated reconstruction preference. The signal-to-noise ratio (SNR) and mean number of edges (an inverse correlate of image sharpness) were also quantified. Image quality metrics and preferences were compared using Wilcoxon signed-rank, Fisher exact, and paired t-tests. Interobserver agreement was evaluated with the Kendall rank correlation coefficient (W). The final cohort consisted of 38 patients (23 male) with a mean ± standard deviation age of 8.6 ± 5.7 years. Mean image quality scores for evaluated structures ranged from 3.8 ± 1.1 to 4.6 ± 0.6 in the DL group, compared to 3.1 ± 1.1 to 3.9 ± 0.6 in the non-DL group (all P < 0.001). All radiologists preferred DL in most cases (32-37/38, P < 0.001). There was a 2.3-fold increase in SNR and a 3.9% reduction in the mean number of edges in DL compared to non-DL images (both P < 0.001). In all scored anatomic structures except the spine and non-DL adrenals, interobserver agreement was moderate to substantial (W = 0.41-0.74, all P < 0.01).
In a broad spectrum of pediatric patients undergoing contrast-enhanced Dixon abdominal MRI acquisitions, the prototype deep learning reconstruction was generally preferred over conventional methods, with improved image quality across a wide range of structures.
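The two quantitative metrics compared above can be sketched simply: SNR as mean signal over the standard deviation of a background region, and edge count as the number of intensity jumps along an image row (the abstract treats a lower edge count as indicating less noise). The ROIs, threshold, and values below are hypothetical:

```python
import statistics

def snr(signal_roi, background_roi):
    """Simple SNR estimate: mean signal in a tissue ROI divided by the
    sample standard deviation of a background (air) ROI."""
    return statistics.mean(signal_roi) / statistics.stdev(background_roi)

def edge_count(row, threshold=10):
    """Count intensity jumps above a threshold along one image row;
    a crude proxy for the edge-map counts used in the study."""
    return sum(abs(b - a) > threshold for a, b in zip(row, row[1:]))

print(f"SNR = {snr([200, 210, 205], [5, 8, 6, 7]):.1f}")
print(f"edges = {edge_count([10, 12, 40, 41, 90, 91])}")
```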
