Page 442 of 7497481 results

Ozawa M, Sone M, Hijioka S, Hara H, Wakatsuki Y, Ishihara T, Hattori C, Hirano R, Ambo S, Esaki M, Kusumoto M, Matsui Y

PubMed · Jul 18, 2025
Detecting small pancreatic ductal adenocarcinomas (PDAC) is challenging because they are difficult to identify as distinct tumor masses. This study assesses the diagnostic performance of a three-dimensional convolutional neural network for the automatic detection of small PDAC using both automatic tumor mass detection and indirect indicator evaluation. High-resolution contrast-enhanced computed tomography (CT) scans from 181 patients diagnosed with PDAC (diameter ≤ 2 cm) between January 2018 and December 2023 were analyzed. The D/P ratio, defined as the ratio of the cross-sectional area of the main pancreatic duct (MPD) to that of the pancreatic parenchyma, was used as an indirect indicator. A total of 204 patient data sets, including 104 normal controls, were analyzed for automatic tumor mass detection and D/P ratio evaluation. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were evaluated for tumor mass detection. The software's sensitivity for PDAC detection was compared with that of radiologists, and tumor localization accuracy was validated against endoscopic ultrasonography (EUS) findings. The sensitivity, specificity, PPV, and NPV for tumor mass detection were 77.0%, 76.0%, 75.5%, and 77.5%, respectively; for D/P ratio detection, 87.0%, 94.2%, 93.5%, and 88.3%, respectively; and for combined tumor mass and D/P ratio detection, 96.0%, 70.2%, 75.6%, and 94.8%, respectively. No significant difference was observed between the sensitivity of the software and that of the radiologists' reports (software, 96.0%; radiologists, 96.0%; p = 1). The concordance rate between software findings and EUS was 96.0%. Combining indirect indicator evaluation with tumor mass detection may improve small PDAC detection accuracy.
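The sensitivity, specificity, PPV, and NPV reported above all derive from a standard 2×2 confusion matrix; a minimal sketch of that calculation (the counts below are illustrative, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 confusion-matrix metrics used to report detection performance."""
    sensitivity = tp / (tp + fn)  # true-positive rate among diseased cases
    specificity = tn / (tn + fp)  # true-negative rate among controls
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts only: 100 hypothetical PDAC cases and 104 controls
sens, spec, ppv, npv = diagnostic_metrics(tp=77, fp=25, fn=23, tn=79)
```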

Asari Y, Yasaka K, Kurashima J, Katayama A, Kurokawa M, Abe O

PubMed · Jul 18, 2025
This study aimed to evaluate and compare the diagnostic performance and image quality of deep learning reconstruction (DLR), hybrid iterative reconstruction (Hybrid IR), and filtered back projection (FBP) in contrast-enhanced CT venography for deep vein thrombosis (DVT). A retrospective analysis was conducted on 51 patients who underwent lower limb CT venography, including 20 with DVT lesions and 31 without. CT images were reconstructed using DLR, Hybrid IR, and FBP. Quantitative image quality metrics, such as contrast-to-noise ratio (CNR) and image noise, were measured. Three radiologists independently assessed DVT lesion detection, depiction of DVT lesions and normal structures, subjective image noise, artifacts, and overall image quality using scoring systems. Diagnostic performance was evaluated using sensitivity and area under the receiver operating characteristic curve (AUC). The paired t-test and Wilcoxon signed-rank test compared results for continuous variables and ordinal scales, respectively, between DLR and Hybrid IR as well as between DLR and FBP. DLR significantly improved CNR and reduced image noise compared with Hybrid IR and FBP (p < 0.001). AUC and sensitivity for DVT detection did not differ significantly across reconstruction methods. Two readers reported improved lesion visualization with DLR. DLR was also rated superior in image quality, normal structure depiction, and noise suppression by all readers (p < 0.001). DLR enhances image quality and anatomical clarity in CT venography. These findings support the utility of DLR in improving diagnostic confidence and image interpretability in DVT assessment.
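CNR as reported in studies like this one is conventionally the signal difference between two regions of interest divided by the image noise SD; a minimal sketch with hypothetical Hounsfield-unit values (not taken from this study):

```python
def contrast_to_noise_ratio(roi_mean, background_mean, noise_sd):
    """CNR as commonly defined for CT: ROI signal difference over image noise
    (the SD measured in a homogeneous region)."""
    return (roi_mean - background_mean) / noise_sd

# Hypothetical HU values: enhanced vein vs. adjacent muscle; DLR lowers noise SD
cnr_dlr = contrast_to_noise_ratio(roi_mean=180.0, background_mean=60.0, noise_sd=8.0)
cnr_fbp = contrast_to_noise_ratio(roi_mean=180.0, background_mean=60.0, noise_sd=20.0)
```

The same signal difference yields a higher CNR purely through noise reduction, which is the mechanism by which DLR improves this metric.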

Liu Q, Zheng Z, Zhang Y, Wu A, Lou J, Chen X, Yuan Y, Xie M, Zhang L, Sun P, Sun W, Lv Q

PubMed · Jul 18, 2025
Previous studies have confirmed that fully automated three-dimensional echocardiography (3DE) right ventricular (RV) quantification software can accurately assess adult RV function. However, data on its accuracy in children are scarce. This study aimed to test the accuracy of the software in children using cardiac magnetic resonance (MR) as the gold standard. This study prospectively enrolled 82 children who underwent both echocardiography and cardiac MR within 24 h. The RV end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were obtained using the novel 3DE-RV quantification software and compared with cardiac MR values across different groups. The novel 3DE-RV quantification software was feasible in all 82 children (100%). Fully automated analysis was achieved in 35% of patients, with an analysis time of 8 ± 2 s and 100% reproducibility. Manual editing was necessary in the remaining 65% of patients. The 3DE-derived RV volumes and EF correlated well with cardiac MR measurements (RVEDV, r=0.93; RVESV, r=0.90; RVEF, r=0.82; all P <0.001). Although the automated approach slightly underestimated RV volumes and overestimated RVEF compared with cardiac MR in the entire cohort, the bias was smaller in children with RVEF ≥ 45%, normal RV size, and good 3DE image quality. Fully automated 3DE-RV quantification software provided accurate and completely reproducible results in 35% of children without any adjustment. The RV volumes and EF measured using the automated 3DE method correlated well with those from cardiac MR, especially in children with RVEF ≥ 45%, normal RV size, and good 3DE image quality. Therefore, the novel automated 3DE method may achieve rapid and accurate assessment of RV function in children with normal heart anatomy.
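The volume agreement above is summarized with Pearson's r; a dependency-free sketch, using hypothetical paired RVEDV measurements rather than the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient, as used to compare 3DE and cardiac MR volumes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired RVEDV measurements (mL): 3DE vs. cardiac MR
edv_3de = [60.0, 75.0, 90.0, 110.0, 130.0]
edv_mr = [65.0, 80.0, 98.0, 115.0, 140.0]
r = pearson_r(edv_3de, edv_mr)
```

Note that a high r captures linear agreement only; the systematic under- or overestimation described above would additionally show up as bias in a Bland-Altman analysis.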

Shepherd TM, Dogra S

PubMed · Jul 18, 2025
The prevalence of Alzheimer's disease and other dementias is increasing as populations live longer. Imaging is becoming a key component of the workup for patients with cognitive impairment or dementia. Integrated PET-MRI provides a unique opportunity for same-session multimodal characterization, with many practical benefits to patients, referring physicians, radiologists, and researchers. The impact of integrated PET-MRI on clinical practice for early adopters of this technology can be profound. Classic imaging findings with integrated PET-MRI are illustrated for common neurodegenerative diseases or clinical-radiological syndromes. This review summarizes recent technical innovations that are being introduced into PET-MRI clinical practice and research for neurodegenerative disease. Recent MRI-based attenuation correction now performs comparably to PET-CT (e.g., whole-brain bias < 0.5%), so early concerns about accurate PET tracer quantification with integrated PET-MRI appear resolved. Head motion is common in this patient population. MRI- and PET data-driven motion correction appear ready for routine use and should substantially improve PET-MRI image quality. PET-MRI by definition eliminates the ~50% of radiation that comes from CT. Multiple hardware and software techniques for improving image quality with lower counts are reviewed (including motion correction). These methods can lower radiation to patients (and staff), increase scanner throughput, and generate better temporal resolution for dynamic PET. Deep learning has been broadly applied to PET-MRI. Deep learning analysis of PET and MRI data may provide accurate classification of different stages of Alzheimer's disease or predict progression to dementia.
Over the past 5 years, clinical imaging of neurodegenerative disease has changed due to imaging research and the introduction of anti-amyloid immunotherapy. Integrated PET-MRI is well suited for imaging these patients, and its use appears poised for rapid growth outside academic medical centers. Evidence level: 5. Technical efficacy: Stage 3.

Zucker EJ, Milshteyn E, Machado-Rivas FA, Tsai LL, Roberts NT, Guidon A, Gee MS, Victoria T

PubMed · Jul 18, 2025
Deep learning (DL) reconstructions have shown utility for improving image quality of abdominal MRI in adult patients, but a paucity of literature exists in children. To compare image quality between three-dimensional fast spoiled gradient echo (SPGR) abdominal MRI acquisitions reconstructed conventionally and using a prototype method based on a commercial DL algorithm in a pediatric cohort. Pediatric patients (age < 18 years) who underwent abdominal MRI from 10/2023-3/2024 including gadolinium-enhanced accelerated 3D SPGR 2-point Dixon acquisitions (LAVA-Flex, GE HealthCare) were identified. Images were retrospectively generated using a prototype reconstruction method leveraging a commercial deep learning algorithm (AIR™ Recon DL, GE HealthCare) with the 75% noise reduction setting. For each case/reconstruction, three radiologists independently scored DL and non-DL image quality (overall and of selected structures) on a 5-point Likert scale (1-nondiagnostic, 5-excellent) and indicated reconstruction preference. The signal-to-noise ratio (SNR) and mean number of edges (an inverse correlate of image sharpness) were also quantified. Image quality metrics and preferences were compared using Wilcoxon signed-rank, Fisher exact, and paired t-tests. Interobserver agreement was evaluated with the Kendall rank correlation coefficient (W). The final cohort consisted of 38 patients with mean ± standard deviation age of 8.6 ± 5.7 years, 23 males. Mean image quality scores for evaluated structures ranged from 3.8 ± 1.1 to 4.6 ± 0.6 in the DL group, compared to 3.1 ± 1.1 to 3.9 ± 0.6 in the non-DL group (all P < 0.001). All radiologists preferred DL in most cases (32-37/38, P < 0.001). There was a 2.3-fold increase in SNR and a 3.9% reduction in the mean number of edges in DL compared with non-DL images (both P < 0.001). In all scored anatomic structures except the spine and non-DL adrenals, interobserver agreement was moderate to substantial (W = 0.41-0.74, all P < 0.01).
In a broad spectrum of pediatric patients undergoing contrast-enhanced Dixon abdominal MRI acquisitions, the prototype deep learning reconstruction is generally preferred to conventional methods with improved image quality across a wide range of structures.
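The SNR figures above follow the usual mean-signal-over-noise-SD definition; a small sketch with hypothetical ROI values showing how a noise reduction alone produces a fold increase of the magnitude reported:

```python
def snr(signal_mean, noise_sd):
    """Signal-to-noise ratio as mean ROI signal over noise SD."""
    return signal_mean / noise_sd

# Hypothetical ROI values: identical signal, DL reconstruction reduces the noise SD
snr_non_dl = snr(signal_mean=400.0, noise_sd=40.0)
snr_dl = snr(signal_mean=400.0, noise_sd=17.4)
fold_increase = snr_dl / snr_non_dl
```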

Mounika G, Kollem S, Samala S

PubMed · Jul 18, 2025
Magnetic resonance imaging (MRI) is a non-invasive method widely used to evaluate abnormal tissues, especially in the brain. While many studies have examined brain tumor classification using MRI, a comprehensive scientometric analysis remains limited. This study investigated MRI-based brain tumor classification from 2015 to 2024 using scientometric approaches. A total of 348 peer-reviewed articles were extracted from the Scopus database. Tools such as CiteSpace and VOSviewer were employed to analyze key metrics, including citation frequency, author collaboration, and publication trends. The analysis revealed the top authors, most-cited journals, and international collaborations. Co-occurrence networks identified the leading research topics, and bibliometric coupling revealed knowledge advancements in the domain. Deep learning methods are increasingly used in brain tumor classification research. This study outlines current trends, uncovers research gaps, and suggests future directions for researchers in the domain of MRI-based brain tumor classification.

Liu EH, Carrion D, Badawy MK

PubMed · Jul 18, 2025
Chest X-rays (CXR) are among the most frequently performed X-ray examinations. They often require repeat imaging due to inadequate quality, leading to increased radiation exposure and delays in patient care and diagnosis. This research assesses the efficacy of DenseNet121 and YOLOv8 neural networks in detecting suboptimal CXRs, which may minimise delays and enhance patient outcomes. The study included 3587 patients with a median age of 67 (range 0-102). It utilised an initial dataset comprising 10,000 CXRs randomly divided into a training subset (4000 optimal and 4000 suboptimal) and a validation subset (400 optimal and 400 suboptimal). The test subset (25 optimal and 25 suboptimal) was curated from the remaining images to provide adequate variation. DenseNet121 and YOLOv8 were chosen for their capabilities in image classification: DenseNet121 is a robust, well-tested model with high accuracy in medical image recognition, while YOLOv8 is a cutting-edge commercial model targeted at a broad range of industries. Their performance was assessed via the area under the receiver operating characteristic curve (AUROC) and compared with radiologist classification using the chi-squared test. DenseNet121 attained an AUROC of 0.97, while YOLOv8 recorded 0.95, indicating a strong capability in differentiating between optimal and suboptimal CXRs. Agreement between the radiologists and the models varied, partly due to the lack of clinical indications; however, the difference in performance was not statistically significant. Both AI models effectively classified chest X-ray quality, demonstrating the potential for providing radiographers with feedback to improve image quality. Notably, this was the first study to include both PA and lateral CXRs as well as paediatric cases, and the first to evaluate YOLOv8 for this application.
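The AUROC used to compare the two networks can be computed directly from the rank (Mann-Whitney U) formulation, without any library; a minimal sketch with hypothetical classifier scores:

```python
def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney formulation: the probability that a randomly
    chosen positive (here, suboptimal CXR) scores higher than a randomly chosen
    negative, counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for suboptimal (positive) vs. optimal (negative) CXRs
auc = auroc([0.9, 0.8, 0.75, 0.6], [0.7, 0.4, 0.3, 0.2])
```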

Zhang D, Shen M, Zhang L, He X, Huang X

PubMed · Jul 18, 2025
This study sought to develop a radiomics model capable of predicting axillary lymph node metastasis (ALNM) in patients with invasive breast cancer (IBC) based on dual-sequence magnetic resonance imaging (MRI) comprising diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) data. The interpretability of the resulting model was probed with the SHAP (Shapley Additive Explanations) method. Established inclusion/exclusion criteria were used to retrospectively compile MRI and matching clinical data from 183 patients with pathologically confirmed IBC evaluated at our hospital between June 2021 and December 2023. All of these patients had undergone plain and enhanced MRI scans prior to treatment. Patients were separated according to their pathological biopsy results into those with ALNM (n = 107) and those without ALNM (n = 76), and then randomized into training (n = 128) and testing (n = 55) cohorts at a 7:3 ratio. Optimal radiomics features were selected from the extracted data. The random forest method was used to establish three predictive models (DWI, DCE, and combined DWI + DCE sequence models). Area under the curve (AUC) values for receiver operating characteristic (ROC) curves were utilized to assess model performance. The DeLong test was utilized to compare model predictive efficacy. Model discrimination was assessed based on the integrated discrimination improvement (IDI) method. Decision curves revealed the net clinical benefit of each of these models. The SHAP method was used to provide model interpretability. Clinicopathological characteristics (age, menopausal status, molecular subtypes, and estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2, and Ki-67 status) were comparable between the ALNM and non-ALNM groups as well as between the training and testing cohorts (P > 0.05).
AUC values for the DWI, DCE, and combined models in the training cohort were 0.793, 0.774, and 0.864, respectively, with corresponding values of 0.728, 0.760, and 0.859 in the testing cohort. The predictive efficacy of the DWI and combined models was found to differ significantly according to the DeLong test, as did the predictive efficacy of the DCE and combined models in the training groups (P < 0.05), while no other significant differences were noted in model performance (P > 0.05). IDI results indicated that the combined model offered predictive power levels that were 13.5% (P < 0.05) and 10.2% (P < 0.05) higher than those for the respective DWI and DCE models. In a decision curve analysis, the combined model offered a net clinical benefit over the DCE model. The combined dual-sequence MRI-based radiomics model constructed herein and the supporting interpretability analyses can aid in the prediction of the ALNM status of IBC patients, helping to guide clinical decision-making in these cases.
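The IDI quoted above measures how much a new model widens the gap in mean predicted risk between events and non-events relative to an old model; a sketch with hypothetical predicted ALNM probabilities (not the study's data):

```python
def idi(p_new_events, p_new_nonevents, p_old_events, p_old_nonevents):
    """Integrated discrimination improvement: the gain in mean predicted-risk
    separation (events vs. non-events) of the new model over the old one."""
    mean = lambda xs: sum(xs) / len(xs)
    sep_new = mean(p_new_events) - mean(p_new_nonevents)
    sep_old = mean(p_old_events) - mean(p_old_nonevents)
    return sep_new - sep_old

# Hypothetical predicted probabilities: combined (new) vs. single-sequence (old) model
delta = idi(p_new_events=[0.8, 0.7, 0.9], p_new_nonevents=[0.2, 0.3, 0.1],
            p_old_events=[0.6, 0.65, 0.7], p_old_nonevents=[0.35, 0.4, 0.3])
```

A positive IDI, as reported for the combined model, means the new model assigns systematically higher risk to true events and lower risk to non-events.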

Ntovas P, Sirirattanagool P, Asavanamuang P, Jain S, Tavelli L, Revilla-León M, Galarraga-Vinueza ME

PubMed · Jul 18, 2025
To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region. Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools. The segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference. The virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies. The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in lower RMS deviation than both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed between all investigated segmentation methods, both for the overall tooth area and each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared with manual segmentation (p < 0.05). AI-driven segmentation can generate reliable virtual 3D tooth models, with accuracy comparable to that of manual segmentation performed by experienced clinicians, while also significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.
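The RMS deviation used above is the root-mean-square of per-point distances between the segmented model and the reference scan after superimposition; a minimal sketch with hypothetical point-to-surface distances:

```python
from math import sqrt

def rms_deviation(distances_mm):
    """Root-mean-square of per-point surface deviations, as used to compare
    segmented tooth models against the reference intraoral scan."""
    return sqrt(sum(d * d for d in distances_mm) / len(distances_mm))

# Hypothetical point-to-surface distances (mm) after superimposition
rms = rms_deviation([0.1, 0.2, 0.3, 0.4, 0.3, 0.2])
```

Because each distance is squared, the RMS weights larger local deviations (e.g., at the root apex) more heavily than the plain mean would.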

Feng F, Hasaballa AI, Long T, Sun X, Fernandez J, Carlhäll CJ, Zhao J

PubMed · Jul 18, 2025
Epicardial adipose tissue (EAT) is associated with cardiometabolic risk in type 2 diabetes (T2D), but its spatial distribution and structural alterations remain understudied. We aim to develop a shape-aware, AI-based method for automated segmentation and morphogeometric analysis of EAT in T2D. A total of 90 participants (45 with T2D and 45 age- and sex-matched controls) underwent cardiac 3D Dixon MRI, enrolled between 2014 and 2018 as part of a sub-study of the Swedish SCAPIS cohort. We developed EAT-Seg, a multi-modal deep learning model incorporating signed distance maps (SDMs) for shape-aware segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the average symmetric surface distance (ASSD). Statistical shape analysis combined with partial least squares discriminant analysis (PLS-DA) was applied to point cloud representations of EAT to capture latent spatial variations between groups. Morphogeometric features, including volume, 3D local thickness map, elongation, and fragmentation index, were extracted and correlated with PLS-DA latent variables using Pearson correlation. Features with high correlation were identified as key differentiators and evaluated using a Random Forest classifier. EAT-Seg achieved a DSC of 0.881, a HD95 of 3.213 mm, and an ASSD of 0.602 mm. Statistical shape analysis revealed spatial distribution differences in EAT between the T2D and control groups. Morphogeometric feature analysis identified volume and thickness gradient-related features as key discriminators (r > 0.8, P < 0.05). Random Forest classification achieved an AUC of 0.703. This AI-based framework enables accurate segmentation of structurally complex EAT and reveals key morphogeometric differences associated with T2D, supporting its potential as a biomarker for cardiometabolic risk assessment.
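The Dice similarity coefficient reported for EAT-Seg is the standard overlap measure for binary segmentation masks; a minimal sketch on toy masks (flattened voxel lists, not the study's data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: twice the overlap
    divided by the total number of foreground voxels in both masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Tiny illustrative masks (1 = EAT voxel): predicted vs. ground-truth segmentation
pred = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
d = dice(pred, truth)
```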
