Yim S, Park S, Lim KY, Kang H, Shin D, Jang H, Weiner M, Zetterberg H, Blennow K, Gonzalez-Ortiz F, Ashton NJ, Kang SH, Yun J, Chun M, Kim E, Kim H, Na DL, Kim JP, Seo SW, Kwak K

PubMed · Oct 28 2025
Early detection of amyloid-β (Aβ) pathology is critical for timely intervention in Alzheimer's disease (AD). While Aβ positron emission tomography (PET) and cerebrospinal fluid (CSF) biomarkers are accurate, their high cost and limited accessibility hinder routine use. We developed a computed tomography (CT)-based, two-stage workflow combining CT-derived atrophy patterns with plasma phosphorylated tau 217 (p-Tau217) to predict Aβ PET positivity. In this cohort of 616 participants (521 with mild cognitive impairment (MCI), 95 with early dementia of Alzheimer's type (DAT); age 60-93 years), CT, p-Tau217 assays, and Aβ PET were performed. A random forest model incorporating CT-derived regional W-scores and apolipoprotein E ε4 (APOE ε4) status stratified individuals into low-, intermediate-, and high-risk groups. p-Tau217 testing was reserved for the intermediate-risk group. At a 95% sensitivity/specificity threshold, CT-based stratification yielded a low-risk negative predictive value (NPV) of 95.8% (93.0-98.6%) and a high-risk positive predictive value (PPV) of 98.4% (96.8-100.0%), with 28.2% classified as intermediate-risk. Targeted plasma testing of the intermediate-risk group improved the overall PPV to 92.8% (88.5-97.1%) and the overall NPV to 88.9% (78.6-99.2%), achieving an overall accuracy of 95.8% (94.2-97.4%). The CT-based workflow's accuracy was non-inferior to our MRI-based method (area under the curve 0.96 vs. 0.95; p = 0.14). This CT-based, two-stage approach is a cost-effective, scalable alternative to MRI-based strategies, leveraging routine CT and selective p-Tau217 testing to enhance early AD detection and optimize resource utilization in clinical practice.
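A minimal sketch of the two-stage logic described above, assuming a random forest over CT-derived W-scores plus APOE ε4 status whose predicted probabilities are cut into low/intermediate/high risk, with plasma p-Tau217 consulted only for the intermediate group. All thresholds, feature dimensions, and data below are hypothetical placeholders, not the authors' values.

```python
# Two-stage Aβ-positivity sketch: RF risk stratification, then targeted
# plasma p-Tau217 testing for the intermediate-risk group only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(616, 11))        # 10 regional W-scores + APOE ε4 flag (toy data)
y = rng.integers(0, 2, size=616)      # Aβ PET positivity labels (toy data)
ptau217 = rng.normal(size=616)        # plasma p-Tau217 values (toy data)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
p_pos = rf.predict_proba(X)[:, 1]

LOW_CUT, HIGH_CUT = 0.10, 0.90        # hypothetical cutoffs tuned for 95% sens/spec
PTAU_CUT = 0.5                        # hypothetical plasma threshold

pred = np.empty(len(y), dtype=int)
pred[p_pos <= LOW_CUT] = 0            # low risk -> Aβ negative, no blood test
pred[p_pos >= HIGH_CUT] = 1           # high risk -> Aβ positive, no blood test
mid = (p_pos > LOW_CUT) & (p_pos < HIGH_CUT)
pred[mid] = ptau217[mid] > PTAU_CUT   # intermediate risk -> defer to p-Tau217

print(f"intermediate-risk fraction: {mid.mean():.1%}")
```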

Yasaka K, Wada M, Tokuyama K, Takaishi T, Abe O

PubMed · Oct 28 2025
CT imaging is useful for evaluating complications related to central venous catheter insertion; however, it involves exposure to ionizing radiation. This study aimed to evaluate the impact of super-resolution deep learning reconstruction (SR-DLR) on the quality of low-dose CT imaging compared to hybrid iterative reconstruction (HIR) in patients with a central venous catheter or central venous port. In this retrospective study, chest CT images were reconstructed using SR-DLR and HIR from source data acquired with a three-dimensional landmark scan during scout imaging. Three readers independently assessed image quality in terms of noise, artifacts, depiction of the brachiocephalic vein and superior vena cava, and ease of evaluating hematoma and catheter location. The standard deviation of CT attenuation within a region of interest placed in the right atrium was recorded as quantitative image noise. All readers rated noise, artifacts, and depiction of the brachiocephalic vein as significantly improved in SR-DLR compared to HIR (p ≤ 0.042). Two of three readers rated depiction of the superior vena cava, ease of evaluating hematoma, and catheter location as significantly improved in SR-DLR compared to HIR (p ≤ 0.016). Quantitative image noise in SR-DLR was 10.2 Hounsfield units (HU), significantly reduced compared to HIR (24.0 HU) (p < 0.001). When sufficient information can be obtained from these images, the main CT scans may be omitted. In conclusion, SR-DLR enhanced the quality of low-dose CT imaging compared to HIR in patients with a central venous catheter or central venous port.
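The quantitative noise metric in this study is straightforward to reproduce in principle: the standard deviation of CT attenuation (in HU) within a region of interest. A minimal sketch on toy images, with an arbitrary circular ROI standing in for the right atrium placement:

```python
# ROI noise = std. dev. of HU values inside a circular region of interest.
import numpy as np

def roi_noise(ct_slice_hu: np.ndarray, center: tuple, radius: int) -> float:
    """Standard deviation of HU values inside a circular ROI."""
    yy, xx = np.ogrid[:ct_slice_hu.shape[0], :ct_slice_hu.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return float(ct_slice_hu[mask].std())

rng = np.random.default_rng(0)
hir_slice = 40 + 24.0 * rng.standard_normal((512, 512))    # toy HIR image, ~24 HU noise
srdlr_slice = 40 + 10.2 * rng.standard_normal((512, 512))  # toy SR-DLR image, ~10 HU noise
print(roi_noise(hir_slice, (256, 256), 20), roi_noise(srdlr_slice, (256, 256), 20))
```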

Zahoora U, Shahid AR, Gondal FF

PubMed · Oct 28 2025
Brain tumor segmentation is difficult because of a number of technical problems, including complex morphology, individual anatomical differences, irregular shapes, overlapping structures, homogeneous gray-matter and white-matter intensity values, abnormalities that may not contrast with normal tissue, and the possibility of additional complications across modalities. Expert radiologists may reach different conclusions as a result of these difficulties. Deep learning techniques, particularly CNN models, can be trained to handle these MRI artifacts and automatically extract features that the human eye cannot detect, such as variations in shape, texture, and color. Deep learning models can effectively learn features across various modalities, but they are data-hungry techniques whose performance improves with additional annotated data. Yet data privacy is the main barrier to data centralization in practice. To deal with these challenges, we propose a federated learning approach, which enables decentralized learning of a shared model without sharing data. However, the traditional paradigm introduced in the literature involves institutional biases that impact distributed learning. The proposed Fed_WCE_BTD (Federated Learning with Weak Client Elimination for Brain Tumor Detection) combines a modified UNet architecture with federated learning. In addition, the proposed method uses an optimal adaptive client selection strategy, carefully choosing each client based on its unique strengths. The approach is validated on the BraTS 2021 dataset, taking brain tumor slicing into account. The goal of this research is to outperform, or perform on par with, non-federated learning. The proposed model outperforms the alternatives by 1% for detection of enhancing tumor and necrosis. The efficiency of the proposed federated learning was demonstrated by a significantly higher Dice coefficient for enhancing tumor (p < 0.05) compared to non-federated learning. However, the edema Dice coefficient is 80%, similar to the baseline.
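A minimal sketch of the federated pattern the abstract describes, assuming FedAvg-style size-weighted aggregation and a simple validation-Dice cutoff standing in for the paper's weak-client elimination criterion; the flat weight vectors and all scores are toy values.

```python
# FedAvg with weak-client elimination: exclude clients below a Dice cutoff,
# then size-weight average the remaining local model updates.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Size-weighted average of client weight vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

def select_strong_clients(dice_scores, cutoff=0.75):
    """Indices of clients whose validation Dice clears the cutoff (assumed rule)."""
    return [i for i, d in enumerate(dice_scores) if d >= cutoff]

rng = np.random.default_rng(0)
weights = [rng.normal(size=128) for _ in range(5)]  # toy local model updates
sizes = [120, 300, 80, 210, 150]                    # local dataset sizes
dice = [0.81, 0.86, 0.62, 0.79, 0.88]               # toy per-client validation Dice

keep = select_strong_clients(dice)                  # client 2 is eliminated
global_w = fed_avg([weights[i] for i in keep], [sizes[i] for i in keep])
print("aggregating clients:", keep)
```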

Yang Q, Jiang J, Dong X, Yang H, Wang Q, Yang Z, Yang D, Liu P

PubMed · Oct 28 2025
The free-text format is widely used in radiology reports for its flexibility of expression; however, its unstructured nature leads to substantial amounts of report data remaining underutilized. A natural language processing (NLP) model for automatic extraction of information from free-text radiology reports can significantly contribute to the development of structured databases, thereby optimizing data utilization. This study aimed to perform a systematic review and meta-analysis that evaluates the performance of NLP systems in extracting information from free-text radiology reports. A systematic literature search was conducted from November 21 to 23, 2024, in PubMed/MEDLINE, Embase, EBSCO, Ovid, Web of Science, and the Cochrane Library. Study quality was assessed using the QUADAS-2 tool. A bivariate random-effects model was applied to obtain the pooled sensitivity, specificity, diagnostic odds ratio (DOR), positive likelihood ratio (PLR), negative likelihood ratio (NLR), and area under the summary receiver operating characteristic curve (AUC). Subgroup analyses (e.g., NLP model types, dataset source, and language types) and a random-effects multivariable meta-regression based on the restricted maximum likelihood (REML) method were conducted to explore potential sources of heterogeneity. Sensitivity analyses (excluding high-risk studies, leave-one-out method, and data integration strategy comparison) were performed to assess the robustness of the findings. A total of 28 studies were included in the final analysis, with 421,692 extracted entities in 51,187 free-text radiology reports. NLP systems achieved high pooled sensitivity (91% [95% CI: 87, 93]) and specificity (96% [95% CI: 93, 97]), with a diagnostic odds ratio of 220 (95% CI: 112, 435) and an area under the curve of 0.98 (95% CI: 0.96, 0.99). Subgroup analysis revealed significantly better performance for extracting single anatomical sites (AUC 0.99; 95% CI: 0.97, 0.99) compared with multiple sites (AUC 0.95; 95% CI: 0.93, 0.97; p = 0.001). No significant differences were observed across NLP model types, dataset sources, external validations, languages, or imaging modalities. Multivariable meta-regression further identified anatomical site as the only significant contributor to heterogeneity (coefficient = 2.26; 95% CI: 0.25, 4.27; p = 0.027). Sensitivity analyses confirmed the robustness of the findings, and no evidence of publication bias was detected. NLP models demonstrated excellent performance in extracting information from free-text radiology reports. However, the observed heterogeneity highlights the need for enhanced report standardization and improved model generalizability.
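For intuition, pooled sensitivity and specificity can be approximated by inverse-variance averaging on the logit scale, as sketched below. This univariate simplification is not the bivariate random-effects REML model the review actually fitted, and the per-study 2x2 counts are toy numbers.

```python
# Inverse-variance pooling of per-study proportions on the logit scale,
# a simplified stand-in for the bivariate random-effects model.
import numpy as np

def pool_logit(events, nonevents):
    """Pooled proportion events/(events+nonevents) via logit-scale weighting."""
    events, nonevents = np.asarray(events, float), np.asarray(nonevents, float)
    p = events / (events + nonevents)
    logit = np.log(p / (1 - p))
    var = 1 / events + 1 / nonevents       # approximate variance of each logit
    pooled = np.sum(logit / var) / np.sum(1 / var)
    return 1 / (1 + np.exp(-pooled))

sens = pool_logit([90, 140, 75], [8, 15, 9])     # per-study TP, FN (toy)
spec = pool_logit([200, 310, 180], [9, 12, 7])   # per-study TN, FP (toy)
dor = (sens / (1 - sens)) * (spec / (1 - spec))  # diagnostic odds ratio
print(f"pooled sensitivity {sens:.2f}, specificity {spec:.2f}, DOR {dor:.0f}")
```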

Xiong H, Lu Y, Qiu J, Wu T, Liu H, Fei Z, Fan C, Zhang P

PubMed · Oct 28 2025
Deep learning techniques in image processing are gaining widespread application, with growing research in medical image analysis and diagnosis driven by advancements in image recognition technology. This study addresses the challenges of lung nodule recognition and classification using convolutional neural networks (CNNs) and proposes a novel multiscale convolutional neural network (MCNN) model. The MCNN model integrates Gaussian Pyramid Decomposition (GPD) to enhance CNN-based image recognition for lung nodule detection. A practical study was conducted to apply the MCNN model, and its performance was compared with various algorithmic models and classifiers. Experimental results show that the MCNN model outperforms traditional CNN methods, particularly in detecting solid nodules and pure ground-glass nodules, with an improvement in F1 score of over 2.0%. Furthermore, the MCNN model demonstrated superior overall accuracy in lung nodule detection. These findings underline the practical implications of deep learning in advancing medical image analysis and diagnosis, offering new possibilities for improving the prognosis of lung nodule-related diseases.
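A minimal sketch of the multiscale front end, assuming Gaussian Pyramid Decomposition yields progressively downsampled views of a nodule patch that could each feed a separate CNN branch; the three-level depth and patch size are assumptions, and the network itself is omitted.

```python
# Gaussian pyramid: each level is a blurred, 2x-downsampled view of the last.
import numpy as np
import cv2

def gaussian_pyramid(patch: np.ndarray, levels: int = 3):
    """Return [patch, pyrDown(patch), pyrDown^2(patch), ...]."""
    pyramid = [patch]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))  # Gaussian blur + 2x downsample
    return pyramid

patch = np.random.rand(64, 64).astype(np.float32)  # toy lung-nodule patch
for level, img in enumerate(gaussian_pyramid(patch)):
    print(f"level {level}: {img.shape}")           # 64x64, 32x32, 16x16
```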

Shalalvand M, Haghanifar S, Moudi E, Bijani A

PubMed · Oct 28 2025
Accurate volumetric assessment of intraosseous lesions is crucial in various fields, including bone defect evaluation, surgical outcome prediction, treatment monitoring, and 3D model design. High volumetric accuracy is essential for CBCT in digital dentistry applications; however, there is a notable lack of studies investigating the accuracy of volume determination using CBCT. In this study, we examined the factors affecting CBCT volumetric accuracy, namely voxel size, lesion location, and segmentation technique, to improve diagnostic protocols and optimize the clinical applications of this imaging modality. Twenty-eight artificial bone defects were created in a dry rabbit mandible in two regions (anterior and posterior). CBCT imaging was performed with standardized positioning at two voxel sizes (0.1 and 0.2 mm), with micro-CT imaging regarded as the gold standard. Images were analyzed in DICOM format using ImageJ after preprocessing, and semi-automatic segmentation was conducted via Otsu thresholding with a manually defined external defect border. In Avizo, a ResNet18-encoded U-Net architecture (Avizo's Backboned U-Net implementation) was trained for multiclass segmentation of bone, background, and lesions. Volume calculations were based on voxel counts. Volumetric measurements from CBCT showed no statistically significant difference from the micro-CT gold standard (p > 0.05). However, a significant underestimation of volume was observed when using the larger voxel size (0.2 mm) compared with the smaller voxel size (0.1 mm), irrespective of the segmentation software used (p < 0.05). The choice of software (ImageJ vs. Avizo's deep learning-assisted tool) did not significantly affect the porosity measurements. The location of the defect (anterior vs. posterior) also had no significant impact on accuracy. CBCT is a reliable tool for the volumetric assessment of mandibular bone defects and demonstrates strong agreement with micro-CT. Clinically, our findings suggest that selecting a smaller voxel size (0.1 mm) is paramount for maximizing measurement accuracy in applications requiring high precision, such as surgical planning and 3D model fabrication. The implementation of a deep learning-assisted segmentation model proved to be a viable and efficient alternative to conventional semi-automatic methods, highlighting its potential to streamline the digital workflow in dentistry without compromising accuracy.
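A minimal sketch of the semi-automatic route: Otsu thresholding within a manually cropped region, with volume taken as voxel count times voxel volume. The synthetic volume and the defect's intensity profile are illustrative only.

```python
# Otsu segmentation of a cropped region, then volume = voxel count * voxel volume.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
cropped = rng.normal(120, 15, size=(80, 80, 80))                 # toy cropped CBCT region
cropped[30:50, 30:50, 30:50] = rng.normal(20, 10, (20, 20, 20))  # darker "defect" block

t = threshold_otsu(cropped)
defect_mask = cropped < t       # defect voxels are low-intensity
voxel_mm = 0.1                  # assumed 0.1 mm isotropic voxels
volume_mm3 = defect_mask.sum() * voxel_mm ** 3
print(f"threshold {t:.1f}, defect volume {volume_mm3:.2f} mm^3")
```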

Massalimova A, Bauer D, Cavalcanti N, Carrillo F, Mazda F, Fuernstahl P

PubMed · Oct 28 2025
Accurate pedicle screw placement (PSP) is essential in spinal fusion surgery. Conventional navigation relies on computed tomography (CT) or fluoroscopy, which involves ionizing radiation and requires an error-prone registration procedure. We propose a pipeline that enables PSP planning directly on vertebral point clouds reconstructed from intraoperative RGB-D scans, using the SurgPointTransformer network. The system detects screw entry and pedicle regions, estimates initial trajectories, and refines them via anatomically constrained optimization. We evaluated our method on nine ex-vivo cadaveric specimens, comparing RGB-D-based planning to a CT-based baseline using both RGB-D reconstructions and ground-truth CT meshes. No significant differences were found in entry-point offset (3.53 ± 1.30 mm vs. 3.90 ± 1.29 mm), pedicle-center offset (1.58 ± 0.58 mm vs. 1.68 ± 0.59 mm), or trajectory-angle error (7.31 ± 3.34° vs. 7.67 ± 3.59°); all p > 0.05. Safety analysis using the Gertzbein-Robbins classification showed 100% radiologically optimal screw placement (grade A) with both methods. PSP planned from RGB-D reconstructions of the exposed dorsal surface alone achieved planning-level accuracy comparable to CT-based planning on the entire vertebral body. Prospective intraoperative validation is required to establish execution accuracy and clinical outcomes.
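The two geometric metrics reported above reduce to simple vector arithmetic, sketched below with illustrative coordinates: entry-point offset as a Euclidean distance, and trajectory-angle error as the angle between screw direction vectors.

```python
# Entry-point offset and trajectory-angle error between two screw plans.
import numpy as np

def entry_offset(p_rgbd: np.ndarray, p_ct: np.ndarray) -> float:
    """Euclidean distance (mm) between two planned entry points."""
    return float(np.linalg.norm(p_rgbd - p_ct))

def trajectory_angle(d_rgbd: np.ndarray, d_ct: np.ndarray) -> float:
    """Angle (degrees) between two screw trajectory direction vectors."""
    cos = np.dot(d_rgbd, d_ct) / (np.linalg.norm(d_rgbd) * np.linalg.norm(d_ct))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(entry_offset(np.array([10.0, 4.2, -3.1]), np.array([12.1, 5.0, -2.4])))
print(trajectory_angle(np.array([0.1, 0.9, 0.4]), np.array([0.2, 0.85, 0.48])))
```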

Tabassum M, Di Ieva A, Liu S

PubMed · Oct 28 2025
In recent years, numerous algorithms have emerged for the segmentation of brain tumors, driven by advancements in deep learning techniques, where the objective is to identify and delineate various tumor sub-regions. While deep learning models like nnUNet have shown promising results in glioma segmentation, their effectiveness in segmenting other brain tumor subtypes, such as meningiomas and metastases, remains uncertain, especially when the available dataset lacks representative examples. To address this challenge, we propose a meta-transfer learning approach, which involves fine-tuning the nnUNet model on datasets containing meningiomas and metastases while leveraging the knowledge acquired from glioma segmentation. This approach aims to enhance the adaptability of nnUNet, allowing it to generalize better to diverse brain tumor types and potentially improving the accuracy of diagnosis and treatment planning for patients with meningiomas and metastases. Our proposed method significantly improves segmentation performance, achieving Dice coefficients of 0.8621 ± 0.2413 for Whole Tumor (WT) in meningiomas and 0.8141 ± 0.0562 for WT in metastases. These results set a new benchmark in brain tumor segmentation and pave the way for more robust and generalizable medical image analysis tools.
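The reported scores rest on the standard Dice overlap metric, sketched below on toy masks; the fine-tuned nnUNet that would produce the predictions is not reproduced here.

```python
# Dice coefficient between a predicted whole-tumor mask and ground truth.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float(2 * inter / (pred.sum() + truth.sum() + eps))

rng = np.random.default_rng(0)
truth = rng.random((128, 128, 64)) > 0.9   # toy whole-tumor mask
pred = truth.copy()
pred ^= rng.random(truth.shape) > 0.995    # perturb a few voxels
print(f"Dice: {dice(pred, truth):.4f}")
```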

Yonar A

PubMed · Oct 28 2025
Accurate automated classification of brain tumors from magnetic resonance imaging (MRI) is essential for early diagnosis and treatment. This study presents a hybrid framework combining Convolutional Neural Network (CNN) deep features, Large Margin Nearest Neighbor (LMNN) metric learning, and swarm-intelligence optimization for robust four-class classification. Five pretrained CNNs (DenseNet201, MobileNetV2, ResNet50, ResNet101, and InceptionV3) were evaluated on a dataset of 7,023 images categorized as glioma, meningioma, pituitary, and healthy. Among these, DenseNet201 provided the highest baseline performance with 92.66% accuracy. LMNN improved feature separability, while Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO) selected compact feature subsets. The selected features were classified using k-Nearest Neighbor (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Random Forest (RF) classifiers. The DenseNet201-LMNN-GWO-KNN configuration, termed DenseWolf-K, achieved the best performance with 99.64% accuracy, establishing it as the optimal implementation of the framework. Robustness and generalizability were further confirmed using an independent external dataset. Model explainability was ensured through feature-level ranking of GWO-selected features and occlusion sensitivity maps, an Explainable Artificial Intelligence (XAI) method. Overall, the proposed DenseWolf-K framework delivers high accuracy, low false-negative rates, compact representation, and enhanced interpretability, representing a reliable and efficient solution for MRI-based brain tumor classification.
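A minimal sketch of the final select-then-classify stage, with univariate feature selection standing in for the Grey Wolf Optimizer and synthetic 1920-dimensional features mimicking a DenseNet201 embedding; only the pipeline shape mirrors DenseWolf-K.

```python
# Select a compact feature subset from deep features, then classify with KNN.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# 1920-dim features mimic a DenseNet201 embedding; 4 classes mimic
# glioma / meningioma / pituitary / healthy.
X, y = make_classification(n_samples=2000, n_features=1920, n_informative=60,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(SelectKBest(f_classif, k=100),        # compact subset (GWO stand-in)
                    KNeighborsClassifier(n_neighbors=5))  # KNN classifier
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```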

Naveenraj M, Vijayakumar P

PubMed · Oct 28 2025
Lung cancer remains one of the deadliest diseases in the world, and early detection is critical to improving survival rates. Traditional diagnostic techniques such as CT scans and chest X-rays can require invasive follow-up procedures and, in some cases, rely heavily on expert interpretation. The visual similarity between benign and malignant nodules creates ambiguity, which calls for an automatic lung cancer classification framework such as the one proposed here, incorporating Deep Learning (DL) methods on top of a rigorous training methodology. Our framework pre-processes images with adaptive filters to eliminate noise, segments lesions, and selects and refines features with hybrid Horse Herd Optimization (HHO) and the Lion Optimization Algorithm (LOA). The resulting features are classified with a hybrid Deep Convolutional Neural Network and Long Short-Term Memory (DCNN + LSTM) model, which jointly enhances feature extraction and temporal learning. Trained and evaluated on standard lung CT datasets, the proposed system achieved 98.75% accuracy in distinguishing normal from abnormal lung tissue. Nonetheless, the real-time usability of the system is limited by CT acquisition performance and the computational demands of the model, which can be problematic in clinical settings with limited computational resources. Despite these limitations, the framework nevertheless provides a more intelligent, accurate diagnostic aid for radiologists that non-invasively assists clinical decision making and, importantly, earlier cancer diagnoses.
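A minimal sketch of the hybrid DCNN + LSTM pattern, assuming a small per-slice CNN encoder whose features an LSTM aggregates across the scan before a binary normal/abnormal head; all layer sizes are illustrative, not the paper's architecture.

```python
# CNN encodes each CT slice; LSTM aggregates per-slice features across the scan.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-slice encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 16, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)               # normal vs. abnormal

    def forward(self, x):                              # x: (B, T, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)                   # last hidden state
        return self.head(h[-1])

logits = CNNLSTM()(torch.randn(2, 10, 1, 64, 64))      # 2 scans, 10 slices each
print(logits.shape)                                    # torch.Size([2, 2])
```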