Page 61 of 99990 results

Diagnostic tools in respiratory medicine (Review).

Georgakopoulou VE, Spandidos DA, Corlateanu A

PubMed · Jul 1 2025
Recent advancements in diagnostic technologies have significantly transformed the landscape of respiratory medicine, aiming for early detection, improved specificity and personalized therapeutic strategies. Innovations in imaging such as multi-slice computed tomography (CT) scanners, high-resolution CT and magnetic resonance imaging (MRI) have revolutionized our ability to visualize and assess the structural and functional aspects of the respiratory system. These techniques are complemented by breakthroughs in molecular biology that have identified specific biomarkers and genetic determinants of respiratory diseases, enabling targeted diagnostic approaches. Additionally, functional tests including spirometry and exercise testing continue to provide valuable insights into pulmonary function and capacity. The integration of artificial intelligence is poised to further refine these diagnostic tools, enhancing their accuracy and efficiency. The present narrative review explores these developments and their impact on the management and outcomes of respiratory conditions, underscoring the ongoing shift towards more precise and less invasive diagnostic modalities in respiratory medicine.

Evaluation of MRI-based synthetic CT for lumbar degenerative disease: a comparison with CT.

Jiang Z, Zhu Y, Wang W, Li Z, Li Y, Zhang M

PubMed · Jul 1 2025
Patients with lumbar degenerative disease typically undergo preoperative MRI combined with CT scans, but this approach introduces additional ionizing radiation and examination costs. The aim of this study was to compare the effectiveness of MRI-based synthetic CT (sCT) in displaying lumbar degenerative changes, using CT as the gold standard. This prospective study was conducted between June 2021 and September 2023. Adult patients suspected of lumbar degenerative disease were enrolled and underwent both lumbar MRI and CT scans on the same day. The MRI images were processed using a deep learning-based image synthesis method (BoneMRI) to generate sCT images. Two radiologists independently assessed and measured the display and length of osteophytes, the presence of annular calcifications, and the CT values (HU) of the L1 vertebra on both sCT and CT images. The consistency between CT and sCT imaging results was evaluated using statistical equivalence tests. The display performance of sCT images generated from MRI scans from different manufacturers and field strengths was also compared. A total of 105 participants were included (54 males and 51 females, aged 19-95 years). sCT demonstrated statistical equivalence to CT in displaying osteophytes and annular calcifications but showed poorer performance in detecting osteoporosis. The display effectiveness of sCT images synthesized from MRI scans obtained using different imaging equipment was consistent. sCT demonstrated comparable effectiveness to CT in geometric measurements of lumbar degenerative changes. However, sCT cannot independently detect osteoporosis. When combined with conventional MRI's soft-tissue information, sCT offers a promising possibility for radiation-free diagnosis and preoperative planning.
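The equivalence analysis described above can be sketched as a paired two one-sided test (TOST). The margin and measurements below are hypothetical, and a normal approximation stands in for the t distribution, so this is an illustration of the idea rather than the authors' exact analysis:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def paired_tost(ct_values, sct_values, margin):
    """Two one-sided tests for equivalence of paired measurements.

    Uses a normal approximation (adequate for large n). Returns the
    larger of the two one-sided p-values; equivalence is claimed when
    it falls below alpha.
    """
    diffs = [s - c for s, c in zip(sct_values, ct_values)]
    se = stdev(diffs) / sqrt(len(diffs))
    d = mean(diffs)
    nd = NormalDist()
    p_lower = 1 - nd.cdf((d + margin) / se)  # H0: d <= -margin
    p_upper = nd.cdf((d - margin) / se)      # H0: d >= +margin
    return max(p_lower, p_upper)

# Hypothetical osteophyte-length measurements (mm) on CT and sCT,
# with an assumed 0.5 mm equivalence margin
ct  = [3.1, 4.2, 2.8, 5.0, 3.6, 4.4, 2.9, 3.8]
sct = [3.0, 4.3, 2.9, 4.9, 3.5, 4.5, 3.0, 3.7]
print(f"TOST p-value: {paired_tost(ct, sct, margin=0.5):.4f}")
```

A p-value below alpha for both one-sided tests (i.e., the maximum below alpha) supports equivalence within the chosen margin.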

Differential dementia detection from multimodal brain images in a real-world dataset.

Leming M, Im H

PubMed · Jul 1 2025
Artificial intelligence (AI) models have been applied to differential dementia detection tasks in brain images from curated, high-quality benchmark databases, but not real-world data in hospitals. We describe a deep learning model specially trained for disease detection in heterogeneous clinical images from electronic health records without focusing on confounding factors. It encodes up to 14 multimodal images, alongside age and demographics, and outputs the likelihood of vascular dementia, Alzheimer's, Lewy body dementia, Pick's disease, mild cognitive impairment, and unspecified dementia. We use data from Massachusetts General Hospital (183,018 images from 11,015 patients) for training and external data (125,493 images from 6,662 patients) for testing. Performance ranged between 0.82 and 0.94 area under the curve (AUC) on data from 1003 sites. Analysis shows that the model focused on subcortical brain structures as the basis for its decisions. By detecting biomarkers in real-world data, the presented techniques will help with clinical translation of disease detection AI. Our artificial intelligence (AI) model can detect neurodegenerative disorders in brain imaging electronic health record (EHR) data. It encodes up to 14 brain images and text information from a single patient's EHR. Attention maps show that the model focuses on subcortical brain structures. Performance ranged from 0.82 to 0.94 area under the curve (AUC) on data from 1003 external sites.
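AUC, the headline metric above, can be computed directly from the Mann-Whitney rank identity; a minimal sketch with hypothetical labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive scores higher than a
    randomly chosen negative, counting ties as one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for a binary "one dementia subtype vs. rest" split
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.91, 0.82, 0.45, 0.40, 0.35, 0.60, 0.77, 0.20]
print(f"AUC = {auc(labels, scores):.3f}")  # → AUC = 0.938
```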

Attention-driven hybrid deep learning and SVM model for early Alzheimer's diagnosis using neuroimaging fusion.

Paduvilan AK, Livingston GAL, Kuppuchamy SK, Dhanaraj RK, Subramanian M, Al-Rasheed A, Getahun M, Soufiene BO

PubMed · Jul 1 2025
Alzheimer's Disease (AD) poses a significant global health challenge, necessitating early and accurate diagnosis to enable timely interventions. AD is a progressive neurodegenerative disorder that affects millions worldwide and is one of the leading causes of cognitive impairment in older adults. Early diagnosis is critical for enabling effective treatment strategies, slowing disease progression, and improving patients' quality of life. Existing diagnostic methods often struggle with limited sensitivity, overfitting, and reduced reliability due to inadequate feature extraction, imbalanced datasets, and suboptimal model architectures. This study addresses these gaps by introducing a methodology that combines Support Vector Machines (SVMs) with deep learning (DL) to improve AD classification performance. Deep learning models extract high-level imaging features, which are then passed to SVM classifiers in a late-fusion ensemble. This hybrid design leverages deep representations for pattern recognition and the SVM's robustness on small sample sets. By precisely classifying the disease from neuroimaging data, the study provides a tool for early-stage identification of possible cases, enhancing management and treatment options. The approach integrates advanced data pre-processing, dynamic feature optimization, and attention-driven learning mechanisms to enhance interpretability and robustness. The research leverages a dataset of MRI and PET imaging, integrating novel fusion techniques to extract key biomarkers indicative of cognitive decline. Unlike prior approaches, this method effectively mitigates the challenges of data sparsity and dimensionality reduction while improving generalization across diverse datasets. Comparative analysis highlights a 15% improvement in accuracy, a 12% reduction in false positives, and a 10% increase in F1-score against state-of-the-art models such as HNC and MFNNC. The proposed method significantly outperforms existing techniques across metrics such as accuracy, sensitivity, specificity, and computational efficiency, achieving an overall accuracy of 98.5%.
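The late-fusion idea can be sketched by concatenating per-modality feature vectors before classification. The features below are hypothetical, and a nearest-centroid rule stands in for the paper's SVM to keep the sketch dependency-free:

```python
from math import dist

def fuse(mri_feats, pet_feats):
    """Late fusion: concatenate per-modality feature vectors into one
    joint representation before classification."""
    return mri_feats + pet_feats

def nearest_centroid(train_X, train_y, x):
    """Stand-in classifier (nearest class centroid); the paper trains
    an SVM on the fused features instead."""
    centroids = {}
    for label in set(train_y):
        rows = [v for v, l in zip(train_X, train_y) if l == label]
        centroids[label] = [sum(c) / len(rows) for c in zip(*rows)]
    return min(centroids, key=lambda l: dist(centroids[l], x))

# Hypothetical 2-D deep features per modality; labels: 1 = AD, 0 = control
X = [fuse([0.9, 0.8], [0.7, 0.9]), fuse([0.1, 0.2], [0.3, 0.1]),
     fuse([0.8, 0.7], [0.9, 0.8]), fuse([0.2, 0.1], [0.2, 0.3])]
y = [1, 0, 1, 0]
print(nearest_centroid(X, y, fuse([0.85, 0.75], [0.8, 0.85])))  # → 1
```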

Photoacoustic-Integrated Multimodal Approach for Colorectal Cancer Diagnosis.

Biswas S, Chohan DP, Wankhede M, Rodrigues J, Bhat G, Mathew S, Mahato KK

PubMed · Jul 1 2025
Colorectal cancer remains a major global health challenge, emphasizing the need for advanced diagnostic tools that enable early and accurate detection. Photoacoustic (PA) spectroscopy, a hybrid technique combining optical absorption with acoustic resolution, is emerging as a powerful tool in cancer diagnostics. It detects biochemical changes in biomolecules within the tumor microenvironment, aiding early identification of malignancies. Integration with modalities, such as ultrasound (US), photoacoustic microscopy (PAM), and nanoparticle-enhanced imaging, enables detailed mapping of tissue structure, vascularity, and molecular markers. When combined with endoscopy and machine learning (ML) for data analysis, PA technology offers real-time, minimally invasive, and highly accurate detection of colorectal tumors. This approach supports tumor classification, therapy monitoring, and detecting features like hypoxia and tumor-associated bacteria. Recent studies integrating machine learning with PA imaging have demonstrated high diagnostic accuracy, achieving area under the curve (AUC) values up to 0.96 and classification accuracies exceeding 89%, highlighting its potential for precise, noninvasive colorectal cancer detection. Continued advancements in nanoparticle design, molecular targeting, and ML analytics position PA as a key tool for personalized colorectal cancer management.

Patient radiation safety in the intensive care unit.

Quaia E

PubMed · Jul 1 2025
The aim of this commentary review was to summarize the main research evidence on radiation exposure and to outline the best clinical and radiological practices for limiting radiation exposure in ICU patients. Radiological imaging is essential for the management of patients in the ICU despite the risk of ionizing radiation exposure in monitoring critically ill patients, especially those with prolonged hospitalization. Optimizing radiation exposure reduction for ICU patients involves multiple parties and professionals, including hospital management, clinicians, radiographers, and radiologists. Modified diagnostic reference levels for ICU patients, based on UK guidance, may be proposed, especially considering the frequent repetition of x-ray diagnostic procedures in ICU patients. Best practices may reduce radiation exposure in ICU patients, with particular emphasis on justification and radiation exposure optimization in conventional radiology, interventional radiology and fluoroscopy, CT, and nuclear medicine. CT is the predominant contributor to radiation exposure in ICU patients. Low-dose (<1 mSv effective dose) or even ultra-low-dose CT protocols, iterative reconstruction algorithms, and artificial intelligence-based dose-reduction strategies could reduce radiation exposure and the associated oncogenic risks.
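The <1 mSv threshold above refers to effective dose, which for CT is commonly estimated as E = k × DLP. A minimal sketch, assuming an illustrative chest conversion coefficient and hypothetical DLP values (not protocol guidance):

```python
# Effective dose estimate for CT: E [mSv] = k * DLP, where DLP is the
# scanner-reported dose-length product [mGy*cm] and k is a
# region-specific conversion coefficient. k = 0.014 for the chest is a
# commonly cited value; both numbers below are illustrative only.
K_CHEST = 0.014  # mSv per mGy*cm (assumed, chest region)

def effective_dose_msv(dlp_mgy_cm, k=K_CHEST):
    """Estimate effective dose from a scanner-reported DLP."""
    return k * dlp_mgy_cm

def is_low_dose(dlp_mgy_cm, k=K_CHEST, threshold_msv=1.0):
    """Check the <1 mSv 'low-dose' criterion mentioned in the text."""
    return effective_dose_msv(dlp_mgy_cm, k) < threshold_msv

print(f"{effective_dose_msv(50):.2f} mSv, low dose: {is_low_dose(50)}")
print(f"{effective_dose_msv(300):.2f} mSv, low dose: {is_low_dose(300)}")
```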

Generation of synthetic CT-like imaging of the spine from biplanar radiographs: comparison of different deep learning architectures.

Bottini M, Zanier O, Da Mutten R, Gandia-Gonzalez ML, Edström E, Elmi-Terander A, Regli L, Serra C, Staartjes VE

PubMed · Jul 1 2025
This study compared two deep learning architectures-generative adversarial networks (GANs) and convolutional neural networks combined with implicit neural representations (CNN-INRs)-for generating synthetic CT (sCT) images of the spine from biplanar radiographs. The aim of the study was to identify the most robust and clinically viable approach for this potential intraoperative imaging technique. A spine CT dataset of 216 training and 54 validation cases was used. Digitally reconstructed radiographs (DRRs) served as 2D inputs for training both models under identical conditions for 170 epochs. Evaluation metrics included the Structural Similarity Index Measure (SSIM), peak signal-to-noise ratio (PSNR), and cosine similarity (CS), complemented by qualitative assessments of anatomical fidelity. The GAN model achieved a mean SSIM of 0.932 ± 0.015, PSNR of 19.85 ± 1.40 dB, and CS of 0.671 ± 0.177. The CNN-INR model demonstrated a mean SSIM of 0.921 ± 0.015, PSNR of 21.96 ± 1.20 dB, and CS of 0.707 ± 0.114. Statistical analysis revealed significant differences for SSIM (p = 0.001) and PSNR (p < 0.001), while CS differences were not statistically significant (p = 0.667). Qualitative evaluations consistently favored the GAN model, which produced more anatomically detailed and visually realistic sCT images. This study demonstrated the feasibility of generating spine sCT images from biplanar radiographs using GAN and CNN-INR models. While neither model achieved clinical-grade outputs, the GAN architecture showed greater potential for generating anatomically accurate and visually realistic images. These findings highlight the promise of sCT image generation from biplanar radiographs as an innovative approach to reducing radiation exposure and improving imaging accessibility, with GANs emerging as the more promising avenue for further research and clinical integration.
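PSNR, one of the metrics reported above, is defined as 10·log10(MAX²/MSE); a minimal sketch on hypothetical flattened 8-bit patches:

```python
from math import log10

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE).
    Higher is better; identical images give infinite PSNR."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * log10(max_val ** 2 / mse)

# Hypothetical flattened 8-bit patches (ground-truth CT vs. synthetic CT)
ct  = [52, 55, 61, 59, 79, 61, 76, 61]
sct = [54, 55, 60, 58, 80, 62, 75, 63]
print(f"PSNR = {psnr(ct, sct):.2f} dB")
```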

Fully automatic anatomical landmark localization and trajectory planning for navigated external ventricular drain placement.

de Boer M, van Doormaal JAM, Köllen MH, Bartels LW, Robe PAJT, van Doormaal TPC

PubMed · Jul 1 2025
The aim of this study was to develop and validate a fully automatic anatomical landmark localization and trajectory planning method for external ventricular drain (EVD) placement using CT or MRI. The authors used 125 preoperative CT and 137 contrast-enhanced T1-weighted MRI scans to generate 3D surface meshes of patients' skin and ventricular systems. Seven anatomical landmarks were manually annotated to train a neural network for automatic landmark localization. The model's accuracy was assessed by calculating the mean Euclidean distance between predicted landmarks and the ground truth. Kocher's point and EVD trajectories were automatically calculated with the foramen of Monro as the target. Performance was evaluated using Kakarla grades, as assessed by 3 clinicians. Interobserver agreement was measured with Pearson correlation, and scores were aggregated using majority voting. Ordinal linear regressions were used to assess whether modality or placement side had an effect on Kakarla grades. The impact of landmark localization error on the final EVD plan was also evaluated. The automated landmark localization model achieved a mean error of 4.0 mm (SD 2.6 mm). Trajectory planning generated a trajectory for all patients, with a Kakarla grade of 1 in 92.9% of cases. Statistical analyses indicated strong interobserver agreement and no significant differences between modalities (CT vs MRI) or EVD placement sides. The locations of Kocher's point and the target point were significantly correlated with nasion landmark localization error, with median drifts of 9.38 mm (95% CI 1.94-19.16 mm) and 3.91 mm (95% CI 0.18-26.76 mm) for Kocher's point and the target point, respectively. The presented method was efficient and robust for landmark localization and accurate EVD trajectory planning. Its short processing time also provides a basis for use in emergency settings.
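The landmark accuracy metric above is the mean Euclidean distance between predicted and annotated positions; a minimal sketch with hypothetical 3-D coordinates:

```python
from math import dist

def mean_landmark_error(predicted, ground_truth):
    """Mean Euclidean distance (mm) between predicted and annotated
    3-D landmark positions, as used to evaluate a localization model."""
    errors = [dist(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(errors) / len(errors)

# Hypothetical landmark coordinates in mm (e.g. nasion, Kocher's point)
pred = [(10.0, 52.0, 33.0), (48.0, 20.5, 61.0)]
gt   = [(12.0, 50.0, 33.0), (48.0, 24.5, 61.0)]
print(f"mean error = {mean_landmark_error(pred, gt):.2f} mm")
```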

Response prediction for neoadjuvant treatment in locally advanced rectal cancer patients-improvement in decision-making: A systematic review.

Boldrini L, Charles-Davies D, Romano A, Mancino M, Nacci I, Tran HE, Bono F, Boccia E, Gambacorta MA, Chiloiro G

PubMed · Jul 1 2025
Predicting pathological complete response (pCR) from pre- or post-treatment features could significantly improve clinical decision-making and support a more personalized treatment approach for better outcomes. However, the lack of external validation of predictive models, missing in several published articles, is a major issue that can limit their reliability and applicability in clinical settings. This systematic review therefore described different externally validated methods of predicting response to neoadjuvant chemoradiotherapy (nCRT) in locally advanced rectal cancer (LARC) patients and how they could improve clinical decision-making. An extensive search for eligible articles was performed on PubMed, Cochrane, and Scopus between 2018 and 2023, using the keywords: (Response OR outcome) prediction AND (neoadjuvant OR chemoradiotherapy) treatment in 'locally advanced Rectal Cancer'. Inclusion criteria: (i) studies including patients diagnosed with LARC (T3/4 and N- or any T and N+) by pre-treatment medical imaging and pathological examination, or as stated by the authors; (ii) standardized nCRT completed; (iii) treatment with long- or short-course radiotherapy; (iv) studies reporting on the prediction of response to nCRT with pathological complete response (pCR) as the primary outcome; (v) studies reporting external validation results for response prediction; (vi) regarding language restrictions, only articles in English. Exclusion criteria: (i) case reports, conference abstracts, reviews, and studies reporting patients with distant metastases at diagnosis; (ii) studies reporting response prediction with only internally validated approaches. Three researchers (DC-D, FB, HT) independently reviewed and screened the titles and abstracts of all articles retrieved after de-duplication. Disagreements were resolved through discussion among the three researchers; if necessary, three other researchers (LB, GC, MG) were consulted to make the final decision. Data extraction was performed using the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) template, and quality assessment was done using the Prediction model Risk Of Bias Assessment Tool (PROBAST). A total of 4547 records were identified from the three databases. After excluding 392 duplicates, 4155 records underwent title and abstract screening; 3800 articles were excluded at this stage and 355 were retrieved. Of the 355 retrieved articles, 51 studies were assessed for eligibility. Nineteen reports were then excluded for lacking external validation, and 4 for not evaluating pCR as the primary outcome, leaving 28 articles eligible for inclusion in this systematic review. In terms of quality assessment, 89% of the models had low concern in the participants domain, while 11% had an unclear rating; 96% of the models were of low concern in both the predictors and outcome domains. The overall rating showed high applicability potential, with 82% of models rated low concern and 18% unclear. Most of the externally validated techniques showed promising performance and the potential to be applied in clinical settings, a crucial step towards evidence-based medicine. However, more studies focused on external validation of these models in larger cohorts are needed to ensure that they can reliably predict outcomes in diverse populations.
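The screening counts reported above can be reconciled arithmetically, PRISMA-style; a small sketch confirming the flow is internally consistent:

```python
# Screening flow as reported in the review; each step should account
# for every record from the previous one (a PRISMA-style sanity check).
identified  = 4547
duplicates  = 392
screened    = identified - duplicates       # title/abstract screening
excluded    = 3800                          # excluded at screening
retrieved   = screened - excluded
assessed    = 51                            # assessed for eligibility
excl_no_ext = 19                            # no external validation
excl_no_pcr = 4                             # pCR not the primary outcome
included    = assessed - excl_no_ext - excl_no_pcr

print(screened, retrieved, included)  # → 4155 355 28
```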

Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study.

Kim D, Choo K, Lee S, Kang S, Yun M, Yang J

PubMed · Jul 1 2025
Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, pairs of PET/CT and MR images are not always available, and their alignment is challenging if their acquisition times differ considerably. To address these problems, this study proposes a deep learning framework for translating the CT of PET/CT into synthetic MR images (MR<sub>SYN</sub>) and performing automated quantitative regional analysis using MR<sub>SYN</sub>-derived segmentation. In this retrospective study, 139 subjects who underwent brain [<sup>18</sup>F]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images to MR<sub>SYN</sub>; subsequently, a separate model was trained to segment MR<sub>SYN</sub> into 95 regions. Regional and composite standardised uptake value ratios (SUVr) were calculated in [<sup>18</sup>F]FBB PET images using the acquired ROIs. For the evaluation of MR<sub>SYN</sub>, quantitative measurements including the structural similarity index measure (SSIM) were employed, while for MR<sub>SYN</sub>-based segmentation evaluation, the Dice similarity coefficient (DSC) was calculated. The Wilcoxon signed-rank test was performed for SUVrs computed using MR<sub>SYN</sub> and ground-truth MR (MR<sub>GT</sub>). Compared to MR<sub>GT</sub>, the mean SSIM of MR<sub>SYN</sub> was 0.974 ± 0.005. The MR<sub>SYN</sub>-based segmentation achieved a mean DSC of 0.733 across 95 regions. No statistically significant difference (P > 0.05) was found for SUVr between the ROIs from MR<sub>SYN</sub> and those from MR<sub>GT</sub>, except for the precuneus. We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MR<sub>SYN</sub>. Our proposed framework can benefit patients who have difficulty undergoing an MRI scan.
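The DSC used above to evaluate the synthetic-MR-based segmentation measures the overlap of two binary masks; a minimal sketch with hypothetical flattened masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|). 1.0 means perfect overlap."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0  # both masks empty: define as perfect agreement
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical flattened binary masks for one brain region
# (synthetic-MR-derived segmentation vs. ground-truth-MR segmentation)
seg_syn = [0, 1, 1, 1, 0, 1, 0, 0]
seg_gt  = [0, 1, 1, 0, 0, 1, 1, 0]
print(f"DSC = {dice(seg_syn, seg_gt):.3f}")  # → DSC = 0.750
```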