
Non-invasive arterial input function estimation using an MRA atlas and machine learning.

Vashistha R, Moradi H, Hammond A, O'Brien K, Rominger A, Sari H, Shi K, Vegh V, Reutens D

PubMed · May 23, 2025
Quantifying biological parameters of interest through dynamic positron emission tomography (PET) requires an arterial input function (AIF), conventionally obtained from arterial blood samples. The AIF can also be non-invasively estimated from blood pools in PET images, often identified using co-registered MRI images. Methods that avoid both blood sampling and MRI generally require total body PET systems with a long axial field-of-view (LAFOV) that includes a large cardiovascular blood pool. However, the number of such systems in clinical use is currently much smaller than that of short axial field-of-view (SAFOV) scanners. We propose a non-invasive, data-driven approach to AIF estimation from brain PET scans acquired on SAFOV scanners that requires neither subject MRI nor blood sampling. The proposed method was validated using dynamic [<sup>18</sup>F]fluorodeoxyglucose ([<sup>18</sup>F]FDG) total body PET data from 10 subjects. A variational inference-based machine learning approach was employed to correct for peak activity. The prior was estimated using a probabilistic vascular MRI atlas registered to each subject's PET image to identify cerebral arteries in the brain. The AIF estimated using brain PET images (IDIF-Brain) was compared to that obtained using data from the descending aorta of the heart (IDIF-DA). Kinetic rate constants (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>) and net radiotracer influx (K<sub>i</sub>) were computed for both cases and compared. Qualitatively, the shape of IDIF-Brain matched that of IDIF-DA, capturing information on both the peak and tail of the AIF. The areas under the curve (AUC) of IDIF-Brain and IDIF-DA were similar, with an average relative error of 9%. The mean Pearson correlations between kinetic parameters (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>) estimated with IDIF-DA and IDIF-Brain for each voxel were between 0.92 and 0.99 in all subjects, and for K<sub>i</sub> they were above 0.97.
This study introduces a new approach for AIF estimation in dynamic PET using brain PET images, a probabilistic vascular atlas, and machine learning techniques. The findings demonstrate the feasibility of non-invasive and subject-specific AIF estimation for SAFOV scanners.
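The reported 9% AUC agreement between the two input functions is straightforward to reproduce numerically. A minimal sketch with synthetic time-activity curves (all values hypothetical, not the study's data), using trapezoidal integration:

```python
import numpy as np

# Synthetic time-activity curves (hypothetical values, not the study's data)
t = np.linspace(0, 60, 121)                 # 60-min dynamic acquisition (min)
idif_da = 50 * t * np.exp(-0.5 * t) + 2.0   # descending-aorta input function
idif_brain = 0.91 * idif_da                 # brain IDIF built to have 9% lower AUC

def auc_trapz(y, x):
    """Area under the curve by the trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# Relative AUC error between the two image-derived input functions
rel_err = abs(auc_trapz(idif_brain, t) - auc_trapz(idif_da, t)) / auc_trapz(idif_da, t) * 100
print(f"relative AUC error: {rel_err:.1f}%")
```

Here the 9% discrepancy is built into the synthetic brain curve by construction; the point is only to show how the comparison metric is computed.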

Optimizing the power of AI for fracture detection: from blind spots to breakthroughs.

Behzad S, Eibschutz L, Lu MY, Gholamrezanezhad A

PubMed · May 23, 2025
Artificial intelligence (AI) is increasingly being integrated into the field of musculoskeletal (MSK) radiology, from research methods to routine clinical practice. Within the field of fracture detection, AI is enabling precision and speed previously unimaginable. Yet AI's decision-making processes are sometimes fraught with deficiencies, undermining trust, hindering accountability, and compromising diagnostic precision. To make AI a trusted ally for radiologists, we recommend incorporating clinical history, rationalizing AI decisions through explainable AI (XAI) techniques, increasing the variety and scale of training data to approach the complexity of real clinical situations, and fostering active interaction between clinicians and developers. By bridging these gaps, the true potential of AI can be unlocked, enhancing patient outcomes and fundamentally transforming radiology through a harmonious integration of human expertise and intelligent technology. In this article, we examine the factors contributing to AI inaccuracies and offer recommendations to address these challenges, benefiting both radiologists and developers striving to improve future algorithms.

Deep learning and iterative image reconstruction for head CT: Impact on image quality and radiation dose reduction-Comparative study.

Pula M, Kucharczyk E, Zdanowicz-Ratajczyk A, Dorochowicz M, Guzinski M

PubMed · May 23, 2025
<b>Background and purpose:</b> This study objectively evaluates the ability of a novel reconstruction algorithm, Deep Learning Image Reconstruction (DLIR), to improve image quality and reduce radiation dose compared to the established standard of Adaptive Statistical Iterative Reconstruction-V (ASIR-V) in unenhanced head computed tomography (CT). <b>Materials and methods:</b> A retrospective analysis of 163 consecutive unenhanced head CTs was conducted. Image quality was assessed using the objective parameters of signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), derived from 5 regions of interest (ROIs). The evaluation of DLIR's dose reduction capability was based on the PACS-derived parameters of dose-length product (DLP) and computed tomography dose index volume (CTDIvol). <b>Results:</b> Following the application of rigorous inclusion criteria, the study comprised 35 patients. Significant image quality improvement was achieved with the implementation of DLIR, as evidenced by up to a 145% and 160% increase in SNR in the supra- and infratentorial regions, respectively. CNR measurements further confirmed the superiority of DLIR over ASIR-V, with increases of 171.5% in the supratentorial region and 59.3% in the infratentorial region. Despite the signal improvement and noise reduction, DLIR enabled a radiation dose reduction of up to 44% in CTDIvol. <b>Conclusion:</b> Implementation of DLIR in head CT enables significant image quality improvement and dose reduction compared to standard ASIR-V. However, the dose reduction was insufficient to counteract the lack of gantry angulation in wide-detector scanners.
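The SNR and CNR figures above follow from standard ROI statistics. A minimal sketch of the two metrics, assuming SNR is defined as ROI mean over ROI standard deviation and CNR as the tissue-mean difference over image noise (definitions vary between studies; the HU values below are synthetic):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean HU divided by its standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio between two tissues, normalised by image noise."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_sd

# Hypothetical HU samples: grey matter vs. white matter, with background noise SD
rng = np.random.default_rng(0)
grey = rng.normal(40, 4, 500)    # mean 40 HU, SD 4
white = rng.normal(30, 4, 500)   # mean 30 HU, SD 4
print(f"SNR(grey) = {snr(grey):.1f}, CNR = {cnr(grey, white, 4.0):.1f}")
```

A reconstruction that lowers noise (smaller SD at the same mean) raises both metrics, which is how DLIR's gains over ASIR-V are quantified.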

COVID-19CT+: A public dataset of CT images for COVID-19 retrospective analysis.

Sun Y, Du T, Wang B, Rahaman MM, Wang X, Huang X, Jiang T, Grzegorzek M, Sun H, Xu J, Li C

PubMed · May 23, 2025
<b>Background and objective:</b> COVID-19 is considered the biggest global health disaster of the 21st century, with an enormous worldwide impact. <b>Methods:</b> This paper publishes a publicly available dataset of CT images of multiple types of pneumonia (COVID-19CT+). The dataset contains 409,619 CT images of 1333 patients: subset-A contains 312 community-acquired pneumonia cases and subset-B contains 1021 COVID-19 cases. To demonstrate that classification methods from different periods perform differently on COVID-19CT+, we selected 13 classical machine learning classifiers and 5 deep learning classifiers for the image classification task. <b>Results:</b> Two sets of experiments were conducted using traditional machine learning and deep learning methods: the first classified COVID-19 versus COVID-19 white lung disease within subset-B, and the second classified community-acquired pneumonia in subset-A versus COVID-19 in subset-B. In the first set, the accuracy of traditional machine learning reached a maximum of 97.3% and a minimum of only 62.6%, while deep learning reached a maximum of 97.9% and a minimum of 85.7%. In the second set, traditional machine learning reached a high of 94.6% and a low of 56.8%; deep learning reached a high of 91.9% and a low of 86.3%. <b>Conclusions:</b> COVID-19CT+ covers a large number of CT images of patients with COVID-19 and community-acquired pneumonia and is one of the largest such datasets available. We expect this dataset to attract more researchers to explore new automated diagnostic algorithms, contributing to improved diagnostic accuracy and efficiency for COVID-19.
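Classical machine learning baselines in benchmarks like this can be as simple as a nearest-centroid rule over extracted image features. A hedged sketch on synthetic features (not the COVID-19CT+ data; the class separation and dimensionality are invented for illustration):

```python
import numpy as np

# Nearest-centroid baseline, in the spirit of the classical ML benchmarks above;
# features and labels are synthetic stand-ins for extracted CT image features.
rng = np.random.default_rng(1)
X_cap = rng.normal(0.0, 1.0, (100, 16))    # "community-acquired pneumonia" class
X_covid = rng.normal(1.5, 1.0, (100, 16))  # "COVID-19" class
X = np.vstack([X_cap, X_covid])
y = np.array([0] * 100 + [1] * 100)

# Split 80/20 and fit one centroid per class on the training half
idx = rng.permutation(200)
train, test = idx[:160], idx[160:]
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Predict by nearest centroid (Euclidean distance), then score accuracy
d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
acc = (d.argmin(axis=1) == y[test]).mean()
print(f"accuracy: {acc:.3f}")
```

The wide accuracy spread the paper reports (62.6% to 97.3% among classical methods) reflects how strongly such simple rules depend on the features they are given.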

Detection, Classification, and Segmentation of Rib Fractures From CT Data Using Deep Learning Models: A Review of Literature and Pooled Analysis.

Den Hengst S, Borren N, Van Lieshout EMM, Doornberg JN, Van Walsum T, Wijffels MME, Verhofstad MHJ

PubMed · May 23, 2025
Trauma-induced rib fractures are common injuries. The gold standard for diagnosing rib fractures is computed tomography (CT), but its sensitivity in the acute setting is low, and interpreting CT slices is labor-intensive. This has led to the development of new diagnostic approaches leveraging deep learning (DL) models. This systematic review and pooled analysis aimed to compare the performance of DL models in the detection, segmentation, and classification of rib fractures based on CT scans. A literature search was performed across various databases for studies describing DL models that detect, segment, or classify rib fractures from CT data. Reported performance metrics included sensitivity, false-positive rate, F1-score, precision, accuracy, and mean average precision. A meta-analysis was performed on the sensitivity scores to compare the DL models with clinicians. Of the 323 identified records, 25 were included. Twenty-one studies reported on detection, four on segmentation, and ten on classification. Twenty studies had adequate data for meta-analysis. The gold-standard labels were provided by clinicians (radiologists and orthopedic surgeons). For detecting rib fractures, DL models had a higher sensitivity (86.7%; 95% CI: 82.6%-90.2%) than clinicians (75.4%; 95% CI: 68.1%-82.1%). In classification, the sensitivity of DL models for displaced rib fractures (97.3%; 95% CI: 95.6%-98.5%) was significantly better than that of clinicians (88.2%; 95% CI: 84.8%-91.3%). DL models for rib fracture detection and classification achieved promising results. With better sensitivities than clinicians for detecting and classifying displaced rib fractures, future work should focus on implementing DL models in daily clinical practice. Level III-systematic review and pooled analysis.
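The pooled sensitivity estimates above come from aggregating per-study counts. A minimal sketch, assuming simple fixed pooling of true positives and false negatives with a Wilson 95% confidence interval (the published meta-analysis may use a random-effects model instead; the counts below are hypothetical):

```python
import numpy as np

def pooled_sensitivity(tp, fn):
    """Pool sensitivity across studies as total TP / (total TP + total FN),
    with a Wilson 95% confidence interval."""
    tp, fn = np.asarray(tp), np.asarray(fn)
    n = tp.sum() + fn.sum()
    p = tp.sum() / n
    z = 1.96
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return p, centre - half, centre + half

# Hypothetical per-study fracture counts (true positives, false negatives)
p, lo, hi = pooled_sensitivity([80, 120, 45], [10, 18, 7])
print(f"pooled sensitivity {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Random-effects pooling would additionally weight each study by its variance and between-study heterogeneity, which this fixed-pooling sketch omits.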

Improvement of deep learning-based dose conversion accuracy to a Monte Carlo algorithm in proton beam therapy for head and neck cancers.

Kato R, Kadoya N, Kato T, Tozuka R, Ogawa S, Murakami M, Jingu K

PubMed · May 23, 2025
This study aimed to clarify the effectiveness of the image-rotation technique and zooming augmentation in improving the accuracy of deep learning (DL)-based dose conversion from pencil beam (PB) to Monte Carlo (MC) doses in proton beam therapy (PBT). We included 85 patients with head and neck cancers. The patient dataset was randomly divided into 101 plans (334 beams) for training/validation and 11 plans (34 beams) for testing. We trained a DL model that takes as input a computed tomography (CT) image and the PB dose in a single-proton field and outputs the MC dose, applying the image-rotation technique and zooming augmentation. We evaluated the DL-based dose conversion accuracy in a single-proton field. The average γ-passing rates (criterion of 3%/3 mm) were 80.6 ± 6.6% for the PB dose, 87.6 ± 6.0% for the baseline model, 92.1 ± 4.7% for the image-rotation model, and 93.0 ± 5.2% for the data-augmentation model. The average range differences for R90 were -1.5 ± 3.6% for the PB dose, 0.2 ± 2.3% for the baseline model, -0.5 ± 1.2% for the image-rotation model, and -0.5 ± 1.1% for the data-augmentation model. Both dose and range accuracy were improved by the image-rotation technique and zooming augmentation, which greatly improved the DL-based dose conversion from PB to MC. These techniques can be powerful tools for improving DL-based dose calculation accuracy in PBT.
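The R90 range metric reported above is the distal depth at which the dose falls to 90% of its maximum. A minimal sketch on an idealised synthetic depth-dose curve (not a PB or MC dose from the study):

```python
import numpy as np

def r90(depth_cm, dose):
    """Distal R90: deepest point where dose still exceeds 90% of the maximum,
    linearly interpolated on the falling edge beyond the Bragg peak."""
    thr = 0.9 * dose.max()
    peak = dose.argmax()
    # first sample beyond the peak that drops below the threshold
    below = np.nonzero(dose[peak:] < thr)[0][0] + peak
    # linear interpolation between the bracketing samples
    d0, d1 = dose[below - 1], dose[below]
    z0, z1 = depth_cm[below - 1], depth_cm[below]
    return z0 + (thr - d0) * (z1 - z0) / (d1 - d0)

# Idealised depth-dose with a Bragg peak near 15 cm (synthetic, for illustration)
z = np.linspace(0, 20, 401)
dose = np.exp(-((z - 15.0) / 1.2) ** 2) + 0.3 * np.exp(-z / 10)
print(f"R90 = {r90(z, dose):.2f} cm")
```

The percentage range differences in the abstract compare R90 of the PB or DL-converted dose against R90 of the MC reference.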

Multimodal ultrasound-based radiomics and deep learning for differential diagnosis of O-RADS 4-5 adnexal masses.

Zeng S, Jia H, Zhang H, Feng X, Dong M, Lin L, Wang X, Yang H

PubMed · May 23, 2025
Accurate differentiation between benign and malignant adnexal masses is crucial for patients to avoid unnecessary surgical interventions. Ultrasound (US) is the most widely utilized diagnostic and screening tool for gynecological diseases, with contrast-enhanced US (CEUS) offering enhanced diagnostic precision by clearly delineating blood flow within lesions. According to the Ovarian and Adnexal Reporting and Data System (O-RADS), masses classified as categories 4 and 5 carry the highest risk of malignancy. However, the diagnostic accuracy of US remains heavily reliant on the expertise and subjective interpretation of radiologists. Radiomics has demonstrated significant value in tumor differential diagnosis by extracting microscopic information imperceptible to the human eye. Despite this, no studies to date have explored the application of CEUS-based radiomics for differentiating adnexal masses. This study aims to develop and validate a multimodal US-based nomogram that integrates clinical variables, radiomics, and deep learning (DL) features to effectively distinguish adnexal masses classified as O-RADS 4-5. From November 2020 to March 2024, we enrolled 340 patients who underwent two-dimensional US (2DUS) and CEUS and had masses categorized as O-RADS 4-5. These patients were randomly divided into a training cohort and a test cohort in a 7:3 ratio. Adnexal masses were manually segmented from 2DUS and CEUS images. Using machine learning (ML) and DL techniques, five models were developed and validated to differentiate adnexal masses. The diagnostic performance of these models was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score. Additionally, a nomogram was constructed to visualize outcome measures. The CEUS-based radiomics model outperformed the 2DUS model (AUC: 0.826 vs. 0.737). Similarly, the CEUS-based DL model surpassed the 2DUS model (AUC: 0.823 vs. 0.793). 
The ensemble model combining clinical variables, radiomics, and DL features achieved the highest AUC (0.929). Our study confirms that a multimodal US-based nomogram integrating clinical variables, radiomics, and DL features can distinguish adnexal masses classified as O-RADS 4-5 with high accuracy and specificity. This approach holds significant promise for improving the diagnostic precision of such masses.
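The AUCs reported above can be computed without plotting a ROC curve, via the rank-based (Mann-Whitney) formulation. A minimal sketch with hypothetical nomogram scores:

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen malignant case scores higher than a benign one."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical nomogram outputs for 6 masses (label 1 = malignant)
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.2])
labels = np.array([1, 1, 0, 1, 0, 0])
print(f"AUC = {auc(scores, labels):.3f}")
```

Here 8 of the 9 malignant/benign pairs are ranked correctly, giving an AUC of 8/9 ≈ 0.889; an AUC of 0.929 means the ensemble ranks roughly 93% of such pairs correctly.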

A deep learning model integrating domain-specific features for enhanced glaucoma diagnosis.

Xu J, Jing E, Chai Y

PubMed · May 23, 2025
Glaucoma is a group of serious eye diseases that can cause irreversible blindness. Despite the critical need for early detection, over 60% of cases remain undiagnosed, especially in less developed regions. Glaucoma diagnosis is costly, and models have been proposed to automate diagnosis from images of the retina, specifically the optic cup and the surrounding disc where retinal blood vessels and nerves enter and leave the eye. However, diagnosis is complicated because both normal and glaucoma-affected eyes can vary greatly in appearance: some normal eyes, like glaucomatous ones, exhibit a large cup-to-disc ratio (one of the main diagnostic criteria), making the two difficult to distinguish. We propose a deep learning model with domain features (DLMDF) that combines unstructured and structured features to distinguish glaucoma from physiologic large cups. The structured features were based upon the known cup-to-disc ratios of the four quadrants of the optic disc in normal eyes, eyes with physiologic large cups, and glaucomatous eyes. We segmented each cup and disc using a fully convolutional neural network and then calculated the cup size, disc size, and cup-to-disc ratio of each quadrant. The unstructured features were learned by a deep convolutional neural network. The average precision (AP) was 98.52% for disc segmentation and 98.57% for cup segmentation. These high AP values enabled us to calculate 15 reliable features from each segmented disc and cup. In classification tasks, the DLMDF outperformed other models, achieving superior accuracy, precision, and recall. These results validate the effectiveness of combining deep learning-derived features with domain-specific structured features, underscoring the potential of this approach to advance glaucoma diagnosis.
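The quadrant-wise cup-to-disc ratios that form the structured features can be computed directly from the segmentation masks. A minimal sketch using area ratios on synthetic concentric masks (the paper may use diameter-based ratios instead; the area-based definition here is an assumption):

```python
import numpy as np

def quadrant_cdr(cup, disc):
    """Cup-to-disc area ratio in each of the four quadrants of binary masks,
    split about the disc centroid."""
    ys, xs = np.nonzero(disc)
    cy, cx = ys.mean(), xs.mean()
    H, W = disc.shape
    rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
    ratios = []
    for top in (rows < cy, rows >= cy):
        for left in (cols < cx, cols >= cx):
            q = top & left                      # one quadrant mask
            disc_px = (disc & q).sum()
            ratios.append((cup & q).sum() / disc_px if disc_px else 0.0)
    return ratios

# Synthetic concentric masks: disc radius 20 px, cup radius 10 px
H = W = 64
yy, xx = np.mgrid[:H, :W]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
disc = r2 <= 20 ** 2
cup = r2 <= 10 ** 2
print([round(r, 2) for r in quadrant_cdr(cup, disc)])
```

For concentric circles each quadrant ratio is about (10/20)² = 0.25; asymmetric cupping would push individual quadrants apart, which is exactly the signal the structured features carry.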

Development of a non-contrast CT-based radiomics nomogram for early prediction of delayed cerebral ischemia in aneurysmal subarachnoid hemorrhage.

Chen L, Wang X, Wang S, Zhao X, Yan Y, Yuan M, Sun S

PubMed · May 23, 2025
Delayed cerebral ischemia (DCI) is a significant complication following aneurysmal subarachnoid hemorrhage (aSAH), leading to poor prognosis and high mortality. This study developed a non-contrast CT (NCCT)-based radiomics nomogram for early DCI prediction in aSAH patients. Three hundred seventy-seven aSAH patients were included in this retrospective study. Radiomic features were extracted from the baseline CTs using PyRadiomics. Feature selection was conducted using t-tests, Pearson correlation, and Lasso regression to identify the features most closely associated with DCI. Multivariable logistic regression was used to identify independent clinical and demographic risk factors. Eight machine learning algorithms were applied to construct radiomics-only and radiomics-clinical fusion nomogram models. The nomogram integrated the radscore and three clinically significant parameters (aneurysm, aneurysm treatment, and admission Hunt-Hess score), with the support vector machine model yielding the highest performance in the validation set. The radiomics model and the nomogram produced AUCs of 0.696 (95% CI: 0.578-0.815) and 0.831 (95% CI: 0.739-0.923), respectively. The nomogram achieved an accuracy of 0.775, a sensitivity of 0.750, a specificity of 0.795, and an F1 score of 0.750. The NCCT-based radiomics nomogram demonstrated high predictive performance for DCI in aSAH patients, providing a valuable tool for early DCI identification and for formulating appropriate treatment strategies.
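The t-test/Pearson/Lasso feature-selection cascade described above can be approximated by a simple correlation-based filter. A hedged sketch on synthetic features (thresholds and data are invented; Lasso regression would follow as the final stage in the actual pipeline):

```python
import numpy as np

def filter_features(X, y, assoc_min=0.25, r_max=0.8):
    """Correlation-based filter: rank features by absolute point-biserial
    correlation with the binary label, then greedily keep features that are
    not redundant (pairwise Pearson |r| < r_max) until association drops
    below assoc_min."""
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    assoc = np.abs(Xc.T @ yc) / len(y)           # |corr(feature, label)|
    selected = []
    for i in np.argsort(-assoc):                 # strongest association first
        if assoc[i] < assoc_min:
            break
        if all(abs(np.corrcoef(X[:, i], X[:, j])[0, 1]) < r_max for j in selected):
            selected.append(int(i))
    return sorted(selected)

# Synthetic cohort: feature 0 informative, feature 1 a near-copy of it, feature 2 noise
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 100)
f0 = y + rng.normal(0, 0.5, 200)
X = np.column_stack([f0, f0 + rng.normal(0, 0.05, 200), rng.normal(0, 1, 200)])
print(filter_features(X, y))
```

The filter keeps one of the two redundant informative features and discards the noise feature, mirroring the purpose of the Pearson-correlation stage before Lasso.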

Artificial intelligence automated measurements of spinopelvic parameters in adult spinal deformity-a systematic review.

Bishara A, Patel S, Warman A, Jo J, Hughes LP, Khalifeh JM, Azad TD

PubMed · May 23, 2025
This review evaluates advances in deep learning (DL) applications for automatic spinopelvic parameter estimation, comparing their accuracy to manual measurements performed by surgeons. The PubMed database was queried for studies on DL measurement of adult spinopelvic parameters between 2014 and 2024. Studies were excluded if they focused on pediatric patients, non-deformity-related conditions, or non-human subjects, or if they lacked sufficient quantitative data comparing DL models to human measurements. Included studies were assessed on model architecture, patient demographics, training, validation, and testing methods, and sample sizes, as well as performance compared to manual methods. Of 442 screened articles, 16 were included, with sample sizes ranging from 15 to 9,832 radiographs and reported interclass correlation coefficients (ICCs) of 0.56 to 1.00. Measurements of pelvic tilt, pelvic incidence, T4-T12 kyphosis, L1-L4 lordosis, and SVA showed consistently high ICCs (>0.80) and low mean absolute deviations (MADs <6°), with a substantial number of studies reporting an excellent ICC of 0.90 or greater for pelvic tilt. In contrast, T1-T12 kyphosis and L4-S1 lordosis exhibited lower ICCs and higher measurement errors. Overall, most DL models demonstrated strong correlations (>0.80) with clinician measurements and minimal differences from manual references, except for T1-T12 kyphosis (average Pearson correlation: 0.68), L1-L4 lordosis (0.75), and L4-S1 lordosis (0.65). Novel computer vision algorithms show promising accuracy in measuring spinopelvic parameters, comparable to manual surgeon measurements. Future research should focus on external validation, additional imaging modalities, and the feasibility of integration into clinical settings to assess model reliability and predictive capacity.
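The agreement metrics used throughout this review, Pearson correlation and mean absolute deviation between automated and manual angles, are simple to compute. A minimal sketch with hypothetical pelvic-tilt measurements:

```python
import numpy as np

def agreement(dl, manual):
    """Agreement between automated and manual angle measurements:
    Pearson correlation and mean absolute deviation (degrees)."""
    r = np.corrcoef(dl, manual)[0, 1]
    mad = np.abs(dl - manual).mean()
    return r, mad

# Hypothetical pelvic-tilt measurements (degrees) for 8 radiographs
manual = np.array([12.0, 18.5, 25.0, 9.5, 30.2, 22.1, 15.8, 27.4])
dl = manual + np.array([0.8, -1.1, 0.5, -0.4, 1.9, -0.7, 0.3, -1.2])
r, mad = agreement(dl, manual)
print(f"Pearson r = {r:.3f}, MAD = {mad:.2f} deg")
```

Note that high Pearson correlation alone does not rule out a systematic bias; a consistent offset would leave r near 1 while inflating the MAD, which is why the review reports both.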