Page 284 of 3463455 results

Prenatal detection of congenital heart defects using deep learning-based image and video analysis: protocol for Clinical Artificial Intelligence in Fetal Echocardiography (CAIFE), an international multicentre multidisciplinary study.

Patey O, Hernandez-Cruz N, D'Alberti E, Salovic B, Noble JA, Papageorghiou AT

pubmed · Jun 5 2025
Congenital heart defect (CHD) is a significant, rapidly emerging global problem in child health and a leading cause of neonatal and childhood death. Prenatal detection of CHDs with ultrasound allows better perinatal management of such pregnancies, leading to reduced neonatal mortality, morbidity and developmental complications. However, there is wide variation in reported fetal heart problem detection rates, from 34% to 85%, with some low- and middle-income countries detecting as few as 9.3% of cases before birth. Research has shown that deep learning-based or more general artificial intelligence (AI) models can support the detection of fetal CHDs more rapidly than humans performing ultrasound scans. Progress in this AI-based research depends on the availability of large, well-curated and diverse datasets of ultrasound images and videos of normal and abnormal fetal hearts. Currently, CHD detection based on AI models is not accurate enough for practical clinical use, in part due to the lack of ultrasound data available for machine learning (as CHDs are rare and heterogeneous), the retrospective nature of published studies, the lack of multicentre and multidisciplinary collaboration, and the use of mostly standard-plane still images of the fetal heart in AI models. Our aim is to develop AI models that could support clinicians in detecting fetal CHDs in real time, particularly in nonspecialist or low-resource settings where fetal echocardiography expertise is not readily available. We have designed the Clinical Artificial Intelligence Fetal Echocardiography (CAIFE) study as an international multicentre multidisciplinary collaboration led by a clinical and an engineering team at the University of Oxford. The study involves five hospital sites across two countries for data collection: Oxford, UK (n=1), London, UK (n=3) and Southport, Australia (n=1).
We plan to curate 14 000 retrospective ultrasound scans of fetuses with normal hearts (n=13 000) and fetuses with CHDs (n=1000), as well as 2400 prospective ultrasound cardiac scans, including the proposed research-specific CAIFE 10 s video sweeps, from fetuses with normal hearts (n=2000) and fetuses diagnosed with major CHDs (n=400), giving a total of 16 400 retrospective and prospective ultrasound scans from the participating hospital sites. We will build, train and validate computational models capable of differentiating between normal fetal hearts and those diagnosed with CHDs, and of recognising specific types of CHDs. Data will be analysed using statistical metrics, namely sensitivity, specificity and accuracy, including positive and negative predictive values for each outcome, compared with manual assessment. We will disseminate the findings through regional, national and international conferences and through peer-reviewed journals. The study was approved by the Health Research Authority, Health and Care Research Wales and the Research Ethics Committee (Ref: 23/EM/0023; IRAS Project ID: 317510) on 8 March 2023. All collaborating hospitals have obtained local trust research and development approvals.
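As a rough illustration of the evaluation metrics this protocol names (sensitivity, specificity, accuracy, and positive and negative predictive values), the sketch below computes them from confusion-matrix counts. All numbers are hypothetical, not CAIFE results.

```python
# Illustrative only: standard diagnostic-accuracy metrics from
# confusion-matrix counts (tp/fp/tn/fn are hypothetical).

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return the diagnostic metrics named in the protocol."""
    return {
        "sensitivity": tp / (tp + fn),           # true-positive rate
        "specificity": tn / (tn + fp),           # true-negative rate
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "ppv":         tp / (tp + fp),           # positive predictive value
        "npv":         tn / (tn + fn),           # negative predictive value
    }

# Example: a hypothetical screen of 1000 scans with 100 true CHD cases.
m = diagnostic_metrics(tp=85, fp=45, tn=855, fn=15)
print({k: round(v, 3) for k, v in m.items()})
```

Note how PPV drops well below sensitivity when the condition is rare, which is one reason large, well-curated datasets matter for rare diseases like CHD.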

Dual energy CT-based Radiomics for identification of myocardial focal scar and artificial beam-hardening.

Zeng L, Hu F, Qin P, Jia T, Lu L, Yang Z, Zhou X, Qiu Y, Luo L, Chen B, Jin L, Tang W, Wang Y, Zhou F, Liu T, Wang A, Zhou Z, Guo X, Zheng Z, Fan X, Xu J, Xiao L, Liu Q, Guan W, Chen F, Wang J, Li S, Chen J, Pan C

pubmed · Jun 5 2025
Computed tomography is an inadequate method for detecting myocardial focal scar (MFS) because its moderate density resolution is insufficient for distinguishing MFS from artificial beam-hardening (BH). Virtual monochromatic images (VMIs) of dual-energy coronary computed tomography angiography (DECCTA) provide a variety of diagnostic information with significant potential for detecting myocardial lesions. The aim of this study was to assess whether radiomics analysis of VMIs from DECCTA can help distinguish MFS from BH. A prospective cohort of patients with suspected old myocardial infarction was assembled at two centers between January 2021 and June 2024. MFS and BH segmentation, radiomics feature extraction and feature selection were performed on VMIs, and four machine learning classifiers were constructed using the strongest selected features. Subsequently, an independent validation was conducted, and a subjective diagnosis of the validation set was provided by a radiologist. The AUC was used to assess the performance of the radiomics models. The training set included 57 patients from center 1 (mean age, 54 ± 9 years; 55 men), and the external validation set included 10 patients from center 2 (mean age, 59 ± 10 years; 9 men). The radiomics models exhibited the highest AUC value of 0.937 (at 130 keV VMIs), while the radiologist demonstrated the highest AUC value of 0.734 (at 40 keV VMIs). The integration of radiomic features derived from VMIs of DECCTA with machine learning algorithms has the potential to improve the efficiency of distinguishing MFS from BH.
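The AUC values compared above (0.937 for the radiomics models vs. 0.734 for the radiologist) can be computed from classifier scores with the Mann-Whitney formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with hypothetical scores:

```python
# Illustrative only: AUC as a rank-sum probability. Labels and scores
# below are hypothetical, standing in for MFS (1) vs. BH (0) segments.

def auc_score(labels, scores):
    """AUC = P(score_pos > score_neg); ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.75, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auc_score(labels, scores))  # 15 of 16 pairs ranked correctly -> 0.9375
```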

Prediction of impulse control disorders in Parkinson's disease: a longitudinal machine learning study

Vamvakas, A., van Balkom, T., van Wingen, G., Booij, J., Weintraub, D., Berendse, H. W., van den Heuvel, O. A., Vriend, C.

medrxiv preprint · Jun 5 2025
Background: Impulse control disorders (ICD) in Parkinson's disease (PD) patients mainly occur as adverse effects of dopamine replacement therapy. Despite several known risk factors associated with ICD development, it cannot yet be accurately predicted at PD diagnosis. Objectives: We aimed to investigate the predictability of incident ICD from baseline demographic, clinical, dopamine transporter single photon emission computed tomography (DAT-SPECT), and genetic variables. Methods: We used demographic and clinical data of medication-free PD patients from two longitudinal datasets: the Parkinson's Progression Markers Initiative (PPMI) (n=311) and Amsterdam UMC (n=72). We extracted radiomic and latent features from DAT-SPECT. We used single nucleotide polymorphisms (SNPs) from PPMI's NeuroX and exome sequencing data. Four machine learning classifiers were trained on combinations of the input feature sets to predict incident ICD at any follow-up assessment. Classification performance was measured with 10x5-fold cross-validation. Results: ICD prevalence at any follow-up was 0.32. The highest performance in predicting incident ICD (AUC=0.66) was achieved by the models trained on clinical features only. Anxiety severity and age of PD onset were identified as the most important features. Performance did not improve with the addition of features from DAT-SPECT or SNPs. We observed significantly higher performance (AUC=0.74) when classifying patients who developed ICD within four years of diagnosis compared with those who tested negative for seven or more years. Conclusions: Prediction accuracy for later ICD development at the time of PD diagnosis is limited; however, it increases for shorter time-to-event predictions. Neither DAT-SPECT nor genetic data improve the predictability obtained using demographic and clinical variables alone.
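The 10x5-fold cross-validation scheme used above amounts to 5-fold splitting repeated 10 times with reshuffling, giving 50 train/test partitions over which performance is averaged. A minimal sketch (cohort size taken from the abstract; the study's actual stratification may differ):

```python
import random

# Illustrative only: generate repeats x k-fold train/test index splits.

def repeated_kfold(n_samples, k=5, repeats=10, seed=0):
    """Yield (train_idx, test_idx) pairs for repeated k-fold CV."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]  # k disjoint folds per repeat
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

splits = list(repeated_kfold(n_samples=311))  # PPMI cohort size
print(len(splits))  # 50 partitions (10 repeats x 5 folds)
```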

Noise-induced self-supervised hybrid UNet transformer for ischemic stroke segmentation with limited data annotations.

Soh WK, Rajapakse JC

pubmed · Jun 5 2025
We extend the Hybrid Unet Transformer (HUT) foundation model, which combines the advantages of CNN and Transformer architectures, with a noisy self-supervised approach and demonstrate it on an ischemic stroke lesion segmentation task. We introduce a self-supervised approach using a noise anchor and show that it can outperform a supervised approach when only a limited amount of annotated data is available. We supplement our pre-training process with an additional unannotated CT perfusion dataset to validate our approach. Compared to the supervised version, the noisy self-supervised HUT (HUT-NSS) outperforms its counterpart by a margin of 2.4% in dice score. HUT-NSS, on average, gained a further 7.2% in dice score and 28.1% in Hausdorff distance over the state-of-the-art network USSLNet on the CT perfusion scans of the Ischemic Stroke Lesion Segmentation (ISLES2018) dataset. With limited annotated data, HUT-NSS gained 7.87% in dice score over USSLNet when we used 50% of the annotated data for training, 7.47% when we used 10%, and 5.34% when we used 1%. The code is available at https://github.com/vicsohntu/HUTNSS_CT .
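The dice score used throughout these comparisons measures overlap between a predicted lesion mask and the ground-truth mask. A toy sketch on flat binary lists standing in for voxel grids (not the paper's implementation):

```python
# Illustrative only: Dice = 2|P ∩ T| / (|P| + |T|) for binary masks.

def dice_score(pred, truth):
    """Overlap between two binary masks; 1.0 means identical."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 1, 0, 0]  # hypothetical predicted lesion mask
truth = [1, 1, 0, 0, 0, 1, 1, 0]  # hypothetical ground-truth mask
print(dice_score(pred, truth))    # 2*3 / (4+4) = 0.75
```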

Long-Term Prognostic Implications of Thoracic Aortic Calcification on CT Using Artificial Intelligence-Based Quantification in a Screening Population: A Two-Center Study.

Lee JE, Kim NY, Kim YH, Kwon Y, Kim S, Han K, Suh YJ

pubmed · Jun 4 2025
BACKGROUND. The importance of including thoracic aortic calcification (TAC), in addition to coronary artery calcification (CAC), in prognostic assessments has been difficult to determine, partly due to the greater challenge of performing standardized TAC assessments. OBJECTIVE. The purpose of this study was to evaluate the long-term prognostic implications of TAC assessed using artificial intelligence (AI)-based quantification on routine chest CT in a screening population. METHODS. This retrospective study included 7404 asymptomatic individuals (median age, 53.9 years; 5875 men, 1529 women) who underwent nongated noncontrast chest CT as part of a national general health screening program at one of two centers from January 2007 to December 2014. A commercial AI program quantified TAC and CAC using Agatston scores, which were stratified into categories. Radiologists manually quantified TAC and CAC in 2567 examinations. The role of AI-based TAC categories in predicting major adverse cardiovascular events (MACE) and all-cause mortality (ACM), independent of AI-based CAC categories as well as clinical and laboratory variables, was assessed by multivariable Cox proportional hazards models using data from both centers, and by concordance statistics from prognostic models developed and tested using center 1 and center 2 data, respectively. RESULTS. AI-based and manual quantification showed excellent agreement for TAC and CAC (concordance correlation coefficient: 0.967 and 0.895, respectively). The median observation periods were 7.5 years for MACE (383 events in 5342 individuals) and 11.0 years for ACM (292 events in 7404 individuals). When adjusted for AI-based CAC categories along with clinical and laboratory variables, the risk for MACE was not independently associated with any AI-based TAC category; the risk of ACM was independently associated with an AI-based TAC score of 1001-3000 (HR = 2.14, p = .02) but not with other AI-based TAC categories. When prognostic models were tested, the addition of AI-based TAC categories did not improve model fit relative to models containing clinical variables, laboratory variables, and AI-based CAC categories for MACE (concordance index [C-index] = 0.760-0.760, p = .81) or ACM (C-index = 0.823-0.830, p = .32). CONCLUSION. The addition of TAC to models containing CAC provided limited improvement in risk prediction in an asymptomatic screening population undergoing CT. CLINICAL IMPACT. AI-based quantification provides a standardized approach for better understanding the potential role of TAC as a predictive imaging biomarker.
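Stratifying Agatston scores into categories, as the AI program does above, is a simple binning step. In the sketch below the CAC cutpoints (0, 1-100, 101-400, >400) are the conventional Agatston categories; the TAC cutpoints are assumptions chosen only to reproduce the 1001-3000 band mentioned in the abstract, not the study's actual scheme:

```python
import bisect

# Illustrative only: map a continuous Agatston score onto ordered
# category labels using inclusive upper bounds.

def score_category(score, upper_bounds, labels):
    """Return the label of the bin containing `score`."""
    return labels[bisect.bisect_left(upper_bounds, score)]

CAC_BOUNDS, CAC_LABELS = [0, 100, 400], ["0", "1-100", "101-400", ">400"]
# Hypothetical TAC scheme (only the 1001-3000 band is from the abstract):
TAC_BOUNDS, TAC_LABELS = [0, 1000, 3000], ["0", "1-1000", "1001-3000", ">3000"]

print(score_category(250, CAC_BOUNDS, CAC_LABELS))   # 101-400
print(score_category(1500, TAC_BOUNDS, TAC_LABELS))  # 1001-3000
```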

Advancing prenatal healthcare by explainable AI enhanced fetal ultrasound image segmentation using U-Net++ with attention mechanisms.

Singh R, Gupta S, Mohamed HG, Bharany S, Rehman AU, Ghadi YY, Hussen S

pubmed · Jun 4 2025
Prenatal healthcare requires accurate automated techniques for fetal ultrasound image segmentation, enabling standardized evaluation of fetal development while minimizing time-intensive manual processes that are prone to operator variability. This research develops a segmentation framework based on U-Net++ with a ResNet backbone, incorporating attention components to enhance feature extraction in low-contrast, noisy ultrasound data. The model leverages the nested skip connections of U-Net++ and the residual learning of ResNet-34 to achieve state-of-the-art segmentation accuracy. Evaluation on a large fetal ultrasound image collection yielded superior results: a 97.52% Dice coefficient, 95.15% Intersection over Union (IoU), and a 3.91 mm Hausdorff distance. The pipeline integrates Grad-CAM++ to explain the model's decisions, enhancing clinical utility and trust. This explainability component lets medical professionals study how the model functions, producing clear and verifiable segmentation outputs for better overall reliability. The framework bridges the gap between AI automation and clinical interpretability by showing the important areas that affect predictions. The research shows that deep learning combined with Explainable AI (XAI) can generate medical imaging solutions with high accuracy, and the proposed system's ability to deliver a sophisticated prenatal diagnostic instrument suggests readiness for clinical workflows.
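The Hausdorff distance reported above (3.91 mm) is the largest distance from any point on one segmentation boundary to the nearest point on the other. A toy sketch on small 2-D point sets (real evaluations run on dense contour or surface points):

```python
import math

# Illustrative only: symmetric Hausdorff distance between two point sets.

def hausdorff(a, b):
    """max over both directions of the farthest nearest-neighbour distance."""
    def directed(u, v):
        return max(min(math.dist(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

contour_pred  = [(0, 0), (1, 0), (1, 1)]  # hypothetical predicted boundary
contour_truth = [(0, 0), (1, 0), (1, 2)]  # hypothetical ground truth
print(hausdorff(contour_pred, contour_truth))  # 1.0
```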

Enhanced risk stratification for stage II colorectal cancer using deep learning-based CT classifier and pathological markers to optimize adjuvant therapy decision.

Huang YQ, Chen XB, Cui YF, Yang F, Huang SX, Li ZH, Ying YJ, Li SY, Li MH, Gao P, Wu ZQ, Wen G, Wang ZS, Wang HX, Hong MP, Diao WJ, Chen XY, Hou KQ, Zhang R, Hou J, Fang Z, Wang ZN, Mao Y, Wee L, Liu ZY

pubmed · Jun 4 2025
Current risk stratification for stage II colorectal cancer (CRC) has limited accuracy in identifying patients who would benefit from adjuvant chemotherapy, leading to potential over- or under-treatment. We aimed to develop a more precise risk stratification system by integrating artificial intelligence-based imaging analysis with pathological markers. We analyzed 2,992 stage II CRC patients from 12 centers. A deep learning classifier (Swin Transformer Assisted Risk-stratification for CRC, STAR-CRC) was developed using multi-planar CT images from 1,587 patients (training:internal validation = 7:3) and validated in 1,405 patients from 8 independent centers; it stratified patients into low-, uncertain-, and high-risk groups. To further refine the uncertain-risk group, a composite score based on pathological markers (pT4 stage, number of lymph nodes sampled, perineural invasion, and lymphovascular invasion) was applied, forming the intelligent risk integration system for stage II CRC (IRIS-CRC). IRIS-CRC was compared against the guideline-based risk stratification system (GRSS-CRC) for prediction performance in the validation dataset. IRIS-CRC stratified patients into four prognostic groups with distinct 3-year disease-free survival rates (≥95%, 95-75%, 75-55%, ≤55%). Upon external validation, compared to GRSS-CRC, IRIS-CRC downstaged 27.1% of high-risk patients into the Favorable group, while upstaging 6.5% of low-risk patients into the Very Poor prognosis group, who might require more aggressive treatment. In the GRSS-CRC intermediate-risk group of the external validation dataset, IRIS-CRC reclassified 40.1% as Favorable prognosis and 7.0% as Very Poor prognosis. IRIS-CRC's performance generalized across both chemotherapy and non-chemotherapy cohorts.
IRIS-CRC offers a more precise and personalized risk assessment than current guideline-based risk factors, potentially sparing low-risk patients from unnecessary adjuvant chemotherapy while identifying high-risk individuals for more aggressive treatment. This novel approach holds promise for improving clinical decision-making and outcomes in stage II CRC.
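The two-stage logic described above, where a deep learning classifier assigns low/uncertain/high risk and the uncertain group is refined with a pathological composite score, can be sketched as follows. The marker weights, the lymph-node cutoff of 12, and the middle group names are hypothetical; only the four markers, the Favorable and Very Poor labels, and the four-group structure come from the abstract:

```python
# Illustrative only: all thresholds and weights below are assumptions,
# not the published IRIS-CRC scoring rules.

def pathology_score(pt4, ln_sampled, pni, lvi):
    """Composite of the four pathological markers (weights assumed equal)."""
    return int(pt4) + int(ln_sampled < 12) + int(pni) + int(lvi)

def iris_crc_group(dl_risk, pt4=False, ln_sampled=20, pni=False, lvi=False):
    """Map deep-learning risk plus pathology onto one of four groups."""
    if dl_risk == "low":
        return "Favorable"
    if dl_risk == "high":
        return "Very Poor"
    # uncertain-risk cases fall through to the pathological composite
    s = pathology_score(pt4, ln_sampled, pni, lvi)
    return ["Favorable", "Intermediate", "Poor", "Poor", "Very Poor"][s]

print(iris_crc_group("uncertain", pt4=True, pni=True))  # two markers -> Poor
```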

Deep learning-based cone-beam CT motion compensation with single-view temporal resolution.

Maier J, Sawall S, Arheit M, Paysan P, Kachelrieß M

pubmed · Jun 4 2025
Cone-beam CT (CBCT) scans that are affected by motion often require motion compensation to reduce artifacts or to reconstruct 4D (3D+time) representations of the patient. To do so, most existing strategies rely on some sort of gating strategy that sorts the acquired projections into motion bins. Subsequently, these bins can be reconstructed individually before further post-processing may be applied to improve image quality. While this concept is useful for periodic motion patterns, it fails in case of non-periodic motion as observed, for example, in irregularly breathing patients. To address this issue and to increase temporal resolution, we propose the deep single angle-based motion compensation (SAMoCo). To avoid gating, and therefore its downsides, the deep SAMoCo trains a U-net-like network to predict displacement vector fields (DVFs) representing the motion that occurred between any two given time points of the scan. To do so, 4D clinical CT scans are used to simulate 4D CBCT scans as well as the corresponding ground truth DVFs that map between the different motion states of the scan. The network is then trained to predict these DVFs as a function of the respective projection views and an initial 3D reconstruction. Once the network is trained, an arbitrary motion state corresponding to a certain projection view of the scan can be recovered by estimating DVFs from any other state or view and by considering them during reconstruction. Applied to 4D CBCT simulations of breathing patients, the deep SAMoCo provides high-quality reconstructions for periodic and non-periodic motion. Here, the deviations with respect to the ground truth are less than 27 HU on average, while respiratory motion, or the diaphragm position, can be resolved with an accuracy of about 0.75 mm. Similar results were obtained for real measurements where a high correlation with external motion monitoring signals could be observed, even in patients with highly irregular respiration. 
The ability to estimate DVFs as a function of two arbitrary projection views and an initial 3D reconstruction makes deep SAMoCo applicable to arbitrary motion patterns with single-view temporal resolution. Therefore, the deep SAMoCo is particularly useful for cases with unsteady breathing, compensation of residual motion during a breath-hold scan, or scans with fast gantry rotation times in which the data acquisition only covers a very limited number of breathing cycles. Furthermore, not requiring gating signals may simplify the clinical workflow and reduces the time needed for patient preparation.
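The core operation behind SAMoCo's motion compensation, resampling image content according to a displacement vector field (DVF), can be illustrated on a tiny 1-D "image" with nearest-neighbour sampling. The real method predicts dense 3-D DVFs with a U-net-like network; this toy sketch only shows what applying a DVF means:

```python
# Illustrative only: pull-back warp of a 1-D signal by a per-voxel DVF.

def warp_1d(image, dvf):
    """out[i] = image[i + dvf[i]], nearest neighbour, clamped to the grid."""
    n = len(image)
    out = []
    for i, d in enumerate(dvf):
        src = min(max(int(round(i + d)), 0), n - 1)  # clamp to valid indices
        out.append(image[src])
    return out

image = [0, 0, 10, 0, 0]    # a "structure" at position 2
dvf   = [1, 1, 1, 1, 1]     # each voxel samples one step to the right
print(warp_1d(image, dvf))  # the structure shifts one voxel to the left
```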

Computed tomography-based radiomics model for predicting station 4 lymph node metastasis in non-small cell lung cancer.

Kang Y, Li M, Xing X, Qian K, Liu H, Qi Y, Liu Y, Cui Y, Zhang H

pubmed · Jun 4 2025
This study aimed to develop and validate machine learning models for preoperative identification of metastasis to station 4 mediastinal lymph nodes (MLNM) in non-small cell lung cancer (NSCLC) patients at pathological N0-N2 (pN0-pN2) stage, thereby enhancing the precision of clinical decision-making. We included a total of 356 NSCLC patients at pN0-pN2 stage, divided into training (n = 207), internal test (n = 90), and independent test (n = 59) sets. Regions of interest (ROIs) for station 4 mediastinal lymph nodes (LNs) were semi-automatically segmented on venous-phase computed tomography (CT) images for radiomics feature extraction. Least absolute shrinkage and selection operator (LASSO) regression was used to select features with non-zero coefficients. Four machine learning algorithms, decision tree (DT), logistic regression (LR), random forest (RF), and support vector machine (SVM), were employed to construct radiomics models. Clinical predictors were identified through univariate and multivariate logistic regression and subsequently integrated with radiomics features to develop combined models. Model performance was evaluated using receiver operating characteristic (ROC) analysis, calibration curves, decision curve analysis (DCA), and DeLong's test. Of 1721 radiomics features, eight were selected using LASSO regression. The RF-based combined model exhibited the strongest discriminative power, with an area under the curve (AUC) of 0.934 for the training set and 0.889 for the internal test set. The calibration curve and DCA further indicated the superior performance of the RF-based combined model, and the independent test set verified the model's robustness. The combined model based on RF, integrating radiomics and clinical features, effectively and non-invasively identifies metastasis to the station 4 mediastinal LNs in NSCLC patients at pN0-pN2 stage.
This model serves as an effective auxiliary tool for clinical decision-making and has the potential to optimize treatment strategies and improve prognostic assessment for pN0-pN2 patients.
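The decision curve analysis (DCA) used above summarizes clinical utility as net benefit at a chosen probability threshold p_t: net_benefit = TP/n - (FP/n) * p_t / (1 - p_t), trading true positives against false positives. A minimal sketch with hypothetical counts:

```python
# Illustrative only: net benefit of a model at decision threshold p_t.
# Counts below are hypothetical, not from this study.

def net_benefit(tp, fp, n, p_t):
    """DCA net benefit: benefit of true positives minus harm of false ones."""
    return tp / n - (fp / n) * (p_t / (1 - p_t))

# A model flagging 40 true and 20 false station-4 metastases in 200 patients,
# evaluated at a 20% treatment threshold:
print(round(net_benefit(tp=40, fp=20, n=200, p_t=0.2), 3))  # 0.175
```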
