Page 155 of 2092082 results

Construction of a Prediction Model for Adverse Perinatal Outcomes in Foetal Growth Restriction Based on a Machine Learning Algorithm: A Retrospective Study.

Meng X, Wang L, Wu M, Zhang N, Li X, Wu Q

pubmed · May 23 2025
To create and validate a machine learning (ML)-based model for predicting adverse perinatal outcomes (APO) in foetal growth restriction (FGR) at diagnosis. A retrospective study. Multi-centre in China. Pregnancies affected by FGR. We enrolled singleton foetuses with a prenatal diagnosis of FGR who were admitted between January 2021 and November 2023. A total of 361 pregnancies from Beijing Obstetrics and Gynecology Hospital were used as the training set and the internal test set, while data from 50 pregnancies from Haidian Maternal and Child Health Hospital were used as the external test set. Feature screening was performed using random forest (RF), the Least Absolute Shrinkage and Selection Operator (LASSO) and logistic regression (LR). Subsequently, six ML methods, including Stacking, were used to construct models to predict the APO of FGR. Model performance was evaluated with indicators such as the area under the receiver operating characteristic curve (AUROC). Shapley Additive Explanations (SHAP) analysis was used to rank model features and explain the final model. Mean ± SD gestational age at diagnosis was 32.3 ± 4.8 weeks in the absent-APO group and 27.3 ± 3.7 weeks in the present-APO group. Women in the present-APO group had a higher rate of pregnancy-related hypertension (74.8% vs. 18.8%, p < 0.001). Among 17 candidate predictors (including maternal characteristics, maternal comorbidities, obstetric characteristics and ultrasound parameters), the integration of RF, LASSO and LR methodologies identified maternal body mass index, hypertension, gestational age at diagnosis of FGR, estimated foetal weight (EFW) z score, EFW growth velocity and abnormal umbilical artery Doppler (defined as a pulsatility index above the 95th percentile or absent/reversed diastolic flow) as significant predictors.
The Stacking model demonstrated good performance in both the internal test set [AUROC: 0.861, 95% confidence interval (CI): 0.838-0.896] and the external test set [AUROC: 0.906, 95% CI: 0.875-0.947]. The calibration curve showed high agreement between predicted and observed risks; the Hosmer-Lemeshow test gave p = 0.387 and p = 0.825 for the internal and external test sets, respectively. The ML algorithm, which integrates maternal clinical factors and ultrasound parameters, demonstrates good predictive value for APO in FGR at diagnosis. This suggests that ML techniques may be a valid approach for the early detection of FGR pregnancies at high risk of APO.
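A stacking ensemble of this kind can be sketched with scikit-learn; the synthetic features below merely stand in for the six predictors identified above (BMI, hypertension, gestational age at diagnosis, EFW z score, EFW growth velocity, abnormal UA Doppler), and the base learners are illustrative choices, not necessarily the paper's six methods.

```python
# Minimal stacking-ensemble sketch (scikit-learn). Synthetic data stands in
# for the six clinical/ultrasound predictors named in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners feed out-of-fold predictions to a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000), cv=5)
stack.fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
```

The meta-learner sees only the base learners' cross-validated predictions, which is what lets stacking combine heterogeneous models without overfitting to their in-sample outputs.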

Self-supervised feature learning for cardiac Cine MR image reconstruction.

Xu S, Fruh M, Hammernik K, Lingg A, Kubler J, Krumm P, Rueckert D, Gatidis S, Kustner T

pubmed · May 23 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitations of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully sampled images for supervised learning, which is challenging in practice given long acquisition times under respiratory or organ motion. Moreover, nearly all fully sampled datasets are obtained from conventional reconstruction of mildly accelerated datasets, thus potentially biasing the achievable performance. The numerous undersampled datasets with different accelerations in clinical practice hence remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac cine dataset comprising 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and achieves performance comparable to or better than supervised learning at up to 16× retrospective undersampling. The feature learning strategy effectively extracts global representations, which have proven beneficial for removing artifacts and increasing generalization ability during reconstruction.
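The problem setting can be illustrated without the network itself: retrospectively undersampling k-space and reconstructing by zero-filling produces the aliased baseline images that learned reconstructions such as SSFL-Recon improve upon. A toy NumPy sketch with a synthetic phantom (not the paper's data or method):

```python
# Toy retrospective undersampling + zero-filled reconstruction in NumPy.
# This is the artifact-laden baseline that learned MRI reconstruction aims
# to improve, not the SSFL-Recon method itself.
import numpy as np

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                                 # simple square phantom
kspace = np.fft.fftshift(np.fft.fft2(img))              # centered k-space

# Keep every 4th phase-encode line plus a fully sampled center block
# (roughly 4x acceleration with a calibration region).
mask = np.zeros((64, 64), bool)
mask[::4, :] = True
mask[28:36, :] = True

zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
nrmse = np.linalg.norm(zero_filled - img) / np.linalg.norm(img)
```

The residual error (`nrmse`) quantifies the aliasing that a reconstruction network would be trained to remove.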

AMVLM: Alignment-Multiplicity Aware Vision-Language Model for Semi-Supervised Medical Image Segmentation.

Pan Q, Li Z, Qiao W, Lou J, Yang Q, Yang G, Ji B

pubmed · May 23 2025
Low-quality pseudo labels pose a significant obstacle in semi-supervised medical image segmentation (SSMIS), impeding consistency learning on unlabeled data. Leveraging vision-language models (VLMs) holds promise for improving pseudo-label quality by employing textual prompts to delineate segmentation regions, but it faces the challenge of cross-modal alignment uncertainty due to multiple correspondences (multiple images/texts tend to correspond to one text/image). Existing VLMs address this challenge by modeling semantics as distributions, but such distributions lead to semantic degradation. To address these problems, we propose the Alignment-Multiplicity Aware Vision-Language Model (AMVLM), a new VLM pre-training paradigm with two novel similarity metric strategies. (i) Cross-modal Similarity Supervision (CSS) proposes a probability distribution transformer to supervise similarity scores across fine-granularity semantics by measuring cross-modal distribution disparities, thus learning cross-modal multiple alignments. (ii) Intra-modal Contrastive Learning (ICL) takes into account the similarity metric of coarse-fine granularity information within each modality to encourage cross-modal semantic consistency. Furthermore, using the pretrained AMVLM, we propose a pioneering text-guided SSMIS network to compensate for the quality deficiencies of pseudo labels. This network incorporates a text mask generator to produce multimodal supervision information, enhancing pseudo-label quality and the model's consistency learning. Extensive experimentation validates the efficacy of our AMVLM-driven SSMIS, showing superior performance across four publicly available datasets. The code will be available at: https://github.com/QingtaoPan/AMVLM.
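The CSS/ICL losses are not specified in detail here; the standard starting point for cross-modal image-text alignment is the InfoNCE contrastive loss, sketched below in NumPy on random embeddings (a didactic illustration of the baseline objective, not the AMVLM formulation):

```python
# Generic InfoNCE cross-modal contrastive loss in NumPy -- the standard
# image-text alignment objective that losses like CSS/ICL build on.
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Paired rows of img_emb/txt_emb are positives; all other pairs are negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # cosine-similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))         # positives sit on the diagonal

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 32))
loss_aligned = info_nce(anchors, anchors + 0.01 * rng.normal(size=(8, 32)))
loss_random = info_nce(anchors, rng.normal(size=(8, 32)))
```

Well-aligned pairs drive the loss toward zero, while unrelated pairs leave it near log(batch size); the multiple-correspondence problem the paper targets arises exactly because InfoNCE assumes a single positive per row.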

Non-invasive arterial input function estimation using an MRA atlas and machine learning.

Vashistha R, Moradi H, Hammond A, O'Brien K, Rominger A, Sari H, Shi K, Vegh V, Reutens D

pubmed · May 23 2025
Quantifying biological parameters of interest through dynamic positron emission tomography (PET) requires an arterial input function (AIF), conventionally obtained from arterial blood samples. The AIF can also be non-invasively estimated from blood pools in PET images, often identified using co-registered MRI images. Deploying methods without blood sampling or MRI generally requires total-body PET systems with a long axial field-of-view (LAFOV) that includes a large cardiovascular blood pool. However, the number of such systems in clinical use is currently much smaller than that of short axial field-of-view (SAFOV) scanners. We propose a data-driven approach to AIF estimation for SAFOV PET scanners that is non-invasive and requires neither MRI nor blood sampling, using only brain PET scans. The proposed method was validated using dynamic <sup>18</sup>F-fluorodeoxyglucose ([<sup>18</sup>F]FDG) total-body PET data from 10 subjects. A variational inference-based machine learning approach was employed to correct for peak activity. The prior was estimated using a probabilistic vascular MRI atlas registered to each subject's PET image to identify cerebral arteries in the brain. The AIF estimated using brain PET images (IDIF-Brain) was compared to that obtained using data from the descending aorta of the heart (IDIF-DA). Kinetic rate constants (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>) and net radiotracer influx (K<sub>i</sub>) for both cases were computed and compared. Qualitatively, the shape of IDIF-Brain matched that of IDIF-DA, capturing information on both the peak and tail of the AIF. The areas under the curve (AUC) of IDIF-Brain and IDIF-DA were similar, with an average relative error of 9%. The mean Pearson correlations between kinetic parameters (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>) estimated with IDIF-DA and IDIF-Brain for each voxel were between 0.92 and 0.99 in all subjects, and for K<sub>i</sub> they were above 0.97.
This study introduces a new approach for AIF estimation in dynamic PET using brain PET images, a probabilistic vascular atlas, and machine learning techniques. The findings demonstrate the feasibility of non-invasive and subject-specific AIF estimation for SAFOV scanners.
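The kinetic step can be illustrated with Patlak graphical analysis, which estimates the net influx constant Ki as the late-time slope of the tissue-to-plasma ratio plotted against the normalized integrated input. The sketch below uses a synthetic AIF and an exactly linear tissue model, so it shows only the graphical estimate, not the paper's variational-inference pipeline:

```python
# Patlak graphical analysis: estimate the net influx constant Ki from an
# input function and a tissue time-activity curve. Synthetic data with a
# known Ki; a didactic sketch, not the paper's method.
import numpy as np

t = np.linspace(1, 60, 60)                  # minutes
aif = 10 * t * np.exp(-t / 4) + 0.5         # synthetic plasma input (a.u.)
# Trapezoidal cumulative integral of the AIF.
cum_aif = np.concatenate(
    ([0.0], np.cumsum(np.diff(t) * 0.5 * (aif[1:] + aif[:-1]))))

ki_true, v0 = 0.05, 0.3
tissue = ki_true * cum_aif + v0 * aif       # irreversible-uptake model

# Patlak coordinates: y = tissue/aif vs x = integral(aif)/aif; slope = Ki.
x, y = cum_aif / aif, tissue / aif
late = t > 20                               # fit the late, linear portion
ki_est, intercept = np.polyfit(x[late], y[late], 1)
```

Because the synthetic tissue curve is exactly linear in Patlak coordinates, the fitted slope recovers `ki_true` to machine precision; with real, noisy data the late-time window choice matters.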

COVID-19CT+: A public dataset of CT images for COVID-19 retrospective analysis.

Sun Y, Du T, Wang B, Rahaman MM, Wang X, Huang X, Jiang T, Grzegorzek M, Sun H, Xu J, Li C

pubmed · May 23 2025
Background and objective: COVID-19 is considered the biggest global health disaster of the 21st century, with an enormous impact on the world.
Methods: This paper publishes a publicly available dataset of CT images of multiple types of pneumonia (COVID-19CT+). Specifically, the dataset contains 409,619 CT images of 1333 patients, with subset-A containing 312 community-acquired pneumonia cases and subset-B containing 1021 COVID-19 cases. To demonstrate that classification methods from different periods perform differently on COVID-19CT+, we selected 13 classical machine learning classifiers and 5 deep learning classifiers for the image classification task.
Results: Two sets of experiments were conducted using traditional machine learning and deep learning methods: the first classifies COVID-19 in subset-B versus COVID-19 white lung disease, and the second classifies community-acquired pneumonia in subset-A versus COVID-19 in subset-B. In the first set of experiments, the accuracy of traditional machine learning reached a maximum of 97.3% and a minimum of only 62.6%, while deep learning algorithms reached a maximum of 97.9% and a minimum of 85.7%. In the second set, traditional machine learning reached a high of 94.6% accuracy and a low of 56.8%, while deep learning reached a high of 91.9% and a low of 86.3%.
Conclusions: COVID-19CT+ covers a large number of CT images of patients with COVID-19 and community-acquired pneumonia and is one of the largest datasets available. We expect this dataset to attract more researchers to explore new automated diagnostic algorithms that improve the diagnostic accuracy and efficiency of COVID-19.
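The benchmarking protocol, training several classifiers on one split and reporting the accuracy spread, can be sketched as follows; the features are synthetic and the three classifiers are illustrative, not the paper's 13 classical and 5 deep models:

```python
# Sketch of a classifier benchmark: fit several models on the same split and
# report the accuracy spread. Synthetic features stand in for CT-derived ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"LR": LogisticRegression(max_iter=1000),
          "SVM": SVC(),
          "kNN": KNeighborsClassifier()}
acc = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
       for name, m in models.items()}
best, worst = max(acc.values()), min(acc.values())
```

Reporting both the best and worst accuracy, as the abstract does, exposes how sensitive a dataset's apparent difficulty is to the choice of classifier.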

Detection, Classification, and Segmentation of Rib Fractures From CT Data Using Deep Learning Models: A Review of Literature and Pooled Analysis.

Den Hengst S, Borren N, Van Lieshout EMM, Doornberg JN, Van Walsum T, Wijffels MME, Verhofstad MHJ

pubmed · May 23 2025
Trauma-induced rib fractures are common injuries. The gold standard for diagnosing rib fractures is computed tomography (CT), but its sensitivity in the acute setting is low, and interpreting CT slices is labor-intensive. This has led to the development of new diagnostic approaches leveraging deep learning (DL) models. This systematic review and pooled analysis aimed to compare the performance of DL models in the detection, segmentation, and classification of rib fractures based on CT scans. A literature search was performed across various databases for studies describing DL models that detect, segment, or classify rib fractures from CT data. Reported performance metrics included sensitivity, false-positive rate, F1-score, precision, accuracy, and mean average precision. A meta-analysis was performed on the sensitivity scores to compare the DL models with clinicians. Of the 323 identified records, 25 were included. Twenty-one studies reported on detection, four on segmentation, and ten on classification. Twenty studies had adequate data for meta-analysis. Gold-standard labels were provided by clinicians (radiologists and orthopedic surgeons). For detecting rib fractures, DL models had a higher sensitivity (86.7%; 95% CI: 82.6%-90.2%) than clinicians (75.4%; 95% CI: 68.1%-82.1%). In classification, the sensitivity of DL models for displaced rib fractures (97.3%; 95% CI: 95.6%-98.5%) was significantly better than that of clinicians (88.2%; 95% CI: 84.8%-91.3%). DL models for rib fracture detection and classification achieved promising results. With better sensitivities than clinicians for detecting and classifying displaced rib fractures, future work should focus on implementing DL models in daily clinical practice. Level III; systematic review and pooled analysis.
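Pooled sensitivities like those above typically come from meta-analytic machinery of this general shape: logit-transform each study's sensitivity, estimate between-study variance, and back-transform the weighted mean. A NumPy sketch with invented study counts (the DerSimonian-Laird estimator is a common choice; the review's exact model may differ):

```python
# Random-effects pooling of per-study sensitivities (logit transform +
# DerSimonian-Laird between-study variance). Study counts are invented
# purely for illustration.
import numpy as np

tp = np.array([80, 120, 45, 200])    # hypothetical true positives per study
fn = np.array([12, 20, 10, 35])      # hypothetical false negatives per study

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                # variance of the logit of a proportion

w = 1 / var                          # fixed-effect weights
mu_fe = np.sum(w * logit) / w.sum()
q = np.sum(w * (logit - mu_fe) ** 2)                 # Cochran's Q
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - (len(tp) - 1)) / c)             # DL between-study variance

w_re = 1 / (var + tau2)              # random-effects weights
pooled_logit = np.sum(w_re * logit) / w_re.sum()
pooled_sens = 1 / (1 + np.exp(-pooled_logit))        # back-transform
```

The pooled estimate is a weighted average on the logit scale, so it always falls between the smallest and largest per-study sensitivity.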

Improvement of deep learning-based dose conversion accuracy to a Monte Carlo algorithm in proton beam therapy for head and neck cancers.

Kato R, Kadoya N, Kato T, Tozuka R, Ogawa S, Murakami M, Jingu K

pubmed · May 23 2025
This study aimed to clarify the effectiveness of the image-rotation technique and zooming augmentation in improving the accuracy of deep learning (DL)-based dose conversion from pencil beam (PB) to Monte Carlo (MC) in proton beam therapy (PBT). We included 85 patients with head and neck cancer. The dataset was randomly divided into 101 plans (334 beams) for training/validation and 11 plans (34 beams) for testing. We trained a DL model that takes a computed tomography (CT) image and the PB dose in a single-proton field as input and outputs the MC dose, applying the image-rotation technique and zooming augmentation, and evaluated the DL-based dose conversion accuracy in a single-proton field. The average γ-passing rates (3%/3 mm criterion) were 80.6 ± 6.6% for the PB dose, 87.6 ± 6.0% for the baseline model, 92.1 ± 4.7% for the image-rotation model, and 93.0 ± 5.2% for the data-augmentation model. The average range differences for R90 were −1.5 ± 3.6% for the PB dose, 0.2 ± 2.3% for the baseline model, −0.5 ± 1.2% for the image-rotation model, and −0.5 ± 1.1% for the data-augmentation model. Both doses and ranges were thus improved by the image-rotation technique and zooming augmentation, which greatly improved the DL-based dose conversion accuracy from PB to MC. These techniques can be powerful tools for improving DL-based dose calculation accuracy in PBT.
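The γ-passing rate compares two dose distributions by asking, for each reference point, whether some nearby evaluated point agrees within a dose tolerance and a distance-to-agreement. A 1D NumPy sketch of the 3%/3 mm global criterion follows; real evaluations are 3D and use dedicated tools, so treat this as didactic only:

```python
# 1D gamma-index sketch (3%/3 mm global criterion) in NumPy. Real clinical
# gamma evaluations are 3D; this only illustrates the metric.
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    x = np.arange(len(ref)) * spacing_mm
    dmax = ref.max()                       # global dose normalization
    # For each reference point (rows), search over all evaluated points (cols).
    dose_term = ((eval_[None, :] - ref[:, None]) / (dd * dmax)) ** 2
    dist_term = ((x[None, :] - x[:, None]) / dta_mm) ** 2
    gamma = np.sqrt((dose_term + dist_term).min(axis=1))
    return 100.0 * np.mean(gamma <= 1.0)   # percent of points with gamma <= 1

pos_mm = np.arange(101) * 1.0              # 1 mm grid
ref = np.exp(-((pos_mm - 50) ** 2) / 50)   # reference dose profile
shifted = np.exp(-((pos_mm - 51) ** 2) / 50)  # 1 mm shift, well within 3 mm DTA

rate = gamma_pass_rate(ref, shifted, spacing_mm=1.0)
```

A pure 1 mm shift passes everywhere under a 3 mm distance-to-agreement, which is why γ is preferred over a pointwise dose difference in steep-gradient regions.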

Multimodal ultrasound-based radiomics and deep learning for differential diagnosis of O-RADS 4-5 adnexal masses.

Zeng S, Jia H, Zhang H, Feng X, Dong M, Lin L, Wang X, Yang H

pubmed · May 23 2025
Accurate differentiation between benign and malignant adnexal masses is crucial for patients to avoid unnecessary surgical interventions. Ultrasound (US) is the most widely utilized diagnostic and screening tool for gynecological diseases, with contrast-enhanced US (CEUS) offering enhanced diagnostic precision by clearly delineating blood flow within lesions. According to the Ovarian and Adnexal Reporting and Data System (O-RADS), masses classified as categories 4 and 5 carry the highest risk of malignancy. However, the diagnostic accuracy of US remains heavily reliant on the expertise and subjective interpretation of radiologists. Radiomics has demonstrated significant value in tumor differential diagnosis by extracting microscopic information imperceptible to the human eye. Despite this, no studies to date have explored the application of CEUS-based radiomics for differentiating adnexal masses. This study aims to develop and validate a multimodal US-based nomogram that integrates clinical variables, radiomics, and deep learning (DL) features to effectively distinguish adnexal masses classified as O-RADS 4-5. From November 2020 to March 2024, we enrolled 340 patients who underwent two-dimensional US (2DUS) and CEUS and had masses categorized as O-RADS 4-5. These patients were randomly divided into a training cohort and a test cohort in a 7:3 ratio. Adnexal masses were manually segmented from 2DUS and CEUS images. Using machine learning (ML) and DL techniques, five models were developed and validated to differentiate adnexal masses. The diagnostic performance of these models was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score. Additionally, a nomogram was constructed to visualize outcome measures. The CEUS-based radiomics model outperformed the 2DUS model (AUC: 0.826 vs. 0.737). Similarly, the CEUS-based DL model surpassed the 2DUS model (AUC: 0.823 vs. 0.793). 
The ensemble model combining clinical variables, radiomics, and DL features achieved the highest AUC (0.929). Our study demonstrates that a multimodal US-based nomogram integrating CEUS-based radiomics and DL features distinguishes adnexal masses with high accuracy and specificity. This approach holds significant promise for improving the diagnostic precision of adnexal masses classified as O-RADS 4-5.
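Late fusion of modality-specific models can be sketched by averaging predicted probabilities and scoring with AUC via the Mann-Whitney identity; the scores below are synthetic, and the equal weighting is an assumption, not the paper's ensemble:

```python
# Sketch of late fusion: average predicted probabilities from two "modality"
# models and score with AUC computed via the Mann-Whitney identity.
# Synthetic scores; the paper's actual ensemble weighting is not given here.
import numpy as np

def auc(y_true, scores):
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # AUC = P(score_pos > score_neg), counting ties as 1/2.
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)
p_2dus = np.clip(0.4 * y + rng.normal(0.3, 0.2, 200), 0, 1)    # weaker modality
p_ceus = np.clip(0.5 * y + rng.normal(0.25, 0.15, 200), 0, 1)  # stronger modality
p_fused = (p_2dus + p_ceus) / 2                                # equal-weight fusion

auc_fused = auc(y, p_fused)
```

Averaging calibrated probabilities is the simplest fusion rule; learned weights (as in a stacked or nomogram-based model) can do better when one modality is systematically more informative.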

A deep learning model integrating domain-specific features for enhanced glaucoma diagnosis.

Xu J, Jing E, Chai Y

pubmed · May 23 2025
Glaucoma is a group of serious eye diseases that can cause incurable blindness. Despite the critical need for early detection, over 60% of cases remain undiagnosed, especially in less developed regions. Glaucoma diagnosis is costly, and models have been proposed to automate diagnosis from images of the retina, specifically the optic cup and the surrounding disc where retinal blood vessels and nerves enter and leave the eye. However, diagnosis is complicated because both normal and glaucoma-affected eyes can vary greatly in appearance. Some normal eyes, like glaucomatous ones, exhibit a large cup-to-disc ratio, one of the main diagnostic criteria, making the two hard to distinguish. We propose a deep learning model with domain features (DLMDF) that combines unstructured and structured features to distinguish glaucoma from physiologic large cups. The structured features were based upon the known cup-to-disc ratios of the four quadrants of the optic disc in normal, physiologic-large-cup, and glaucomatous eyes. We segmented each cup and disc using a fully convolutional neural network and then calculated the cup size, disc size, and cup-to-disc ratio of each quadrant. The unstructured features were learned by a deep convolutional neural network. The average precision (AP) was 98.52% for disc segmentation and 98.57% for cup segmentation. These relatively high AP values enabled us to calculate 15 reliable features from each segmented disc and cup. In classification tasks, the DLMDF outperformed other models, achieving superior accuracy, precision, and recall. These results validate the effectiveness of combining deep learning-derived features with domain-specific structured features, underscoring the potential of this approach to advance glaucoma diagnosis.
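The structured-feature step, per-quadrant cup-to-disc ratios computed from segmentation masks, can be sketched directly in NumPy; the circular masks and quadrant naming below are illustrative, not the paper's conventions:

```python
# Sketch of the structured-feature step: per-quadrant cup-to-disc area ratios
# from binary segmentation masks. Synthetic circular masks; the paper's
# quadrant conventions may differ.
import numpy as np

def disk_mask(shape, center, radius):
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def quadrant_cdr(cup, disc, center):
    cy, cx = center
    quads = {"sup-nasal": (slice(None, cy), slice(None, cx)),
             "sup-temp":  (slice(None, cy), slice(cx, None)),
             "inf-nasal": (slice(cy, None), slice(None, cx)),
             "inf-temp":  (slice(cy, None), slice(cx, None))}
    # Ratio of cup pixels to disc pixels within each quadrant.
    return {name: cup[ys, xs].sum() / disc[ys, xs].sum()
            for name, (ys, xs) in quads.items()}

disc = disk_mask((128, 128), (64, 64), 40)   # synthetic disc mask
cup = disk_mask((128, 128), (64, 64), 20)    # synthetic cup, half the radius
cdr = quadrant_cdr(cup, disc, (64, 64))
```

For concentric circles with a 2:1 radius ratio, each quadrant's area ratio is about 0.25; in real fundus images the quadrant ratios differ, and that asymmetry is part of what separates glaucoma from a physiologic large cup.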

Development of a non-contrast CT-based radiomics nomogram for early prediction of delayed cerebral ischemia in aneurysmal subarachnoid hemorrhage.

Chen L, Wang X, Wang S, Zhao X, Yan Y, Yuan M, Sun S

pubmed · May 23 2025
Delayed cerebral ischemia (DCI) is a significant complication following aneurysmal subarachnoid hemorrhage (aSAH), leading to poor prognosis and high mortality. This study developed a non-contrast CT (NCCT)-based radiomics nomogram for early DCI prediction in aSAH patients. Three hundred seventy-seven aSAH patients were included in this retrospective study. Radiomic features from the baseline CTs were extracted using PyRadiomics. Feature selection was conducted using t-tests, Pearson correlation, and LASSO regression to identify the features most closely associated with DCI. Multivariable logistic regression was used to identify independent clinical and demographic risk factors. Eight machine learning algorithms were applied to construct radiomics-only and radiomics-clinical fusion nomogram models. The nomogram integrated the radscore and three clinically significant parameters (aneurysm, aneurysm treatment, and admission Hunt-Hess score), with the Support Vector Machine model yielding the highest performance in the validation set. The radiomics model and nomogram produced AUCs of 0.696 (95% CI: 0.578-0.815) and 0.831 (95% CI: 0.739-0.923), respectively. The nomogram achieved an accuracy of 0.775, a sensitivity of 0.750, a specificity of 0.795, and an F1 score of 0.750. The NCCT-based radiomics nomogram demonstrated high predictive performance for DCI in aSAH patients, providing a valuable tool for early DCI identification and for formulating appropriate treatment strategies.
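The selection-then-model pipeline common to these radiomics studies can be sketched with scikit-learn: a LASSO fit picks a sparse feature subset whose logistic combination yields a radscore. The data, alpha, and preprocessing below are illustrative assumptions, not the paper's settings:

```python
# Sketch of a radiomics selection-then-model pipeline: LASSO picks a sparse
# feature subset, then a logistic model on those features yields the radscore.
# Synthetic data; alpha and preprocessing are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)
Xs = StandardScaler().fit_transform(X)       # LASSO needs comparable scales

lasso = Lasso(alpha=0.02).fit(Xs, y)         # sparse linear fit to the label
selected = np.flatnonzero(lasso.coef_)       # features with nonzero weight

clf = LogisticRegression(max_iter=1000).fit(Xs[:, selected], y)
radscore_auc = roc_auc_score(y, clf.predict_proba(Xs[:, selected])[:, 1])
```

In a full study, the selected-feature radscore would then enter the nomogram alongside clinical variables, and performance would be reported on a held-out set rather than the training data used here.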
