
A Deep Learning-Based Artificial Intelligence Model Assisting Thyroid Nodule Diagnosis and Management: Pilot Results for Evaluating Thyroid Malignancy in Pediatric Cohorts.

Ha EJ, Lee JH, Mak N, Duh AK, Tong E, Yeom KW, Meister KD

Jun 2 2025
Purpose: Artificial intelligence (AI) models have shown promise in predicting malignant thyroid nodules in adults; however, research on deep learning (DL) for pediatric cases is limited. We evaluated the applicability of a DL-based model for assessing thyroid nodules in children. Methods: We retrospectively identified two pediatric cohorts (n = 128; mean age 15.5 ± 2.4 years; 103 girls) who had thyroid nodule ultrasonography (US) with histological confirmation at two institutions. The AI-Thyroid DL model, originally trained on adult data, was tested on pediatric nodules in three scenarios: axial US images, longitudinal US images, and both. We conducted a subgroup analysis based on the two pediatric cohorts and age groups (≥14 years vs. <14 years) and compared the model's performance with radiologist interpretations using the Thyroid Imaging Reporting and Data System (TIRADS). Results: Of the 156 nodules analyzed, 47 (30.1%) were malignant. AI-Thyroid achieved area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity values of 0.913-0.929, 78.7-89.4%, and 79.8-91.7%, respectively. The AUROC values did not differ significantly across image planes (all p > 0.05) or between the two pediatric cohorts (p = 0.804). No significant differences were observed between age groups in sensitivity and specificity (all p > 0.05), whereas the AUROC values were higher for patients aged <14 years than for those aged ≥14 years (all p < 0.01). AI-Thyroid yielded the highest AUROC values, followed by ACR-TIRADS and K-TIRADS (p = 0.016 and p < 0.001, respectively). Conclusion: AI-Thyroid demonstrated high performance in diagnosing pediatric thyroid cancer. Future research should focus on optimizing AI-Thyroid for pediatric use and exploring its role alongside tissue sampling in clinical practice.
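A minimal sketch of how per-plane AUROC, sensitivity, and specificity figures like those reported above could be computed with scikit-learn; the labels, scores, and threshold below are synthetic placeholders, not AI-Thyroid outputs.

```python
# Minimal sketch: AUROC, sensitivity, and specificity per imaging plane for a
# pretrained nodule classifier. Data and threshold are hypothetical stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate_plane(y_true, y_prob, threshold=0.5):
    """Return AUROC, sensitivity, and specificity for one image plane."""
    auroc = roc_auc_score(y_true, y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return auroc, tp / (tp + fn), tn / (tn + fp)

# Synthetic predictions for axial and longitudinal planes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=156)                 # 1 = malignant, 0 = benign
scores_axial = np.clip(labels * 0.6 + rng.normal(0.3, 0.2, 156), 0, 1)
scores_long = np.clip(labels * 0.55 + rng.normal(0.32, 0.2, 156), 0, 1)

for name, scores in [("axial", scores_axial), ("longitudinal", scores_long)]:
    auroc, sens, spec = evaluate_plane(labels, scores)
    print(f"{name}: AUROC={auroc:.3f}, sensitivity={sens:.1%}, specificity={spec:.1%}")
```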

Prediction of Lymph Node Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images With Size on CT and PET-CT Findings.

Oh JE, Chung HS, Gwon HR, Park EY, Kim HY, Lee GK, Kim TS, Hwangbo B

Jun 1 2025
Echo features of lymph nodes (LNs) influence target selection during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). This study evaluates deep learning's diagnostic capabilities on EBUS images for detecting mediastinal LN metastasis in lung cancer, emphasising the added value of integrating a region of interest (ROI), LN size on CT, and PET-CT findings. We analysed 2901 EBUS images from 2055 mediastinal LN stations in 1454 lung cancer patients. ResNet18-based deep learning models were developed to classify images of true positive malignant and true negative benign LNs diagnosed by EBUS-TBNA using different inputs: original images, ROI images, and CT size and PET-CT data. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and other diagnostic metrics. The model using only original EBUS images showed the lowest AUROC (0.870) and accuracy (80.7%) in classifying LN images. Adding ROI information slightly increased the AUROC (0.896) without a significant difference (p = 0.110). Further adding CT size resulted in a minimal change in AUROC (0.897), while adding PET-CT (original + ROI + PET-CT) showed a significant improvement (0.912, p = 0.008 vs. original; p = 0.002 vs. original + ROI + CT size). The model combining original and ROI EBUS images with CT size and PET-CT findings achieved the highest AUROC (0.914, p = 0.005 vs. original; p = 0.018 vs. original + ROI + PET-CT) and accuracy (82.3%). Integrating an ROI, LN size on CT, and PET-CT findings into the deep learning analysis of EBUS images significantly enhances the diagnostic capability of models for detecting mediastinal LN metastasis in lung cancer, with the integration of PET-CT data having a substantial impact.
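A hedged sketch of the kind of image-plus-tabular fusion the abstract describes: a ResNet18 image backbone whose pooled features are concatenated with CT size and PET-CT variables before a small classification head. The layer sizes, the two tabular inputs, and the class name EbusFusionNet are illustrative assumptions, not the authors' architecture.

```python
# Sketch of an EBUS image + tabular fusion classifier: ResNet18 features are
# concatenated with CT size and PET-CT findings before the final layer.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class EbusFusionNet(nn.Module):
    def __init__(self, n_tabular=2):  # e.g., LN size on CT, PET-CT positivity
        super().__init__()
        backbone = resnet18(weights=None)
        feature_dim = backbone.fc.in_features   # 512 for ResNet18
        backbone.fc = nn.Identity()             # keep pooled image features
        self.backbone = backbone
        self.classifier = nn.Sequential(
            nn.Linear(feature_dim + n_tabular, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                   # logit for malignant vs. benign LN
        )

    def forward(self, image, tabular):
        img_feat = self.backbone(image)         # (B, 512)
        fused = torch.cat([img_feat, tabular], dim=1)
        return self.classifier(fused)

model = EbusFusionNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 2))
print(logits.shape)  # torch.Size([4, 1])
```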

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

Jun 1 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27, acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound, and manually segmented 2D ultrasound images. The ultrasound methods were compared with MRI (the gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed a lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
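A rough sketch of the underlying volume computation idea, assuming approximately parallel tracked frames: each segmented 2D frame contributes its segmented area multiplied by the spacing to the next tracked frame. Real freehand-sweep reconstruction is more involved; the function name and data below are placeholders rather than the study's implementation.

```python
# Simplified volume estimate: (segmented area per frame) x (spacing between
# tracked frames). Assumes roughly parallel slices; illustrative only.
import numpy as np

def placental_volume(masks, pixel_area_mm2, frame_positions_mm):
    """masks: (N, H, W) binary segmentations; frame_positions_mm: (N, 3) tracked probe positions."""
    areas = masks.reshape(len(masks), -1).sum(axis=1) * pixel_area_mm2    # mm^2 per frame
    # Spacing between consecutive tracked frames (Euclidean distance of positions).
    spacings = np.linalg.norm(np.diff(frame_positions_mm, axis=0), axis=1)
    spacings = np.append(spacings, spacings[-1])                          # pad last frame
    volume_mm3 = float(np.sum(areas * spacings))
    return volume_mm3 / 1000.0                                            # -> mL

masks = np.random.rand(50, 256, 256) > 0.7                                # placeholder segmentations
positions = np.cumsum(np.full((50, 3), [0.0, 0.0, 1.0]), axis=0)          # 1 mm steps along the sweep
print(f"Estimated volume: {placental_volume(masks, 0.04, positions):.1f} mL")
```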

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

Jun 1 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but they depend heavily on quantifying the pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images; as a result, segmentation over-fits to the weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework to optimize the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of the bounding box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, the target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding box mask. The secondary segmentation prediction induced from the prototypes is compared with the preliminary prediction to quantify the rationality of the elaborated target and background semantic feature perception. Experiments on three thyroid datasets illustrate that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while reducing annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distributions. It can improve the applicability of deep learning-based segmentation in clinical practice.
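One possible reading of the spatial arrangement consistency branch, sketched as a loss term: the maximum activation profiles of the predicted foreground map along the horizontal and vertical axes are pushed toward those of the bounding-box mask. The loss form and function name are assumptions, not the authors' exact formulation.

```python
# Illustrative "spatial arrangement consistency" term: compare max-projections
# of the predicted foreground map and the bounding-box mask along both axes.
import torch
import torch.nn.functional as F

def spatial_arrangement_loss(pred_fg, box_mask):
    """pred_fg, box_mask: (B, 1, H, W); pred_fg in [0, 1], box_mask binary."""
    pred_h = pred_fg.amax(dim=2)      # (B, 1, W) column-wise maxima (horizontal profile)
    pred_v = pred_fg.amax(dim=3)      # (B, 1, H) row-wise maxima (vertical profile)
    box_h = box_mask.amax(dim=2)
    box_v = box_mask.amax(dim=3)
    return F.binary_cross_entropy(pred_h, box_h) + F.binary_cross_entropy(pred_v, box_v)

pred = torch.sigmoid(torch.randn(2, 1, 128, 128))   # placeholder preliminary prediction
box = torch.zeros(2, 1, 128, 128)
box[:, :, 40:90, 30:100] = 1.0                       # placeholder bounding-box mask
print(spatial_arrangement_loss(pred, box).item())
```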

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

Jun 1 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer that combines radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients involved demographic data, radiomics, O-RADS standardized descriptions, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and Pyradiomics, with feature selection performed using least absolute shrinkage and selection operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features, with 1049 features initially extracted and 7 features selected through regression analysis. Multimodal variables such as ascites, fallopian tube invasion, greatest diameter, HE4, and D-dimer levels were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively. The nomogram displayed superior diagnostic performance compared with single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment. This multimodal approach offers a potential strategy for improving patient outcomes in the management of ovarian cancer with PM.
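A minimal sketch of the modelling step described above: LASSO-style selection over standardized radiomics features, followed by a logistic model on the retained features plus clinical variables. All data and variable names here are synthetic placeholders rather than the study's pipeline; the radiomics extraction itself (3D Slicer / Pyradiomics) is omitted.

```python
# Sketch: LASSO feature selection on radiomics features, then logistic regression
# on the selected features combined with clinical variables.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_radiomics = rng.normal(size=(619, 1049))     # placeholder radiomics features
X_clinical = rng.normal(size=(619, 5))         # e.g., ascites, HE4, D-dimer (placeholders)
y = rng.integers(0, 2, size=619)               # peritoneal metastasis label (placeholder)

# 1) LASSO selects a sparse subset of radiomics features.
X_scaled = StandardScaler().fit_transform(X_radiomics)
lasso = LassoCV(cv=5, random_state=0).fit(X_scaled, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} radiomics features retained")

# 2) Logistic regression on selected radiomics + clinical features.
X_model = np.hstack([X_scaled[:, selected], X_clinical])
clf = LogisticRegression(max_iter=1000).fit(X_model, y)
print("Training AUROC:", round(roc_auc_score(y, clf.predict_proba(X_model)[:, 1]), 3))
```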

Deep Learning to Localize Photoacoustic Sources in Three Dimensions: Theory and Implementation.

Gubbi MR, Bell MAL

Jun 1 2025
Surgical tool tip localization and tracking are essential components of surgical and interventional procedures. The cross sections of tool tips can be considered as acoustic point sources to achieve these tasks with deep learning applied to photoacoustic channel data. However, source localization was previously limited to the lateral and axial dimensions of an ultrasound transducer. In this article, we developed a novel deep learning-based 3-D photoacoustic point source localization system using an object detection-based approach extended from our previous work. In addition, we derived theoretical relationships among point source locations, sound speeds, and waveform shapes in raw photoacoustic channel data frames. We then used this theory to develop a novel deep learning instance segmentation-based 3-D point source localization system. When tested with 4000 simulated, 993 phantom, and 1983 ex vivo channel data frames, the two systems achieved F1 scores as high as 99.82%, 93.05%, and 98.20%, respectively, and Euclidean localization errors (mean ± one standard deviation) as low as 1.46 ± 1.11 mm, 1.58 ± 1.30 mm, and 1.55 ± 0.86 mm, respectively. In addition, the instance segmentation-based system simultaneously estimated sound speeds with absolute errors (mean ± one standard deviation) of 19.22 ± 26.26 m/s in simulated data and standard deviations ranging from 14.6 to 32.3 m/s in experimental data. These results demonstrate the potential of the proposed photoacoustic imaging-based methods to localize and track tool tips in three dimensions during surgical and interventional procedures.
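A short illustration of the geometry behind the waveform shapes mentioned above, assuming a linear array along the x-axis: for a point source at (x_s, y_s, z_s) and sound speed c, the one-way arrival time at element i located at (x_i, 0, 0) is t_i = sqrt((x_i - x_s)^2 + y_s^2 + z_s^2) / c, which traces the characteristic curve across the channel data. The array geometry and values are illustrative, not the authors' configuration.

```python
# One-way photoacoustic arrival times across a linear transducer array for a
# single point source; the curve of t_i versus element index is the wavefront
# shape seen in raw channel data.
import numpy as np

def arrival_times(element_x_m, source_xyz_m, sound_speed_mps):
    """Arrival time at each element of a linear array lying on the x-axis."""
    xs, ys, zs = source_xyz_m
    distances = np.sqrt((element_x_m - xs) ** 2 + ys ** 2 + zs ** 2)
    return distances / sound_speed_mps

elements = np.linspace(-19e-3, 19e-3, 128)            # 128-element array, ~38 mm aperture
times = arrival_times(elements, (5e-3, 2e-3, 30e-3), 1540.0)
print(f"Earliest arrival: {times.min() * 1e6:.2f} us at element {times.argmin()}")
```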

[Capabilities and Advances of Transrectal Ultrasound in 2025].

Kaufmann S, Kruck S

Jun 1 2025
Transrectal ultrasound, particularly the combination of high-frequency ultrasound and MR-TRUS fusion technologies, provides a highly precise and effective method for the correlation and targeted biopsy of suspicious intraprostatic lesions detected by MRI. Advances in imaging technology, driven by 29 MHz micro-ultrasound transducers, robotic-assisted systems, and the integration of AI-based analyses, promise further improvements in diagnostic accuracy and a reduction in unnecessary biopsies. Further technological advancements and improved TRUS training could contribute to a decentralized and cost-effective diagnostic evaluation of prostate cancer in the future.

Diagnosis of carpal tunnel syndrome using deep learning with comparative guidance.

Sim J, Lee S, Kim S, Jeong SH, Yoon J, Baek S

Jun 1 2025
This study aims to develop a deep learning model for the robust diagnosis of Carpal Tunnel Syndrome (CTS) based on comparative classification leveraging ultrasound images of the thenar and hypothenar muscles. We recruited 152 participants, including patients with varying severities of CTS and healthy individuals. The enrolled participants underwent ultrasonography, which provided image data of the thenar and hypothenar muscles, innervated by the median and ulnar nerves, respectively. These images were used to train a deep learning model. We compared the performance of our model with previous comparative methods using echo intensity ratio or machine learning, and with non-comparative methods based on deep learning. During training, comparative guidance based on cosine similarity was used so that the model learns to automatically identify abnormal differences in echotexture between the ultrasound images of the thenar and hypothenar muscles. The proposed deep learning model with comparative guidance showed the highest performance. The comparison of receiver operating characteristic (ROC) curves between models demonstrated that the comparative guidance was effective in autonomously identifying complex features within the CTS dataset. The proposed deep learning model with comparative guidance was shown to be effective in automatically identifying important features for CTS diagnosis from ultrasound images. The proposed comparative approach was found to be robust to traditional problems in ultrasound image analysis, such as differing cut-off values and anatomical variation between patients. The proposed deep learning methodology facilitates accurate and efficient diagnosis of CTS from ultrasound images.
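One plausible reading of the cosine-similarity comparative guidance, sketched as a training loss: embeddings of paired thenar and hypothenar images are encouraged to stay similar for healthy hands and to diverge for CTS. The loss form, target encoding, and function name are assumptions, not the authors' exact objective.

```python
# Sketch of a comparative guidance loss: cosine similarity between thenar and
# hypothenar embeddings is pulled toward 1 for healthy hands and toward 0 for CTS.
import torch
import torch.nn.functional as F

def comparative_guidance_loss(thenar_emb, hypothenar_emb, labels):
    """labels: 1 for CTS, 0 for healthy; embeddings: (B, D)."""
    cos = F.cosine_similarity(thenar_emb, hypothenar_emb, dim=1)   # in [-1, 1]
    target = 1.0 - labels.float()                                  # healthy -> 1, CTS -> 0
    return F.mse_loss(cos, target)

thenar = torch.randn(8, 128)          # placeholder embeddings from the image encoder
hypothenar = torch.randn(8, 128)
labels = torch.randint(0, 2, (8,))
print(comparative_guidance_loss(thenar, hypothenar, labels).item())
```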

Ensemble learning of deep CNN models and two stage level prediction of Cobb angle on surface topography in adolescents with idiopathic scoliosis.

Hassan M, Gonzalez Ruiz JM, Mohamed N, Burke TN, Mei Q, Westover L

Jun 1 2025
This study employs Convolutional Neural Networks (CNNs) as feature extractors with appended regression layers for the non-invasive prediction of Cobb Angle (CA) from Surface Topography (ST) scans in adolescents with Idiopathic Scoliosis (AIS). The aim is to minimize radiation exposure during critical growth periods by offering a reliable, non-invasive assessment tool. The efficacy of five CNN-based feature extractors (DenseNet121, EfficientNetB0, ResNet18, SqueezeNet, and a modified U-Net) was evaluated on a dataset of 654 ST scans using a regression analysis framework for accurate CA prediction. The dataset comprised 590 training and 64 testing scans. Performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and accuracy in classifying scoliosis severity (mild, moderate, severe) based on CA measurements. The EfficientNetB0 feature extractor outperformed the other models, demonstrating strong performance on the training set (R = 0.96, R² = 0.93) and achieving an MAE of 6.13° and an RMSE of 7.5° on the test set. In terms of scoliosis severity classification, it achieved high precision (84.62%) and specificity (95.65% for mild cases and 82.98% for severe cases), highlighting its clinical applicability in AIS management. The regression-based approach using EfficientNetB0 as a feature extractor presents a significant advancement for accurately determining CA from ST scans, offering a promising tool for improving scoliosis severity categorization and management in adolescents.
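A hedged sketch of the feature-extractor-plus-regression-head setup the abstract describes, using a torchvision EfficientNet-B0 backbone with its classifier replaced by a single regression output for the Cobb angle; the head size, severity cut-offs, and class name are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: EfficientNet-B0 feature extractor with an appended regression layer
# predicting the Cobb angle, plus an illustrative severity binning step.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class CobbAngleRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)
        in_features = self.backbone.classifier[1].in_features   # 1280 for EfficientNet-B0
        self.backbone.classifier = nn.Sequential(
            nn.Dropout(0.2),
            nn.Linear(in_features, 1),      # predicted Cobb angle in degrees
        )

    def forward(self, x):
        return self.backbone(x).squeeze(1)

def severity(cobb_deg):
    """Illustrative thresholds (roughly mild < 25, moderate 25-40, severe > 40 degrees);
    the study's exact cut-offs are not stated here."""
    if cobb_deg < 25:
        return "mild"
    return "moderate" if cobb_deg <= 40 else "severe"

model = CobbAngleRegressor()
pred = model(torch.randn(2, 3, 224, 224))
print(pred.shape, severity(float(pred[0])))
```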

Tailoring ventilation and respiratory management in pediatric critical care: optimizing care with precision medicine.

Beauchamp FO, Thériault J, Sauthier M

Jun 1 2025
Critically ill children admitted to the intensive care unit frequently need respiratory care to support lung function. Mechanical ventilation is a complex field with multiple parameters to set. The development of precision medicine will allow clinicians to personalize respiratory care and improve patient outcomes. Lung and diaphragmatic ultrasound, electrical impedance tomography, neurally adjusted ventilatory assist ventilation, as well as the use of monitoring data in machine learning models, are increasingly used to tailor care. Each modality offers insights into different aspects of the patient's respiratory system function and enables the adjustment of treatment to better support the patient's physiology. Precision medicine in respiratory care has been associated with decreased ventilation time, increased extubation and ventilation weaning success, and an increased ability to identify phenotypes to guide treatment and predict outcomes. This review focuses on the use of precision medicine in the setting of pediatric acute respiratory distress syndrome, asthma, bronchiolitis, extubation readiness trials and ventilation weaning, ventilator-associated pneumonia, and other respiratory tract infections. Precision medicine is revolutionizing respiratory care and will decrease complications associated with ventilation. More research is needed to standardize its use and better evaluate its impact on patient outcomes.