
AN INNOVATIVE MACHINE LEARNING-BASED ALGORITHM FOR DIAGNOSING PEDIATRIC OVARIAN TORSION.

Boztas AE, Sencan E, Payza AD, Sencan A

PubMed · Jun 16 2025
We aimed to develop a machine learning (ML) algorithm combining physical examination, sonographic findings, and laboratory markers. Data from 70 patients with confirmed ovarian torsion followed and treated in our clinic, and from 73 control patients who presented to the emergency department between 2013 and 2023 with similar complaints but had no ovarian torsion detected on ultrasound, were retrospectively analyzed. Sonographic findings, laboratory values, and clinical status were examined and fed into three supervised ML systems to identify and develop viable decision algorithms. Presence of nausea/vomiting and symptom duration were statistically significant for ovarian torsion (p<0.05). Presence of abdominal pain and a palpable mass on physical examination were not significant (p>0.05). White blood cell count (WBC), neutrophil-to-lymphocyte ratio (NLR), systemic immune-inflammation index (SII), systemic inflammation response index (SIRI), and high C-reactive protein values were highly significant predictors of torsion (p<0.001, p<0.05). Ovarian size ratio, medialization, the follicular ring sign, and presence of free pelvic fluid on ultrasound were statistically significant in the torsion group (p<0.001). We used supervised ML algorithms, including decision trees, random forests, and LightGBM, to classify patients as controls or as having torsion. We evaluated the models using 5-fold cross-validation, achieving an average F1-score of 98%, an accuracy of 98%, and a specificity of 100% across folds with the decision tree model. This study represents the first ML algorithm integrating clinical, laboratory, and ultrasonographic findings for the diagnosis of pediatric ovarian torsion, achieving over 98% accuracy.
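The evaluation described above maps onto a standard tabular workflow. The sketch below shows 5-fold cross-validation of a decision tree with scikit-learn; the file name, column names, and preprocessing are hypothetical stand-ins, not the study's actual data or code.

```python
# Illustrative sketch only: 5-fold cross-validated decision tree on tabular
# clinical/laboratory features. Feature names and the CSV file are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("torsion_cohort.csv")          # hypothetical file
features = ["wbc", "nlr", "sii", "siri", "crp",
            "ovarian_size_ratio", "symptom_duration_h", "nausea_vomiting"]
X, y = df[features], df["torsion"]              # y: 1 = torsion, 0 = control

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_validate(DecisionTreeClassifier(random_state=42), X, y,
                        cv=cv, scoring=["f1", "accuracy", "recall"])

print("mean F1:", scores["test_f1"].mean())
print("mean accuracy:", scores["test_accuracy"].mean())
```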

Predicting mucosal healing in Crohn's disease: development of a deep-learning model based on intestinal ultrasound images.

Ma L, Chen Y, Fu X, Qin J, Luo Y, Gao Y, Li W, Xiao M, Cao Z, Shi J, Zhu Q, Guo C, Wu J

PubMed · Jun 16 2025
Predicting treatment response in Crohn's disease (CD) is essential for selecting an optimal therapeutic regimen, but relevant models are lacking. This study aimed to develop a deep learning model based on baseline intestinal ultrasound (IUS) images and clinical information to predict mucosal healing. Consecutive CD patients who underwent pretreatment IUS were retrospectively recruited at a tertiary hospital. A total of 1548 IUS images of longitudinal diseased bowel segments were collected and divided into a training cohort and a test cohort. A convolutional neural network model was developed to predict mucosal healing after one year of standardized treatment. The model's efficacy was validated using five-fold internal cross-validation and further tested in the test cohort. A total of 190 patients (68.9% men, mean age 32.3 ± 14.1 years) were enrolled, contributing 1038 IUS images showing mucosal healing and 510 images showing no mucosal healing. The mean area under the curve in the test cohort was 0.73 (95% CI: 0.68-0.78), with a mean sensitivity of 68.1% (95% CI: 60.5-77.4%), specificity of 69.5% (95% CI: 60.1-77.2%), positive predictive value of 80.0% (95% CI: 74.5-84.9%), and negative predictive value of 54.8% (95% CI: 48.0-63.7%). Heat maps of the deep-learning decision-making process revealed that the model mainly drew on information from the bowel wall, serous surface, and surrounding mesentery. We developed a deep learning model based on IUS images to predict mucosal healing in CD with notable accuracy. Further validation and improvement of this model with more multicenter, real-world data are needed. Predicting treatment response in CD is essential for selecting an optimal therapeutic regimen. In this study, a deep learning model using pretreatment ultrasound images and clinical information predicted mucosal healing with an AUC of 0.73. Response to medication is highly variable among patients with CD, and high-resolution IUS images of the intestinal wall may harbor characteristics significant for treatment response. A deep learning model capable of predicting treatment response was generated using pretreatment IUS images.
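As a rough illustration of how a CNN can turn a single IUS frame into a mucosal-healing probability, the sketch below wires a standard image backbone to a one-logit head; the backbone choice, input handling, and training setup are assumptions, not the authors' published model.

```python
# Minimal sketch of a CNN classifier for binary mucosal-healing prediction from
# single IUS frames; architecture and training details are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class HealingClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)   # pretrained weights could be used
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):                 # x: (B, 3, H, W) ultrasound frames
        return self.backbone(x)           # logit; sigmoid > 0.5 -> mucosal healing

model = HealingClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```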

ThreeF-Net: Fine-grained feature fusion network for breast ultrasound image segmentation.

Bian X, Liu J, Xu S, Liu W, Mei L, Xiao C, Yang F

PubMed · Jun 14 2025
Convolutional neural networks (CNNs) have achieved remarkable success in breast ultrasound image segmentation, but they still face several challenges when dealing with breast lesions. Because of their limited ability to model long-range dependencies, CNNs often perform poorly on similar intensity distributions, irregular lesion shapes, and blurry boundaries, leading to low segmentation accuracy. To address these issues, we propose ThreeF-Net, a fine-grained feature fusion network. This network combines the advantages of CNNs and Transformers, aiming to capture local features and model long-range dependencies simultaneously, thereby improving the accuracy and stability of segmentation. Specifically, we designed a Transformer-assisted Dual Encoder architecture (TDE), which integrates convolutional modules and self-attention modules to achieve collaborative learning of local and global features. Additionally, we designed a Global Group Feature Extraction (GGFE) module, which effectively fuses the features learned by CNNs and Transformers, enhancing feature representation. To further improve performance, we also introduced a Dynamic Fine-grained Convolution (DFC) module, which significantly improves lesion-boundary segmentation accuracy by dynamically adjusting convolution kernels and capturing multi-scale features. Comparative experiments with state-of-the-art segmentation methods on three public breast ultrasound datasets demonstrate that ThreeF-Net outperforms existing methods across multiple key evaluation metrics.
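The dual-encoder idea of running a convolutional branch and a self-attention branch in parallel and fusing their outputs can be sketched in highly simplified form as the block below; the channel sizes, fusion by concatenation, and overall structure are illustrative assumptions, not the ThreeF-Net implementation.

```python
# Simplified, hypothetical dual-branch block: local features from a conv branch,
# long-range context from self-attention, fused by a 1x1 convolution.
import torch
import torch.nn as nn

class DualEncoderBlock(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):                        # x: (B, C, H, W)
        local = self.conv(x)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)     # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1))

feat = DualEncoderBlock()(torch.randn(1, 64, 32, 32))
```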

Enhancing Free-hand 3D Photoacoustic and Ultrasound Reconstruction using Deep Learning.

Lee S, Kim S, Seo M, Park S, Imrus S, Ashok K, Lee D, Park C, Lee S, Kim J, Yoo JH, Kim M

PubMed · Jun 13 2025
This study introduces a motion-based learning network with a global-local self-attention module (MoGLo-Net) to enhance 3D reconstruction in handheld photoacoustic and ultrasound (PAUS) imaging. Standard PAUS imaging is often limited by a narrow field of view (FoV) and the inability to effectively visualize complex 3D structures. The 3D freehand technique, which aligns sequential 2D images for 3D reconstruction, faces significant challenges in accurate motion estimation without relying on external positional sensors. MoGLo-Net addresses these limitations through an innovative adaptation of the self-attention mechanism, which effectively exploits critical regions, such as fully developed speckle areas or highly echogenic tissue regions within successive ultrasound images, to accurately estimate the motion parameters. This facilitates the extraction of intricate features from individual frames. Additionally, we employ a patch-wise correlation operation to generate a correlation volume that is highly correlated with the scanning motion. A custom loss function was also developed to ensure robust learning with minimized bias, leveraging the characteristics of the motion parameters. Experimental evaluations demonstrated that MoGLo-Net surpasses current state-of-the-art methods in both quantitative and qualitative performance metrics. Furthermore, we expanded the application of 3D reconstruction technology beyond simple B-mode ultrasound volumes to incorporate Doppler ultrasound and photoacoustic imaging, enabling 3D visualization of vasculature. The source code for this study is publicly available at: https://github.com/pnu-amilab/US3D.
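A patch-wise correlation operation of the kind mentioned above can be approximated by comparing each location of one frame's feature map with a small neighborhood of the next frame's. The sketch below is a generic, simplified version of that idea and may differ from the operation actually used in MoGLo-Net (see the repository above for the released code).

```python
# Hypothetical simplification of a patch-wise correlation between feature maps
# of two consecutive frames, producing a correlation volume over displacements.
import torch
import torch.nn.functional as F

def correlation_volume(f1, f2, max_disp=4):
    """f1, f2: (B, C, H, W) feature maps of consecutive frames.
    Returns (B, (2*max_disp+1)**2, H, W) of channel-averaged dot products."""
    b, c, h, w = f1.shape
    f2_pad = F.pad(f2, [max_disp] * 4)            # pad left/right/top/bottom
    vols = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = f2_pad[:, :, dy:dy + h, dx:dx + w]
            vols.append((f1 * shifted).sum(dim=1, keepdim=True) / c)
    return torch.cat(vols, dim=1)

vol = correlation_volume(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))
```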

Uncovering ethical biases in publicly available fetal ultrasound datasets.

Fiorentino MC, Moccia S, Cosmo MD, Frontoni E, Giovanola B, Tiribelli S

PubMed · Jun 13 2025
We explore biases present in publicly available fetal ultrasound (US) imaging datasets currently at the disposal of researchers to train deep learning (DL) algorithms for prenatal diagnostics. As DL increasingly permeates the field of medical imaging, the urgency to critically evaluate the fairness of the public benchmark datasets used to train such models grows. Our investigation reveals a multifaceted bias problem, encompassing issues such as lack of demographic representativeness, limited diversity in the clinical conditions depicted, and variability in the US technology used across datasets. We argue that these biases may significantly influence DL model performance, which may lead to inequities in healthcare outcomes. To address these challenges, we recommend a multilayered approach. This includes promoting practices that ensure data inclusivity, such as diversifying data sources and populations, and refining model strategies to better account for population variances. These steps will enhance the trustworthiness of DL algorithms in fetal US analysis.

Radiomics and machine learning for predicting valve vegetation in infective endocarditis: a comparative analysis of mitral and aortic valves using TEE imaging.

Esmaely F, Moradnejad P, Boudagh S, Bitarafan-Rajabi A

PubMed · Jun 12 2025
Detecting valve vegetation in infective endocarditis (IE) poses challenges, particularly with mechanical valves, because acoustic shadowing artefacts often obscure critical diagnostic details. This study aimed to classify native and prosthetic mitral and aortic valves with and without vegetation using radiomics and machine learning. A total of 286 transoesophageal echocardiography (TEE) scans from suspected IE cases (August 2023-November 2024) were analysed alongside 113 cases in which IE was rejected, which served as controls. Frames were preprocessed using the Extreme Total Variation Bilateral (ETVB) filter, and radiomics features were extracted for classification with machine learning models, including Random Forest, Decision Tree, SVM, k-NN, and XGBoost. To evaluate the models, AUC, ROC curves, and Decision Curve Analysis (DCA) were used. For native mitral valves, SVM achieved the highest performance with an AUC of 0.88, a sensitivity of 0.91, and a specificity of 0.87. Mechanical mitral valves also showed the best results with SVM (AUC: 0.85, sensitivity: 0.73, specificity: 0.92). Native aortic valves were best classified using SVM (AUC: 0.86, sensitivity: 0.87, specificity: 0.86), while Random Forest excelled for mechanical aortic valves (AUC: 0.81, sensitivity: 0.89, specificity: 0.78). These findings suggest that combining the models with the clinician's report may enhance the diagnostic accuracy of TEE, particularly in the absence of advanced imaging methods such as PET/CT.
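Once radiomics features have been extracted per valve, the classification stage resembles a standard tabular workflow. The sketch below shows a cross-validated SVM with ROC AUC scoring; the feature table and file name are hypothetical stand-ins for the study's data, and the actual feature extraction and filtering steps are omitted.

```python
# Sketch of the classification stage only, assuming radiomics features were
# already extracted to a table; file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("tee_radiomics_native_mitral.csv")   # hypothetical file
X = df.drop(columns=["vegetation"])
y = df["vegetation"]                                   # 1 = vegetation present

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(model, X, y, scoring="roc_auc",
                      cv=StratifiedKFold(5, shuffle=True, random_state=0))
print("mean ROC AUC:", auc.mean())
```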

Exploring the limit of image resolution for human expert classification of vascular ultrasound images in giant cell arteritis and healthy subjects: the GCA-US-AI project.

Bauer CJ, Chrysidis S, Dejaco C, Koster MJ, Kohler MJ, Monti S, Schmidt WA, Mukhtyar CB, Karakostas P, Milchert M, Ponte C, Duftner C, de Miguel E, Hocevar A, Iagnocco A, Terslev L, Døhn UM, Nielsen BD, Juche A, Seitz L, Keller KK, Karalilova R, Daikeler T, Mackie SL, Torralba K, van der Geest KSM, Boumans D, Bosch P, Tomelleri A, Aschwanden M, Kermani TA, Diamantopoulos A, Fredberg U, Inanc N, Petzinna SM, Albarqouni S, Behning C, Schäfer VS

PubMed · Jun 12 2025
Prompt diagnosis of giant cell arteritis (GCA) with ultrasound is crucial for preventing severe ocular and other complications, yet expertise in ultrasound performance is scarce. The development of an artificial intelligence (AI)-based assistant that facilitates ultrasound image classification and helps to diagnose GCA early promises to close the existing gap. In preparation for the planned AI, this study investigates the minimum image resolution required for human experts to reliably classify ultrasound images of arteries commonly affected by GCA for the presence or absence of GCA. Thirty-one international experts in GCA ultrasonography participated in a web-based exercise. They were asked to classify 10 ultrasound images for each of 5 vascular segments as GCA, normal, or not able to classify. The following segments were assessed: (1) superficial common temporal artery, (2) its frontal and (3) parietal branches (all in transverse view), (4) axillary artery in transverse view, and (5) axillary artery in longitudinal view. Identical images were shown at different resolutions, namely 32 × 32, 64 × 64, 128 × 128, 224 × 224, and 512 × 512 pixels, resulting in a total of 250 images to be classified by every study participant. Classification performance improved with increasing resolution up to a threshold, plateauing at 224 × 224 pixels. At 224 × 224 pixels, the overall classification sensitivity was 0.767 (95% CI, 0.737-0.796), and specificity was 0.862 (95% CI, 0.831-0.888). A resolution of 224 × 224 pixels ensures reliable human expert classification and aligns with the input requirements of many common AI architectures. Thus, the results of this study substantially guide the projected AI development.
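One plausible way to produce graded-resolution stimuli of this kind is to downsample each frame to the target size and upsample it back for display. The sketch below illustrates that idea with Pillow; the file name is hypothetical, and this is not necessarily how the study prepared its images.

```python
# Illustrative sketch: degrade an ultrasound frame to a target resolution and
# upsample it back to a fixed display size, for each resolution in the study.
from PIL import Image

def degrade(path, size, display=512):
    img = Image.open(path).convert("L")                     # grayscale frame
    small = img.resize((size, size), Image.BILINEAR)        # lose detail
    return small.resize((display, display), Image.NEAREST)  # re-expand for viewing

for px in (32, 64, 128, 224, 512):
    degrade("axillary_transverse.png", px).save(f"stimulus_{px}px.png")  # hypothetical file
```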

Multimodal deep learning for enhanced breast cancer diagnosis on sonography.

Wei TR, Chang A, Kang Y, Patel M, Fang Y, Yan Y

PubMed · Jun 12 2025
This study introduces a novel multimodal deep learning model tailored for the differentiation of benign and malignant breast masses using dual-view breast ultrasound images (radial and anti-radial views) in conjunction with the corresponding radiology reports. The proposed architecture includes specialized image and text encoders for independent feature extraction, along with a transformation layer that aligns the multimodal features for the subsequent classification task. The model achieved an area under the curve of 85% and outperformed unimodal models by 6% and 8% in Youden index. Additionally, our multimodal model surpassed zero-shot predictions generated by prominent foundation models such as CLIP and MedCLIP. In direct comparison with classification based on physician-assessed ratings, our model exhibited clear superiority, highlighting its practical significance in diagnostics. By integrating both image and text modalities, this study exemplifies the potential of multimodal deep learning to enhance diagnostic performance, laying the foundation for robust and transparent AI-assisted solutions.
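A minimal version of the described architecture, with separate image and text encoders, a shared projection ("transformation") layer, and a classification head, might look like the sketch below; the encoder choices, dimensions, and fusion by concatenation are assumptions, not the authors' released model.

```python
# Hypothetical, simplified image-text fusion classifier for benign vs malignant masses.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalClassifier(nn.Module):
    def __init__(self, text_dim=768, embed_dim=256):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                           # 512-d image features
        self.image_encoder = cnn
        self.image_proj = nn.Linear(512, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)  # e.g. a BERT report embedding
        self.head = nn.Linear(2 * embed_dim, 1)          # benign vs malignant logit

    def forward(self, radial_img, antiradial_img, text_emb):
        img = self.image_encoder(radial_img) + self.image_encoder(antiradial_img)
        fused = torch.cat([self.image_proj(img), self.text_proj(text_emb)], dim=-1)
        return self.head(fused)

logit = MultimodalClassifier()(torch.randn(2, 3, 224, 224),
                               torch.randn(2, 3, 224, 224),
                               torch.randn(2, 768))
```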

Using a Large Language Model for Breast Imaging Reporting and Data System Classification and Malignancy Prediction to Enhance Breast Ultrasound Diagnosis: Retrospective Study.

Miaojiao S, Xia L, Xian Tao Z, Zhi Liang H, Sheng C, Songsong W

PubMed · Jun 11 2025
Breast ultrasound is essential for evaluating breast nodules, with the Breast Imaging Reporting and Data System (BI-RADS) providing standardized classification. However, interobserver variability among radiologists can affect diagnostic accuracy. Large language models (LLMs) like ChatGPT-4 have shown potential in medical imaging interpretation. This study explores their feasibility for improving BI-RADS classification consistency and malignancy prediction compared with radiologists. This study aims to evaluate the feasibility of using LLMs, particularly ChatGPT-4, to assess the consistency and diagnostic accuracy of standardized breast ultrasound imaging reports, using pathology as the reference standard. This retrospective study analyzed breast nodule ultrasound data from 671 female patients (mean age 45.82, SD 9.20 years; range 26-75 years) who underwent biopsy or surgical excision at our hospital between June 2019 and June 2024. ChatGPT-4 was used to interpret BI-RADS classifications and predict benign versus malignant nodules. The study compared the model's performance to that of two senior radiologists (≥15 years of experience) and two junior radiologists (<5 years of experience) using key diagnostic metrics, including accuracy, sensitivity, specificity, area under the receiver operating characteristic curve, P values, and odds ratios with 95% CIs. Two diagnostic models were evaluated: (1) an image interpretation model, in which ChatGPT-4 classified nodules based on BI-RADS features, and (2) an image-to-text-LLM model, in which radiologists provided textual descriptions and ChatGPT-4 determined malignancy probability based on keywords. Radiologists were blinded to pathological outcomes, and BI-RADS classifications were finalized through consensus. ChatGPT-4 achieved an overall BI-RADS classification accuracy of 96.87%, outperforming junior radiologists (617/671, 91.95% and 604/671, 90.01%; P<.01). For malignancy prediction, ChatGPT-4 achieved an area under the receiver operating characteristic curve of 0.82 (95% CI 0.79-0.85), an accuracy of 80.63% (541/671 cases), a sensitivity of 90.56% (259/286 cases), and a specificity of 73.51% (283/385 cases). The image interpretation model demonstrated performance comparable to senior radiologists, while the image-to-text-LLM model further improved diagnostic accuracy for all radiologists, increasing their sensitivity and specificity significantly (P<.001). Statistical analyses, including the McNemar test and DeLong test, confirmed that ChatGPT-4 outperformed junior radiologists (P<.01) and showed noninferiority compared with senior radiologists (P>.05). Pathological diagnoses served as the reference standard, ensuring robust evaluation reliability. Integrating ChatGPT-4 into an image-to-text-LLM workflow improves BI-RADS classification accuracy and supports radiologists in breast ultrasound diagnostics. These results demonstrate its potential as a decision-support tool to enhance diagnostic consistency and reduce variability.
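The image-to-text-LLM workflow essentially wraps the radiologist's textual description in a prompt and asks the model for a BI-RADS category and malignancy call. The sketch below, using the OpenAI Python client, shows one possible shape of that step; the prompt wording, model name, and settings are illustrative assumptions, not the study's exact setup.

```python
# Hypothetical sketch of the image-to-text-LLM step: send a radiologist's
# description to an LLM and ask for BI-RADS category and malignancy likelihood.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_report(description: str) -> str:
    prompt = (
        "You are a breast imaging assistant. Based on the following ultrasound "
        "description, assign a BI-RADS category and state whether the nodule is "
        f"more likely benign or malignant.\n\nDescription: {description}"
    )
    response = client.chat.completions.create(
        model="gpt-4",                                   # stand-in model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(classify_report("Irregular hypoechoic mass, 12 mm, spiculated margins, "
                      "posterior acoustic shadowing, nonparallel orientation."))
```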

MoNetV2: Enhanced Motion Network for Freehand 3-D Ultrasound Reconstruction.

Luo M, Yang X, Yan Z, Cao Y, Zhang Y, Hu X, Wang J, Ding H, Han W, Sun L, Ni D

PubMed · Jun 11 2025
Three-dimensional ultrasound (US) aims to provide sonographers with the spatial relationships of anatomical structures, playing a crucial role in clinical diagnosis. Recently, deep-learning-based freehand 3-D US has made significant advancements. It reconstructs volumes by estimating transformations between images without external tracking. However, image-only reconstruction poses difficulties in reducing cumulative drift and further improving reconstruction accuracy, particularly in scenarios involving complex motion trajectories. In this context, we propose an enhanced motion network (MoNetV2) to improve the accuracy and generalizability of reconstruction under diverse scanning velocities and tactics. First, we propose a sensor-based temporal and multibranch structure (TMS) that fuses image and motion information from a velocity perspective to improve image-only reconstruction accuracy. Second, we devise an online multilevel consistency constraint (MCC) that exploits the inherent consistency of scans to handle various scanning velocities and tactics. This constraint exploits scan-level velocity consistency (SVC), path-level appearance consistency (PAC), and patch-level motion consistency (PMC) to supervise interframe transformation estimation. Third, we distill an online multimodal self-supervised strategy (MSS) that leverages the correlation between network estimation and motion information to further reduce cumulative errors. Extensive experiments clearly demonstrate that MoNetV2 surpasses existing methods in both reconstruction quality and generalizability across three large datasets.
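One way to read the scan-level velocity consistency (SVC) idea is as a penalty on variation of the estimated per-frame displacement magnitude within a scan acquired at roughly constant speed. The sketch below encodes that reading; it is an interpretation for illustration only, not the paper's released loss, and the other consistency terms (PAC, PMC) are omitted.

```python
# Loose, illustrative sketch of a scan-level velocity consistency penalty.
import torch

def velocity_consistency_loss(pred_translations):
    """pred_translations: (N, 3) estimated inter-frame translations for one scan."""
    speeds = pred_translations.norm(dim=1)          # per-frame displacement magnitude
    return ((speeds - speeds.mean()) ** 2).mean()   # penalize deviation from mean speed

loss = velocity_consistency_loss(torch.randn(99, 3))
```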