Page 68 of 135 (1347 results)

Deep Learning-Based Automated Measurement of Cervical Length in Transvaginal Ultrasound Images of Pregnant Women.

Kwon H, Sun S, Cho HC, Yun HS, Park S, Jung YJ, Kwon JY, Seo JK

PubMed · Jun 1 2025
Cervical length (CL) measurement using transvaginal ultrasound is an effective screening tool to assess the risk of preterm birth. An adequate assessment of CL is crucial; however, manual sonographic CL measurement is highly operator-dependent and cumbersome. A reliable and reproducible automatic method for CL measurement is therefore in high demand to reduce inter-rater variability and improve workflow. Despite the increasing use of artificial intelligence techniques in ultrasound, applying deep learning (DL) to analyze ultrasound images of the cervix remains a challenge due to low signal-to-noise ratios and the difficulty of capturing the cervical canal, which appears as a thin line with extremely low contrast against the surrounding tissues. To address these challenges, we developed CL-Net, a novel DL network that incorporates expert anatomical knowledge to identify the cervix, similar to the approach taken by clinicians. CL-Net captures anatomical features related to CL measurement, facilitating the identification of the cervical canal, and then automatically provides reproducible and reliable CL measurements. CL-Net achieved a success rate of 95.5% in recognizing the cervical canal, comparable to that of human experts (96.4%). Furthermore, the differences between CL-Net's measurements and the ground truth were considerably smaller than those made by non-experts and comparable to those made by experts (median 1.36 mm, IQR 0.87-2.82 mm, range 0.06-6.95 mm for straight cervices; median 1.31 mm, IQR 0.61-2.65 mm, range 0.01-8.18 mm for curved ones).
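The error statistics quoted above (median, IQR, and range of absolute differences between automated and ground-truth measurements) can be reproduced with a few lines of NumPy. The measurement values below are hypothetical placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical absolute differences (mm) between automated and
# ground-truth cervical-length measurements -- illustrative only.
diffs = np.array([0.4, 0.9, 1.3, 1.4, 2.1, 2.9, 3.5, 6.9])

median = np.median(diffs)                  # middle value of the sorted differences
q1, q3 = np.percentile(diffs, [25, 75])    # interquartile range endpoints
lo, hi = diffs.min(), diffs.max()          # full range

print(f"median {median:.2f} mm, IQR {q1:.2f}-{q3:.2f} mm, range {lo:.2f}-{hi:.2f} mm")
```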

Data Augmentation for Medical Image Classification Based on Gaussian Laplacian Pyramid Blending With a Similarity Measure.

Kumar A, Sharma A, Singh AK, Singh SK, Saxena S

PubMed · Jun 1 2025
Breast cancer is a devastating disease that affects women worldwide, and computer-aided algorithms have shown potential in automating cancer diagnosis. Recently, generative artificial intelligence (GenAI) has opened new possibilities for addressing the challenges of labeled-data scarcity and accurate prediction in critical applications. However, a lack of diversity, as well as unrealistic and unreliable data, has a detrimental impact on performance. This study therefore proposes an augmentation scheme to address the scarcity of labeled data and class imbalance in medical datasets. The approach integrates the concepts of the Gaussian-Laplacian pyramid and pyramid blending with similarity measures. To maintain the structural properties of images and capture the inter-patient variability of images of the same category, similarity-metric-based intermixing is introduced, which helps preserve the overall quality and integrity of the dataset. Subsequently, a significantly modified deep learning approach that leverages transfer learning through concatenated pre-trained models is applied to classify breast cancer histopathological images. The effectiveness of the proposal, including the impact of data augmentation, is demonstrated through a detailed analysis of three different medical datasets, showing significant performance improvement over baseline models. The proposal has the potential to contribute to the development of a more accurate and reliable approach for breast cancer diagnosis.
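The augmentation described above builds on classic Laplacian-pyramid blending. A minimal NumPy sketch of that underlying technique is shown below; it uses a 2x2 box average in place of a true Gaussian kernel, assumes even (e.g. power-of-two) grayscale image sizes, and omits the paper's similarity-metric gating of which image pairs get blended:

```python
import numpy as np

def downsample(img):
    """Halve resolution with a 2x2 box average (stand-in for a Gaussian blur)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsample back to `shape`."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass detail images plus one coarse residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))  # detail at this scale
        cur = small
    pyr.append(cur)  # coarsest (Gaussian) level
    return pyr

def blend(img_a, img_b, mask, levels=3):
    """Blend two images level by level so low frequencies mix smoothly."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    masks = [mask.astype(float)]              # Gaussian pyramid of the mask,
    for _ in range(levels):                   # one entry per Laplacian level
        masks.append(downsample(masks[-1]))
    blended = [m * a + (1 - m) * b for a, b, m in zip(pa, pb, masks)]
    out = blended[-1]                         # collapse pyramid back to full res
    for lvl in reversed(blended[:-1]):
        out = upsample(out, lvl.shape) + lvl
    return out
```

With a binary or soft mask this produces a new composite image whose seams are hidden across frequency bands, which is the blending primitive the proposed augmentation extends.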

Ultrasound measurement of relative tongue size and its correlation with tongue mobility for healthy individuals.

Sun J, Kitamura T, Nota Y, Yamane N, Hayashi R

PubMed · Jun 1 2025
The size of an individual's tongue relative to the oral cavity is associated with articulation speed [Feng, Lu, Zheng, Chi, and Honda, in Proceedings of the 10th Biennial Asia Pacific Conference on Speech, Language, and Hearing (2017), pp. 17-19] and may affect speech clarity. This study introduces an ultrasound-based method for measuring relative tongue size, termed ultrasound-based relative tongue size (uRTS), as a cost-effective alternative to the magnetic resonance imaging (MRI)-based method. Using deep learning to extract the tongue contour, uRTS was calculated from tongue and oropharyngeal cavity sizes in the midsagittal plane. Results from ten speakers showed a strong correlation between uRTS and MRI-based measurements (r = 0.87) and a negative correlation with tongue movement speed (r = -0.73), indicating uRTS is a useful index for assessing tongue size.
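A sketch of the kind of computation involved: a relative-size index taken as the ratio of tongue to oropharyngeal cavity size, correlated against a second measurement with Pearson's r. The areas, the exact ratio definition, and the comparison values below are all illustrative assumptions, not the paper's data:

```python
import numpy as np

# Hypothetical midsagittal sizes (cm^2) for five speakers -- illustrative only.
tongue_area = np.array([28.1, 31.4, 25.9, 30.2, 27.5])
oropharyngeal_area = np.array([41.0, 42.3, 40.1, 39.8, 43.2])

# Relative tongue size as a fraction of the oropharyngeal cavity
# (an assumed stand-in for the paper's uRTS definition).
urts = tongue_area / oropharyngeal_area

# Pearson correlation against a second index (e.g. an MRI-based measurement).
mri_index = np.array([0.66, 0.75, 0.63, 0.77, 0.62])
r = np.corrcoef(urts, mri_index)[0, 1]
```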

Diagnostic value of deep learning of multimodal imaging of thyroid for TI-RADS category 3-5 classification.

Qian T, Feng X, Zhou Y, Ling S, Yao J, Lai M, Chen C, Lin J, Xu D

PubMed · Jun 1 2025
Thyroid nodules classified within Thyroid Imaging Reporting and Data System (TI-RADS) categories 3-5 are typically regarded as having varying degrees of malignancy risk, with the risk increasing from TI-RADS 3 to TI-RADS 5. While some of these nodules may undergo fine-needle aspiration (FNA) biopsy to assess their nature, this procedure carries a risk of false negatives and inherent complications. To avoid unnecessary biopsy examinations, we explored a method for distinguishing the benign and malignant characteristics of thyroid TI-RADS 3-5 nodules based on deep learning of ultrasound images combined with computed tomography (CT). Thyroid nodules assessed as American College of Radiology (ACR) TI-RADS category 3-5 on conventional ultrasound, all of which had postoperative pathology results, were examined using both conventional ultrasound and CT before operation. We investigated the effectiveness of deep-learning models based on ultrasound alone, CT alone, and a combination of both imaging modalities using the following metrics: area under the curve (AUC), sensitivity, accuracy, and positive predictive value (PPV). Additionally, we compared the diagnostic efficacy of the combined method with manual readings of ultrasound and CT. A total of 768 thyroid nodules falling within TI-RADS categories 3-5 were identified across 768 patients. The dataset comprised 499 malignant and 269 benign cases. For the automatic identification of thyroid TI-RADS category 3-5 nodules, deep learning combining ultrasound and CT demonstrated a significantly higher AUC (0.930; 95% CI: 0.892, 0.969) than ultrasound alone (AUC 0.901; 95% CI: 0.856, 0.947) or CT alone (AUC 0.776; 95% CI: 0.713, 0.840). Additionally, the AUC of the combined modalities surpassed that of radiologists' assessments using ultrasound alone (mean AUC 0.725; 95% CI: 0.677, 0.773) and CT alone (mean AUC 0.617; 95% CI: 0.564, 0.669).
A deep learning method combining ultrasound and CT imaging of the thyroid can allow more accurate and precise classification of nodules within TI-RADS categories 3-5.

Dental practitioners versus artificial intelligence software in assessing alveolar bone loss using intraoral radiographs.

Almarghlani A, Fakhri J, Almarhoon A, Ghonaim G, Abed H, Sharka R

PubMed · Jun 1 2025
Integrating artificial intelligence (AI) in the dental field can potentially enhance the efficiency of dental care. However, few studies have investigated whether AI software can achieve results comparable to those obtained by dental practitioners (general practitioners (GPs) and specialists) when assessing alveolar bone loss in a clinical setting. Thus, this study compared the performance of AI in assessing periodontal bone loss with that of GPs and specialists. This comparative cross-sectional study evaluated the performance of dental practitioners and AI software in assessing alveolar bone loss. Radiographs were randomly selected to ensure representative samples. Dental practitioners independently evaluated the radiographs, and the AI software "Second Opinion Software" was tested using the same set of radiographs. The results produced by the AI software were then compared with the baseline values to measure their accuracy and allow direct comparison with the performance of human specialists. The survey received 149 responses, with each respondent answering 10 questions comparing the measurements made by AI and dental practitioners when assessing the amount of bone loss radiographically. The mean estimates of the participants had a moderate positive correlation with the radiographic measurements (rho = 0.547, <i>p</i> < 0.001) and a weaker but still significant correlation with AI measurements (rho = 0.365, <i>p</i> < 0.001). AI measurements had a stronger positive correlation with the radiographic measurements (rho = 0.712, <i>p</i> < 0.001) than with the estimates of dental practitioners. This study highlights the capacity of AI software to enhance the accuracy and efficiency of radiograph-based evaluations of alveolar bone loss. Dental practitioners are vital for clinical experience, but AI technology provides a consistent and replicable methodology.
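The rho values reported above are Spearman rank correlations. A minimal sketch of the statistic (Pearson correlation of the ranks; tie handling is omitted for simplicity, which is fine when all values are distinct):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank orders of x and y.
    Ties would need average ranks; this simple version assumes distinct values."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element of x
    ry = np.argsort(np.argsort(y)).astype(float)  # rank of each element of y
    return np.corrcoef(rx, ry)[0, 1]
```

Because it works on ranks, the statistic captures any monotonic relationship between, say, AI measurements and radiographic bone-loss measurements, not just a linear one.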
Future collaborations between AI experts, researchers, and practitioners could potentially optimize patient care.

AO Spine Clinical Practice Recommendations for Diagnosis and Management of Degenerative Cervical Myelopathy: Evidence Based Decision Making - A Review of Cutting Edge Recent Literature Related to Degenerative Cervical Myelopathy.

Fehlings MG, Evaniew N, Ter Wengel PV, Vedantam A, Guha D, Margetis K, Nouri A, Ahmed AI, Neal CJ, Davies BM, Ganau M, Wilson JR, Martin AR, Grassner L, Tetreault L, Rahimi-Movaghar V, Marco R, Harrop J, Guest J, Alvi MA, Pedro KM, Kwon BK, Fisher CG, Kurpad SN

PubMed · Jun 1 2025
Study Design: Literature review of key topics related to degenerative cervical myelopathy (DCM) with critical appraisal and clinical recommendations. Objective: This article summarizes several key current topics related to the management of DCM. Methods: Recent literature related to the management of DCM was reviewed. Four articles were selected and critically appraised. Recommendations were graded as Strong or Conditional. Results: Article 1: The Relationship Between Pre-operative MRI Signal Intensity and Outcomes. <b>Conditional</b> recommendation to use diffusion-weighted imaging MR signal changes in the cervical cord to evaluate prognosis following surgical intervention for DCM. Article 2: Efficacy and Safety of Surgery for Mild DCM. <b>Conditional</b> recommendation that surgery is a valid option for mild DCM with favourable clinical outcomes. Article 3: Effect of Ventral vs Dorsal Spinal Surgery on Patient-Reported Physical Functioning in Patients With Cervical Spondylotic Myelopathy: A Randomized Clinical Trial. <b>Strong</b> recommendation that there is equipoise in the outcomes of anterior vs posterior surgical approaches in cases where either technique could be used. Article 4: Machine Learning-Based Cluster Analysis of DCM Phenotypes. <b>Conditional</b> recommendation that clinicians consider pain, medical frailty, and the impact on health-related quality of life when counselling patients. Conclusions: DCM requires a multidimensional assessment including neurological dysfunction, pain, impact on health-related quality of life, medical frailty, and MR imaging changes in the cord. Surgical treatment is effective and is a valid option for mild DCM. In patients where either anterior or posterior surgical approaches can be used, both techniques afford similar clinical benefit, albeit with different complication profiles.

Enhancing diagnostic accuracy of thyroid nodules: integrating self-learning and artificial intelligence in clinical training.

Kim D, Hwang YA, Kim Y, Lee HS, Lee E, Lee H, Yoon JH, Park VY, Rho M, Yoon J, Lee SE, Kwak JY

PubMed · Jun 1 2025
This study explores a self-learning method as an auxiliary approach in residency training for distinguishing between benign and malignant thyroid nodules. Conducted from March to December 2022, internal medicine residents underwent three repeated learning sessions with a "learning set" comprising 3000 thyroid nodule images. Diagnostic performance of internal medicine residents was assessed before the study and after every learning session, and that of radiology residents before and after one-on-one education, using a "test set" comprising 120 thyroid nodule images. Finally, all residents repeated the same test using artificial intelligence computer-assisted diagnosis (AI-CAD). Twenty-one internal medicine and eight radiology residents participated. Initially, internal medicine residents had a lower area under the receiver operating characteristic curve (AUROC) than radiology residents (0.578 vs. 0.701, P < 0.001), improving post-learning (0.578 to 0.709, P < 0.001) to a level comparable with radiology residents (0.709 vs. 0.735, P = 0.17). Further improvement occurred with AI-CAD for both groups (0.709 to 0.755, P < 0.001; 0.735 to 0.768, P = 0.03). The proposed iterative self-learning method using a large volume of ultrasonographic images can assist beginners, such as residents, in thyroid imaging to differentiate benign and malignant thyroid nodules. Additionally, AI-CAD can improve diagnostic performance across varied levels of experience in thyroid imaging.

Multimodal Artificial Intelligence Using Endoscopic USG, CT, and MRI to Differentiate Between Serous and Mucinous Cystic Neoplasms.

Seza K, Tawada K, Kobayashi A, Nakamura K

PubMed · Jun 1 2025
Introduction Serous cystic neoplasms (SCN) and mucinous cystic neoplasms (MCN) often exhibit similar imaging features when evaluated with a single imaging modality. Differentiating between SCN and MCN typically necessitates the utilization of multiple imaging techniques, including computed tomography (CT), magnetic resonance imaging (MRI), and endoscopic ultrasonography (EUS). Recent research indicates that artificial intelligence (AI) can effectively distinguish between SCN and MCN using single-modal imaging. Despite these advancements, the diagnostic performance of AI has not yet reached an optimal level. This study compares the efficacy of AI in classifying SCN and MCN using multimodal versus single-modal imaging. The objective was to assess the effectiveness of AI utilizing multimodal imaging with EUS, CT, and MRI to classify these two types of pancreatic cysts. Methods We retrospectively gathered data from 25 patients with surgically confirmed SCN and 24 patients with surgically confirmed MCN as part of a multicenter study. Imaging was conducted using four modalities: EUS, early-phase contrast-enhanced abdominal CT, T2-weighted MRI, and magnetic resonance pancreatography. Four images per modality were obtained for each tumor. Data augmentation techniques were utilized, resulting in a final dataset of 39,200 images per modality. An AI model based on ResNet was employed to categorize the cysts as SCN or MCN, incorporating clinical features and combinations of imaging modalities (single, double, triple, and all four modalities). The classification outcomes were compared with those of five gastroenterologists, each with over 10 years of experience. The comparison was based on three performance metrics: sensitivity, specificity, and accuracy. Results For AI utilizing a single imaging modality, the sensitivity, specificity, and accuracy were 87.0%, 92.7%, and 90.8%, respectively.
Combining two imaging modalities improved the sensitivity, specificity, and accuracy to 95.3%, 95.1%, and 94.9%, respectively. With three modalities, AI achieved a sensitivity of 96.0%, a specificity of 99.0%, and an accuracy of 97.0%. Ultimately, employing all four imaging modalities resulted in AI achieving 98.0% sensitivity, 100% specificity, and 99.0% accuracy. In contrast, experts utilizing all four modalities attained a sensitivity of 78.0%, specificity of 82.0%, and accuracy of 81.0%. The AI models consistently outperformed the experts across all metrics. A continuous enhancement in performance was observed with each additional imaging modality, with AI utilizing three and four modalities significantly surpassing single-modal imaging AI. Conclusion AI utilizing multimodal imaging offers better performance than both single-modal imaging AI and experienced human experts in classifying SCN and MCN.
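The three metrics reported throughout this abstract follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts chosen only to illustrate rates similar in magnitude to the four-modality figures (not the study's actual confusion matrix):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts,
    treating one class (say MCN) as 'positive'."""
    sensitivity = tp / (tp + fn)               # positives correctly detected
    specificity = tn / (tn + fp)               # negatives correctly rejected
    accuracy = (tp + tn) / (tp + fp + tn + fn) # all correct over all cases
    return sensitivity, specificity, accuracy

# Hypothetical counts on an (assumed) evaluation set of 100 images.
sens, spec, acc = diagnostic_metrics(tp=49, fp=0, tn=50, fn=1)
```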

Deep Learning in Digital Breast Tomosynthesis: Current Status, Challenges, and Future Trends.

Wang R, Chen F, Chen H, Lin C, Shuai J, Wu Y, Ma L, Hu X, Wu M, Wang J, Zhao Q, Shuai J, Pan J

PubMed · Jun 1 2025
The high-resolution three-dimensional (3D) images generated with digital breast tomosynthesis (DBT) in the screening of breast cancer offer new possibilities for early disease diagnosis. Early detection is especially important as the incidence of breast cancer increases. However, DBT also presents challenges in terms of poorer results for dense breasts, increased false positive rates, slightly higher radiation doses, and increased reading times. Deep learning (DL) has been shown to effectively increase the processing efficiency and diagnostic accuracy of DBT images. This article reviews the application and outlook of DL in DBT-based breast cancer screening. First, the fundamentals and challenges of DBT technology are introduced. The applications of DL in DBT are then grouped into three categories: diagnostic classification of breast diseases, lesion segmentation and detection, and medical image generation. Additionally, the current public databases for mammography are summarized in detail. Finally, this paper analyzes the main challenges in the application of DL techniques in DBT, such as the lack of public datasets and model training issues, and proposes possible directions for future research, including large language models, multisource domain transfer, and data augmentation, to encourage innovative applications of DL in medical imaging.

Implementation costs and cost-effectiveness of ultraportable chest X-ray with artificial intelligence in active case finding for tuberculosis in Nigeria.

Garg T, John S, Abdulkarim S, Ahmed AD, Kirubi B, Rahman MT, Ubochioma E, Creswell J

PubMed · Jun 1 2025
Availability of ultraportable chest X-ray (CXR) and advancements in artificial intelligence (AI)-enabled CXR interpretation are promising developments in tuberculosis (TB) active case finding (ACF), but costing and cost-effectiveness analyses are limited. We provide implementation cost and cost-effectiveness estimates for different screening algorithms using symptoms, CXR, and AI in Nigeria. People 15 years and older were screened for TB symptoms and offered a CXR with AI-enabled interpretation using qXR v3 (Qure.ai) at lung health camps. Sputum samples were tested on Xpert MTB/RIF for individuals reporting symptoms or with qXR abnormality scores ≥0.30. We conducted a retrospective costing using a combination of top-down and bottom-up approaches while utilizing itemized expense data from a health system perspective. We estimated costs in five screening scenarios: abnormality score ≥0.30; abnormality score ≥0.50; cough ≥2 weeks; any symptom; and abnormality score ≥0.30 or any symptom. We calculated total implementation costs and cost per bacteriologically-confirmed case detected, and assessed cost-effectiveness using the incremental cost-effectiveness ratio (ICER), the additional cost per additional case detected. Overall, 3205 people with presumptive TB were identified, 1021 were tested, and 85 people with bacteriologically-confirmed TB were detected. The abnormality ≥0.30 or any symptom algorithm had the highest total cost (US$65,704), while cough ≥2 weeks had the lowest (US$40,740). The cost per case was US$1,198 for cough ≥2 weeks and lowest for any symptom (US$635). Compared to the baseline strategy of cough ≥2 weeks, the ICER was US$191 per additional case detected for any symptom and US$2,096 for the abnormality ≥0.30 or any symptom algorithm. Using CXR and AI had a lower cost per case detected than any symptom-based screening criterion when asymptomatic TB exceeded 30% of all bacteriologically-confirmed TB detected.
Compared to traditional symptom screening, using CXR and AI in combination with symptoms detects more cases at a lower cost per case detected and is cost-effective. TB programs should explore adoption of CXR and AI for screening in ACF.
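The ICER used above is simply the extra cost of a strategy divided by the extra cases it detects relative to a baseline. A sketch with entirely hypothetical totals (the abstract reports the resulting ICERs, not every underlying count):

```python
def icer(cost_new, cases_new, cost_base, cases_base):
    """Incremental cost-effectiveness ratio:
    additional cost per additional case detected versus the baseline."""
    return (cost_new - cost_base) / (cases_new - cases_base)

# Hypothetical totals for two screening strategies -- illustrative only.
baseline_cost, baseline_cases = 40000.0, 34       # e.g. cough >= 2 weeks
expanded_cost, expanded_cases = 48000.0, 74       # e.g. a broader algorithm

extra_cost_per_case = icer(expanded_cost, expanded_cases,
                           baseline_cost, baseline_cases)
print(f"ICER: US${extra_cost_per_case:.0f} per additional case detected")
```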