
Deep Learning-Based Breast Cancer Detection in Mammography: A Multi-Center Validation Study in Thai Population

Isarun Chamveha, Supphanut Chaiyungyuen, Sasinun Worakriangkrai, Nattawadee Prasawang, Warasinee Chaisangmongkon, Pornpim Korpraphong, Voraparee Suvannarerg, Shanigarn Thiravit, Chalermdej Kannawat, Kewalin Rungsinaporn, Suwara Issaragrisil, Payia Chadbunchachai, Pattiya Gatechumpol, Chawiporn Muktabhant, Patarachai Sereerat

arXiv preprint · May 29, 2025
This study presents a deep learning system for breast cancer detection in mammography, developed using a modified EfficientNetV2 architecture with enhanced attention mechanisms. The model was trained on mammograms from a major Thai medical center and validated on three distinct datasets: an in-domain test set (9,421 cases), a biopsy-confirmed set (883 cases), and an out-of-domain generalizability set (761 cases) collected from two different hospitals. For cancer detection, the model achieved AUROCs of 0.89, 0.96, and 0.94 on the respective datasets. The system's lesion localization capability, evaluated using metrics including Lesion Localization Fraction (LLF) and Non-Lesion Localization Fraction (NLF), demonstrated robust performance in identifying suspicious regions. Clinical validation through concordance tests showed strong agreement with radiologists: 83.5% classification and 84.0% localization concordance for biopsy-confirmed cases, and 78.1% classification and 79.6% localization concordance for out-of-domain cases. Expert radiologists' acceptance rate also averaged 96.7% for biopsy-confirmed cases and 89.3% for out-of-domain cases. The system achieved a System Usability Scale score of 74.17 for the source hospital and 69.20 for the validation hospitals, indicating good clinical acceptance. These results demonstrate the model's effectiveness in assisting mammogram interpretation, with the potential to enhance breast cancer screening workflows in clinical practice.
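The AUROC values reported above have a simple probabilistic reading: the chance that a randomly chosen cancer case receives a higher model score than a randomly chosen non-cancer case. A minimal pure-Python sketch of that rank-based estimator (illustrative only, not the authors' evaluation code):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a random negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.89 therefore means that in 89% of positive/negative case pairs, the model ranks the positive case higher.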

Deep learning reconstruction enhances tophus detection in a dual-energy CT phantom study.

Schmolke SA, Diekhoff T, Mews J, Khayata K, Kotlyarov M

PubMed paper · May 28, 2025
This study aimed to compare two deep learning reconstruction (DLR) techniques (AiCE mild; AiCE strong) with two established methods-iterative reconstruction (IR) and filtered back projection (FBP)-for the detection of monosodium urate (MSU) in dual-energy computed tomography (DECT). An ex vivo bio-phantom and a raster phantom were prepared by inserting syringes containing different MSU concentrations and scanned in a 320-row volume DECT scanner at different tube currents. The scans were reconstructed in a soft tissue kernel using the four reconstruction techniques mentioned above, followed by quantitative assessment of MSU volumes and image quality parameters, i.e., signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Both DLR techniques outperformed conventional IR and FBP in terms of volume detection and image quality. Notably, unlike IR and FBP, the two DLR methods showed no positive correlation of the MSU detection rate with the CT dose index (CTDIvol) in the bio-phantom. Our study highlights the potential of DLR for DECT imaging in gout, where it offers enhanced detection sensitivity, improved image contrast, reduced image noise, and lower radiation exposure. Further research is needed to assess the clinical reliability of this approach.
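The SNR and CNR metrics used here are typically computed from region-of-interest (ROI) pixel statistics; exact definitions vary between studies, so the following is a sketch of one common convention (mean ROI signal over background noise SD), not necessarily the formula used in this paper:

```python
from statistics import mean, stdev

def snr(roi_pixels, background_pixels):
    # Signal-to-noise ratio: mean ROI signal divided by the
    # standard deviation of the background (noise) region.
    return mean(roi_pixels) / stdev(background_pixels)

def cnr(roi_pixels, background_pixels):
    # Contrast-to-noise ratio: absolute ROI-vs-background contrast
    # divided by the background noise standard deviation.
    return abs(mean(roi_pixels) - mean(background_pixels)) / stdev(background_pixels)
```

Higher CNR at the same dose is what makes the DLR reconstructions more favorable for MSU detection.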

Automatic assessment of lower limb deformities using high-resolution X-ray images.

Rostamian R, Panahi MS, Karimpour M, Nokiani AA, Khaledi RJ, Kashani HG

PubMed paper · May 27, 2025
Planning an osteotomy or arthroplasty surgery on a lower limb requires prior classification/identification of its deformities. The detection of skeletal landmarks and the calculation of angles required to identify the deformities are traditionally done manually, with measurement accuracy relying considerably on the experience of the individual doing the measurements. We propose a novel, image pyramid-based approach to skeletal landmark detection. The proposed approach uses a Convolutional Neural Network (CNN) that receives the raw X-ray image as input and produces the coordinates of the landmarks. The landmark estimations are modified iteratively via the error feedback method to come closer to the target. Our clinically produced full-leg X-ray dataset is made publicly available and used to train and test the network. Angular quantities are calculated based on detected landmarks. Angles are then classified as lower than normal, normal, or higher than normal according to predefined ranges for a normal condition. The performance of our approach is evaluated at several levels: landmark coordinate accuracy, angle measurement accuracy, and classification accuracy. The average absolute error (difference between automatically and manually determined coordinates) for landmarks was 0.79 ± 0.57 mm on test data, and the average absolute error (difference between automatically and manually calculated angles) for angles was 0.45 ± 0.42°. Results from multiple case studies involving high-resolution images show that the proposed approach outperforms previous deep learning-based approaches in terms of accuracy and computational cost. It also enables automatic detection of lower-limb misalignments in full-leg X-ray images.
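The angle-then-classify step described above reduces to plane geometry once landmark coordinates are available. A minimal sketch (the normal-range bounds are hypothetical placeholders, not the paper's clinical thresholds):

```python
import math

def angle_deg(a, b, c):
    """Angle at vertex b (in degrees) formed by landmarks a-b-c,
    each given as (x, y) pixel or millimeter coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    return math.degrees(math.atan2(abs(cross), dot))

def classify(angle, lo, hi):
    # Compare a measured angle against a predefined normal range.
    if angle < lo:
        return "lower than normal"
    if angle > hi:
        return "higher than normal"
    return "normal"
```

With sub-millimeter landmark error (0.79 mm reported above), the propagated angular error stays well under a degree for typical full-leg landmark spacings.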

Methodological Challenges in Deep Learning-Based Detection of Intracranial Aneurysms: A Scoping Review.

Joo B

PubMed paper · May 26, 2025
Artificial intelligence (AI), particularly deep learning, has demonstrated high diagnostic performance in detecting intracranial aneurysms on computed tomography angiography (CTA) and magnetic resonance angiography (MRA). However, the clinical translation of these technologies remains limited due to methodological limitations and concerns about generalizability. This scoping review comprehensively evaluates 36 studies that applied deep learning to intracranial aneurysm detection on CTA or MRA, focusing on study design, validation strategies, reporting practices, and reference standards. Key findings include inconsistent handling of ruptured and previously treated aneurysms, underreporting of coexisting brain or vascular abnormalities, limited use of external validation, and an almost complete absence of prospective study designs. Only a minority of studies employed diagnostic cohorts that reflect real-world aneurysm prevalence, and few reported all essential performance metrics, such as patient-wise and lesion-wise sensitivity, specificity, and false positives per case. These limitations suggest that current studies remain at the stage of technical validation, with high risks of bias and limited clinical applicability. To facilitate real-world implementation, future research must adopt more rigorous designs, representative and diverse validation cohorts, standardized reporting practices, and greater attention to human-AI interaction.
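The review's call for reporting patient-wise sensitivity, specificity, and false positives per case can be made concrete with per-case records. A sketch under an assumed record layout (the tuple fields are illustrative, not from any reviewed study):

```python
def detection_metrics(cases):
    """Patient-wise detection metrics from per-case records.
    Each case is (has_aneurysm: bool, detected: bool, false_positives: int)."""
    tp = sum(1 for pos, det, _ in cases if pos and det)
    fn = sum(1 for pos, det, _ in cases if pos and not det)
    tn = sum(1 for pos, _, fp in cases if not pos and fp == 0)
    fp_cases = sum(1 for pos, _, fp in cases if not pos and fp > 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp_cases),
        "fps_per_case": sum(fp for *_, fp in cases) / len(cases),
    }
```

Note that specificity is only computable when the cohort contains aneurysm-free cases, which is exactly why the review emphasizes diagnostic cohorts reflecting real-world prevalence.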

Fetal origins of adult disease: transforming prenatal care by integrating Barker's Hypothesis with AI-driven 4D ultrasound.

Andonotopo W, Bachnas MA, Akbar MIA, Aziz MA, Dewantiningrum J, Pramono MBA, Sulistyowati S, Stanojevic M, Kurjak A

PubMed paper · May 26, 2025
The fetal origins of adult disease, widely known as Barker's Hypothesis, suggest that adverse fetal environments significantly impact the risk of developing chronic diseases, such as diabetes and cardiovascular conditions, in adulthood. Recent advancements in 4D ultrasound (4D US) and artificial intelligence (AI) technologies offer a promising avenue for improving prenatal diagnostics and validating this hypothesis. These innovations provide detailed insights into fetal behavior and neurodevelopment, linking early developmental markers to long-term health outcomes. This study synthesizes contemporary developments in AI-enhanced 4D US, focusing on their roles in detecting fetal anomalies, assessing neurodevelopmental markers, and evaluating congenital heart defects. The integration of AI with 4D US allows for real-time, high-resolution visualization of fetal anatomy and behavior, surpassing the diagnostic precision of traditional methods. Despite these advancements, challenges such as algorithmic bias, data diversity, and real-world validation persist and require further exploration. Findings demonstrate that AI-driven 4D US improves diagnostic sensitivity and accuracy, enabling earlier detection of fetal abnormalities and optimization of clinical workflows. By providing a more comprehensive understanding of fetal programming, these technologies substantiate the links between early-life conditions and adult health outcomes, as proposed by Barker's Hypothesis. The integration of AI and 4D US has the potential to revolutionize prenatal care, paving the way for personalized maternal-fetal healthcare. Future research should focus on addressing current limitations, including ethical concerns and accessibility challenges, to promote equitable implementation. Such advancements could significantly reduce the global burden of chronic diseases and foster healthier generations.

A dataset for quality evaluation of pelvic X-ray and diagnosis of developmental dysplasia of the hip.

Qi G, Jiao X, Li J, Qin C, Li X, Sun Z, Zhao Y, Jiang R, Zhu Z, Zhao G, Yu G

PubMed paper · May 26, 2025
Developmental Dysplasia of the Hip (DDH) is one of the most common hip disorders in pediatric orthopedics. Automated diagnostic tools driven by artificial intelligence can provide substantial assistance to clinicians in diagnosing DDH. We have developed a dataset designated Multitasking DDH (MTDDH), composed of two sub-datasets. Dataset 1 comprises 1,250 pelvic X-ray images, annotated with four discrete regions for evaluating pelvic X-ray quality together with eight key points supporting DDH diagnosis. Dataset 2 contains 906 pelvic X-ray images, each annotated with eight key points for assisting in the diagnosis of DDH. Notably, MTDDH is the first dataset designed for comprehensive evaluation of pelvic X-ray quality while also offering the most complete set of eight key points to support DDH diagnosis, meeting the need for enhanced diagnostic precision. Finally, we present the process of constructing MTDDH and provide a concise introduction to its application.

AI in Orthopedic Research: A Comprehensive Review.

Misir A, Yuce A

PubMed paper · May 26, 2025
Artificial intelligence (AI) is revolutionizing orthopedic research and clinical practice by enhancing diagnostic accuracy, optimizing treatment strategies, and streamlining clinical workflows. Recent advances in deep learning have enabled the development of algorithms that detect fractures, grade osteoarthritis, and identify subtle pathologies in radiographic and magnetic resonance images with performance comparable to expert clinicians. These AI-driven systems reduce missed diagnoses and provide objective, reproducible assessments that facilitate early intervention and personalized treatment planning. Moreover, AI has made significant strides in predictive analytics by integrating diverse patient data-including gait and imaging features-to forecast surgical outcomes, implant survivorship, and rehabilitation trajectories. Emerging applications in robotics, augmented reality, digital twin technologies, and exoskeleton control promise to further transform preoperative planning and intraoperative guidance. Despite these promising developments, challenges such as data heterogeneity, algorithmic bias, and the "black box" nature of many models-as well as issues with robust validation-remain. This comprehensive review synthesizes current developments, critically examines limitations, and outlines future directions for integrating AI into musculoskeletal care.

Deep learning-based identification of vertebral fracture and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment to predict incident fracture.

Hong N, Cho SW, Lee YH, Kim CO, Kim HC, Rhee Y, Leslie WD, Cummings SR, Kim KM

PubMed paper · May 24, 2025
Deep learning (DL) identification of vertebral fractures and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment (VFA) images may improve fracture risk assessment in older adults. In 26 299 lateral spine radiographs from 9276 individuals attending a tertiary-level institution (60% train set; 20% validation set; 20% test set; VERTE-X cohort), DL models were developed to detect prevalent vertebral fracture (pVF) and osteoporosis. The pre-trained DL models from lateral spine radiographs were then fine-tuned in 30% of a DXA VFA dataset (KURE cohort), with performance evaluated in the remaining 70% test set. The area under the receiver operating characteristics curve (AUROC) for DL models to detect pVF and osteoporosis was 0.926 (95% CI 0.908-0.955) and 0.848 (95% CI 0.827-0.869) from VERTE-X spine radiographs, respectively, and 0.924 (95% CI 0.905-0.942) and 0.867 (95% CI 0.853-0.881) from KURE DXA VFA images, respectively. A total of 13.3% and 13.6% of individuals sustained an incident fracture during a median follow-up of 5.4 years and 6.4 years in the VERTE-X test set (n = 1852) and KURE test set (n = 2456), respectively. Incident fracture risk was significantly greater among individuals with DL-detected vertebral fracture (hazard ratios [HRs] 3.23 [95% CI 2.51-5.17] and 2.11 [95% CI 1.62-2.74] for the VERTE-X and KURE test sets) or DL-detected osteoporosis (HR 2.62 [95% CI 1.90-3.63] and 2.14 [95% CI 1.72-2.66]), which remained significant after adjustment for clinical risk factors and femoral neck bone mineral density. DL scores improved incident fracture discrimination and net benefit when combined with clinical risk factors. In summary, DL-detected pVF and osteoporosis in lateral spine radiographs and DXA VFA images enhanced fracture risk prediction in older adults.
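The hazard ratios quoted above come from Cox proportional hazards models, where the reported HR and its confidence interval are exponentials of the fitted log-hazard coefficient. A sketch of that transform (the beta and standard-error values below are illustrative, not taken from the study):

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Convert a Cox model log-hazard coefficient (beta) and its
    standard error into a hazard ratio with a ~95% confidence interval."""
    hr = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return hr, lower, upper
```

An HR of 3.23 for DL-detected vertebral fracture thus corresponds to a log-hazard coefficient of about 1.17, and the interval excluding 1.0 is what makes the association statistically significant.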

Detection, Classification, and Segmentation of Rib Fractures From CT Data Using Deep Learning Models: A Review of Literature and Pooled Analysis.

Den Hengst S, Borren N, Van Lieshout EMM, Doornberg JN, Van Walsum T, Wijffels MME, Verhofstad MHJ

PubMed paper · May 23, 2025
Trauma-induced rib fractures are common injuries. The gold standard for diagnosing rib fractures is computed tomography (CT), but the sensitivity in the acute setting is low, and interpreting CT slices is labor-intensive. This has led to the development of new diagnostic approaches leveraging deep learning (DL) models. This systematic review and pooled analysis aimed to compare the performance of DL models in the detection, segmentation, and classification of rib fractures based on CT scans. A literature search was performed using various databases for studies describing DL models detecting, segmenting, or classifying rib fractures from CT data. Reported performance metrics included sensitivity, false-positive rate, F1-score, precision, accuracy, and mean average precision. A meta-analysis was performed on the sensitivity scores to compare the DL models with clinicians. Of the 323 identified records, 25 were included. Twenty-one studies reported on detection, four on segmentation, and ten on classification. Twenty studies had adequate data for meta-analysis. The gold standard labels were provided by clinicians who were radiologists and orthopedic surgeons. For detecting rib fractures, DL models had a higher sensitivity (86.7%; 95% CI: 82.6%-90.2%) than clinicians (75.4%; 95% CI: 68.1%-82.1%). In classification, the sensitivity of DL models for displaced rib fractures (97.3%; 95% CI: 95.6%-98.5%) was significantly better than that of clinicians (88.2%; 95% CI: 84.8%-91.3%). DL models for rib fracture detection and classification achieved promising results. With better sensitivities than clinicians for detecting and classifying displaced rib fractures, the future should focus on implementing DL models in daily clinics. Level III-systematic review and pooled analysis.
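Pooled sensitivities like those above are commonly obtained by inverse-variance weighting of per-study estimates on the logit scale. The abstract does not state the exact pooling model used, so the fixed-effect sketch below is an assumption for illustration only:

```python
import math

def pooled_sensitivity(studies):
    """Fixed-effect pooling of per-study sensitivities on the logit scale,
    weighting each study by the inverse variance of its logit estimate.
    Each study is (true_positives, fractures_total)."""
    num = den = 0.0
    for tp, n in studies:
        p = (tp + 0.5) / (n + 1.0)            # continuity correction
        logit = math.log(p / (1 - p))
        var = 1 / (tp + 0.5) + 1 / (n - tp + 0.5)
        weight = 1 / var
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1 / (1 + math.exp(-pooled_logit))  # back-transform to a proportion
```

A random-effects model (e.g. DerSimonian-Laird) would additionally account for between-study heterogeneity, which is likely relevant given the variety of DL architectures across the 20 pooled studies.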