Page 84 of 2262251 results

Impact of ablation on regional strain from 4D computed tomography in the left atrium.

Mehringer N, Severance L, Park A, Ho G, McVeigh E

PubMed | Jun 20 2025
Ablation for atrial fibrillation targets arrhythmogenic substrate in the left atrium (LA) myocardium with therapeutic energy, resulting in scar tissue. Although global LA function typically improves after ablation, the injured tissue is stiffer and non-contractile. The local functional impact of ablation has not been thoroughly investigated. This study retrospectively analyzed the LA mechanics of 15 subjects who received four-dimensional computed tomography (4DCT) scans pre- and post-ablation for atrial fibrillation. LA volumes were automatically segmented at every frame by a trained neural network and converted into surface meshes. Local endocardial strain was computed at a resolution of 2 mm from the deforming meshes. The LA endocardial surface was automatically divided into five walls and further into 24 sub-segments using the left atrial positioning system. Intraoperative notes gathered during the ablation procedure identified which regions received ablative treatment. At an average of 18 months after ablation, strain had decreased by 16.3% in the septal wall and by 18.3% in the posterior wall. In subjects imaged in sinus rhythm both before and after the procedure, ablation reduced regional strain by 15.3% (p = 0.012). Post-ablation strain maps demonstrated spatial patterns of reduced strain that matched the ablation pattern. This study demonstrates the capability of 4DCT to capture high-resolution changes in left atrial strain in response to tissue damage and explores the quantification of regionally reduced LA function due to scar tissue.
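The core of a mesh-based strain measurement can be illustrated with a minimal sketch: given tracked vertex positions on an endocardial surface mesh at a reference frame and a deformed frame, percent area strain follows from the change in triangle areas. This is a generic area-strain calculation under the assumption of vertex correspondence across frames (which mesh tracking would provide), not the authors' 2 mm-resolution implementation:

```python
import numpy as np

def triangle_areas(verts: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Area of each triangular face of a surface mesh."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def area_strain_pct(verts_ref: np.ndarray, verts_def: np.ndarray,
                    faces: np.ndarray) -> np.ndarray:
    """Per-face percent area strain between a reference and a deformed frame,
    assuming one-to-one vertex correspondence across frames."""
    a0 = triangle_areas(verts_ref, faces)
    at = triangle_areas(verts_def, faces)
    return 100.0 * (at - a0) / a0

# Toy check: a uniform 10% linear dilation scales areas by 1.1^2,
# i.e. a 21% area strain.
verts_ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
strain = area_strain_pct(verts_ref, 1.1 * verts_ref, faces)
```

Regional values like the per-wall figures reported above would then be averages of such per-face strains over each anatomical sub-segment.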

Radiological data processing system: lifecycle management and annotation.

Bobrovskaya T, Vasilev Y, Vladzymyrskyy A, Omelyanskaya O, Kosov P, Krylova E, Ponomarenko A, Burtsev T, Savkina E, Kodenko M, Kasimov S, Medvedev K, Kovalchuk A, Zinchenko V, Rumyantsev D, Kazarinova V, Semenov S, Arzamasov K

PubMed | Jun 20 2025
To develop a platform for automated processing of radiological datasets that operates independently of medical information systems. The platform maintains datasets throughout their lifecycle, from data retrieval to annotation and presentation. The platform employs a modular structure in which modules can operate independently or in conjunction; each module sequentially processes the output of the preceding module. The platform incorporates a local database containing textual study protocols, a radiology information system (RIS), and storage for labeled studies and reports. Local permanent and temporary file storage facilitates radiological dataset processing. The platform's modules enable data search, extraction, anonymization, annotation, generation of annotated files, and standardized documentation of datasets. The platform provides a comprehensive workflow for radiological dataset management and is currently operational at the Center for Diagnostics and Telemedicine. Future development will focus on expanding platform functionality.
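The sequential-module contract described above (each module consumes the preceding module's output, and modules can also run independently) can be sketched generically. The module names below are hypothetical stand-ins; the platform's actual interfaces are not described in the abstract:

```python
from typing import Callable

# A module takes the running batch state and returns an updated copy.
Module = Callable[[dict], dict]

# Hypothetical stages mirroring those listed above (search, anonymize, annotate).
def search(batch: dict) -> dict:
    return {**batch, "studies": ["study_1", "study_2"]}

def anonymize(batch: dict) -> dict:
    return {**batch, "anonymized": True}

def annotate(batch: dict) -> dict:
    return {**batch, "labels": {"study_1": "normal", "study_2": "pathology"}}

def run_pipeline(batch: dict, modules: list) -> dict:
    """Run modules sequentially; any prefix of the list is also a valid
    pipeline, matching the 'independently or in conjunction' design."""
    for module in modules:
        batch = module(batch)
    return batch

result = run_pipeline({}, [search, anonymize, annotate])
```

The same chaining pattern accommodates adding or removing stages (e.g. extraction, documentation) without changing the other modules.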

Artificial intelligence-based tumor size measurement on mammography: agreement with pathology and comparison with human readers' assessments across multiple imaging modalities.

Kwon MR, Kim SH, Park GE, Mun HS, Kang BJ, Kim YT, Yoon I

PubMed | Jun 20 2025
To evaluate the agreement between artificial intelligence (AI)-based tumor size measurements of breast cancer and the final pathology, and to compare these results with those of other imaging modalities. This retrospective study included 925 women (mean age, 55.3 years ± 11.6) with 936 breast cancers, who underwent digital mammography, breast ultrasound, and magnetic resonance imaging before breast cancer surgery. AI-based tumor size measurement was performed on post-processed mammographic images, outlining areas with AI abnormality scores of 10%, 50%, and 90%. Absolute agreement between AI-based tumor sizes, imaging modalities, and histopathology was assessed using intraclass correlation coefficient (ICC) analysis. Concordant and discordant cases between AI measurements and histopathologic examinations were compared. Tumor size at an abnormality score of 50% showed the highest agreement with histopathologic examination (ICC = 0.54, 95% confidence interval [CI]: 0.49-0.59), comparable to mammography (ICC = 0.54, 95% CI: 0.48-0.60, p = 0.40). For ductal carcinoma in situ and human epidermal growth factor receptor 2-positive cancers, AI showed higher agreement than mammography (ICC = 0.76, 95% CI: 0.67-0.84 and ICC = 0.73, 95% CI: 0.52-0.85). Overall, 52.0% (487/936) of cases were discordant; these were more common in younger patients with dense breasts, multifocal malignancies, lower abnormality scores, and differing imaging characteristics. AI-based tumor size measurements at an abnormality score of 50% showed moderate agreement with histopathology but were discordant in more than half of the cases. While comparable to mammography, these limitations emphasize the need for further refinement and research.
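The absolute-agreement ICC used here can be computed from a two-way ANOVA decomposition. A minimal NumPy sketch of the single-measure, absolute-agreement form ICC(2,1) follows; this is one common variant, and the paper does not state exactly which ICC model it used:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, single-measure, absolute-agreement ICC(2,1).

    ratings: (n_subjects, k_raters) array, e.g. one column of AI-measured
    sizes and one column of pathology sizes per tumor.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subject mean square
    msc = ss_cols / (k - 1)                 # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A systematic offset between the two raters lowers ICC(2,1) even when the rank ordering is perfect, which is why "absolute agreement" is the appropriate form for comparing measured sizes against pathology.

```python
# Rater 2 always measures 1 unit larger: high consistency, imperfect agreement.
offset = np.array([[1, 2], [2, 3], [3, 4], [4, 5]], dtype=float)
```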

Emergency radiology: roadmap for radiology departments.

Aydin S, Ece B, Cakmak V, Kocak B, Onur MR

PubMed | Jun 20 2025
Emergency radiology has evolved into a significant subspecialty over the past two decades, facing unique challenges including escalating imaging volumes, increasing study complexity, and heightened expectations from clinicians and patients. This review provides a comprehensive overview of the key requirements for an effective emergency radiology unit. Emergency radiologists play a crucial role in real-time decision-making by providing continuous 24/7 support, requiring expertise across various organ systems and close collaboration with emergency physicians and specialists. Beyond image interpretation, emergency radiologists are responsible for organizing staff schedules, planning equipment, determining imaging protocols, and establishing standardized reporting systems. Operational considerations in emergency radiology departments include efficient scheduling models such as circadian-based scheduling, strategic equipment organization with primary imaging modalities positioned near emergency departments, and effective imaging management through structured ordering systems and standardized protocols. Preparedness for mass casualty incidents requires a well-organized workflow process map detailing steps from patient transfer to image acquisition and interpretation, with clear task allocation and imaging pathways. Collaboration between emergency radiologists and physicians is essential, with accurate communication facilitated through various channels and structured reporting templates. Artificial intelligence (AI) has emerged as a transformative tool in emergency radiology, offering potential benefits in both interpretative domains (detecting intracranial hemorrhage, pulmonary embolism, acute ischemic stroke) and non-interpretative applications (triage systems, protocol assistance, quality control). Despite implementation challenges, including clinician skepticism, financial considerations, and ethical issues, AI can enhance diagnostic accuracy and workflow optimization. Teleradiology provides solutions for staff shortages, particularly during off-hours, with hybrid models allowing radiologists to work both on-site and remotely. This review aims to guide stakeholders in establishing and maintaining efficient emergency radiology services to improve patient outcomes.

Automatic Detection of B-Lines in Lung Ultrasound Based on the Evaluation of Multiple Characteristic Parameters Using Raw RF Data.

Shen W, Zhang Y, Zhang H, Zhong H, Wan M

PubMed | Jun 20 2025
B-line artifacts in lung ultrasound, pivotal for diagnosing pulmonary conditions, warrant automated recognition to enhance diagnostic accuracy. In this paper, a lung ultrasound B-line vertical artifact identification method based on radio frequency (RF) signals is proposed. B-line regions were distinguished from non-B-line regions by inputting multiple characteristic parameters into a nonlinear support vector machine (SVM). Six characteristic parameters were evaluated: permutation entropy, information entropy, kurtosis, skewness, the Nakagami shape factor, and approximate entropy. Following an evaluation that demonstrated performance differences in parameter recognition, principal component analysis (PCA) was used to reduce the dimensionality to a four-dimensional feature set for input into the nonlinear SVM. Four types of experiments were conducted: a sponge-with-dripping-water model, gelatin phantoms containing either glass beads or gelatin droplets, and in vivo experiments. By employing precise feature selection and analyzing scan lines rather than full images, this approach significantly reduced the dependency on large image datasets without compromising discriminative accuracy. The method exhibited performance comparable to contemporary image-based deep learning approaches, which, while highly effective, typically necessitate extensive training data and expert annotation of large datasets to establish ground truth. Owing to the optimized architecture of the model, efficient sample recognition was achieved, with the capability to process between 27,000 and 33,000 scan lines per second (a frame rate exceeding 100 FPS at 256 scan lines per frame), thus supporting real-time analysis. The results demonstrate that the accuracy of the method in classifying a scan line as belonging to a B-line region was up to 88%, with sensitivity up to 90%, specificity up to 87%, and an F1-score up to 89%. This approach effectively reflects the performance of scan-line classification pertinent to B-line identification and reduces reliance on large annotated datasets, streamlining the preprocessing phase.
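The per-scan-line features → PCA → nonlinear SVM pipeline can be sketched on synthetic data. Everything below is illustrative: the synthetic "RF lines" stand in for real data, and only a subset of the paper's six parameters is computed (permutation and approximate entropy are omitted for brevity, with RMS added so there are more features than PCA components):

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def line_features(x: np.ndarray) -> list:
    """Per-scan-line statistics: kurtosis, skewness, Shannon entropy of the
    amplitude histogram, a moment-based Nakagami shape estimate, and RMS."""
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    shannon = -(p * np.log2(p)).sum()
    power = x ** 2
    nakagami_m = power.mean() ** 2 / power.var()
    return [kurtosis(x), skew(x), shannon, nakagami_m, np.sqrt(power.mean())]

# Synthetic stand-in: "B-line" lines carry a strong coherent component on top
# of speckle-like noise; "non-B-line" lines are noise only.
t = np.linspace(0, 1, 512)
X, y = [], []
for _ in range(200):
    noise = rng.standard_normal(512)
    X.append(line_features(noise)); y.append(0)
    X.append(line_features(noise + 3.0 * np.sin(40 * np.pi * t))); y.append(1)
X, y = np.asarray(X), np.asarray(y)

# Standardize, project to 4 principal components, classify with an RBF SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
clf.fit(X[:300], y[:300])
acc = (clf.predict(X[300:]) == y[300:]).mean()
```

Because each sample is a single scan line rather than a full image, even a modest acquisition yields thousands of training samples, which is the data-efficiency argument made above.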

MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images.

Ling D, Jiao X

PubMed | Jun 20 2025
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
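Knowledge distillation of the kind used here typically combines a temperature-softened teacher/student divergence with a hard-label cross-entropy term. A NumPy sketch of that standard (Hinton-style) loss follows; the temperature and mixing weight are illustrative defaults, and the multi-view teacher / single-view student wiring from the paper is not reproduced, only the loss:

```python
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled, numerically stable softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray, teacher_logits: np.ndarray,
                      labels: np.ndarray, T: float = 4.0,
                      alpha: float = 0.7) -> float:
    """alpha * T^2 * KL(teacher_T || student_T) + (1 - alpha) * CE(hard labels).

    The T^2 factor keeps the soft-target gradient magnitude comparable
    across temperatures.
    """
    ps = softmax(student_logits, T)
    pt = softmax(teacher_logits, T)
    kl = (pt * (np.log(pt) - np.log(ps))).sum(axis=-1).mean()
    hard = softmax(student_logits)
    ce = -np.log(hard[np.arange(len(labels)), labels]).mean()
    return alpha * T * T * kl + (1 - alpha) * ce
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label term remains, so training pushes the single-view student toward the multi-view teacher's softened output distribution.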

Robust Radiomic Signatures of Intervertebral Disc Degeneration from MRI.

McSweeney T, Tiulpin A, Kowlagi N, Määttä J, Karppinen J, Saarakkala S

PubMed | Jun 20 2025
A retrospective analysis. The aim of this study was to identify a robust radiomic signature from deep learning segmentations for intervertebral disc (IVD) degeneration classification. Low back pain (LBP) is the most common musculoskeletal symptom worldwide, and IVD degeneration is an important contributing factor. To improve the quantitative phenotyping of IVD degeneration from T2-weighted magnetic resonance imaging (MRI) and better understand its relationship with LBP, multiple shape and intensity features have been investigated. IVD radiomics have been less studied but could reveal sub-visual imaging characteristics of IVD degeneration. We used data from Northern Finland Birth Cohort 1966 members who underwent lumbar spine T2-weighted MRI scans at age 45-47 (n = 1397). We used a deep learning model to segment the lumbar spine IVDs, extracted 737 radiomic features, and calculated IVD height index and peak signal intensity difference. Intraclass correlation coefficients across image and mask perturbations were calculated to identify robust features. Sparse partial least squares discriminant analysis was used to train a Pfirrmann grade classification model. The radiomics model had a balanced accuracy of 76.7% (73.1-80.3%) and Cohen's kappa of 0.70 (0.67-0.74), compared to 66.0% (62.0-69.9%) and 0.55 (0.51-0.59) for an IVD height index and peak signal intensity model. 2D sphericity and interquartile range emerged as radiomic features that were robust and highly correlated with Pfirrmann grade (Spearman's correlation coefficients of -0.72 and -0.77, respectively). Based on our findings, these radiomic signatures could serve as alternatives to the conventional indices, representing a significant advance in the automated quantitative phenotyping of IVD degeneration from standard-of-care MRI.
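The feature-grade association reported above uses Spearman's rank correlation, which is just Pearson correlation computed on ranks. A dependency-free sketch (tie handling uses average ranks, matching the standard definition):

```python
import numpy as np

def rankdata(x) -> np.ndarray:
    """Ranks starting at 1; tied values share their average rank."""
    x = np.asarray(x)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x), dtype=float)
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average rank of the tie run
        i = j + 1
    return ranks

def spearman_rho(x, y) -> float:
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    rx = rx - rx.mean()
    ry = ry - ry.mean()
    return (rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum())
```

Negative coefficients like the -0.72 and -0.77 above indicate features that decrease monotonically as Pfirrmann grade increases.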

Large models in medical imaging: Advances and prospects.

Fang M, Wang Z, Pan S, Feng X, Zhao Y, Hou D, Wu L, Xie X, Zhang XY, Tian J, Dong D

PubMed | Jun 20 2025
Recent advances in large models demonstrate significant prospects for transforming the field of medical imaging. These models, including large language models, large visual models, and multimodal large models, offer unprecedented capabilities in processing and interpreting complex medical data across various imaging modalities. By leveraging self-supervised pretraining on vast unlabeled datasets, cross-modal representation learning, and domain-specific medical knowledge adaptation through fine-tuning, large models can achieve higher diagnostic accuracy and more efficient workflows for key clinical tasks. This review summarizes the concepts, methods, and progress of large models in medical imaging, highlighting their potential in precision medicine. The article first outlines the integration of multimodal data under large model technologies, approaches for training large models with medical datasets, and the need for robust evaluation metrics. It then explores how large models can revolutionize applications in critical tasks such as image segmentation, disease diagnosis, personalized treatment strategies, and real-time interactive systems, thus pushing the boundaries of traditional imaging analysis. Despite their potential, the practical implementation of large models in medical imaging faces notable challenges, including the scarcity of high-quality medical data, the need for optimized perception of imaging phenotypes, safety considerations, and seamless integration with existing clinical workflows and equipment. As research progresses, the development of more efficient, interpretable, and generalizable models will be critical to ensuring their reliable deployment across diverse clinical environments. This review aims to provide insights into the current state of the field and provide directions for future research to facilitate the broader adoption of large models in clinical practice.

Artificial intelligence-assisted decision-making in third molar assessment using ChatGPT: is it really a valid tool?

Grinberg N, Ianculovici C, Whitefield S, Kleinman S, Feldman S, Peleg O

PubMed | Jun 20 2025
Artificial intelligence (AI) is becoming increasingly popular in medicine. The current study aims to investigate whether an AI-based chatbot such as ChatGPT could be a valid tool for assisting in decision-making when assessing mandibular third molars before extraction. Panoramic radiographs were collected from a publicly available library. Mandibular third molars were assessed by position and depth. Two specialists evaluated each case regarding the need for CBCT referral, after which all cases were introduced to ChatGPT under a uniform script to decide the need for further CBCT radiographs. The process was performed first without any guidelines; second, after introducing the guidelines presented by Rood et al. (1990); and third, with additional test cases. ChatGPT's and the specialist's decisions were compared and analyzed using Cohen's kappa test and the Cochran-Mantel-Haenszel test to account for the effect of different tooth positions. All analyses were performed at a 95% confidence level. The study evaluated 184 molars. Without any guidelines, ChatGPT agreed with the specialist in 49% of cases, with no statistically significant agreement (kappa < 0.1), rising to 70% and 91% with moderate (kappa = 0.39) and near-perfect (kappa = 0.81) agreement, respectively, after the second and third rounds (p < 0.05). The high correlation between the specialist and the chatbot was preserved when analyzed by tooth location and position (p < 0.01). ChatGPT has shown the ability to analyze third molars prior to surgical intervention using accepted guidelines, with substantial correlation to specialists.
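Cohen's kappa, the agreement statistic used above, corrects observed agreement for the agreement expected by chance. A minimal sketch (it does not guard against the degenerate case where expected agreement is exactly 1, which makes kappa undefined):

```python
from collections import Counter

def cohens_kappa(rater1, rater2) -> float:
    """Cohen's kappa for two raters labeling the same cases.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is chance agreement from each rater's marginal label frequencies.
    """
    assert len(rater1) == len(rater2)
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[lab] * c2[lab] for lab in labels) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```

On the scale cited above, values around 0.4 are conventionally read as moderate agreement and values above 0.8 as near-perfect.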

Ultrafast J-resolved magnetic resonance spectroscopic imaging for high-resolution metabolic brain imaging.

Zhao Y, Li Y, Jin W, Guo R, Ma C, Tang W, Li Y, El Fakhri G, Liang ZP

PubMed | Jun 20 2025
Magnetic resonance spectroscopic imaging has potential for non-invasive metabolic imaging of the human brain. Here we report a method that overcomes several long-standing technical barriers associated with clinical magnetic resonance spectroscopic imaging, including long data acquisition times, limited spatial coverage and poor spatial resolution. Our method achieves ultrafast data acquisition using an efficient approach to encode spatial, spectral and J-coupling information of multiple molecules. Physics-informed machine learning is synergistically integrated in data processing to enable reconstruction of high-quality molecular maps. We validated the proposed method through phantom experiments. We obtained high-resolution molecular maps from healthy participants, revealing metabolic heterogeneities in different brain regions. We also obtained high-resolution whole-brain molecular maps in regular clinical settings, revealing metabolic alterations in tumours and multiple sclerosis. This method has the potential to transform clinical metabolic imaging and provide a long-desired capability for non-invasive label-free metabolic imaging of brain function and diseases for both research and clinical applications.