Page 32 of 33324 results

Impact of the recent advances in coronary artery disease imaging on pilot medical certification and aviation safety: current state and future perspective.

Benjamin MM, Rabbat MG, Park W, Benjamin M, Davenport E

pubmed logopapersMay 7 2025
Coronary artery disease (CAD) is highly prevalent among pilots due to the nature of their lifestyle and occupational stresses. CAD is one of the most common conditions affecting pilots' medical certification and is frequently undisclosed by pilots who fear losing their certification. Traditional screening methods, such as resting electrocardiograms (EKGs) and functional stress tests, have limitations, especially in detecting non-obstructive CAD. Recent advances in cardiac imaging are challenging the current paradigms of CAD screening and risk assessment protocols, offering tools uniquely suited to address the occupational health challenges faced by pilots. Coronary artery calcium scoring (CACS) has proven valuable in refining risk stratification in asymptomatic individuals. Coronary computed tomography angiography (CCTA) is being increasingly adopted as a superior tool for ruling out CAD in symptomatic individuals, assessing plaque burden, and morphologically identifying vulnerable plaque. CT-derived fractional flow reserve (CT-FFR) adds a physiologic component to the anatomical prowess of CCTA. Cardiac magnetic resonance imaging (CMR) is now used as a prognosticating tool following a coronary event as well as a stress testing modality. Investigational technologies such as pericoronary fat attenuation and artificial intelligence (AI)-enabled plaque quantification hold the promise of enhancing diagnostic accuracy and risk stratification. This review highlights the interplay between occupational demands, regulatory considerations, and the limitations of traditional modalities for pilot CAD screening and surveillance. We also discuss the potential role of recent advances in cardiac imaging in optimizing pilot health and flight safety.
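The CACS risk stratification discussed in this review is conventionally reported against standard Agatston-score cut-points. A minimal sketch (the 0/100/400 thresholds are the widely used categories, not values taken from this abstract):

```python
def cacs_risk_category(agatston_score: float) -> str:
    """Map an Agatston coronary artery calcium score to a
    commonly used plaque-burden category."""
    if agatston_score < 0:
        raise ValueError("Agatston score cannot be negative")
    if agatston_score == 0:
        return "no identifiable plaque"
    if agatston_score < 100:
        return "mild plaque burden"
    if agatston_score < 400:
        return "moderate plaque burden"
    return "severe plaque burden"
```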

Rethinking Boundary Detection in Deep Learning-Based Medical Image Segmentation

Yi Lin, Dong Zhang, Xiao Fang, Yufan Chen, Kwang-Ting Cheng, Hao Chen

arxiv logopreprintMay 6 2025
Medical image segmentation is a pivotal task within the realms of medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in terms of segmentation accuracy and strikes a better balance between accuracy and efficiency, without the need for additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder comprising a mainstream CNN for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on seven challenging medical image segmentation datasets, including ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, and BTCV. Our experimental results unequivocally demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The code has been released at: https://github.com/xiaofang007/CTO.
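The explicit boundary guidance described here relies on binary boundary masks produced by edge detection operators. A minimal sketch of extracting such a mask from a binary segmentation map, using a plain NumPy 4-neighbour erosion rather than CTO's dedicated operators:

```python
import numpy as np

def boundary_mask(mask: np.ndarray) -> np.ndarray:
    """Extract a one-pixel-wide boundary from a binary mask:
    a foreground pixel is on the boundary if any 4-neighbour is
    background (equivalent to mask minus its erosion)."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    up    = padded[:-2, 1:-1]
    down  = padded[2:, 1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    eroded = m & up & down & left & right   # all 4 neighbours foreground
    return m & ~eroded
```

Such a mask can then serve as an auxiliary supervision target for a boundary-aware decoder.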

A novel transfer learning framework for non-uniform conductivity estimation with limited data in personalized brain stimulation.

Kubota Y, Kodera S, Hirata A

pubmed logopapersMay 6 2025
Objective. Personalized transcranial magnetic stimulation (TMS) requires individualized head models that incorporate non-uniform conductivity to enable target-specific stimulation. Accurately estimating non-uniform conductivity in individualized head models remains a challenge due to the difficulty of obtaining precise ground truth data. To address this issue, we have developed a novel transfer learning-based approach for automatically estimating non-uniform conductivity in a human head model with limited data. Approach. The proposed method complements the limitations of the previous conductivity network (CondNet) and improves the conductivity estimation accuracy. This method generates a segmentation model from T1- and T2-weighted magnetic resonance images, which is then used for conductivity estimation via transfer learning. To enhance the model's representation capability, a Transformer was incorporated into the segmentation model, while the conductivity estimation model was designed using a combination of Attention Gates and Residual Connections, enabling efficient learning even with a small amount of data. Main results. The proposed method was evaluated using 1494 images, demonstrating a 2.4% improvement in segmentation accuracy and a 29.1% increase in conductivity estimation accuracy compared with CondNet. Furthermore, the proposed method achieved superior conductivity estimation accuracy even with only three training cases, outperforming CondNet, which was trained on an adequate number of cases. The conductivity maps generated by the proposed method yielded better results in brain electrical field simulations than CondNet. Significance. These findings demonstrate the high utility of the proposed method in brain electrical field simulations and suggest its potential applicability to other medical image analysis tasks and simulations.
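The core idea of reusing a pretrained segmentation model for conductivity estimation can be sketched as a frozen feature extractor with a small trainable head. This toy NumPy example (synthetic data, a random "encoder" standing in for the pretrained network) illustrates only the transfer-learning split, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" encoder: its weights stay frozen,
# standing in for a reused segmentation backbone.
W_enc = 0.5 * rng.normal(size=(8, 4))

def encode(x):
    # frozen forward pass through the pretrained features
    return np.tanh(x @ W_enc)

# Synthetic data: 20 samples, 8 inputs, scalar "conductivity" target.
X = rng.normal(size=(20, 8))
w_true = rng.normal(size=4)
y = encode(X) @ w_true

# Only the small head is trained; the encoder is never updated.
w_head = np.zeros(4)
lr = 0.1
for _ in range(2000):
    feats = encode(X)
    grad = feats.T @ (feats @ w_head - y) / len(X)
    w_head -= lr * grad          # gradient step on the head only

mse = float(np.mean((encode(X) @ w_head - y) ** 2))
```

Freezing the encoder is what allows useful estimates from as few training cases as the study reports.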

V3DQutrit: a volumetric medical image segmentation method based on a 3D qutrit-optimized modified tensor ring model.

Verma P, Kumar H, Shukla DK, Satpathy S, Alsekait DM, Khalaf OI, Alzoubi A, Alqadi BS, AbdElminaam DS, Kushwaha A, Singh J

pubmed logopapersMay 6 2025
This paper introduces 3D-QTRNet, a novel quantum-inspired neural network for volumetric medical image segmentation. Unlike conventional CNNs, which suffer from slow convergence and high complexity, and quantum-inspired neural networks (QINNs), which are limited to grayscale segmentation, our approach leverages qutrit encoding and tensor ring decomposition. These techniques improve segmentation accuracy, optimize memory usage, and accelerate model convergence. The proposed model demonstrates superior performance on the BRATS19 and Spleen datasets, outperforming state-of-the-art CNN and quantum models in terms of Dice similarity and segmentation precision. This work bridges the gap between quantum computing and medical imaging, offering a scalable solution for real-world applications.
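Tensor ring decomposition, which 3D-QTRNet uses for compression, represents a tensor as a cyclic chain of 3-way cores. A small NumPy sketch of reconstructing a full tensor from hypothetical TR cores (shapes chosen for illustration only):

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from tensor-ring (TR) cores.
    Each core has shape (r_prev, dim, r_next) and the ranks close
    into a ring: T[i, j, ...] = trace(G1[:, i, :] @ G2[:, j, :] @ ...)."""
    result = cores[0]
    for core in cores[1:]:
        # contract the trailing rank index with the next core's leading rank
        result = np.tensordot(result, core, axes=([-1], [0]))
    # close the ring: trace over the first and last rank indices
    return np.trace(result, axis1=0, axis2=-1)

rng = np.random.default_rng(1)
dims, ranks = [4, 5, 6], [2, 3, 2, 2]   # ranks[0] == ranks[-1] closes the ring
cores = [rng.normal(size=(ranks[n], dims[n], ranks[n + 1])) for n in range(3)]
T = tr_reconstruct(cores)               # full (4, 5, 6) tensor
```

Storing the cores instead of the full tensor is what yields the memory savings the abstract refers to.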

Keypoint localization and parameter measurement in ultrasound biomicroscopy anterior segment images based on deep learning.

Qinghao M, Sheng Z, Jun Y, Xiaochun W, Min Z

pubmed logopapersMay 6 2025
Accurate measurement of anterior segment parameters is crucial for diagnosing and managing ophthalmic conditions, such as glaucoma, cataracts, and refractive errors. However, traditional clinical measurement methods are often time-consuming, labor-intensive, and susceptible to inaccuracies. With the growing potential of artificial intelligence in ophthalmic diagnostics, this study aims to develop and evaluate a deep learning model capable of automatically extracting key points and precisely measuring multiple clinically significant anterior segment parameters from ultrasound biomicroscopy (UBM) images. These parameters include central corneal thickness (CCT), anterior chamber depth (ACD), pupil diameter (PD), angle-to-angle distance (ATA), sulcus-to-sulcus distance (STS), lens thickness (LT), and crystalline lens rise (CLR). A data set of 716 UBM anterior segment images was collected from Tianjin Medical University Eye Hospital. YOLOv8 was utilized to segment four key anatomical structures: cornea-sclera, anterior chamber, pupil, and iris-ciliary body, thereby enhancing the accuracy of keypoint localization. Only images with an intact posterior capsule lentis were selected to create an effective data set for parameter measurement. Ten keypoints were localized across the data set, allowing the calculation of seven essential parameters. Control experiments were conducted to evaluate the impact of segmentation on measurement accuracy, with model predictions compared against clinical gold standards. The segmentation model achieved a mean IoU of 0.8836 and an mPA of 0.9795. Following segmentation, the binary classification model attained an mAP of 0.9719, with a precision of 0.9260 and a recall of 0.9615. Keypoint localization exhibited a Euclidean distance error of 58.73 ± 63.04 μm, improving from the pre-segmentation error of 71.57 ± 67.36 μm. Localization mAP was 0.9826, with a precision of 0.9699, a recall of 0.9642, and an FPS of 32.64.
In addition, parameter error analysis and Bland-Altman plots demonstrated improved agreement with clinical gold standards after segmentation. This deep learning approach for UBM image segmentation, keypoint localization, and parameter measurement is feasible, enhancing clinical diagnostic efficiency for anterior segment parameters.
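Once keypoints are localized, each anterior segment parameter reduces to a scaled Euclidean distance between a pair of points. A minimal sketch (the keypoint coordinates and the 25 μm/pixel spacing are illustrative, not values from the study):

```python
import numpy as np

def param_distance(p1, p2, pixel_spacing_um):
    """Distance between two keypoints (row, col) in micrometres,
    assuming isotropic pixel spacing."""
    return float(np.hypot(p1[0] - p2[0], p1[1] - p2[1]) * pixel_spacing_um)

# Hypothetical keypoints on a UBM image, 25 um/pixel spacing:
anterior_corneal_apex = (100, 256)
posterior_corneal_apex = (122, 256)
cct_um = param_distance(anterior_corneal_apex, posterior_corneal_apex, 25.0)
```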

Deep Learning-Based CT-Less Cardiac Segmentation of PET Images: A Robust Methodology for Multi-Tracer Nuclear Cardiovascular Imaging.

Salimi Y, Mansouri Z, Nkoulou R, Mainta I, Zaidi H

pubmed logopapersMay 6 2025
Quantitative cardiovascular PET/CT imaging is useful in the diagnosis of multiple cardiac perfusion and motion pathologies. The common approach to cardiac segmentation consists of using co-registered CT images with publicly available deep learning (DL)-based segmentation models. However, the mismatch between structural CT images and PET uptake limits the usefulness of these approaches. Besides, the performance of DL models is not consistent on the low-dose or ultra-low-dose CT images commonly used in clinical PET/CT imaging. In this work, we developed a DL-based methodology to tackle this issue by directly segmenting cardiac PET images. This study included 406 cardiac PET images from 146 patients (43 18F-FDG, 329 13N-NH3, and 37 82Rb images). Using DL nnU-Net models previously trained in our group, we segmented the whole heart and the three main cardiac components, namely the left myocardium (LM), left ventricle cavity (LV), and right ventricle (RV), on co-registered CT images. The segmentation was resampled to PET resolution and edited through a combination of automated image processing and manual correction. The corrected segmentation masks and SUV PET images were fed to an nnU-Net V2 pipeline to be trained with a fivefold data-split strategy on two tasks: task #1, whole cardiac segmentation, and task #2, segmentation of the three cardiac components. Fifteen cardiac images were used as an external validation set. The DL-delineated masks were compared with standard-of-reference masks using the Dice coefficient, Jaccard distance, mean surface distance, and segment volume relative error (%). The task #1 average Dice coefficient in the internal fivefold validation was 0.932 ± 0.033. The average Dice on the 15 external cases was comparable with the fivefold Dice, reaching an average of 0.941 ± 0.018. The task #2 average Dice in fivefold validation was 0.88 ± 0.063, 0.828 ± 0.091, and 0.876 ± 0.062 for the LM, LV, and RV, respectively.
There was no statistically significant difference among the Dice coefficients, either between images acquired with the three radiotracers or between the different folds (P-values > 0.05). The overall average volume prediction error in cardiac component segmentation was less than 2%. We developed an automated DL-based segmentation pipeline that segments the whole heart and cardiac components with acceptable accuracy and robust performance on the external test set and across the three radiotracers used in nuclear cardiovascular imaging. The proposed methodology can overcome unreliable segmentations performed on CT images.
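The overlap and volume metrics used in this evaluation are straightforward to compute from binary masks; a minimal NumPy sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def volume_relative_error(pred, ref):
    """Segment volume relative error (%), signed."""
    return 100.0 * (pred.astype(bool).sum() - ref.astype(bool).sum()) / ref.astype(bool).sum()
```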

Corticospinal tract reconstruction with tumor by using a novel direction filter based tractography method.

Zeng Q, Xia Z, Huang J, Xie L, Zhang J, Huang S, Xing Z, Zhuge Q, Feng Y

pubmed logopapersMay 6 2025
The corticospinal tract (CST) is the primary neural pathway responsible for voluntary motor functions, and preoperative CST reconstruction is crucial for preserving nerve function during neurosurgery. Diffusion magnetic resonance imaging-based tractography is the only noninvasive method for preoperatively reconstructing the CST in clinical practice. However, for the large-size CST bundle with complex fiber geometry (fanning fibers), reconstructing its full extent remains challenging for locally derived methods that do not incorporate global information, especially in the presence of tumors, where the mass effect and partial volume effect cause abnormal diffusion signals. In this work, a CST reconstruction tractography method based on a novel direction filter was proposed, designed to ensure robust CST reconstruction in clinical datasets with tumors. A direction filter based on a fourth-order differential equation was introduced for global direction estimation. By considering spatial consistency and leveraging anatomical prior knowledge, the direction filter was computed by minimizing the energy between the target directions and the initial fiber directions. On the basis of the new CST directions obtained by the direction filter, a fiber tracking method was implemented to reconstruct the fiber trajectory. Additionally, a deep learning-based method, together with tractography template prior information, was employed to generate the regions of interest (ROIs) and initial fiber directions. Experimental results showed that the proposed method yields more valid connections and fewer no-connections, and exhibits the fewest broken and short-connected fibers. The proposed method offers an effective tool to improve CST-related surgical outcomes by optimizing tumor resection and preserving the CST.
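Given the filtered direction field, fiber tracking amounts to stepping a streamline through the field. A simplified Euler-integration sketch (nearest-voxel lookup and a fixed step size; real tractography adds curvature and anisotropy stopping criteria):

```python
import numpy as np

def track_streamline(seed, direction_field, step=0.5, n_steps=50):
    """Follow a streamline through an (x, y, z, 3) direction field with
    Euler steps and nearest-voxel direction lookup; stop on a zero
    direction or when leaving the field."""
    shape = np.array(direction_field.shape[:3])
    point = np.asarray(seed, dtype=float)
    path = [point.copy()]
    for _ in range(n_steps):
        idx = tuple(np.clip(np.round(point).astype(int), 0, shape - 1))
        d = direction_field[idx]
        norm = np.linalg.norm(d)
        if norm < 1e-8:
            break                      # no direction defined here
        point = point + step * d / norm
        if np.any(point < 0) or np.any(point >= shape):
            break                      # left the field
        path.append(point.copy())
    return np.array(path)
```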

A Deep Learning Approach for Mandibular Condyle Segmentation on Ultrasonography.

Keser G, Yülek H, Öner Talmaç AG, Bayrakdar İŞ, Namdar Pekiner F, Çelik Ö

pubmed logopapersMay 6 2025
Deep learning techniques have demonstrated potential in various fields, including segmentation, and have recently been applied to medical image processing. This study aims to develop and evaluate computer-based diagnostic software designed to assess the segmentation of the mandibular condyle in ultrasound images. A total of 668 retrospective ultrasound images of anonymous adult mandibular condyles were analyzed. The CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) was utilized to annotate the mandibular condyle using a polygonal labeling method. These annotations were subsequently reviewed and validated by experts in oral and maxillofacial radiology. In this study, all test images were detected and segmented using the YOLOv8 deep learning artificial intelligence (AI) model. When evaluating the model's performance in image estimation, it achieved an F1 score of 0.93, a sensitivity of 0.90, and a precision of 0.96. The automatic segmentation of the mandibular condyle from ultrasound images presents a promising application of artificial intelligence. This approach can help surgeons, radiologists, and other specialists save time in the diagnostic process.
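As a quick consistency check, the reported F1 score follows from the reported precision (0.96) and sensitivity (0.90) as their harmonic mean:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract above:
f1 = f1_score(0.96, 0.90)  # ~0.929, consistent with the reported 0.93
```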

Artificial intelligence demonstrates potential to enhance orthopaedic imaging across multiple modalities: A systematic review.

Longo UG, Lalli A, Nicodemi G, Pisani MG, De Sire A, D'Hooghe P, Nazarian A, Oeding JF, Zsidai B, Samuelsson K

pubmed logopapersApr 1 2025
While several artificial intelligence (AI)-assisted medical imaging applications are reported in the recent orthopaedic literature, comparison of the clinical efficacy and utility of these applications is currently lacking. The aim of this systematic review is to evaluate the effectiveness and reliability of AI applications in orthopaedic imaging, focusing on their impact on diagnostic accuracy, image segmentation and operational efficiency across various imaging modalities. Based on the PRISMA guidelines, a comprehensive literature search of PubMed, Cochrane and Scopus databases was performed, using combinations of keywords and MeSH descriptors ('AI', 'ML', 'deep learning', 'orthopaedic surgery' and 'imaging') from inception to March 2024. Included were studies published between September 2018 and February 2024, which evaluated machine learning (ML) model effectiveness in improving orthopaedic imaging. Studies with insufficient data regarding the output variable used to assess the reliability of the ML model, those applying deterministic algorithms, unrelated topics, protocol studies, and other systematic reviews were excluded from the final synthesis. The Joanna Briggs Institute (JBI) Critical Appraisal tool and the Risk Of Bias In Non-randomised Studies-of Interventions (ROBINS-I) tool were applied for the assessment of bias among the included studies. The 53 included studies reported the use of 11,990,643 images from several diagnostic instruments. A total of 39 studies reported details in terms of the Dice Similarity Coefficient (DSC), while both accuracy and sensitivity were documented across 15 studies. Precision was reported by 14, specificity by nine, and the F1 score by four of the included studies. Three studies applied the area under the curve (AUC) method to evaluate ML model performance. 
Among the studies included in the final synthesis, Convolutional Neural Networks (CNN) emerged as the most frequently applied category of ML models, present in 17 studies (32%). The systematic review highlights the diverse application of AI in orthopaedic imaging, demonstrating the capability of various machine learning models in accurately segmenting and analysing orthopaedic images. The results indicate that AI models achieve high performance metrics across different imaging modalities. However, the current body of literature lacks comprehensive statistical analysis and randomized controlled trials, underscoring the need for further research to validate these findings in clinical settings. Systematic Review; Level of evidence IV.

Automated Bi-Ventricular Segmentation and Regional Cardiac Wall Motion Analysis for Rat Models of Pulmonary Hypertension.

Niglas M, Baxan N, Ashek A, Zhao L, Duan J, O'Regan D, Dawes TJW, Nien-Chen C, Xie C, Bai W, Zhao L

pubmed logopapersApr 1 2025
Artificial intelligence-based cardiac motion mapping offers predictive insights into pulmonary hypertension (PH) disease progression and its impact on the heart. We proposed an automated deep learning pipeline for bi-ventricular segmentation and 3D wall motion analysis in PH rodent models to bridge toward clinical developments. A data set of 163 short-axis cine cardiac magnetic resonance scans was collected longitudinally from monocrotaline (MCT) and Sugen-hypoxia (SuHx) PH rats and used to train a fully convolutional network for automated segmentation. The model produced an accurate annotation in < 1 s per scan (Dice metric > 0.92). High-resolution atlas fitting was performed to produce 3D cardiac mesh models and calculate the regional wall motion between end-diastole and end-systole. Prominent right ventricular hypokinesia was observed in PH rats (-37.7% ± 12.2 MCT; -38.6% ± 6.9 SuHx) compared to healthy controls, attributed primarily to the loss of basal longitudinal and apical radial motion. This automated, bi-ventricular, rat-specific pipeline provides an efficient and novel translational tool for rodent studies, in alignment with clinical cardiac imaging AI developments.
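Regional wall motion between end-diastole and end-systole can be summarized as per-vertex displacement on the fitted meshes, with hypokinesia expressed as a percent reduction versus controls. A toy NumPy sketch of that kind of computation (vertex positions are illustrative; the pipeline's actual atlas-based mesh analysis is more involved):

```python
import numpy as np

def regional_motion(ed_vertices, es_vertices):
    """Per-vertex wall motion: displacement magnitude between
    end-diastole and end-systole mesh vertex positions."""
    return np.linalg.norm(np.asarray(es_vertices) - np.asarray(ed_vertices), axis=1)

def percent_change(group_motion, control_motion):
    """Mean regional motion relative to controls, in percent;
    negative values indicate hypokinesia."""
    return 100.0 * (np.mean(group_motion) - np.mean(control_motion)) / np.mean(control_motion)

# Illustrative vertex positions (mm), two vertices per mesh:
ed = np.zeros((2, 3))
es = np.array([[3.0, 4.0, 0.0], [0.0, 0.0, 5.0]])
motion = regional_motion(ed, es)        # displacement magnitudes
```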