Page 11 of 1281279 results

Diagnostic and Technological Advances in Magnetic Resonance (Focusing on Imaging Technique and the Gadolinium-Based Contrast Media), Computed Tomography (Focusing on Photon Counting CT), and Ultrasound - State of the Art.

Runge VM, Heverhagen JT

PubMed · logopapers · Jun 9 2025
Magnetic resonance continues to evolve and advance as a critical imaging modality for disease diagnosis and monitoring. Hardware and software advances continue to propel this modality to the forefront of diagnostic imaging. Next-generation MR contrast media, specifically gadolinium chelates with improved relaxivity and stability (relative to the provided contrast effect), have emerged, providing a further boost to the field. Concern regarding gadolinium deposition in the body, primarily with the weaker gadolinium chelates (which have now been removed from the market, at least in Europe), remains at the forefront of clinicians' minds. This has driven renewed interest in the possible development of manganese-based contrast media. The development of photon counting CT and its clinical introduction have made possible a further major advance in CT image quality, along with the potential for decreasing radiation dose. The possibility of major clinical advances in thoracic, cardiac, and musculoskeletal imaging was recognized first, and its broader impact, across all organ systems, is now also recognized. The utility of routine acquisition (without penalty in time or radiation dose) of full spectral multi-energy data is now also being recognized as an additional major advance made possible by photon counting CT. Artificial intelligence is now being used in the background across most imaging platforms and modalities, making possible further advances in imaging technique and image quality, although this field is nowhere near realizing its full potential. Last, but not least, the field of ultrasound is on the cusp of further major advances in availability (with the development of very low-cost systems) and a possible new generation of microbubble contrast media.

MHASegNet: A multi-scale hybrid aggregation network for segmenting coronary arteries from CCTA images.

Li S, Wu Y, Jiang B, Liu L, Zhang T, Sun Y, Hou J, Monkam P, Qian W, Qi S

PubMed · logopapers · Jun 9 2025
Segmentation of coronary arteries in Coronary Computed Tomography Angiography (CCTA) images is crucial for diagnosing coronary artery disease (CAD) but remains challenging due to small artery size, uneven contrast distribution, and issues such as over-segmentation or omission. The aim of this study is to improve coronary artery segmentation in CCTA images using both conventional and deep learning techniques. We propose MHASegNet, a lightweight network for coronary artery segmentation, combined with a tailored refinement method. MHASegNet employs multi-scale hybrid attention to capture global and local features and integrates a 3D context anchor attention module to focus on key coronary artery structures while suppressing background noise. An iterative, region-growing-based refinement repairs breaks in the coronary tree and reduces false alarms. We evaluated the method on an in-house dataset of 90 subjects and two public datasets with 1060 subjects. MHASegNet, coupled with the tailored refinement, outperforms state-of-the-art algorithms, achieving a Dice Similarity Coefficient (DSC) of 0.867 on the in-house dataset, 0.875 on the ASOCA dataset, and 0.827 on the ImageCAS dataset. The tailored refinement significantly reduces false positives and resolves most discontinuities, even for other networks. MHASegNet and the tailored refinement may aid in diagnosing and quantifying CAD following further validation.
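The Dice Similarity Coefficient reported above is a standard overlap metric for segmentation masks. A minimal from-scratch sketch of how it is computed (not the authors' MHASegNet code; the flat binary-mask representation and toy values are assumptions for illustration):

```python
# Dice Similarity Coefficient (DSC) between two binary segmentation masks.
def dice_coefficient(pred, truth):
    """DSC = 2*|A ∩ B| / (|A| + |B|) for flat binary masks."""
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Toy example: 10-voxel masks with 3 overlapping foreground voxels
pred  = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 0, 0, 1, 1, 0, 0]
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) -> 0.75
```

A DSC of 1.0 means perfect overlap; the 0.867 reported on the in-house dataset corresponds to high but imperfect voxel-level agreement.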

Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

PubMed · logopapers · Jun 9 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating through the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions and challenges in detecting out-of-plane motions. This work addresses these issues by proposing a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We develop a hybrid deep learning-based tracking framework combining convolutional neural networks (CNNs) and a transformer backbone. The CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. This model is integrated with a robotic arm that adaptively scans and tracks the capsule. The system's performance is evaluated using ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We demonstrated 3D tracking in a more complex workspace featuring two curved sections to simulate anatomical challenges. This suggests the strong resilience of the tracking system to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and successfully re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. This study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusions, view loss and image artifacts, offering millimeter-level tracking accuracy. It significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.
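The two headline metrics above (mean centroid localization error and detection accuracy) are straightforward to compute from per-frame estimates. A minimal sketch, not the authors' evaluation code; the per-frame tuple format and the convention that missed frames are `None` are assumptions:

```python
import math

# Mean 3D centroid localization error (mm) and detection accuracy
# for a tracking run. Frames where the capsule was not detected are None.
def tracking_metrics(estimated, ground_truth):
    errors = []
    detected = 0
    for est, gt in zip(estimated, ground_truth):
        if est is None:
            continue  # missed frame: no error contribution, hurts detection rate
        detected += 1
        errors.append(math.dist(est, gt))  # Euclidean distance in 3D
    mean_error = sum(errors) / len(errors) if errors else float("nan")
    detection_accuracy = detected / len(ground_truth)
    return mean_error, detection_accuracy

# Toy run of three frames (positions in mm)
est = [(0.0, 0.0, 1.0), None, (3.0, 4.0, 0.0)]
gt  = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)]
mean_err, det_acc = tracking_metrics(est, gt)
print(mean_err, det_acc)  # (1.0 + 5.0) / 2 = 3.0 mm over detected frames, 2/3 detected
```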

Transfer learning for accurate brain tumor classification in MRI: a step forward in medical diagnostics.

Khan MA, Hussain MZ, Mehmood S, Khan MF, Ahmad M, Mazhar T, Shahzad T, Saeed MM

PubMed · logopapers · Jun 9 2025
Brain tumor classification is critical for therapeutic applications that benefit from computer-aided diagnostics. Misdiagnosing a brain tumor can significantly reduce a patient's chances of survival, as it may lead to ineffective treatments. This study proposes a novel approach for classifying brain tumors in MRI images using Transfer Learning (TL) with state-of-the-art deep learning models: AlexNet, MobileNetV2, and GoogleNet. Unlike previous studies that often focus on a single model, our work comprehensively compares these architectures, fine-tuned specifically for brain tumor classification. We utilize a publicly available dataset of 4,517 MRI scans, consisting of three prevalent types of brain tumors-glioma (1,129 images), meningioma (1,134 images), and pituitary tumors (1,138 images)-as well as 1,116 images of normal brains (no tumor). Our approach addresses key research gaps, including class imbalance, through data augmentation and model efficiency, leveraging lightweight architectures like MobileNetV2. The GoogleNet model achieves the highest classification accuracy of 99.2%, outperforming previous studies using the same dataset. This demonstrates the potential of our approach to assist physicians in making rapid and precise decisions, thereby improving patient outcomes. The results highlight the effectiveness of TL in medical diagnostics and its potential for real-world clinical deployment. This study advances the field of brain tumor classification and provides a robust framework for future research in medical image analysis.
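The abstract addresses class imbalance through data augmentation. One common way to set augmentation quotas is to top every class up to the size of the largest one; a minimal sketch using the class counts quoted in the abstract (the balancing strategy itself is an assumption for illustration, not the paper's stated method):

```python
# Per-class augmentation targets to balance a dataset: each class is
# topped up with augmented copies until it matches the largest class.
counts = {"glioma": 1129, "meningioma": 1134, "pituitary": 1138, "no_tumor": 1116}

def augmentation_targets(class_counts):
    """Number of extra (augmented) images each class needs."""
    target = max(class_counts.values())
    return {name: target - n for name, n in class_counts.items()}

extra = augmentation_targets(counts)
print(extra)  # {'glioma': 9, 'meningioma': 4, 'pituitary': 0, 'no_tumor': 22}
```

Note that this dataset is only mildly imbalanced, so the quotas are small; in practice augmentation (flips, rotations, intensity jitter) is usually applied on top of this to all classes for regularization.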

optiGAN: A Deep Learning-Based Alternative to Optical Photon Tracking in Python-Based GATE (10+).

Mummaneni G, Trigila C, Krah N, Sarrut D, Roncali E

PubMed · logopapers · Jun 9 2025
To accelerate optical photon transport simulations in the GATE medical physics framework using a Generative Adversarial Network (GAN), while ensuring high modeling accuracy. Traditionally, detailed optical Monte Carlo methods have been the gold standard for modeling photon interactions in detectors, but their high computational cost remains a challenge. This study explores the integration of optiGAN, a GAN-based model, into GATE 10, the new Python-based version of the GATE medical physics simulation framework released in November 2024.
Approach: The goal of optiGAN is to accelerate optical photon transport simulations while maintaining modeling accuracy. The optiGAN model, based on a GAN architecture, was integrated into GATE 10 as a computationally efficient alternative to traditional optical Monte Carlo simulations. To ensure consistency, optical photon transport modules were implemented in GATE 10 and validated against GATE v9.3 under identical simulation conditions. Subsequently, simulations using full Monte Carlo tracking in GATE 10 were compared to those using GATE 10-optiGAN.
Main results: Validation studies confirmed that GATE 10 produces results consistent with GATE v9.3. Simulations using GATE 10-optiGAN showed over 92% similarity to Monte Carlo-based GATE 10 results, based on the Jensen-Shannon distance across multiple photon transport parameters. optiGAN successfully captured multimodal distributions of photon position, direction, and energy at the photodetector face. Simulation time analysis revealed a reduction of approximately 50% in execution time with GATE 10-optiGAN compared to full Monte Carlo simulations.
Significance: The study confirms both the fidelity of optical photon transport modeling in GATE 10 and the effective integration of deep learning-based acceleration through optiGAN. This advancement enables large-scale, high-fidelity optical simulations with significantly reduced computational cost, supporting broader applications in medical imaging and detector design.
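The ">92% similarity" above is quantified via the Jensen-Shannon distance between distributions of photon transport parameters. A from-scratch sketch of that measure on toy histograms (not the GATE/optiGAN validation code; the binning and values are assumptions):

```python
import math

# Jensen-Shannon distance between two discrete probability distributions:
# the square root of the Jensen-Shannon divergence. With base-2 logs it is
# bounded in [0, 1], where 0 means identical distributions.
def js_distance(p, q):
    def kl(a, b):  # Kullback-Leibler divergence, skipping zero-probability bins
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]  # mixture distribution
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return math.sqrt(jsd)

# Toy normalized histograms of, e.g., photon position at the photodetector
# face, from a full Monte Carlo run vs. a GAN-sampled run.
p = [0.1, 0.4, 0.4, 0.1]
q = [0.2, 0.3, 0.4, 0.1]
print(js_distance(p, p))  # identical distributions -> 0.0
print(js_distance(p, q))  # small positive distance for similar histograms
```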

Sex estimation from the variables of talocrural joint by using machine learning algorithms.

Ray A, Ray G, Kürtül İ, Şenol GT

PubMed · logopapers · Jun 9 2025
This study focused on sex determination from variables measured on X-ray images of the talocrural joint using machine learning (ML) algorithms. The mediolateral diameter of the tibia (TMLD) and fibula (FMLD), the distance between the innermost points of the talocrural joint (DIT), the distance between the outermost points of the talocrural joint (DOT), and the distal articular surface of the tibia (TAS), measured on X-ray images of 150 women and 150 men, were evaluated using different ML methods. Logistic Regression, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis, Naive Bayes, and Random Forest classifier (RFC) algorithms were applied. The ML models achieved accuracies between 82% and 92%, with the highest accuracy obtained by the RFC algorithm. DOT was the variable that contributed most to the model. Except for age and FMLD, all variables were statistically significant with respect to sex difference. The variables of the talocrural joint were thus classified by sex with high accuracy. In addition, morphometric data on the population were obtained, and racial differences were emphasized.
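Since DOT contributed most to the model, even the simplest possible classifier, a one-variable threshold (decision stump) on DOT, illustrates the idea behind the ML results. The measurements below are made up for the example and are not the study's data:

```python
# Decision stump: find the single threshold on one measurement that best
# separates the two sexes, predicting "M" for values at or above it.
def fit_stump(values, labels):
    """Return (threshold, accuracy) maximizing accuracy on the training data."""
    best = (None, 0.0)
    for t in sorted(set(values)):
        correct = sum((v >= t) == (lab == "M") for v, lab in zip(values, labels))
        acc = correct / len(labels)
        if acc > best[1]:
            best = (t, acc)
    return best

dot_mm = [52.1, 55.3, 49.8, 58.0, 51.0, 57.2]   # hypothetical DOT values (mm)
sex    = ["F",  "M",  "F",  "M",  "F",  "M"]
threshold, accuracy = fit_stump(dot_mm, sex)
print(threshold, accuracy)  # 55.3 1.0 on this perfectly separable toy sample
```

Real data overlap between the sexes, which is why ensemble methods such as Random Forest (many such splits over many variables) reach 92% rather than 100%.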

Differentiating Bacterial and Non-Bacterial Pneumonia on Chest CT Using Multi-Plane Features and Clinical Biomarkers.

Song L, Zhan Y, Li L, Li X, Wu Y, Zhao M, Li Z, Ren G, Cai J

PubMed · logopapers · Jun 9 2025
Timely and accurate classification of bacterial pneumonia (BP) is essential for guiding antibiotic therapy. However, distinguishing BP from non-bacterial pneumonia (NBP) using computed tomography (CT) is challenging due to overlapping imaging features and limited biomarker specificity, often leading to delayed or empirical treatment. This study aimed to develop and evaluate MPMT-Pneumo, a multi-plane, multi-modal deep learning model, to improve BP versus NBP differentiation. A total of 384 patients with microbiologically confirmed pneumonia (239 BP, 145 NBP) from two hospitals were included and divided into training and test sets. MPMT-Pneumo utilized a hybrid CNN-Transformer architecture to integrate features from axial, coronal, sagittal CT views and four routine inflammatory biomarkers (WBC, ANC, CRP, PCT). Poly Focal Loss addressed class imbalance during training. Performance was evaluated using Area Under the Curve (AUC), accuracy, and sensitivity on the test set. MPMT-Pneumo was benchmarked against recent deep learning models, biomarker-only models, and clinical radiologists' CT interpretations. Ablation studies assessed component contributions. MPMT-Pneumo achieved an AUC of 0.874, accuracy of 0.852, and sensitivity of 0.894 on the test set, outperforming baseline deep learning models and biomarker-only models. Sensitivity for BP detection surpassed that of less experienced radiologists and was comparable to the most experienced. Ablation studies confirmed the importance of both multi-plane imaging and biomarkers. MPMT-Pneumo provides a clinically applicable solution for BP classification and shows great potential in improving diagnostic accuracy and promoting more rational antibiotic use in clinical practice.
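The abstract names Poly Focal Loss as the class-imbalance remedy. A sketch of the poly-1 formulation (focal loss plus a polynomial correction term) for a single binary prediction; the hyperparameter values are common defaults assumed for illustration, not the paper's settings:

```python
import math

# Poly-1 focal loss for binary classification: standard focal loss plus an
# epsilon-weighted (1 - pt)^(gamma + 1) polynomial term.
def poly_focal_loss(p, y, alpha=0.25, gamma=2.0, epsilon=1.0):
    """p: predicted probability of the positive class; y: 1 (BP) or 0 (NBP)."""
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    at = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    focal = -at * (1.0 - pt) ** gamma * math.log(pt)
    poly1 = epsilon * at * (1.0 - pt) ** (gamma + 1.0)
    return focal + poly1

print(poly_focal_loss(0.9, 1))  # confident, correct -> near-zero loss
print(poly_focal_loss(0.1, 1))  # confident, wrong   -> much larger loss
```

The (1 - pt)^gamma factor down-weights easy, well-classified examples, so training focuses on the hard minority-class cases, which is the point of using it for the BP/NBP imbalance.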

Advancing respiratory disease diagnosis: A deep learning and vision transformer-based approach with a novel X-ray dataset.

Alghadhban A, Ramadan RA, Alazmi M

PubMed · logopapers · Jun 9 2025
With the increasing prevalence of respiratory diseases such as pneumonia and COVID-19, timely and accurate diagnosis is critical. This paper makes significant contributions to the field of respiratory disease classification by utilizing X-ray images and advanced machine learning techniques such as deep learning (DL) and Vision Transformers (ViT). First, the paper systematically reviews current diagnostic methodologies, analyzing recent advances in DL and ViT techniques through a comprehensive analysis of review articles published between 2017 and 2024, excluding short reviews and overviews. The review not only analyzes existing knowledge but also identifies critical gaps in the field, in particular the lack of comprehensive and diverse datasets for training machine learning models. To address these limitations, the paper extensively evaluates DL-based models on publicly available datasets, analyzing key performance metrics such as accuracy, precision, recall, and F1-score. Our evaluations reveal that current datasets are mostly limited to narrow subsets of pulmonary diseases, which can lead to challenges including overfitting, poor generalization, and reduced applicability of advanced machine learning techniques in real-world settings; DL and ViT models, for instance, require extensive data for effective learning. The primary contribution of this paper is not only the review of the most recent articles and surveys on respiratory diseases and DL models, including ViT, but also the introduction of a novel, diverse dataset comprising 7867 X-ray images from 5263 patients across three local hospitals, covering 49 distinct pulmonary diseases. The dataset is expected to enhance DL and ViT model training and improve the generalization of those models in various real-world medical image scenarios. By addressing the data scarcity issue, this paper paves the way for more reliable and robust disease classification, improving clinical decision-making. Additionally, the article highlights critical challenges that still need to be addressed, such as dataset bias and variations in X-ray image quality, as well as the need for further clinical validation. Furthermore, the study underscores the critical role of DL in medical diagnosis and highlights the necessity of comprehensive, well-annotated datasets to improve model robustness and clinical reliability. Through these contributions, the paper provides a foundation for future research on respiratory disease diagnosis using AI-driven methodologies. Although the paper aims to cover all work published between 2017 and 2024, this research has some limitations: foundational work published before 2017 falls outside the review period, and the rapid development of AI may make earlier methods less relevant.

Deep learning-based post-hoc noise reduction improves quarter-radiation-dose coronary CT angiography.

Morikawa T, Nishii T, Tanabe Y, Yoshida K, Toshimori W, Fukuyama N, Toritani H, Suekuni H, Fukuda T, Kido T

PubMed · logopapers · Jun 9 2025
To evaluate the impact of deep learning-based post-hoc noise reduction (DLNR) on image quality, coronary artery disease reporting and data system (CAD-RADS) assessment, and diagnostic performance in quarter-dose versus full-dose coronary CT angiography (CCTA) on external datasets. We retrospectively reviewed 221 patients who underwent retrospective electrocardiogram-gated CCTA in 2022-2023. Using dose modulation, either mid-diastole or end-systole was scanned at full dose depending on heart rates, and the other phase at quarter dose. Only patients with motion-free coronaries in both phases were included. Images were acquired using iterative reconstruction, and a residual dense network trained on external datasets denoised the quarter-dose images. Image quality was assessed by comparing noise levels using Tukey's test. Two radiologists independently assessed CAD-RADS, with agreement to full-dose images evaluated by Cohen's kappa. Diagnostic performance for significant stenosis referencing full-dose images was compared between quarter-dose and denoised images by the area under the receiver operating characteristic curve (AUC) using the DeLong test. Among 40 cases (age, 71 ± 7 years; 24 males), DLNR reduced noise from 37 to 18 HU (P < 0.001) in quarter-dose CCTA (full-dose images: 22 HU), and improved CAD-RADS agreement from moderate (0.60 [95 % CI: 0.41-0.78]) to excellent (0.82 [95 % CI: 0.66-0.94]). Denoised images demonstrated a superior AUC (0.97 [95 % CI: 0.95-1.00]) for diagnosing significant stenosis compared with original quarter-dose images (0.93 [95 % CI: 0.89-0.98]; P = 0.032). DLNR for quarter-dose CCTA significantly improved image quality, CAD-RADS agreement, and diagnostic performance for detecting significant stenosis referencing full-dose images.
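The noise levels above (37 HU reduced to 18 HU) are conventionally measured as the standard deviation of Hounsfield-unit values within a homogeneous region of interest. A minimal sketch of that measurement; the ROI samples below are made up for illustration:

```python
import math

# CT image noise as the (population) standard deviation of HU values
# sampled from a homogeneous ROI, e.g. the aortic root or LV blood pool.
def noise_hu(roi_values):
    n = len(roi_values)
    mean = sum(roi_values) / n
    return math.sqrt(sum((v - mean) ** 2 for v in roi_values) / n)

quarter_dose_roi = [420, 380, 445, 365, 410, 390, 455, 375]  # hypothetical HU samples
denoised_roi     = [405, 395, 412, 392, 403, 398, 415, 396]
print(round(noise_hu(quarter_dose_roi)), round(noise_hu(denoised_roi)))
# noise drops from ~31 to ~8 HU in this toy ROI
```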

Dose to circulating blood in intensity-modulated total body irradiation, total marrow irradiation, and total marrow and lymphoid irradiation.

Guo B, Cherian S, Murphy ES, Sauter CS, Sobecks RM, Rotz S, Hanna R, Scott JG, Xia P

PubMed · logopapers · Jun 8 2025
Multi-isocentric intensity-modulated (IM) total body irradiation (TBI), total marrow irradiation (TMI), and total marrow and lymphoid irradiation (TMLI) are gaining popularity. A question arises on the impact of the interplay between blood circulation and dynamic delivery on blood dose. This study answers the question by introducing a new whole-body blood circulation modeling technique. A whole-body CT with intravenous contrast was used to develop the blood circulation model. Fifteen organs and tissues, heart chambers, and great vessels were segmented using a deep-learning-based auto-contouring software. The main blood vessels were segmented using an in-house algorithm. Blood density, velocity, time-to-heart, and perfusion distributions were derived for systole, diastole, and portal circulations and used to simulate trajectories of blood particles during delivery. With the same prescription of 12 Gy in 8 fractions, doses to circulating blood were calculated for three plans: (1) an IM-TBI plan prescribing uniform dose to the whole body while reducing lung and kidney doses; (2) a TMI plan treating all bones; and (3) a TMLI plan treating all bones, major lymph nodes, and spleen; TMI and TMLI plans were optimized to reduce doses to non-target tissue. Circulating blood received 1.57 ± 0.43 Gy, 1.04 ± 0.32 Gy, and 1.09 ± 0.32 Gy in one fraction and 12.60 ± 1.21 Gy, 8.34 ± 0.88 Gy, and 8.71 ± 0.92 Gy in 8 fractions in IM-TBI, TMI, and TMLI, respectively. The interplay effect of blood motion with IM delivery did not change the mean dose, but changed the dose heterogeneity of the circulating blood. Fractionation reduced the blood dose heterogeneity. A novel whole-body blood circulating model was developed based on patient-specific anatomy and realistic blood dynamics, concentration, and perfusion. Using the blood circulation model, we developed a dosimetry tool for circulating blood in IM-TBI, TMI, and TMLI.
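The finding that fractionation reduces blood dose heterogeneity has a simple statistical core: if each blood particle's per-fraction dose is roughly an independent draw, the relative spread of the n-fraction total shrinks like 1/sqrt(n). A toy Monte Carlo sketch of that effect (the Gaussian model and its parameters, loosely matching the ~1.04 ± 0.32 Gy TMI fraction quoted above, are assumptions, not the paper's circulation model):

```python
import random
import statistics

random.seed(0)

# Total dose to each simulated "blood particle" after n_fractions fractions,
# with per-fraction dose drawn independently per particle.
def simulate_total_doses(n_particles, n_fractions, mean=1.04, sd=0.32):
    return [
        sum(random.gauss(mean, sd) for _ in range(n_fractions))
        for _ in range(n_particles)
    ]

for n in (1, 8):
    doses = simulate_total_doses(5000, n)
    cv = statistics.stdev(doses) / statistics.mean(doses)  # coefficient of variation
    print(f"{n} fraction(s): mean {statistics.mean(doses):.2f} Gy, CV {cv:.3f}")
```

The mean total dose scales linearly with the number of fractions, but the coefficient of variation drops by roughly sqrt(8) ≈ 2.8 from one to eight fractions, mirroring the reduced heterogeneity the study reports.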