A Biomimetic Titanium Scaffold with and Without Magnesium Filled for Adjustable Patient-Specific Elastic Modulus.

Jana S, Sarkar R, Rana M, Das S, Chakraborty A, Das A, Roy Chowdhury A, Pal B, Dutta Majumdar J, Dhara S

PubMed paper · Jul 22 2025
This study determines the effective Young's modulus (stiffness) of various lattice structures for titanium scaffolds with and without magnesium filling. For a specific patient, the success of an implant depends on an adequate elastic modulus, which promotes proper osseointegration. The Mg-filled portion of the Ti scaffold is expected to dissolve over time as bone grows through the scaffold's porous cavities. The proposed method is based on a general numerical homogenization scheme to determine the effective elastic properties of the lattice scaffold at the macroscopic scale. A large numerical campaign was conducted on 18 geometries. The 3D scaffold was conceived from a model generated from micro-CT data of the prepared sample. The effect of the scaffold's local features, e.g., the distribution of porosity, the scaffold's surface area exposed to the adjacent bone, and the strut diameter of the implant, on the effective elastic properties was investigated. Results show that both the relative density and the geometrical features of the scaffold strongly affect the equivalent macroscopic elastic behaviour of the lattice. Six samples were fabricated (three Mg-filled and three without Mg). A compression test was carried out on each type of sample, and the displacements obtained from the tests closely matched the simulated results from finite element analysis. Finally, a data-driven AI model was used to predict the ratio of Ti scaffold to Mg filler required to achieve a target stiffness.
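
The abstract only names the data-driven model; below is a hypothetical sketch, assuming a generic regressor, of how scaffold descriptors (relative density, strut diameter, Mg fill fraction, all illustrative) could be mapped to effective modulus and then inverted to hit a patient-specific target stiffness:

```python
# Hypothetical sketch, not the paper's model: fit a regressor mapping
# scaffold descriptors to effective Young's modulus, then invert it to
# choose an Mg fill ratio for a target stiffness. All values illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Columns: relative density, strut diameter (mm), Mg fill fraction (0-1)
X = np.array([
    [0.30, 0.40, 0.0],
    [0.30, 0.40, 0.5],
    [0.45, 0.60, 0.0],
    [0.45, 0.60, 0.5],
    [0.60, 0.80, 1.0],
])
y = np.array([3.1, 5.4, 7.8, 10.2, 18.5])  # effective modulus (GPa), made up

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Sweep Mg fill fractions for a fixed geometry and pick the one whose
# predicted modulus is closest to a patient-specific target.
target_gpa = 8.0
fills = np.linspace(0.0, 1.0, 101)
candidates = np.column_stack([
    np.full_like(fills, 0.45),   # relative density (fixed)
    np.full_like(fills, 0.60),   # strut diameter in mm (fixed)
    fills,
])
preds = model.predict(candidates)
best_fill = fills[np.argmin(np.abs(preds - target_gpa))]
print(f"Mg fill fraction closest to {target_gpa} GPa: {best_fill:.2f}")
```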

Supervised versus unsupervised GAN for pseudo-CT synthesis in brain MR-guided radiotherapy.

Kermani MZ, Tavakoli MB, Khorasani A, Abedi I, Sadeghi V, Amouheidari A

PubMed paper · Jul 22 2025
Radiotherapy is a crucial treatment for brain tumor malignancies. To address the limitations of CT-based treatment planning, recent research has explored MR-only radiotherapy, which requires precise MR-to-CT synthesis. This study compares two deep learning approaches, supervised (Pix2Pix) and unsupervised (CycleGAN), for generating pseudo-CT (pCT) images from T1- and T2-weighted MR sequences. A total of 3270 paired T1- and T2-weighted MRI images were collected and registered with corresponding CT images. After preprocessing, a supervised pCT generative model was trained using the Pix2Pix framework, and an unsupervised generative network (CycleGAN) was also trained to enable a comparative assessment of pCT quality relative to the Pix2Pix model. Differences between pCT and reference CT images were assessed with three key metrics (SSIM, PSNR, and MAE); a dosimetric evaluation was also performed on selected cases to assess clinical relevance. The average SSIM, PSNR, and MAE for Pix2Pix on T1 images were 0.964 ± 0.03, 32.812 ± 5.21 dB, and 79.681 ± 9.52 HU, respectively. Statistical analysis revealed that Pix2Pix significantly outperformed CycleGAN in generating high-fidelity pCT images (p < 0.05). There was no notable difference in the effectiveness of T1-weighted versus T2-weighted MR images for generating pCT (p > 0.05). Dosimetric evaluation confirmed comparable dose distributions between pCT and reference CT, supporting clinical feasibility. Both supervised and unsupervised methods can generate accurate pCT images from conventional T1- and T2-weighted MR sequences. While supervised methods like Pix2Pix achieve higher accuracy, unsupervised approaches such as CycleGAN offer greater flexibility by eliminating the need for paired training data, making them suitable when paired data are unavailable.
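
For reference, the three reported image-quality metrics can be computed with scikit-image and NumPy. A minimal sketch, assuming HU-valued slices and an assumed data_range of 2000 HU (the preprocessing is not specified here):

```python
# Minimal sketch of the reported metrics (SSIM, PSNR, MAE) between a
# pseudo-CT and its reference CT. The data_range of 2000 HU is an
# assumption about preprocessing, not taken from the paper.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_pct(pct: np.ndarray, ref_ct: np.ndarray, data_range: float = 2000.0):
    ssim = structural_similarity(ref_ct, pct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ref_ct, pct, data_range=data_range)  # dB
    mae_hu = float(np.mean(np.abs(ref_ct - pct)))  # in HU if inputs are in HU
    return ssim, psnr, mae_hu

rng = np.random.default_rng(0)
ref = rng.normal(0, 300, (256, 256))      # stand-in CT slice in HU
pct = ref + rng.normal(0, 40, ref.shape)  # stand-in pseudo-CT
print(evaluate_pct(pct, ref))
```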

Area detection improves the person-based performance of a deep learning system for classifying the presence of carotid artery calcifications on panoramic radiographs.

Kuwada C, Mitsuya Y, Fukuda M, Yang S, Kise Y, Mori M, Naitoh M, Ariji Y, Ariji E

PubMed paper · Jul 22 2025
This study investigated deep learning (DL) systems for diagnosing carotid artery calcifications (CAC) on panoramic radiographs. To this end, two DL systems, one with a preceding and one with a simultaneous area detection function, were developed to classify CAC on panoramic radiographs, and their person-based classification performance was compared with that of a DL model created directly from entire panoramic radiographs. A total of 580 panoramic radiographs from 290 patients (with CAC) and 290 controls (without CAC) were used to create and evaluate the DL systems. Two convolutional neural networks, GoogLeNet and YOLOv7, were utilized. The following three systems were created: (1) direct classification of entire panoramic images (System 1), (2) preceding region-of-interest (ROI) detection followed by classification (System 2), and (3) simultaneous ROI detection and classification (System 3). A person-based evaluation using the same test data was performed to compare the three systems. A side-based (left and right sides of participants) evaluation was also performed on Systems 2 and 3. Between-system differences in the area under the receiver operating characteristic curve (AUC) were assessed using DeLong's test. In the side-based evaluation, the AUCs of Systems 2 and 3 were 0.89 and 0.84, respectively; in the person-based evaluation, Systems 2 and 3 had significantly higher AUC values of 0.86 and 0.90, respectively, compared with System 1 (P < 0.001). No significant difference was found between Systems 2 and 3. Preceding or simultaneous area detection improved the person-based performance of DL for classifying the presence of CAC on panoramic radiographs.
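
The abstract does not spell out the person-based aggregation rule; a hedged sketch, assuming the maximum of the left- and right-side probabilities is taken per patient before computing the AUC:

```python
# Hedged sketch of a person-based evaluation: side-level (left/right) CAC
# probabilities aggregated per patient by taking the maximum, then scored
# with AUC. The max-aggregation rule and the values are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

# side_probs[i] = (left_prob, right_prob) for patient i (illustrative)
side_probs = np.array([[0.10, 0.85], [0.05, 0.08], [0.70, 0.60], [0.20, 0.15]])
person_labels = np.array([1, 0, 1, 0])  # CAC present on either side

person_scores = side_probs.max(axis=1)  # positive if either side is positive
print("person-based AUC:", roc_auc_score(person_labels, person_scores))
```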

A Hybrid CNN-VSSM model for Multi-View, Multi-Task Mammography Analysis: Robust Diagnosis with Attention-Based Fusion

Yalda Zafari, Roaa Elalfy, Mohamed Mabrok, Somaya Al-Maadeed, Tamer Khattab, Essam A. Rashed

arXiv preprint · Jul 22 2025
Early and accurate interpretation of screening mammograms is essential for effective breast cancer detection, yet it remains a complex challenge due to subtle imaging findings and diagnostic ambiguity. Many existing AI approaches fall short by focusing on single-view inputs or single-task outputs, limiting their clinical utility. To address these limitations, we propose a novel multi-view, multi-task hybrid deep learning framework that processes all four standard mammography views and jointly predicts diagnostic labels and BI-RADS scores for each breast. Our architecture integrates a hybrid CNN-VSSM backbone, combining convolutional encoders for rich local feature extraction with Visual State Space Models (VSSMs) to capture global contextual dependencies. To improve robustness and interpretability, we incorporate a gated attention-based fusion module that dynamically weights information across views, effectively handling cases with missing data. We conduct extensive experiments across diagnostic tasks of varying complexity, benchmarking our proposed hybrid models against baseline CNN architectures and VSSM models in both single-task and multi-task learning settings. Across all tasks, the hybrid models consistently outperform the baselines. In the binary BI-RADS 1 vs. 5 classification task, the shared hybrid model achieves an AUC of 0.9967 and an F1 score of 0.9830. For the more challenging ternary classification, it attains an F1 score of 0.7790, while in the five-class BI-RADS task, the best F1 score reaches 0.4904. These results highlight the effectiveness of the proposed hybrid framework and underscore both the potential and limitations of multi-task learning for improving diagnostic performance and enabling clinically meaningful mammography analysis.
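
A minimal PyTorch sketch of the gated attention-based fusion idea: per-view gate scores are masked for missing views and softmax-normalized into fusion weights. The dimensions and the gating form are assumptions, not the authors' exact module:

```python
# Gated attention fusion over four mammography views with a mask for
# missing views. A sketch under assumed dimensions, not the paper's module.
import torch
import torch.nn as nn

class GatedViewFusion(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_views, dim); mask: (batch, n_views), 1 = view present
        scores = self.gate(feats).squeeze(-1)          # (batch, n_views)
        scores = scores.masked_fill(mask == 0, -1e9)   # suppress missing views
        weights = torch.softmax(scores, dim=1)         # fusion weights per view
        return (weights.unsqueeze(-1) * feats).sum(dim=1)  # fused (batch, dim)

fusion = GatedViewFusion()
feats = torch.randn(2, 4, 256)                     # CC/MLO of both breasts
mask = torch.tensor([[1, 1, 1, 1], [1, 0, 1, 1]])  # second case misses a view
print(fusion(feats, mask).shape)                   # torch.Size([2, 256])
```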

AURA: A Multi-Modal Medical Agent for Understanding, Reasoning & Annotation

Nima Fathi, Amar Kumar, Tal Arbel

arXiv preprint · Jul 22 2025
Recent advancements in Large Language Models (LLMs) have catalyzed a paradigm shift from static prediction systems to agentic AI systems capable of reasoning, interacting with tools, and adapting to complex tasks. While LLM-based agentic systems have shown promise across many domains, their application to medical imaging remains in its infancy. In this work, we introduce AURA, the first visual linguistic explainability agent designed specifically for comprehensive analysis, explanation, and evaluation of medical images. By enabling dynamic interactions, contextual explanations, and hypothesis testing, AURA represents a significant advancement toward more transparent, adaptable, and clinically aligned AI systems. We highlight the promise of agentic AI in transforming medical image analysis from static predictions to interactive decision support. Leveraging Qwen-32B, an LLM-based architecture, AURA integrates a modular toolbox comprising: (i) a segmentation suite with phrase grounding, pathology segmentation, and anatomy segmentation to localize clinically meaningful regions; (ii) a counterfactual image-generation module that supports reasoning through image-level explanations; and (iii) a set of evaluation tools including pixel-wise difference-map analysis, classification, and advanced state-of-the-art components to assess diagnostic relevance and visual interpretability.
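
A hedged sketch of the modular-toolbox pattern such an agent can build on: capabilities registered by name and dispatched by the agent loop. Tool names and signatures here are illustrative, not AURA's actual interface:

```python
# Sketch of a tool registry for an agentic system: tools register under a
# name and the agent dispatches calls to them. Names/signatures are
# hypothetical, not AURA's API; bodies are placeholders for real models.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def register(name: str):
    def deco(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return deco

@register("segment_anatomy")
def segment_anatomy(image_path: str, target: str) -> str:
    return f"segmented {target} in {image_path}"  # placeholder output

@register("counterfactual")
def counterfactual(image_path: str, finding: str) -> str:
    return f"generated counterfactual of {image_path} without {finding}"

def dispatch(tool: str, **kwargs) -> str:
    if tool not in TOOLS:
        raise KeyError(f"unknown tool: {tool}")
    return TOOLS[tool](**kwargs)

print(dispatch("segment_anatomy", image_path="cxr.png", target="left lung"))
```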

DualSwinUnet++: An enhanced Swin-Unet architecture with dual decoders for PTMC segmentation.

Dialameh M, Rajabzadeh H, Sadeghi-Goughari M, Sim JS, Kwon HJ

PubMed paper · Jul 22 2025
Precise segmentation of papillary thyroid microcarcinoma (PTMC) during ultrasound-guided radiofrequency ablation (RFA) is critical for effective treatment but remains challenging due to acoustic artifacts, small lesion size, and anatomical variability. In this study, we propose DualSwinUnet++, a dual-decoder transformer-based architecture designed to enhance PTMC segmentation by incorporating thyroid gland context. DualSwinUnet++ employs independent linear projection heads for each decoder and a residual information flow mechanism that passes intermediate features from the first (thyroid) decoder to the second (PTMC) decoder via concatenation and transformation. These design choices allow the model to condition tumor prediction explicitly on gland morphology without shared gradient interference. Trained on a clinical ultrasound dataset with 691 annotated RFA images and evaluated against state-of-the-art models, DualSwinUnet++ achieves superior Dice and Jaccard scores while maintaining sub-200 ms inference latency. The results demonstrate the model's suitability for near real-time surgical assistance and its effectiveness in improving segmentation accuracy in challenging PTMC cases.
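
A minimal PyTorch sketch of the residual information flow described above, assuming concatenation followed by a 1×1 convolution; channel widths are illustrative:

```python
# Sketch of residual information flow between two decoders: thyroid-decoder
# features are concatenated onto the PTMC decoder's features and projected
# back with a 1x1 conv. Channel sizes and the transform are assumptions.
import torch
import torch.nn as nn

class ResidualFlow(nn.Module):
    def __init__(self, thyroid_ch: int = 64, ptmc_ch: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(thyroid_ch + ptmc_ch, ptmc_ch, kernel_size=1)

    def forward(self, ptmc_feat: torch.Tensor, thyroid_feat: torch.Tensor):
        fused = torch.cat([ptmc_feat, thyroid_feat], dim=1)  # concatenation
        return self.fuse(fused)  # transform back to PTMC channel width

flow = ResidualFlow()
ptmc = torch.randn(1, 64, 56, 56)     # PTMC decoder feature map
thyroid = torch.randn(1, 64, 56, 56)  # thyroid decoder feature map
print(flow(ptmc, thyroid).shape)      # torch.Size([1, 64, 56, 56])
```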

Faithful, Interpretable Chest X-ray Diagnosis with Anti-Aliased B-cos Networks

Marcel Kleinmann, Shashank Agnihotri, Margret Keuper

arXiv preprint · Jul 22 2025
Faithfulness and interpretability are essential for deploying deep neural networks (DNNs) in safety-critical domains such as medical imaging. B-cos networks offer a promising solution by replacing standard linear layers with a weight-input alignment mechanism, producing inherently interpretable, class-specific explanations without post-hoc methods. While maintaining diagnostic performance competitive with state-of-the-art DNNs, standard B-cos models suffer from severe aliasing artifacts in their explanation maps, making them unsuitable for clinical use where clarity is essential. In this work, we address these limitations by introducing anti-aliasing strategies using FLCPooling (FLC) and BlurPool (BP) to significantly improve explanation quality. Our experiments on chest X-ray datasets demonstrate that the modified $\text{B-cos}_\text{FLC}$ and $\text{B-cos}_\text{BP}$ preserve strong predictive performance while providing faithful and artifact-free explanations suitable for clinical application in multi-class and multi-label settings. Code available at: https://github.com/mkleinma/B-cos-medical-paper.
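
For context, a hedged sketch of BlurPool-style anti-aliased downsampling (Zhang, 2019), the idea the $\text{B-cos}_\text{BP}$ variant builds on: a fixed binomial low-pass filter is applied depthwise before striding. Filter size and padding are common defaults, not necessarily the paper's settings:

```python
# BlurPool-style anti-aliased downsampling: depthwise binomial low-pass
# filtering before the stride, to reduce aliasing. A sketch with common
# defaults (3x3 binomial kernel, reflect padding), not the paper's config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = (torch.outer(k, k) / 16.0).view(1, 1, 3, 3)  # binomial 3x3
        self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))
        self.stride = stride
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")  # preserve alignment
        # groups=channels makes the low-pass filter depthwise (per channel)
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

bp = BlurPool2d(channels=16)
print(bp(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 32, 32])
```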

Verification of resolution and imaging time for high-resolution deep learning reconstruction techniques.

Harada S, Takatsu Y, Murayama K, Sano Y, Ikedo M

PubMed paper · Jul 22 2025
Magnetic resonance imaging (MRI) involves a trade-off between imaging time, signal-to-noise ratio (SNR), and spatial resolution: reducing the imaging time often lowers the SNR or resolution. Deep-learning-based reconstruction (DLR) methods have been introduced to address these limitations. Image-domain super-resolution DLR enables high resolution without additional image scans, and high-quality images can be obtained within a shorter timeframe by appropriately configuring DLR parameters. To use super-resolution DLR efficiently in MRI, its performance must be maximized. We evaluated the performance of a vendor-provided super-resolution DLR method (PIQE) on a Canon 3 T MRI scanner using an edge phantom and clinical brain images from eight patients. Quantitative assessment included the structural similarity index (SSIM), peak SNR (PSNR), root mean square error (RMSE), and full width at half maximum (FWHM); FWHM was used to quantify spatial resolution and image sharpness. Visual evaluation using a five-point Likert scale was also performed to assess perceived image quality. Image-domain super-resolution DLR reduced scan time by up to 70% while preserving structural image quality. Acquisition matrices of 0.87 mm/pixel or finer with a zoom ratio of ×2 yielded SSIM ≥ 0.80, PSNR ≥ 35 dB, and non-significant FWHM differences compared with full-resolution references. In contrast, aggressive downsampling (a zoom ratio of ×3 from low-resolution matrices) led to image degradation, including truncation artifacts and reduced sharpness. These results clarify the optimal use of PIQE as an image-domain super-resolution method and provide practical guidance for its application in clinical MRI workflows.
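
A minimal sketch of the FWHM measurement used for sharpness, assuming a 1-D line-spread profile (e.g., derived from the edge phantom) and linear interpolation of the half-maximum crossings; the Gaussian profile and the 0.87 mm pixel size are illustrative:

```python
# FWHM of a 1-D profile via linearly interpolated half-maximum crossings.
# The Gaussian line-spread profile and 0.87 mm/pixel are illustrative.
import numpy as np

def fwhm(profile: np.ndarray, pixel_mm: float) -> float:
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i_lo, i_hi):
        # Interpolate the exact half-maximum crossing between two samples
        y0, y1 = profile[i_lo], profile[i_hi]
        return i_lo + (half - y0) / (y1 - y0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * pixel_mm

x = np.arange(-20, 21)
lsf = np.exp(-x**2 / (2 * 3.0**2))  # Gaussian line spread, sigma = 3 px
print(f"FWHM: {fwhm(lsf, pixel_mm=0.87):.2f} mm")  # ~2.355 * 3 * 0.87 mm
```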

Machine learning approach effectively discriminates between Parkinson's disease and progressive supranuclear palsy: multi-level indices of rs-fMRI.

Cheng W, Liang X, Zeng W, Guo J, Yin Z, Dai J, Hong D, Zhou F, Li F, Fang X

PubMed paper · Jul 22 2025
Parkinson's disease (PD) and progressive supranuclear palsy (PSP) present with similar clinical symptoms, but their treatment options and clinical prognoses differ significantly. We therefore aimed to discriminate between PD and PSP based on multi-level indices of resting-state functional magnetic resonance imaging (rs-fMRI) via a machine learning approach. A total of 58 PD and 52 PSP patients were prospectively enrolled in this study. Participants were randomly allocated to a training set and a validation set in a 7:3 ratio. Various rs-fMRI indices were extracted, followed by comprehensive feature screening for each index. We constructed fifteen distinct combinations of indices and selected four machine learning algorithms for model development. Subsequently, different validation templates were employed to assess the classification results and to investigate the relationship between the most significant features and clinical assessment scales. The classification performance of logistic regression (LR) and support vector machine (SVM) models based on multiple index combinations was significantly superior to that of the other machine learning models and combinations when using the automated anatomical labeling (AAL) template; this finding was verified across different templates. The use of multiple rs-fMRI indices significantly enhances the performance of machine learning models and can effectively achieve automatic identification of PD and PSP at the individual level.
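
A hedged sketch of the described pipeline, assuming univariate feature screening and standard scikit-learn LR/SVM classifiers on synthetic stand-in features, with the paper's 7:3 split:

```python
# Sketch of the setup: rs-fMRI index features, a 7:3 train/validation
# split, univariate screening, and LR/SVM classifiers. The feature matrix
# and labels are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 300))    # 110 patients x 300 rs-fMRI features
y = np.array([0] * 58 + [1] * 52)  # 58 PD vs 52 PSP

X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)  # 7:3 split

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="linear", probability=True))]:
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=30), clf)
    pipe.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, pipe.predict_proba(X_va)[:, 1])
    print(f"{name} validation AUC: {auc:.2f}")
```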

Artificial intelligence in thyroid eye disease imaging: A systematic review.

Zhang H, Li Z, Chan HC, Song X, Zhou H, Fan X

PubMed paper · Jul 22 2025
Thyroid eye disease (TED) is a common, complex orbital disorder characterized by soft-tissue changes visible on imaging. Artificial intelligence (AI) offers promise for improving TED diagnosis and treatment; however, no systematic review has yet characterized the research landscape, key challenges, and future directions. Following PRISMA guidelines, we searched multiple databases through January 2025 for studies applying AI to computed tomography (CT), magnetic resonance imaging, and nuclear, facial, or retinal imaging in TED patients. Using the APPRAISE-AI tool, we assessed study quality and included 41 studies covering various AI applications. Sample sizes ranged from 33 to 2,288 participants, predominantly East Asian. CT and facial imaging were the most common modalities, reported in 16 and 13 articles, respectively. Studies addressed clinical tasks (diagnosis, activity assessment, severity grading, and treatment prediction) and technical tasks (classification, segmentation, and image generation), with classification being the most frequent. Researchers primarily employed deep-learning models such as residual networks (ResNet) and Visual Geometry Group (VGG) architectures. Overall, the majority of studies were of moderate quality. Image-based AI shows strong potential to improve diagnostic accuracy and guide personalized treatment strategies in TED. Future research should prioritize robust study designs, the creation of public datasets, multimodal imaging integration, and interdisciplinary collaboration to accelerate clinical translation.