
Deep-Learning Reconstruction for 7T MP2RAGE and SPACE MRI: Improving Image Quality at High Acceleration Factors.

Liu Z, Patel V, Zhou X, Tao S, Yu T, Ma J, Nickel D, Liebig P, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed · May 20, 2025
Deep learning (DL) reconstruction has been successful in realizing otherwise impracticable acceleration factors and improving image quality at conventional MRI field strengths; however, there has been limited application to ultra-high-field MRI. The objective of this study was to evaluate the performance of a prototype DL-based image reconstruction technique in 7T MRI of the brain utilizing MP2RAGE and SPACE acquisitions, in comparison to conventional reconstructions using compressed sensing (CS) and controlled aliasing in parallel imaging (CAIPIRINHA) techniques. This retrospective study involved 60 patients who underwent 7T brain MRI between June 2024 and October 2024, comprising 30 patients with MP2RAGE data and 30 patients with SPACE FLAIR data. Each set of raw data was reconstructed with DL-based reconstruction and with conventional reconstruction. Image quality was independently assessed by two neuroradiologists using a 5-point Likert scale covering overall image quality, artifacts, sharpness, structural conspicuity, and noise level. Inter-observer agreement was determined using top-box analysis. Contrast-to-noise ratio (CNR) and noise levels were quantitatively evaluated and compared using the Wilcoxon signed-rank test. DL-based reconstruction resulted in a significant increase in overall image quality and a reduction in subjective noise level for both MP2RAGE and SPACE FLAIR data (all P<0.001), with no significant differences in image artifacts (all P>0.05). Compared to standard reconstruction, DL-based reconstruction yielded an increase in CNR of 49.5% [95% CI 33.0-59.0%] for MP2RAGE data and 90.6% [95% CI 73.2-117.7%] for SPACE FLAIR data, along with a decrease in noise of 33.5% [95% CI 23.0-38.0%] for MP2RAGE data and 47.5% [95% CI 41.9-52.6%] for SPACE FLAIR data. DL-based reconstruction of 7T MRI significantly enhanced image quality compared to conventional reconstruction without introducing image artifacts. The achievable high acceleration factors have the potential to substantially improve image quality and resolution in 7T MRI. CAIPIRINHA = Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration; CNR = contrast-to-noise ratio; CS = compressed sensing; DL = deep learning; MNI = Montreal Neurological Institute; MP2RAGE = Magnetization-Prepared 2 Rapid Acquisition Gradient Echoes; SPACE = Sampling Perfection with Application-Optimized Contrasts using Different Flip Angle Evolutions.
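
For readers unfamiliar with the statistics above, the following is a minimal illustrative sketch (not the study's code) of a paired CNR comparison with the Wilcoxon signed-rank test; the CNR definition is a common ROI-based convention, and the per-patient values are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon

def cnr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR as (mean signal - mean background) / background SD."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

# Paired per-patient CNR values for the two reconstructions (hypothetical).
cnr_conventional = np.array([12.1, 10.4, 11.8, 9.7, 13.2])
cnr_dl = np.array([18.0, 15.9, 17.2, 14.6, 19.5])

stat, p = wilcoxon(cnr_conventional, cnr_dl)
pct_gain = 100 * (cnr_dl - cnr_conventional) / cnr_conventional
print(f"median CNR gain: {np.median(pct_gain):.1f}% (Wilcoxon p = {p:.4f})")
```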

Neuroimaging Characterization of Acute Traumatic Brain Injury with Focus on Frontline Clinicians: Recommendations from the 2024 National Institute of Neurological Disorders and Stroke Traumatic Brain Injury Classification and Nomenclature Initiative Imaging Working Group.

Mac Donald CL, Yuh EL, Vande Vyvere T, Edlow BL, Li LM, Mayer AR, Mukherjee P, Newcombe VFJ, Wilde EA, Koerte IK, Yurgelun-Todd D, Wu YC, Duhaime AC, Awwad HO, Dams-O'Connor K, Doperalski A, Maas AIR, McCrea MA, Umoh N, Manley GT

PubMed · May 20, 2025
Neuroimaging screening and surveillance are among the first frontline diagnostic tools leveraged in the acute assessment (first 24 h postinjury) of patients with suspected traumatic brain injury (TBI). While imaging, in particular computed tomography, is used almost universally in emergency departments worldwide to evaluate possible features of TBI, there is currently no agreed-upon reporting system, standard terminology, or framework to contextualize brain imaging findings with other available medical, psychosocial, and environmental data. In 2023, the NIH-National Institute of Neurological Disorders and Stroke convened six working groups of international experts in TBI to develop a new framework for nomenclature and classification. The goal of this effort was to propose a more granular system of injury classification that incorporates recent progress in imaging biomarkers, blood-based biomarkers, and injury and recovery modifiers to replace the commonly used Glasgow Coma Scale-based diagnosis groups of mild, moderate, and severe TBI, which have shown relatively poor diagnostic, prognostic, and therapeutic utility. Motivated by prior efforts to standardize the nomenclature for pathoanatomic imaging findings of TBI for research and clinical trials, along with more recent studies supporting the refinement of the originally proposed definitions, the Imaging Working Group sought to update and expand this application specifically for consideration of use in clinical practice. Here we report the recommendations of this working group to enable the translation of structured imaging common data elements to the standard of care. These leverage recent advances in imaging technology, electronic medical record (EMR) systems, and artificial intelligence (AI), along with input from key stakeholders, including patients with lived experience, caretakers, providers across medical disciplines, radiology industry partners, and policymakers. It was recommended that (1) the definitions of key imaging features used for this system of classification be updated and further refined as new evidence of the underlying pathology driving the signal change is identified; (2) an efficient, integrated tool embedded in the EMR imaging reporting system be developed in collaboration with industry partners; (3) this tool include AI-generated, evidence-based feature clusters with diagnostic, prognostic, and therapeutic implications; and (4) a "patient translator" be developed in parallel to assist patients and families in understanding these imaging features. In addition, important disclaimers would be provided regarding known limitations of current technology until such time as they are overcome, such as resolution and sequence parameter considerations. The end goal is a multifaceted TBI characterization model incorporating clinical, imaging, blood biomarker, and psychosocial and environmental modifiers to better serve patients not only acutely but also through the postinjury continuum in the days, months, and years that follow TBI.

AI-powered integration of multimodal imaging in precision medicine for neuropsychiatric disorders.

Huang W, Shu N

PubMed · May 20, 2025
Neuropsychiatric disorders have complex pathological mechanisms, pronounced clinical heterogeneity, and a prolonged preclinical phase, which presents a challenge for early diagnosis and the development of precise intervention strategies. With the development of large-scale multimodal neuroimaging datasets and the advancement of artificial intelligence (AI) algorithms, the integration of multimodal imaging with AI techniques has emerged as a pivotal avenue for early detection and the tailoring of individualized treatment for neuropsychiatric disorders. To support these advances, in this review we outline multimodal neuroimaging techniques, AI methods, and strategies for multimodal data fusion. We highlight applications of multimodal, neuroimaging-based AI in precision medicine for neuropsychiatric disorders, discussing challenges in clinical adoption, emerging solutions, and future directions.
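
As a concrete illustration of one data-fusion strategy such reviews commonly cover, here is a minimal sketch of feature-level (intermediate) fusion by concatenating per-modality embeddings; the modality names, dimensions, and classifier head are assumptions for illustration, not a method from this review.

```python
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    """Concatenates per-modality embeddings before a shared classifier head."""
    def __init__(self, dim_mri: int = 128, dim_pet: int = 64, n_classes: int = 2):
        super().__init__()
        self.mri_enc = nn.Sequential(nn.Linear(dim_mri, 64), nn.ReLU())
        self.pet_enc = nn.Sequential(nn.Linear(dim_pet, 64), nn.ReLU())
        self.head = nn.Linear(64 + 64, n_classes)  # fuse by concatenation

    def forward(self, mri_feat: torch.Tensor, pet_feat: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.mri_enc(mri_feat), self.pet_enc(pet_feat)], dim=-1)
        return self.head(z)

model = FeatureFusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 64))  # batch of 8 subjects
```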

Pancreas segmentation in CT scans: A novel MOMUNet based workflow.

Juwita J, Hassan GM, Datta A

PubMed · May 20, 2025
Automatic pancreas segmentation in CT scans is crucial for various medical applications, including early diagnosis and computer-assisted surgery. However, existing segmentation methods remain suboptimal due to significant pancreas size variations across slices and severe class imbalance caused by the pancreas's small size and CT scanner movement during imaging. Traditional computer vision techniques struggle with these challenges, while deep learning-based approaches, despite their success in other domains, still face limitations in pancreas segmentation. To address these issues, we propose a novel three-stage workflow that enhances segmentation accuracy and computational efficiency. First, we introduce External Contour Cropping (ECC), a background cleansing technique that mitigates class imbalance. Second, we propose a Size Ratio (SR) technique that restructures the training dataset based on the relative size of the target organ, improving the robustness of the model against anatomical variations. Third, we develop MOMUNet, an ultra-lightweight segmentation model with only 1.31 million parameters, designed for optimal performance on limited computational resources. Our proposed workflow improves the Dice Score (DSC) by 2.56% over state-of-the-art (SOTA) models on the NIH-Pancreas dataset and by 2.97% on the MSD-Pancreas dataset. Furthermore, applying the proposed model to another small organ, colon cancer segmentation in the MSD-Colon dataset, yielded a DSC of 68.4%, surpassing the SOTA models. These results demonstrate the effectiveness of our approach in significantly improving segmentation accuracy for small abdominal organs, including the pancreas and colon, making deep learning more accessible for low-resource medical facilities.
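
For reference, the Dice Score (DSC) reported above can be computed as in the following minimal sketch; this is a standard formulation with a common smoothing constant, not the paper's exact implementation, and the masks are hypothetical placeholders.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical 3D masks: predicted vs. ground-truth pancreas segmentation.
pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
gt = np.zeros((64, 64, 64), dtype=bool); gt[22:42, 20:40, 20:40] = True
print(f"DSC = {dice_score(pred, gt):.3f}")
```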

Expert-guided StyleGAN2 image generation elevates AI diagnostic accuracy for maxillary sinus lesions.

Zeng P, Song R, Chen S, Li X, Li H, Chen Y, Gong Z, Cai G, Lin Y, Shi M, Huang K, Chen Z

PubMed · May 20, 2025
The progress of artificial intelligence (AI) research in dental medicine is hindered by data acquisition challenges and imbalanced distributions. These problems are especially apparent when developing AI-based diagnostic or analytic tools for lesions such as maxillary sinus lesions (MSL), including mucosal thickening and polypoid lesions. Traditional unsupervised generative models struggle to simultaneously control image realism, diversity, and lesion-type specificity. This study establishes an expert-guided framework that overcomes these limitations to elevate AI-based diagnostic accuracy. A StyleGAN2 framework was developed for generating clinically relevant MSL images (mucosal thickening and polypoid lesions) under expert control. The generated images were then integrated into training datasets to evaluate their effect on ResNet50's diagnostic performance. Here we show that: 1) both lesion subtypes achieve satisfactory fidelity metrics, with structural similarity indices (SSIM > 0.996), maximum mean discrepancy values (MMD < 0.032), and clinical validation scores close to those of real images; and 2) integrating baseline datasets with synthetic images significantly enhances diagnostic accuracy for both internal and external test sets, improving the area under the precision-recall curve (AUPRC) by approximately 8% and 14% for mucosal thickening and polypoid lesions in the internal test set, respectively. The StyleGAN2-based image generation tool effectively addressed data scarcity and imbalance through high-quality MSL image synthesis, consequently boosting diagnostic model performance. This work not only facilitates AI-assisted preoperative assessment for maxillary sinus lift procedures but also establishes a methodological framework for overcoming data limitations in medical image analysis.
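
A minimal sketch of the SSIM fidelity check referenced above (SSIM > 0.996), using scikit-image; the image pair here is a random placeholder, not the study's data or evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Placeholder "real" and "synthetic" image pair (random noise stand-ins).
rng = np.random.default_rng(0)
real = rng.random((256, 256)).astype(np.float32)
synthetic = np.clip(real + 0.01 * rng.standard_normal((256, 256)).astype(np.float32), 0, 1)

score = ssim(real, synthetic, data_range=1.0)
print(f"SSIM = {score:.4f}")
```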

A multi-modal model integrating MRI habitat and clinicopathology to predict platinum sensitivity in patients with high-grade serous ovarian cancer: a diagnostic study.

Bi Q, Ai C, Meng Q, Wang Q, Li H, Zhou A, Shi W, Lei Y, Wu Y, Song Y, Xiao Z, Li H, Qiang J

PubMed · May 20, 2025
Platinum resistance in high-grade serous ovarian cancer (HGSOC) cannot currently be recognized by specific molecular biomarkers. We aimed to compare the predictive capacity of various models integrating MRI habitat, whole slide images (WSIs), and clinical parameters to predict platinum sensitivity in HGSOC patients. A retrospective study involving 998 eligible patients from four hospitals was conducted. MRI habitats were clustered using the K-means algorithm on multi-parametric MRI. Following feature extraction and selection, a Habitat model was developed. A Vision Transformer (ViT) and multi-instance learning were trained to derive patch-level and WSI-level predictions on hematoxylin and eosin (H&E)-stained WSIs, respectively, forming a Pathology model. Logistic regression (LR) was used to create a Clinic model. A multi-modal model integrating Clinic, Habitat, and Pathology (CHP) was constructed using Multi-Head Attention (MHA) and compared with the unimodal models and with Ensemble multi-modal models. The area under the curve (AUC) and the integrated discrimination improvement (IDI) value were used to assess model performance and gains. In the internal validation cohort and the external test cohort, the Habitat model showed higher AUCs (0.722 and 0.685) than the Clinic model (0.683 and 0.681) and the Pathology model (0.533 and 0.565), respectively. The AUCs (0.789 and 0.807) of the MHA-based multi-modal model integrating CHP were higher than those of any unimodal model or Ensemble multi-modal model, with positive IDI values. MRI-based habitat imaging showed potential to predict platinum sensitivity in HGSOC patients. Multi-modal integration of CHP based on MHA helped to improve prediction performance.
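
As an illustration of the habitat-clustering step described above, here is a minimal sketch of K-means applied to voxel-wise multi-parametric MRI features; the feature set, number of clusters, and array shapes are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-voxel intensities from co-registered sequences
# (e.g., T2WI, ADC, DCE) within the tumor mask: shape (n_voxels, n_sequences).
rng = np.random.default_rng(0)
features = rng.random((5000, 3))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
habitat_labels = kmeans.labels_  # one habitat label per tumor voxel
print(np.bincount(habitat_labels))  # voxel count per habitat
```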

"DCSLK: Combined Large Kernel Shared Convolutional Model with Dynamic Channel Sampling".

Li Z, Luo S, Li H, Li Y

PubMed · May 20, 2025
This study centers on the competition between convolutional neural networks (CNNs) with large convolutional kernels and Vision Transformers in computer vision, delving into the parameter counts and computational complexity that stem from large convolutional kernels. Even though kernel sizes have been extended up to 51×51, performance gains have plateaued, and striped convolution incurs a performance degradation. Inspired by the hierarchical visual processing mechanism in humans, this research incorporates a shared-parameter mechanism for large convolutional kernels. It synergizes the expanded receptive field enabled by large kernels with the fine-grained feature extraction facilitated by small kernels. To address the surging number of parameters, a carefully designed parameter-sharing mechanism is employed, featuring fine-grained processing in the central region of the convolutional kernel and wide-ranging parameter sharing in the periphery. This not only curtails the parameter count and mitigates model complexity but also sustains the model's capacity to capture extensive spatial relationships. Additionally, to address spatial feature information loss and increased memory access during the 1×1 convolutional channel compression phase, this study further puts forward a dynamic channel sampling approach, which markedly elevates the accuracy of tumor subregion segmentation. To validate the efficacy of the proposed methodology, a comprehensive evaluation was conducted on three brain tumor segmentation datasets: BraTS2020, BraTS2024, and Medical Segmentation Decathlon Brain 2018. The experimental results show that the proposed model surpasses current mainstream ConvNet and Transformer architectures across all performance metrics, offering novel research perspectives and technical strategies for medical image segmentation.
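
A minimal PyTorch sketch of the general large-kernel-plus-small-kernel idea described above; it pairs a depthwise large kernel with a small-kernel branch but does not reproduce the paper's peripheral parameter-sharing scheme or dynamic channel sampling, and all kernel sizes and channel counts are illustrative.

```python
import torch
import torch.nn as nn

class LargeSmallKernelBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Depthwise large kernel: wide receptive field at modest parameter cost.
        self.large = nn.Conv2d(channels, channels, kernel_size=31, padding=15,
                               groups=channels)
        # Small-kernel branch for fine-grained local features.
        self.small = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                               groups=channels)
        # 1x1 convolution to mix channels after the spatial branches.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.large(x) + self.small(x))

y = LargeSmallKernelBlock()(torch.randn(1, 64, 128, 128))
print(y.shape)  # torch.Size([1, 64, 128, 128]) -- spatial size preserved
```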

Artificial Intelligence and Musculoskeletal Surgical Applications.

Oettl FC, Zsidai B, Oeding JF, Samuelsson K

PubMed · May 20, 2025
Artificial intelligence (AI) has emerged as a transformative force in orthopedic surgery. Spanning pre-, intra-, and postoperative processes, it can analyze complex medical imaging, provide real-time surgical guidance, and mine large datasets for outcome prediction and optimization. AI has shown improvements in surgical precision, efficiency, and patient outcomes across orthopedic subspecialties, and large language models and agentic AI systems are expanding AI's utility beyond surgical applications into areas such as clinical documentation, patient education, and autonomous decision support. Successful implementation of AI in orthopedic surgery requires careful attention to validation, regulatory compliance, and healthcare system integration. As these technologies continue to advance, maintaining the balance between innovation and patient safety remains crucial, with the ultimate goal of achieving more personalized, efficient, and equitable healthcare delivery while preserving the essential role of human clinical judgment. This review examines the current landscape and future trajectory of AI applications in orthopedic surgery, highlighting both technological advances and their clinical impact. Studies have suggested that AI-assisted procedures achieve higher accuracy and better functional outcomes than conventional methods, while reducing operative times and complications. However, these technologies are designed to augment rather than replace clinical expertise, serving as sophisticated tools to enhance surgeons' capabilities and improve patient care.

New approaches to lesion assessment in multiple sclerosis.

Preziosa P, Filippi M, Rocca MA

PubMed · May 19, 2025
To summarize recent advancements in artificial intelligence-driven lesion segmentation and novel neuroimaging modalities that enhance the identification and characterization of multiple sclerosis (MS) lesions, emphasizing their implications for clinical use and research. Artificial intelligence, particularly deep learning, is revolutionizing MS lesion assessment and segmentation, improving accuracy, reproducibility, and efficiency. AI-based tools now enable automated detection not only of T2-hyperintense white matter lesions, but also of specific lesion subtypes, including gadolinium-enhancing, central vein sign-positive, paramagnetic rim, cortical, and spinal cord lesions, which hold diagnostic and prognostic value. Novel neuroimaging techniques such as quantitative susceptibility mapping (QSM), χ-separation imaging, and soma and neurite density imaging (SANDI), together with PET, are providing deeper insights into lesion pathology, better disentangling lesion heterogeneity and clinical relevance. AI-powered lesion segmentation tools hold great potential for fast, accurate, and reproducible lesion assessment in the clinical setting, thereby improving MS diagnosis, monitoring, and treatment response assessment. Emerging neuroimaging modalities may help advance the understanding of MS pathophysiology, provide more specific markers of disease progression, and identify novel potential therapeutic targets.

Effectiveness of Artificial Intelligence in detecting sinonasal pathology using clinical imaging modalities: a systematic review.

Petsiou DP, Spinos D, Martinos A, Muzaffar J, Garas G, Georgalas C

PubMed · May 19, 2025
Sinonasal pathology can be complex and requires a systematic and meticulous approach. Artificial intelligence (AI) has the potential to improve diagnostic accuracy and efficiency in sinonasal imaging, but its clinical applicability remains an area of ongoing research. This systematic review evaluates the methodologies and clinical relevance of AI in detecting sinonasal pathology through radiological imaging. Key search terms included "artificial intelligence," "deep learning," "machine learning," "neural network," and "paranasal sinuses." Abstract and full-text screening was conducted using predefined inclusion and exclusion criteria. Data were extracted on study design, AI architectures used (e.g., convolutional neural networks (CNNs), machine learning classifiers), and clinical characteristics such as imaging modality (e.g., computed tomography (CT), magnetic resonance imaging (MRI)). A total of 53 studies were analyzed: 85% were retrospective, 68% single-center, and 92.5% used internal databases. CT was the most common imaging modality (60.4%), and chronic rhinosinusitis without nasal polyposis (CRSsNP) was the most studied condition (34.0%). Forty-one studies employed neural networks, with classification the most frequent AI task (35.8%). Key performance metrics included area under the curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score. Quality assessment based on CONSORT-AI yielded a mean score of 16.0 ± 2. AI shows promise in improving sinonasal imaging interpretation. However, as existing research is predominantly retrospective and single-center, further studies are needed to evaluate AI's generalizability and applicability. More research is also required to explore AI's role in treatment planning and post-treatment prediction for clinical integration.
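
For reference, the performance metrics tallied across the reviewed studies can be computed as in this minimal scikit-learn sketch; the labels and probabilities are hypothetical placeholders, not data from any reviewed study.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Hypothetical binary predictions for a pathology-detection task.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.55])
y_pred = (y_prob >= 0.5).astype(int)

sensitivity = recall_score(y_true, y_pred)               # recall of class 1
specificity = recall_score(y_true, y_pred, pos_label=0)  # recall of class 0
print(f"acc={accuracy_score(y_true, y_pred):.2f} sens={sensitivity:.2f} "
      f"spec={specificity:.2f} prec={precision_score(y_true, y_pred):.2f} "
      f"F1={f1_score(y_true, y_pred):.2f} AUC={roc_auc_score(y_true, y_prob):.2f}")
```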