Page 1 of 325 results

Evolution of Cortical Lesions and Function-Specific Cognitive Decline in People With Multiple Sclerosis.

Krijnen EA, Jelgerhuis J, Van Dam M, Bouman PM, Barkhof F, Klawiter EC, Hulst HE, Strijbis EMM, Schoonheim MM

pubmed logopapers | Jun 1 2025
Cortical lesions in multiple sclerosis (MS) severely affect cognition, but their longitudinal evolution and impact on specific cognitive functions remain understudied. This study investigates the evolution of function-specific cognitive functioning over 10 years in people with MS and assesses the influence of cortical lesion load and formation on these trajectories. In this prospectively collected study, people with MS underwent 3T MRI (T1 and fluid-attenuated inversion recovery) at 3 study visits between 2008 and 2022. Cognitive functioning was evaluated with a neuropsychological assessment covering 7 cognitive functions: attention; executive functioning (EF); information processing speed (IPS); verbal fluency; and verbal, visuospatial, and working memory. Cortical lesions were manually identified on artificial intelligence-generated double-inversion recovery images. Linear mixed models were constructed to assess the temporal association between cortical lesion load and function-specific cognitive decline. In addition, analyses were stratified by MS disease stage: early and late relapsing-remitting MS (cutoff disease duration at 15 years) and progressive MS. The study included 223 people with MS (mean age, 47.8 ± 11.1 years; 153 women) and 62 healthy controls. All completed 5-year follow-up, and 37 healthy controls and 94 people with MS completed 10-year follow-up. At baseline, people with MS exhibited worse functioning of IPS and working memory. Over 10 years, cognitive decline was most severe in attention, verbal memory, and EF. At baseline, people with MS had a median cortical lesion count of 7 (range 0-73), which was related to subsequent decline in attention (B [95% CI] = -0.22 [-0.40 to -0.03]) and verbal fluency (B [95% CI] = -0.23 [-0.37 to -0.09]). Over time, cortical lesions increased by a median count of 4 (range -2 to 71), particularly in late and progressive disease, an increase that was related to decline in verbal fluency (B [95% CI] = -0.33 [-0.51 to -0.15]).
The associations between (change in) cortical lesion load and cognitive decline were not modified by MS disease stage. Cognition worsened over 10 years, particularly in attention, verbal memory, and EF, while preexisting impairments were worst in other functions such as IPS. Worse baseline cognitive functioning was related to baseline cortical lesions, whereas baseline cortical lesions and cortical lesion formation were related to subsequent decline in functions less affected at baseline. Accumulating cortical damage thus leads to a spread of cognitive impairment toward additional functions.
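The study's linear mixed models relate baseline cortical lesion load to the rate of cognitive change over the visits. A minimal two-stage sketch of that idea, on fully synthetic data (subject counts, effect sizes, and variable names here are illustrative, not the study's): fit a per-subject slope of cognitive score versus time, then regress those slopes on baseline lesion count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: 50 subjects, 3 visits (years 0, 5, 10).
# Higher baseline cortical lesion count -> steeper simulated decline.
n_subjects, years = 50, np.array([0.0, 5.0, 10.0])
lesions = rng.integers(0, 30, n_subjects)           # baseline lesion count
true_slope = -0.02 * lesions                        # decline per year, scaled by lesions
scores = (true_slope[:, None] * years
          + rng.normal(0.0, 0.1, (n_subjects, 3)))  # cognitive z-scores over time

# Stage 1: per-subject rate of change (slope of score vs. time).
slopes = np.array([np.polyfit(years, s, 1)[0] for s in scores])

# Stage 2: regress individual slopes on baseline lesion load.
b, a = np.polyfit(lesions, slopes, 1)               # slope, intercept
print(f"estimated effect of one extra lesion on annual decline: {b:.4f}")
```

A true mixed model (e.g., random intercepts and slopes fit jointly) pools information across subjects and handles unbalanced follow-up, which this two-stage shortcut does not.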

Computer-aided assessment for enlarged fetal heart with deep learning model.

Nurmaini S, Sapitri AI, Roseno MT, Rachmatullah MN, Mirani P, Bernolian N, Darmawahyuni A, Tutuko B, Firdaus F, Islami A, Arum AW, Bastian R

pubmed logopapers | May 16 2025
Enlarged fetal heart conditions may indicate congenital heart disease or other complications, making early detection through prenatal ultrasound essential. However, manual assessments by sonographers are often subjective, time-consuming, and inconsistent. This paper proposes a deep learning approach using the You Only Look Once (YOLO) architecture to automate fetal heart enlargement assessment. On a set of ultrasound videos, YOLOv8 with a CBAM module demonstrated superior performance compared with YOLOv11 with self-attention. Incorporating the ResNeXtBlock, a residual network block with cardinality, further enhanced accuracy and prediction consistency. The model exhibits strong capability in detecting fetal heart enlargement, offering a reliable computer-aided tool for sonographers during prenatal screenings. Further validation is required to confirm its clinical applicability. By improving early and accurate detection, this approach has the potential to enhance prenatal care, facilitate timely interventions, and contribute to better neonatal health outcomes.
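When YOLO-style detectors such as those compared above are evaluated, a predicted box is typically scored against a reference box by intersection-over-union (IoU). The abstract does not give the authors' evaluation code; the following is a generic, self-contained IoU sketch of that standard metric.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is commonly counted as correct when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

The 0.5 threshold is a common convention, not a value stated in this paper.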

Challenges in Implementing Artificial Intelligence in Breast Cancer Screening Programs: Systematic Review and Framework for Safe Adoption.

Goh S, Goh RSJ, Chong B, Ng QX, Koh GCH, Ngiam KY, Hartman M

pubmed logopapers | May 15 2025
Artificial intelligence (AI) studies show promise in enhancing accuracy and efficiency in mammographic screening programs worldwide. However, its integration into clinical workflows faces several challenges, including unintended errors, the need for professional training, and ethical concerns. Notably, specific frameworks for AI imaging in breast cancer screening are still lacking. This study aims to identify the challenges associated with implementing AI in breast screening programs and to apply the Consolidated Framework for Implementation Research (CFIR) to discuss a practical governance framework for AI in this context. Three electronic databases (PubMed, Embase, and MEDLINE) were searched using combinations of the keywords "artificial intelligence," "regulation," "governance," "breast cancer," and "screening." Original studies evaluating AI in breast cancer detection or discussing challenges related to AI implementation in this setting were eligible for review. Findings were narratively synthesized and subsequently mapped directly onto the constructs within the CFIR. A total of 1240 results were retrieved, with 20 original studies ultimately included in this systematic review. The majority (n=19) focused on AI-enhanced mammography, while 1 addressed AI-enhanced ultrasound for women with dense breasts. Most studies originated from the United States (n=5) and the United Kingdom (n=4), with publication years ranging from 2019 to 2023. The quality of papers was rated as moderate to high. The key challenges identified were reproducibility, evidentiary standards, technological concerns, trust issues, ethical, legal, and societal concerns, and postadoption uncertainty. By aligning these findings with the CFIR constructs, action plans targeting the main challenges were incorporated into the framework, facilitating a structured approach to addressing these issues.
This systematic review identifies key challenges in implementing AI in breast cancer screening, emphasizing the need for consistency, robust evidentiary standards, technological advancements, user trust, ethical frameworks, legal safeguards, and societal benefits. These findings can serve as a blueprint for policy makers, clinicians, and AI developers to collaboratively advance AI adoption in breast cancer screening. PROSPERO CRD42024553889; https://tinyurl.com/mu4nwcxt.

Automated Microbubble Discrimination in Ultrasound Localization Microscopy by Vision Transformer.

Wang R, Lee WN

pubmed logopapers | May 15 2025
Ultrasound localization microscopy (ULM) has revolutionized microvascular imaging by breaking the acoustic diffraction limit. However, different ULM workflows depend heavily on distinct prior knowledge, such as the impulse response and empirical selection of parameters (e.g., the number of microbubbles (MBs) per frame M), or the consistency between training and test datasets in deep learning (DL)-based studies. We hereby propose a general ULM pipeline that reduces such priors. Our approach leverages a DL model that simultaneously distills microbubble signals and reduces speckle from every frame without estimating the impulse response or M. Our method features an efficient channel attention vision transformer (ViT) and a progressive learning strategy, enabling it to learn global information through training on progressively increasing patch sizes. Ample synthetic data were generated using the k-Wave toolbox to simulate various MB patterns, thus overcoming the deficiency of labeled data. The ViT output was further processed by a standard radial symmetry method for sub-pixel localization. Our method performed well on model-unseen public datasets: one in silico dataset with ground truth and four in vivo datasets of mouse tumor, rat brain, rat brain bolus, and rat kidney. Our pipeline outperformed conventional ULM, achieving higher positive predictive values (precision in DL, 0.88-0.41 vs. 0.83-0.16) and improved accuracy (root-mean-square errors: 0.25-0.14 λ vs. 0.31-0.13 λ) across a range of signal-to-noise ratios from 60 dB to 10 dB. Our model could detect more vessels in diverse in vivo datasets while achieving comparable resolutions to the standard method. The proposed ViT-based model, seamlessly integrated with state-of-the-art downstream ULM steps, improved the overall ULM performance with no priors.
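The pipeline's final step localizes each distilled microbubble to sub-pixel precision; the paper uses a radial symmetry method for this. As a simpler stand-in that illustrates the same idea (the radial symmetry algorithm itself is more involved), here is an intensity-weighted centroid on a synthetic off-grid Gaussian spot; the spot parameters are invented for illustration.

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a 2-D patch, in (row, col) coordinates."""
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Synthetic microbubble: Gaussian spot centered off-grid at (7.3, 4.6).
r, c = np.indices((16, 16))
spot = np.exp(-((r - 7.3) ** 2 + (c - 4.6) ** 2) / (2 * 1.5 ** 2))
print(subpixel_centroid(spot))  # close to (7.3, 4.6), i.e. below one-pixel error
```

Radial symmetry localization is preferred over the plain centroid in practice because it is less biased by background and truncation near patch edges.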

Application of artificial intelligence medical imaging aided diagnosis system in the diagnosis of pulmonary nodules.

Yang Y, Wang P, Yu C, Zhu J, Sheng J

pubmed logopapers | May 14 2025
Artificial intelligence (AI) technology has transformed production and daily life and has also driven rapid development in the medical field, where intelligent applications are increasingly common. Using advanced AI methods and technologies, this paper aims to integrate a medical imaging-aided diagnosis system with AI in order to analyze and address the gaps and errors of traditional manual diagnosis of pulmonary nodules. Drawing on the principles of image segmentation methods, a medical imaging-aided diagnosis system was constructed and optimized to improve the precision of pulmonary nodule diagnosis. In a comparison of traditional manual reading and the medical imaging-aided diagnosis system, 200 cases containing 231 nodules confirmed by pathology or by follow-up stability of more than two years were tested. The results showed that the AI software detected a total of 881 true nodules with a sensitivity of 99.10% (881/889), whereas the radiologists detected 385 true nodules with a sensitivity of 43.31% (385/889). The sensitivity of the AI software in detecting non-calcified nodules was significantly higher than that of the radiologists (99.01% vs 43.30%, P < 0.001), a statistically significant difference.

Improving AI models for rare thyroid cancer subtype by text guided diffusion models.

Dai F, Yao S, Wang M, Zhu Y, Qiu X, Sun P, Qiu C, Yin J, Shen G, Sun J, Wang M, Wang Y, Yang Z, Sang J, Wang X, Sun F, Cai W, Zhang X, Lu H

pubmed logopapers | May 13 2025
Artificial intelligence applications in oncology imaging often struggle with diagnosing rare tumors. We identify significant gaps in detecting uncommon thyroid cancer types with ultrasound, where scarce data leads to frequent misdiagnosis. Traditional augmentation strategies do not capture the unique disease variations, hindering model training and performance. To overcome this, we propose a text-driven generative method that fuses clinical insights with image generation, producing synthetic samples that realistically reflect rare subtypes. In rigorous evaluations, our approach achieves substantial gains in diagnostic metrics, surpasses existing methods in authenticity and diversity measures, and generalizes effectively to other private and public datasets with various rare cancers. In this work, we demonstrate that text-guided image augmentation substantially enhances model accuracy and robustness for rare tumor detection, offering a promising avenue for more reliable and widespread clinical adoption.

Deep Learning for Detecting Periapical Bone Rarefaction in Panoramic Radiographs: A Systematic Review and Critical Assessment.

da Silva-Filho JE, da Silva Sousa Z, de-Araújo APC, Fornagero LDS, Machado MP, de Aguiar AWO, Silva CM, de Albuquerque DF, Gurgel-Filho ED

pubmed logopapers | May 12 2025
To evaluate deep learning (DL)-based models for detecting periapical bone rarefaction (PBRs) in panoramic radiographs (PRs), analyzing their feasibility and performance in dental practice. A search was conducted across seven databases and partial grey literature up to November 15, 2024, using Medical Subject Headings and entry terms related to DL, PBRs, and PRs. Studies assessing DL-based models for detecting and classifying PBRs in conventional PRs were included, while those using non-PR imaging or focusing solely on non-PBR lesions were excluded. Two independent reviewers performed screening, data extraction, and quality assessment using the Quality Assessment of Diagnostic Accuracy Studies-2 tool, with conflicts resolved by a third reviewer. Twelve studies met the inclusion criteria, mostly from Asia (58.3%). The risk of bias was moderate in 10 studies (83.3%) and high in 2 (16.7%). DL models showed moderate to high performance in PBR detection (sensitivity: 26-100%; specificity: 51-100%), with U-NET and YOLO being the most used algorithms. Only one study (8.3%) distinguished periapical granulomas from periapical cysts, revealing a classification gap. Key challenges included limited generalization due to small datasets, anatomical superimpositions in PRs, and variability in reported metrics, which compromised comparison between models. This review underscores that DL-based models have the potential to become a valuable tool in dental image diagnostics, but they cannot yet be considered a definitive practice. Multicenter collaboration is needed to diversify data and democratize these tools, and standardized performance reporting is critical for fair comparison between models.

[Pulmonary vascular interventions: innovating through adaptation and advancing through differentiation].

Li J, Wan J

pubmed logopapers | May 12 2025
Pulmonary vascular intervention technology, with its minimally invasive and precise advantages, has been a groundbreaking advancement in the treatment of pulmonary vascular diseases. Techniques such as balloon pulmonary angioplasty (BPA), pulmonary artery stenting, and percutaneous pulmonary artery denervation (PADN) have significantly improved the prognoses for conditions such as chronic thromboembolic pulmonary hypertension (CTEPH), pulmonary artery stenosis, and pulmonary arterial hypertension (PAH). Although based on percutaneous coronary intervention (PCI) techniques such as guidewire manipulation and balloon dilatation, pulmonary vascular interventions require specific modifications to address the unique characteristics of the pulmonary circulation (low pressure, thin-walled vessels, and complex branching) and to mitigate the risks of perforation and thrombosis. Future directions include the development of dedicated instruments, multi-modality imaging guidance, artificial intelligence-assisted procedures, and molecular interventional therapies. These innovations aim to establish an independent theoretical framework for pulmonary vascular interventions, facilitating their transition from "adjuvant therapies" to "core treatments" in clinical practice.

Real-world Evaluation of Computer-aided Pulmonary Nodule Detection Software Sensitivity and False Positive Rate.

El Alam R, Jhala K, Hammer MM

pubmed logopapers | May 12 2025
To evaluate the false positive rate (FPR) of nodule detection software in real-world use, a total of 250 nonenhanced chest computed tomography (CT) examinations were randomly selected from an academic institution and submitted to the ClearRead nodule detection system (Riverain Technologies). Detected findings were reviewed by a thoracic imaging fellow. Nodules were classified as true nodules, lymph nodes, or other findings (branching opacity, vessel, mucus plug, etc.), and the FPR was recorded and compared with the FPR initially published in the literature. True diagnosis was based on pathology or follow-up stability. For cases with malignant nodules, we recorded whether malignancy was detected by the clinical radiology report (which was produced without software assistance) and/or by ClearRead. Twenty-one CTs were excluded due to a lack of thin-slice images, leaving 229 CTs. A total of 594 findings were reported by ClearRead, of which 362 (61%) were true nodules and 232 (39%) were other findings. Of the true nodules, 297 were solid nodules, of which 79 (27%) were intrapulmonary lymph nodes. The mean number of findings identified by ClearRead per scan was 2.59. ClearRead's mean FPR was 1.36 per scan, greater than the published rate of 0.58 (P<0.0001); if true lung nodules <6 mm are also considered false positives, the FPR is 2.19. A malignant nodule was present in 30 scans; ClearRead identified it in 26 (87%) and the clinical report in 28 (93%) (P=0.32). In real-world use, ClearRead had a much higher FPR than initially reported but a similar sensitivity for malignant nodule detection compared with unassisted radiologists.
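The per-scan FPR arithmetic behind these numbers can be sketched from the counts in the abstract. Note the assumption (mine, not stated explicitly in the abstract): intrapulmonary lymph nodes are counted as false positives alongside the 232 non-nodule findings, which reproduces the reported 1.36.

```python
# Counts from the abstract: 594 ClearRead findings on 229 CTs, of which
# 362 were true nodules (79 of them intrapulmonary lymph nodes) and 232
# were other findings (vessels, mucus plugs, branching opacities, ...).
n_scans = 229
other_findings = 232
lymph_nodes = 79                      # assumed counted as false positives

false_positives = other_findings + lymph_nodes
fpr = false_positives / n_scans       # false positives per scan
print(round(fpr, 2))                  # 1.36, matching the reported FPR

# Sensitivity for malignant nodules: 26 of 30 detected by ClearRead.
print(round(26 / 30 * 100))           # 87 (%)
```

The higher 2.19 figure would additionally reclassify true nodules <6 mm as false positives, per the abstract.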

Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

H M Dipu Kabir, Subrota Kumar Mondal, Mohammad Ali Moni

arxiv logopreprint | May 10 2025
This paper proposes batch augmentation with unimodal fine-tuning to detect fetal organs from ultrasound images and associated clinical textual information. We also recommend pre-training the initial layers on the investigated medical data before multimodal training. First, we apply a transferred initialization to the unimodal image portion of the dataset with batch augmentation, which adjusts the initial layer weights for medical data. Then, we apply neural networks (NNs) with fine-tuned initial layers to images in batches, with batch augmentation, to obtain features. We also extract information from the descriptions of images and combine it with the image features to train the head layer. We write a dataloader script that loads the multimodal data and applies existing unimodal image augmentation techniques with batch augmentation, drawing a new random augmentation for each batch for better generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods, achieving near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share the scripts of the proposed method and traditional counterparts at the following repository: github.com/dipuk0506/multimodal
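The dataloader idea above (a fresh random augmentation drawn per batch, not per sample) can be sketched in plain Python; this is my illustrative rendering of the described behavior, not the authors' script, and the toy data and function names are hypothetical.

```python
import random

def batched_loader(samples, batch_size, augmentations, seed=None):
    """Yield shuffled batches, applying one freshly drawn augmentation per batch."""
    rng = random.Random(seed)
    order = list(range(len(samples)))
    rng.shuffle(order)
    for start in range(0, len(order), batch_size):
        augment = rng.choice(augmentations)  # new random augmentation each batch
        yield [augment(samples[i]) for i in order[start:start + batch_size]]

# Toy "images" as numbers; identity/negate/shift stand in for image transforms.
data = list(range(8))
augs = [lambda x: x, lambda x: -x, lambda x: x + 100]
for batch in batched_loader(data, batch_size=4, augmentations=augs, seed=0):
    print(batch)
```

Applying one augmentation per batch (rather than per sample) keeps each batch internally consistent while still varying augmentation across training steps.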