
MEF-Net: Multi-scale and edge feature fusion network for intracranial hemorrhage segmentation in CT images.

Zhang X, Zhang S, Jiang Y, Tian L

PubMed | Jun 1, 2025
Intracranial Hemorrhage (ICH) refers to cerebral bleeding resulting from ruptured blood vessels within the brain. Delayed or inaccurate diagnosis and treatment of ICH can lead to death or disability, so early and precise diagnosis is crucial for protecting patients' lives. Automatic segmentation of hematomas in CT images can provide doctors with essential diagnostic support and improve diagnostic efficiency. CT images of intracranial hemorrhage are characterized by multiple scales, multiple targets, and blurred edges. This paper proposes a Multi-scale and Edge Feature Fusion Network (MEF-Net) to effectively extract multi-scale and edge features and fully fuse them through a fusion mechanism. The network first extracts multi-scale features and edge features through the encoder and the edge detection module, respectively, then fuses the deep information and employs a multi-kernel attention module to process the shallow features, enhancing multi-target recognition. Finally, the feature maps from each module are combined to produce the segmentation result. Experimental results indicate that this method achieved average Dice scores of 0.7508 and 0.7443 on two public datasets, respectively, surpassing several state-of-the-art medical image segmentation methods. The proposed MEF-Net significantly improves the accuracy of intracranial hemorrhage segmentation.
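As a rough illustration of the fusion mechanism described in the abstract, the following PyTorch sketch combines upsampled deep encoder features, shallow features processed by a multi-kernel attention module, and edge-branch features. All module names, channel counts, and kernel sizes are assumptions; this is not the authors' implementation.

```python
# Minimal sketch of the fusion idea described above (not the authors' code).
import torch
import torch.nn as nn

class EdgeBranch(nn.Module):
    """Hypothetical edge-detection branch: a shallow conv stack on the input CT slice."""
    def __init__(self, in_ch=1, out_ch=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.conv(x)

class MultiKernelAttention(nn.Module):
    """Hypothetical multi-kernel attention: parallel convs with different kernel
    sizes produce a per-pixel gate applied to the shallow features."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, k, padding=k // 2) for k in (1, 3, 5)]
        )
        self.gate = nn.Sequential(nn.Conv2d(3 * ch, ch, 1), nn.Sigmoid())
    def forward(self, x):
        att = self.gate(torch.cat([b(x) for b in self.branches], dim=1))
        return x * att

class FusionHead(nn.Module):
    """Fuse upsampled deep features, attended shallow features, and edge features."""
    def __init__(self, deep_ch=64, shallow_ch=32, edge_ch=32, n_classes=1):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.mka = MultiKernelAttention(shallow_ch)
        self.out = nn.Conv2d(deep_ch + shallow_ch + edge_ch, n_classes, 1)
    def forward(self, deep, shallow, edge):
        fused = torch.cat([self.up(deep), self.mka(shallow), edge], dim=1)
        return self.out(fused)
```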

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations.

Choi A, Kim HG, Choi MH, Ramasamy SK, Kim Y, Jung SE

PubMed | Jun 1, 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on radiology resident examinations, to analyze differences across question types, and to compare their results with those of residents at different levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two question sets: one originally written in Korean and the other translated into English. We evaluated the performance of GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining the accuracy based on the majority vote from five independent trials. We analyzed their results by question type (text-only vs. image-based) and benchmarked them against nationwide radiology residents' performance. The impact of the input language (Korean or English) on model performance was also examined. GPT-4o outperformed GPT-4 Turbo for both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed performance comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9%, respectively, vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English- and Korean-version questions showed no significant differences for either model (all P ≥ 0.275). GPT-4o outperformed GPT-4 Turbo on all question types. On image-based questions, both models' performance matched that of 1st-year residents but was lower than that of higher-year residents. Both models demonstrated superior performance compared to residents for text-only questions. The models showed consistent performance across English and Korean inputs.
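The evaluation protocol reported above (temperature zero, majority vote over five independent trials) can be sketched as follows with the OpenAI Python SDK. The prompt wording is an assumption; the model identifiers are those reported in the abstract.

```python
# Minimal sketch of the majority-vote evaluation protocol; prompts are assumed.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_by_majority_vote(question: str, model: str = "gpt-4o", n_trials: int = 5) -> str:
    answers = []
    for _ in range(n_trials):
        resp = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[
                {"role": "system", "content": "Answer with the letter of the single best option."},
                {"role": "user", "content": question},
            ],
        )
        answers.append(resp.choices[0].message.content.strip())
    # The most frequent answer across trials is taken as the model's final response.
    return Counter(answers).most_common(1)[0][0]
```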

Conversion of Mixed-Language Free-Text CT Reports of Pancreatic Cancer to National Comprehensive Cancer Network Structured Reporting Templates by Using GPT-4.

Kim H, Kim B, Choi MH, Choi JI, Oh SN, Rha SE

PubMed | Jun 1, 2025
To evaluate the feasibility of generative pre-trained transformer-4 (GPT-4) in generating structured reports (SRs) from mixed-language (English and Korean) narrative-style CT reports for pancreatic ductal adenocarcinoma (PDAC) and to assess its accuracy in categorizing PDAC resectability. This retrospective study included consecutive free-text reports of pancreas-protocol CT for staging PDAC, written in English or Korean at two institutions from January 2021 to December 2023. Both the GPT-4 Turbo and GPT-4o models were provided prompts along with the free-text reports via an application programming interface and tasked with generating SRs and categorizing tumor resectability according to the National Comprehensive Cancer Network (NCCN) guidelines version 2.2024. Prompts were optimized using the GPT-4 Turbo model and 50 reports from Institution B. The performance of the GPT-4 Turbo and GPT-4o models on the two tasks was evaluated using 115 reports from Institution A. Results were compared with a reference standard manually derived by an abdominal radiologist. Each report was processed three times consecutively, with the most frequent response selected as the final output. Error analysis was guided by the decision rationale provided by the models. Of the 115 narrative reports tested, 96 (83.5%) contained both English and Korean. For SR generation, GPT-4 Turbo and GPT-4o demonstrated comparable accuracies (92.3% [1592/1725] and 92.2% [1590/1725], respectively; P = 0.923). In resectability categorization, GPT-4 Turbo showed higher accuracy than GPT-4o (81.7% [94/115] vs. 67.0% [77/115]; P = 0.002). In the error analysis of GPT-4 Turbo, the SR generation error rate was 7.7% (133/1725 items), primarily attributed to inaccurate data extraction (54.1% [72/133]). The resectability categorization error rate was 18.3% (21/115), mainly caused by violation of the resectability criteria (61.9% [13/21]). Both GPT-4 Turbo and GPT-4o demonstrated acceptable accuracy in generating NCCN-based SRs for PDAC from mixed-language narrative reports. However, oversight by human radiologists remains essential when determining resectability based on CT findings.
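A small sketch of the "most frequent of three consecutive runs" selection used for the final output; the category labels below are illustrative assumptions, not the study's exact strings.

```python
# Minimal sketch of the mode-of-three selection for resectability categorization.
from collections import Counter

def final_category(run_outputs: list[str]) -> str:
    """Pick the modal category from three consecutive model runs."""
    return Counter(run_outputs).most_common(1)[0][0]

# Example: two of three runs agree, so "Borderline resectable" is the final output.
print(final_category(["Borderline resectable", "Resectable", "Borderline resectable"]))
```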

Eigenhearts: Cardiac diseases classification using eigenfaces approach.

Groun N, Villalba-Orero M, Casado-Martín L, Lara-Pezzi E, Valero E, Le Clainche S, Garicano-Mena J

PubMed | Jun 1, 2025
In the realm of cardiovascular medicine, medical imaging plays a crucial role in accurately classifying cardiac diseases and making precise diagnoses. However, the integration of data science techniques in this field presents significant challenges, as it requires a large volume of images, while ethical constraints, high costs, and variability in imaging protocols limit data acquisition. It is therefore necessary to investigate different avenues to overcome this challenge. In this contribution, we offer an innovative tool to address this limitation. In particular, we delve into the application of a well-recognized method known as the eigenfaces approach to classify cardiac diseases. This approach was originally developed to represent pictures of faces efficiently using principal component analysis, which provides a set of eigenvectors (also known as eigenfaces) explaining the variation between face images. Given its effectiveness in face recognition, we sought to evaluate its applicability to more complex medical imaging datasets. In particular, we integrate this approach with convolutional neural networks to classify echocardiography images taken from mice in five distinct cardiac conditions (healthy, diabetic cardiomyopathy, myocardial infarction, obesity, and TAC hypertension). The results show a substantial and noteworthy enhancement when employing the singular value decomposition for pre-processing, with classification accuracy increasing by approximately 50%.
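A minimal sketch of the eigenfaces-style preprocessing, assuming vectorized grayscale frames: project centered images onto the leading singular vectors obtained from an SVD, then feed the low-dimensional coefficients (or truncated reconstructions) to a CNN. Shapes and component counts are assumptions for illustration.

```python
# Eigenfaces-style ("eigenhearts") projection via SVD/PCA; not the authors' code.
import numpy as np

def eigenheart_projection(images: np.ndarray, n_components: int = 50):
    """images: (n_samples, height, width) array of grayscale frames."""
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean                      # center the data, as in classical PCA
    # Thin SVD of the centered data matrix; rows of Vt are the eigen-images.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]     # (n_components, h*w) "eigenhearts"
    coeffs = Xc @ components.T         # low-dimensional coefficients per frame
    return coeffs, components, mean

# The coefficients (or images reconstructed from the truncated basis) can then
# be passed to a CNN classifier for the five cardiac conditions.
```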

PRECISE framework: Enhanced radiology reporting with GPT for improved readability, reliability, and patient-centered care.

Tripathi S, Mutter L, Muppuri M, Dheer S, Garza-Frias E, Awan K, Jha A, Dezube M, Tabari A, Bizzo BC, Dreyer KJ, Bridge CP, Daye D

PubMed | Jun 1, 2025
The PRECISE framework, defined as Patient-Focused Radiology Reports with Enhanced Clarity and Informative Summaries for Effective Communication, leverages GPT-4 to create patient-friendly summaries of radiology reports at a sixth-grade reading level. The purpose of the study was to evaluate the effectiveness of the PRECISE framework in improving the readability, reliability, and understandability of radiology reports. We hypothesized that the PRECISE framework improves the readability and patient understanding of radiology reports compared with the original versions. The PRECISE framework was assessed using 500 chest X-ray reports. Readability was evaluated using the Flesch Reading Ease, Gunning Fog Index, and Automated Readability Index (ARI). Reliability was gauged by clinical volunteers, while understandability was assessed by non-medical volunteers. Statistical analyses, including t-tests, regression analyses, and Mann-Whitney U tests, were conducted to determine the significance of the differences in readability scores between the original and PRECISE-generated reports. Readability scores improved significantly, with the mean Flesch Reading Ease score increasing from 38.28 to 80.82 (p < 0.001), the Gunning Fog Index decreasing from 13.04 to 6.99 (p < 0.001), and the ARI score improving from 13.33 to 5.86 (p < 0.001). Clinical volunteers rated 95% of the summaries as reliable, and non-medical volunteers rated 97% of the PRECISE-generated summaries as fully understandable. The PRECISE approach shows promise in enhancing patient understanding and communication without adding significant burden to radiologists. With improved reliability and patient-friendly summaries, it can foster patient engagement and understanding in healthcare decision-making. The PRECISE framework represents a pivotal step towards more inclusive and patient-centric care delivery.
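The three readability indices can be computed, for example, with the textstat package (an assumption; the study does not specify its tooling):

```python
# Readability comparison of an original report excerpt vs. a simplified summary.
import textstat

def readability_scores(text: str) -> dict:
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "gunning_fog": textstat.gunning_fog(text),
        "automated_readability_index": textstat.automated_readability_index(text),
    }

original = "Findings: No focal consolidation, pleural effusion, or pneumothorax."
simplified = "Your chest X-ray looks normal. We did not see any signs of infection or fluid."

print(readability_scores(original))
print(readability_scores(simplified))
```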

Deep learning for multiple sclerosis lesion classification and stratification using MRI.

Umirzakova S, Shakhnoza M, Sevara M, Whangbo TK

PubMed | Jun 1, 2025
Multiple sclerosis (MS) is a chronic neurological disease characterized by inflammation, demyelination, and neurodegeneration within the central nervous system. Conventional magnetic resonance imaging (MRI) techniques often struggle to detect small or subtle lesions, particularly in challenging regions such as the cortical gray matter and brainstem. This study introduces a novel deep learning-based approach, combined with a robust preprocessing pipeline and optimized MRI protocols, to improve the precision of MS lesion classification and stratification. We designed a convolutional neural network (CNN) architecture specifically tailored for high-resolution T2-weighted imaging (T2WI), augmented by deep learning-based reconstruction (DLR) techniques. The model incorporates dual attention mechanisms, including spatial and channel attention modules, to enhance feature extraction. A comprehensive preprocessing pipeline was employed, featuring bias field correction, skull stripping, image registration, and intensity normalization. The proposed framework was trained and validated on four publicly available datasets and evaluated using precision, sensitivity, specificity, and area under the curve (AUC) metrics. The model demonstrated exceptional performance, achieving a precision of 96.27%, sensitivity of 95.54%, specificity of 94.70%, and an AUC of 0.975. It outperformed existing state-of-the-art methods, particularly in detecting lesions in underdiagnosed regions such as the cortical gray matter and brainstem. The integration of advanced attention mechanisms enabled the model to focus on critical MRI features, leading to significant improvements in lesion classification and stratification. This study presents a novel and scalable approach for MS lesion detection and classification, offering a practical solution for clinical applications. By integrating advanced deep learning techniques with optimized MRI protocols, the proposed framework achieves superior diagnostic accuracy and generalizability, paving the way for enhanced patient care and more personalized treatment strategies. This work sets a new benchmark for MS diagnosis and management in both research and clinical practice.
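A minimal sketch of a dual (channel plus spatial) attention block in PyTorch, in the spirit of CBAM; the module design and hyperparameters are assumptions, not the authors' architecture.

```python
# Channel attention followed by spatial attention on a feature map (illustrative).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch),
        )
    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))          # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))           # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class DualAttentionBlock(nn.Module):
    """Channel attention followed by spatial attention, applied to a feature map."""
    def __init__(self, ch):
        super().__init__()
        self.ca = ChannelAttention(ch)
        self.sa = SpatialAttention()
    def forward(self, x):
        return self.sa(self.ca(x))
```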

Myo-Guide: A Machine Learning-Based Web Application for Neuromuscular Disease Diagnosis With MRI.

Verdu-Diaz J, Bolano-Díaz C, Gonzalez-Chamorro A, Fitzsimmons S, Warman-Chardon J, Kocak GS, Mucida-Alvim D, Smith IC, Vissing J, Poulsen NS, Luo S, Domínguez-González C, Bermejo-Guerrero L, Gomez-Andres D, Sotoca J, Pichiecchio A, Nicolosi S, Monforte M, Brogna C, Mercuri E, Bevilacqua JA, Díaz-Jara J, Pizarro-Galleguillos B, Krkoska P, Alonso-Pérez J, Olivé M, Niks EH, Kan HE, Lilleker J, Roberts M, Buchignani B, Shin J, Esselin F, Le Bars E, Childs AM, Malfatti E, Sarkozy A, Perry L, Sudhakar S, Zanoteli E, Di Pace FT, Matthews E, Attarian S, Bendahan D, Garibaldi M, Fionda L, Alonso-Jiménez A, Carlier R, Okhovat AA, Nafissi S, Nalini A, Vengalil S, Hollingsworth K, Marini-Bettolo C, Straub V, Tasca G, Bacardit J, Díaz-Manera J

PubMed | Jun 1, 2025
Neuromuscular diseases (NMDs) are rare disorders characterized by progressive loss of muscle fibres and their replacement by fibrotic and fatty tissue, leading to muscle weakness and disability. Early diagnosis is critical for therapeutic decisions, care planning and genetic counselling. Muscle magnetic resonance imaging (MRI) has emerged as a valuable diagnostic tool by identifying characteristic patterns of muscle involvement. However, the increasing complexity of these patterns complicates their interpretation, limiting their clinical utility. Additionally, multi-study data aggregation introduces heterogeneity challenges. This study presents a novel multi-study harmonization pipeline for muscle MRI and an AI-driven diagnostic tool to assist clinicians in identifying disease-specific muscle involvement patterns. We developed a preprocessing pipeline to standardize MRI fat content across datasets, minimizing source bias. An ensemble of XGBoost models was trained to classify patients based on intramuscular fat replacement, age at MRI and sex. The SHapley Additive exPlanations (SHAP) framework was adapted to analyse model predictions and identify disease-specific muscle involvement patterns. To address class imbalance, training and evaluation were conducted using class-balanced metrics. The model's performance was compared against four expert clinicians using 14 previously unseen MRI scans. Using our harmonization approach, we curated a dataset of 2961 MRI samples from genetically confirmed cases of 20 paediatric and adult NMDs. The model achieved a balanced accuracy of 64.8% ± 3.4%, with a weighted top-3 accuracy of 84.7% ± 1.8% and top-5 accuracy of 90.2% ± 2.4%. It also identified key features relevant for differential diagnosis, aiding clinical decision-making. Compared to four expert clinicians, the model obtained the highest top-3 accuracy (75.0% ± 4.8%). The diagnostic tool has been implemented as a free web platform, providing global access to the medical community. The application of AI in muscle MRI for NMD diagnosis remains underexplored due to data scarcity. This study introduces a framework for dataset harmonization, enabling advanced computational techniques. Our findings demonstrate the potential of AI-based approaches to enhance differential diagnosis by identifying disease-specific muscle involvement patterns. The developed tool surpasses expert performance in diagnostic ranking and is accessible to clinicians worldwide via the Myo-Guide online platform.
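A minimal sketch of the tabular set-up described above, assuming per-muscle fat-fraction features plus age and sex: an XGBoost classifier evaluated with balanced and top-k accuracy and explained with SHAP. The data and feature names are placeholders, not the Myo-Guide dataset.

```python
# XGBoost + SHAP sketch on synthetic stand-in data; not the authors' pipeline.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, top_k_accuracy_score

rng = np.random.default_rng(0)
X = rng.random((600, 30))                 # e.g., fat fraction per muscle, age, sex
y = rng.integers(0, 20, size=600)         # 20 hypothetical NMD classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)
pred = proba.argmax(axis=1)
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
print("top-3 accuracy:", top_k_accuracy_score(y_te, proba, k=3, labels=np.arange(20)))

# SHAP values indicate which muscles drive each predicted diagnosis.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
```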

Semi-supervised spatial-frequency transformer for metal artifact reduction in maxillofacial CT and evaluation with intraoral scan.

Li Y, Ma C, Li Z, Wang Z, Han J, Shan H, Liu J

PubMed | Jun 1, 2025
To develop a semi-supervised domain adaptation technique for metal artifact reduction (MAR) with a spatial-frequency transformer (SFTrans) model (Semi-SFTrans), and to quantitatively compare its performance with supervised models (Sup-SFTrans and ResUNet) and the traditional linear interpolation MAR method (LI) in oral and maxillofacial CT. Supervised models, including Sup-SFTrans and a state-of-the-art model termed ResUNet, were trained with paired simulated CT images, while the semi-supervised model, Semi-SFTrans, was trained with both paired simulated and unpaired clinical CT images. For evaluation on the simulated data, we calculated the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the images corrected by the four methods: LI, ResUNet, Sup-SFTrans, and Semi-SFTrans. For evaluation on the clinical data, we collected twenty-two clinical cases with real metal artifacts and the corresponding intraoral scan data. Three radiologists visually assessed the severity of artifacts using Likert scales on the original, Sup-SFTrans-corrected, and Semi-SFTrans-corrected images. Quantitative MAR evaluation was conducted by measuring mean Hounsfield Unit (HU) values, standard deviations, and Signal-to-Noise Ratios (SNRs) across Regions of Interest (ROIs) such as the tongue, bilateral buccal regions, lips, and bilateral masseter muscles, using paired t-tests and Wilcoxon signed-rank tests. Further, teeth integrity in the corrected images was assessed by comparing teeth segmentation results from the corrected images against the ground-truth segmentation derived from registered intraoral scan data, using the Dice Score and Hausdorff Distance. Sup-SFTrans outperformed LI, ResUNet, and Semi-SFTrans on the simulated dataset. Visual assessments from the radiologists showed average scores of 2.02 ± 0.91 for original CT, 4.46 ± 0.51 for Semi-SFTrans CT, and 3.64 ± 0.90 for Sup-SFTrans CT, with intraclass correlation coefficients (ICCs) > 0.8 for all groups and p < 0.001 between groups. On soft tissue, both Semi-SFTrans and Sup-SFTrans significantly reduced metal artifacts in the tongue (p < 0.001), lips, bilateral buccal regions, and masseter muscle areas (p < 0.05). Semi-SFTrans achieved superior metal artifact reduction compared with Sup-SFTrans in all ROIs (p < 0.001). SNR results indicated significant differences between Semi-SFTrans and Sup-SFTrans in the tongue (p = 0.0391), bilateral buccal regions (p = 0.0067), lips (p = 0.0208), and bilateral masseter muscle areas (p = 0.0031). Notably, Semi-SFTrans preserved teeth integrity better than Sup-SFTrans (Dice Score: p < 0.001; Hausdorff Distance: p = 0.0022). The semi-supervised MAR model, Semi-SFTrans, demonstrated superior metal artifact reduction performance over its supervised counterparts in real dental CT images.
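The quantitative metrics used above can be computed, for instance, with scikit-image and SciPy; the toy arrays below stand in for corrected and reference CT images and segmentations.

```python
# PSNR/SSIM for image quality, Dice and Hausdorff distance for teeth segmentation.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    # Symmetric Hausdorff distance between two point sets (e.g., tooth surfaces).
    return max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])

# Toy example: corrected image vs. artifact-free reference.
ref = np.random.rand(64, 64)
corrected = ref + 0.01 * np.random.randn(64, 64)
print("PSNR:", peak_signal_noise_ratio(ref, corrected, data_range=1.0))
print("SSIM:", structural_similarity(ref, corrected, data_range=1.0))

mask_a, mask_b = ref > 0.5, corrected > 0.5
print("Dice:", dice_score(mask_a, mask_b))
print("Hausdorff:", hausdorff(np.argwhere(mask_a), np.argwhere(mask_b)))
```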

Pediatric chest X-ray diagnosis using neuromorphic models.

Bokhari SM, Sohaib S, Shafi M

PubMed | Jun 1, 2025
This research presents an innovative neuromorphic method utilizing Spiking Neural Networks (SNNs) to analyze pediatric chest X-rays and identify prevalent thoracic illnesses. We incorporate spiking-based machine learning models such as Spiking Convolutional Neural Networks (SCNN), Spiking Residual Networks (S-ResNet), and Hierarchical Spiking Neural Networks (HSNN) for pediatric chest radiographic analysis using the publicly available PediCXR benchmark dataset. These models employ spatiotemporal feature extraction, residual connections, and event-driven processing to improve diagnostic precision. The HSNN model surpasses benchmark approaches from the literature, with a classification accuracy of 96% across six thoracic illness categories, an F1-score of 0.95, and a specificity of 1.0 in pneumonia detection. Our research demonstrates that neuromorphic computing is a feasible and biologically inspired approach to real-time medical imaging diagnostics, significantly improving performance.
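A minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit behind spiking models such as SCNN, S-ResNet, and HSNN; written from scratch in PyTorch as an illustration, not the authors' implementation.

```python
# LIF neuron: leak, integrate input current, spike and reset at threshold.
import torch
import torch.nn as nn

class LIFNeuron(nn.Module):
    """Membrane potential decays by `beta`, integrates input current, and emits a
    spike (with soft reset) when it crosses `threshold`."""
    def __init__(self, beta: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.beta = beta
        self.threshold = threshold

    def forward(self, current: torch.Tensor, mem: torch.Tensor):
        mem = self.beta * mem + current
        spk = (mem >= self.threshold).float()
        mem = mem - spk * self.threshold   # soft reset after spiking
        return spk, mem

# One timestep on a toy feature map; in practice the X-ray is encoded into a
# spike train and the network is unrolled over many timesteps.
lif = LIFNeuron()
current = torch.rand(1, 16, 32, 32)
mem = torch.zeros_like(current)
spk, mem = lif(current, mem)
```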

Visceral Fat Quantified by a Fully Automated Deep-Learning Algorithm and Risk of Incident and Recurrent Diverticulitis.

Ha J, Bridge CP, Andriole KP, Kambadakone A, Clark MJ, Narimiti A, Rosenthal MH, Fintelmann FJ, Gollub RL, Giovannucci EL, Strate LL, Ma W, Chan AT

PubMed | Jun 1, 2025
Obesity is a risk factor for diverticulitis. However, it remains unclear whether visceral fat area, a more precise measurement of abdominal fat, is associated with the risk of diverticulitis. To estimate the risk of incident and recurrent diverticulitis according to visceral fat area. A retrospective cohort study. The Mass General Brigham Biobank. A total of 6654 patients who underwent abdominal CT for clinical indications and had no diagnosis of diverticulitis, IBD, or cancer before the scan were included. Visceral fat area, subcutaneous fat area, and skeletal muscle area were quantified using a deep-learning model applied to abdominal CT. The main exposures were z-scores of body composition metrics normalized by age, sex, and race. Diverticulitis cases were identified using International Classification of Diseases codes for the primary or admitting diagnosis from the electronic health records. The risks of incident diverticulitis, complicated diverticulitis, and recurrent diverticulitis requiring hospitalization according to quartiles of body composition metric z-scores were estimated. A higher visceral fat area z-score was associated with an increased risk of incident diverticulitis (multivariable HR comparing the highest vs. lowest quartile, 2.09; 95% CI, 1.48-2.95; p for trend < 0.0001), complicated diverticulitis (HR, 2.56; 95% CI, 1.10-5.99; p for trend = 0.02), and recurrence requiring hospitalization (HR, 2.76; 95% CI, 1.15-6.62; p for trend = 0.03). The association between visceral fat area and diverticulitis was not materially different across strata of BMI. Subcutaneous fat area and skeletal muscle area were not significantly associated with diverticulitis. The study population was limited to individuals who underwent CT scans for medical indications. Higher visceral fat area derived from CT was associated with incident and recurrent diverticulitis. Our findings provide insight into the underlying pathophysiology of diverticulitis and may have implications for preventive strategies. See Video Abstract.
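A minimal sketch of the exposure modelling described above, assuming hypothetical column names and toy data: z-score the visceral fat area within demographic strata, form quartiles, and fit a Cox proportional-hazards model with lifelines.

```python
# Strata-normalized z-scores, quartiles, and a Cox model; illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "vfa_cm2": rng.gamma(4, 40, n),                       # visceral fat area
    "age_band": rng.choice(["40-54", "55-69", "70+"], n),
    "sex": rng.choice(["F", "M"], n),
    "followup_years": rng.exponential(5, n),
    "diverticulitis": rng.integers(0, 2, n),
})

# Z-score within strata defined by age band and sex (race omitted in this toy example).
df["vfa_z"] = df.groupby(["age_band", "sex"])["vfa_cm2"].transform(
    lambda s: (s - s.mean()) / s.std()
)
df["vfa_q"] = pd.qcut(df["vfa_z"], 4, labels=False)       # quartiles 0-3

cph = CoxPHFitter()
cph.fit(df[["vfa_q", "followup_years", "diverticulitis"]],
        duration_col="followup_years", event_col="diverticulitis")
cph.print_summary()  # HR per quartile increase; the paper compares Q4 vs. Q1
```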
