Liu X, Xin J, Shen Q, Huang Z, Wang Z

PubMed · Oct 13 2025
Radiology reports provide important references for physicians' treatment decisions by describing imaging findings and diagnostic conclusions. Automatic report generation reduces physicians' workload and significantly improves efficiency. However, existing report generation methods convert medical images directly into text and fail to fully simulate the radiologist's diagnostic process of "examine first, describe later". As a result, they often produce only generic normal descriptions and struggle to accurately describe specific lesion features. To address this issue, we mimic the working mode of radiologists by first checking whether the patient suffers from a certain disease, and then using the learned medical knowledge to describe the images and form a report. We propose a soft label-guided transformer (SLGT) for radiology report generation. First, pseudo-labels of the samples are obtained, and a soft label-guided attention mechanism highlights features related to the disease labels during encoding. Second, text features from the decoding phase are aligned with the image features, and the generated text features are used to guide the latent representations. Finally, a hybrid loss is designed that combines losses for text generation, disease classification, and visual-textual alignment. Optimizing SLGT with this hybrid loss allows the model to learn richer features that are more relevant to disease abnormalities, improving its performance. The proposed SLGT is evaluated on the widely used IU X-ray, MIMIC-CXR, and COV-CTR datasets. Experiments show that SLGT outperforms previous state-of-the-art models on all three datasets. This work improves the performance of automatic medical report generation, making its application in computer-aided diagnosis feasible.
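
As a rough illustration of the kind of hybrid objective described above, the sketch below combines a token-level generation loss, a multi-label disease classification loss, and a cosine-based visual-textual alignment term in PyTorch. The loss weights, tensor shapes, and padding convention are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(token_logits, token_targets,
                disease_logits, disease_labels,
                image_feats, text_feats,
                w_gen=1.0, w_cls=0.5, w_align=0.1):
    """Illustrative hybrid objective: text generation + multi-label disease
    classification + visual-textual alignment. Weights are placeholders."""
    # Text generation: token-level cross-entropy (padding index assumed to be 0).
    gen = F.cross_entropy(token_logits.reshape(-1, token_logits.size(-1)),
                          token_targets.reshape(-1), ignore_index=0)
    # Disease classification: binary cross-entropy against (soft) pseudo-labels.
    cls = F.binary_cross_entropy_with_logits(disease_logits, disease_labels)
    # Alignment: pull pooled image and text features together in cosine space.
    align = 1.0 - F.cosine_similarity(image_feats, text_feats, dim=-1).mean()
    return w_gen * gen + w_cls * cls + w_align * align
```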

Zhang B, Guo M, Chai M, Zhang X, Wei F, He Y, Hao Z, Liu Y, Tian X, Zhou S, Mao C

PubMed · Oct 13 2025
Chronic low back pain (cLBP) is often accompanied by emotional disorders. The locus coeruleus (LC) plays an important role in the establishment and perpetuation of chronic pain and is involved in modulating the emotional aspects of pain. However, the specific regulatory mechanisms of the LC in cLBP are poorly characterized. In this study, 73 patients with cLBP and 52 healthy controls (HCs) were recruited. Resting-state functional connectivity (rsFC) and effective connectivity (EC) of the LC were evaluated. Support vector machines (SVM) and logistic regression (LR) were then used to distinguish patients with cLBP from HCs based on the connectivity features. Compared to HCs, patients with cLBP exhibited significantly decreased rsFC in the left LC-superior frontal gyrus (SFG), middle cingulate cortex (MCC), right LC-left cerebellum, and right middle frontal gyrus (MFG) pathways, and enhanced rsFC with regions including the left precuneus and right angular gyrus (P < 0.05, family-wise error corrected). EC from the right LC to the left cerebellum was increased in cLBP. In addition, SVM and LR achieved high performance in differentiating patients with cLBP from HCs based on LC rsFC, both in the training cohort (accuracy = 0.78 and 0.74, respectively) and in the independent test cohort (accuracy = 0.74 and 0.69, respectively). Our results suggest that impaired LC-cerebellar-cortical circuits mediate the modulation of emotions and chronic pain. Critically, the SVM and LR models demonstrated the potential of LC connectivity patterns as objective neuroimaging biomarkers for differentiating patients with cLBP from HCs.
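
The classification step can be illustrated with a minimal scikit-learn pipeline: an SVM and a logistic regression model trained on LC connectivity features and evaluated by accuracy on a held-out cohort. The feature matrices below are random placeholders standing in for the rsFC values; kernel choice and regularization settings are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# X_*: subjects x LC rsFC features; y_*: 1 = cLBP, 0 = HC (placeholders).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(125, 6)), rng.integers(0, 2, 125)
X_test,  y_test  = rng.normal(size=(40, 6)),  rng.integers(0, 2, 40)

for name, clf in [("SVM", SVC(kernel="linear", C=1.0)),
                  ("LR",  LogisticRegression(max_iter=1000))]:
    model = make_pipeline(StandardScaler(), clf)   # z-score features, then fit
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```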

Liu X, Wu Y, Zhou Y

PubMed · Oct 13 2025
With the continuous advancement of the medical industry, intelligent medical technologies supported by natural language processing and knowledge representation have made significant progress. However, as vast amounts of medical data are continuously generated, current methods still perform poorly on specialized medical data, particularly unlabeled medical diagnostic data. Inspired by the outstanding performance of large language models on various downstream expert tasks in recent years, this article leverages large language models to handle massive unlabeled medical data, aiming to provide more accurate technical solutions for medical image classification tasks. Specifically, we propose a novel Cross-Modal Knowledge Representation framework (CMKR) for handling vast unlabeled medical data, which uses large language models to extract implicit knowledge from medical images while also extracting explicit textual knowledge with the aid of knowledge graphs. To better exploit the associations between medical images and textual records, we design a cross-modal alignment strategy that enhances knowledge representation both within and across modalities. Extensive experiments on public datasets demonstrate that our method outperforms most mainstream approaches.
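
The abstract does not spell out the alignment objective, so the sketch below shows a generic symmetric contrastive (InfoNCE-style) loss between image and text embeddings as one common way to implement cross-modal alignment. The temperature value and the assumption that matched pairs share an index in the batch are illustrative only and may differ from CMKR's actual strategy.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss pulling matched image/text pairs together
    and pushing mismatched pairs apart. A generic stand-in, not CMKR itself."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature              # [B, B] similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```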

Zhou T, Luo J, Sun Y, Tan Y, Yao S, Haouchine N, Raymond S

PubMed · Oct 13 2025
Accurate MRI-to-CT translation promises the integration of complementary imaging information without the need for additional imaging sessions. Given the practical challenges of acquiring paired MRI and CT scans, developing robust methods that can leverage unpaired datasets is essential for advancing MRI-to-CT translation. Current unpaired MRI-to-CT translation methods, which predominantly rely on cycle consistency and contrastive learning frameworks, frequently struggle to accurately translate anatomical features that are highly discernible on CT but less distinguishable on MRI, such as bone structures. This limitation makes these approaches less suitable for radiation therapy, where precise bone representation is essential for accurate treatment planning. To address this challenge, we propose a path- and bone-contour regularized approach for unpaired MRI-to-CT translation. In our method, MRI and CT images are projected into a shared latent space, where the MRI-to-CT mapping is modeled as a continuous flow governed by neural ordinary differential equations. The optimal mapping is obtained by minimizing the transition path length of the flow. To enhance the accuracy of translated bone structures, we introduce a trainable neural network to generate bone contours from MRI and add mechanisms that directly and indirectly encourage the model to focus on bone contours and their adjacent regions. Evaluations on three datasets demonstrate that our method outperforms existing unpaired MRI-to-CT translation approaches, achieving lower overall error rates. Moreover, in a downstream bone segmentation task, our approach better preserves the fidelity of bone structures. Our code is available at: https://github.com/kennysyp/PaBoT.
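
A minimal sketch of the path-length idea, assuming a learned velocity field over the shared latent space that is Euler-integrated from MRI codes toward CT codes while accumulating the transition path length as a regularizer. The actual PaBoT formulation (including the bone-contour terms) is in the linked repository; the dimensions, step count, and network layout here are assumptions.

```python
import torch
import torch.nn as nn

class LatentFlow(nn.Module):
    """Velocity field v(z, t) over a shared latent space (illustrative only)."""
    def __init__(self, dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, z, t):
        t_col = t.expand(z.size(0), 1)                 # broadcast time to batch
        return self.net(torch.cat([z, t_col], dim=-1))

def integrate_with_path_length(flow, z_mri, steps=10):
    """Euler-integrate z from t=0 to t=1 and accumulate the transition path
    length, which can be added to the training loss as a regularizer."""
    z, path_len, dt = z_mri, z_mri.new_zeros(z_mri.size(0)), 1.0 / steps
    for i in range(steps):
        t = z_mri.new_tensor([[i * dt]])
        v = flow(z, t)
        z = z + dt * v                                  # Euler step
        path_len = path_len + dt * v.norm(dim=-1)       # accumulate ||v|| * dt
    return z, path_len.mean()

flow = LatentFlow(dim=256)
z_ct_hat, path_penalty = integrate_with_path_length(flow, torch.randn(4, 256))
```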

Yu, T., Kokenberger, G., Wang, J., Meng, X., Davar, D., Storkus, W., Kirkwood, J., Zarour, H., Pu, J.

medRxiv preprint · Oct 13 2025
Objective: This study explored the association between low-dose computed tomography (LDCT)-derived body composition and melanoma incidence risk. Methods: LDCT scans from the Pittsburgh Lung Screening Study (n=3,422, 22 follow-up years) were analyzed. Body composition features were segmented and quantified from baseline scans using in-house artificial intelligence algorithms. Features were selected before modeling. Fine-Gray subdistribution hazard models assessed the association between body composition and melanoma incidence. Model performance was evaluated using time-dependent area under the curve (AUC). Restricted mean survival time (RMST) compared melanoma-free survival across BMI and body composition groups at 5, 10, and 15 years. Participants were stratified into risk groups, with risk estimated at each time point. Sex-specific analyses were conducted separately. Statistical significance was defined as p<0.05. Results: Among 3,422 participants, 80 developed melanoma (43 males, 37 females). In the overall model, visceral adipose tissue (VAT) volume (hazard ratio [HR]=1.27), skeletal muscle (SM) density (HR=0.81), and bone density (HR=1.33) were included, achieving a 21-year AUC of 0.68 (95% CI: 0.65-0.70). The male-specific model included only SM density (HR=0.74; AUC=0.67, 95% CI: 0.65-0.68). The female-specific model (AUC=0.68, 95% CI: 0.65-0.71) included VAT volume (HR=1.47), intramuscular adipose tissue (IMAT) ratio (HR=0.67), and bone density (HR=1.75). Higher VAT volume, higher IMAT volume, and lower SM density were associated with shorter melanoma-free survival and stratified risk better than BMI. Males exhibited higher estimated risk than females. Conclusion: LDCT-derived body composition metrics may provide incidental insights into melanoma risk during lung cancer screening, though their predictive utility remains limited and warrants further investigation. Key Points: Question: To investigate the association between CT-derived three-dimensional (3D) body composition and the risk of developing melanoma. Findings: CT-derived body composition was associated with melanoma incidence. Males demonstrated higher estimated risk than females over both short- and long-term follow-up periods. Clinical Relevance: Given melanoma's high mortality and the limited effectiveness of current screening programs, these findings highlight the potential of leveraging routinely acquired lung cancer screening CT scans to enhance melanoma risk assessment.
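
The RMST comparison can be sketched with lifelines as below, grouping participants by a body-composition measure and computing restricted mean melanoma-free survival at 5-, 10-, and 15-year horizons. The Fine-Gray competing-risks models are not reproduced here, and the data, grouping, and column names are placeholders rather than the study's actual variables.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

# Placeholder cohort: follow-up time (years), melanoma event indicator,
# and a body-composition measure such as VAT volume.
rng = np.random.default_rng(0)
df = pd.DataFrame({"time": rng.uniform(0.5, 22, 500),
                   "event": rng.integers(0, 2, 500),
                   "vat_volume": rng.normal(2.0, 0.6, 500)})
df["vat_group"] = pd.qcut(df["vat_volume"], 2, labels=["low", "high"])

# Compare melanoma-free survival via RMST at 5-, 10-, and 15-year horizons.
for horizon in (5, 10, 15):
    for group, sub in df.groupby("vat_group", observed=True):
        km = KaplanMeierFitter().fit(sub["time"], sub["event"])
        rmst = restricted_mean_survival_time(km, t=horizon)
        print(f"t={horizon}y  VAT {group}: RMST = {rmst:.2f} years")
```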

Zhou, J., Demeke, D. S., Li, X., Dinh, T., O'Connor, C., Liu, J., Zee, J., Ozeki, T., Chen, Y., Janowczyk, A., Holzman, L., Mariani, L., Bitzer, M., Barisoni, L., Hodgin, J. B., Lafata, K.

medRxiv preprint · Oct 13 2025
Background: The current semi-quantitative methods used to score sclerosis and hyalinosis in arteries and arterioles in clinical practice are limited in standardization and reproducibility. We developed a computational pipeline designed to accurately and consistently quantify prognostic arterial and arteriolar characteristics in digital kidney biopsies of patients with focal segmental glomerulosclerosis (FSGS) and minimal change disease (MCD) through segmentation and pathomic feature extraction. Methods: We utilized one trichrome-stained WSI from each of 225 participants in the NEPTUNE/CureGN studies, comprising 127 cases of FSGS and 98 cases of MCD. We developed, validated, and quality-controlled deep learning models to segment muscular vessels and their internal compartments (lumen, intima, media, and hyalinosis), including (i) arcuate arteries, (ii) interlobular arteries, and (iii) arterioles with two muscle layers. Arterioles, interlobular arteries, and arcuate arteries were visually scored for sclerosis and hyalinosis on a scale of 0 to 3. Area- and thickness-based pathomic feature extraction was performed on each compartment (lumen, intima, media, and hyalinosis) through radial sampling and ray casting. A correlation study was performed between pathomic features and visual semi-quantitative scores, and the association of both visual scores and pathomic features with disease progression (40% eGFR decline or renal failure) was assessed. Summary statistics (maximum, median, and 75th percentile) were computed for each WSI and analyzed using LASSO-regularized Cox proportional hazards models, adjusted for clinical and demographic factors. Results: A total of 1,499 arterioles, 686 interlobular arteries, and 131 arcuate arteries were segmented. Statistically significant correlations were found between pathologists' visual scores and the average intima-media thickness ratio (Spearman ρ = 0.27, p < 0.001 for arterioles; ρ = 0.69, p < 0.001 for interlobular arteries; and ρ = 0.80, p < 0.001 for arcuate arteries) and arteriolar hyalinosis (ρ = 0.46, p < 0.001). Incorporating pathomic features from trichrome-stained WSIs improved the prediction of disease progression, enhancing the concordance index from 0.70 to 0.75 in arterioles and from 0.69 to 0.74 in arcuate arteries, compared to using demographic and clinical characteristics alone. Conclusion: Our computational approach offers a novel and reliable method for segmenting and analyzing the pathomic features of sclerosis and hyalinosis in arteries and arterioles. It has demonstrated potential as a valuable tool for enhancing the clinical assessment performed by pathologists. Key Points: (i) A computational pipeline was developed and validated to segment arteries and arterioles and to quantify lumen, intima, media, and hyalinosis in kidney biopsies from patients with FSGS and MCD. (ii) Pathomic features, such as intima-media thickness ratio and hyalinosis area, significantly correlated with pathologists' semi-quantitative sclerosis and hyalinosis scores. (iii) Integrating pathomic features into clinical models improved the accuracy of disease progression prediction.
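
A hedged sketch of the survival-modeling step: a LASSO-penalized Cox model (lifelines CoxPHFitter with an L1 penalty) fit with and without WSI-level pathomic summaries, comparing the concordance index. The covariate names, penalty strength, and simulated data are placeholders, not the study's actual variables or analysis code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Placeholder per-biopsy table: time to event (40% eGFR decline or kidney
# failure), event indicator, clinical covariates, and WSI-level pathomic
# summaries (e.g. median intima-media thickness ratio, max hyalinosis area).
rng = np.random.default_rng(1)
n = 225
df = pd.DataFrame({
    "time": rng.exponential(5, n), "event": rng.integers(0, 2, n),
    "age": rng.normal(45, 15, n), "egfr_baseline": rng.normal(80, 20, n),
    "imt_ratio_median": rng.normal(0.5, 0.1, n),
    "hyalinosis_area_max": rng.exponential(0.2, n),
})

def fit_cindex(cols):
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # LASSO-style penalty
    cph.fit(df[cols + ["time", "event"]], duration_col="time", event_col="event")
    return cph.concordance_index_

clinical = ["age", "egfr_baseline"]
pathomic = ["imt_ratio_median", "hyalinosis_area_max"]
print("clinical only:      ", round(fit_cindex(clinical), 3))
print("clinical + pathomic:", round(fit_cindex(clinical + pathomic), 3))
```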

Hentel, J. Z., Teichman, K. T., Shih, G.

medRxiv preprint · Oct 13 2025
The 21st Century Cures Act mandates patient access to electronic health information, yet radiology reports often remain inaccessible due to specialized terminology and widespread low health literacy. This study evaluates large language model (LLM)-based workflows for generating patient-friendly explanations (PFx) of incidental MRI findings. Four approaches--zero-shot, few-shot, multiple few-shot, and agentic--were benchmarked using ICD-10 code alignment for accuracy and Flesch Reading Ease scores for readability. Across 407 outputs per workflow, the agentic method demonstrated the strongest overall performance, achieving a sixth-grade reading level and the highest accuracy. Compared with prior work limited by small sample sizes or suboptimal readability, these results indicate that structured, agent-based LLM workflows can improve both clarity and diagnostic consistency at scale. By translating complex radiology findings into accessible language, AI-generated PFx provide a scalable strategy to reduce health literacy disparities and advance the Cures Act's goal of making medical data both transparent and usable for patients.
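
The readability side of the benchmark can be sketched with the textstat package, as below: scoring a sample explanation with Flesch Reading Ease and Flesch-Kincaid grade level and flagging outputs above a sixth-grade target. The sample text is invented, and the ICD-10 alignment check is not shown.

```python
import textstat

pfx = ("Your MRI shows a small fluid-filled sac, called a cyst, in your "
       "kidney. These are common, usually harmless, and often need no "
       "treatment, but your doctor may suggest a follow-up scan.")

fre = textstat.flesch_reading_ease(pfx)      # higher score = easier to read
grade = textstat.flesch_kincaid_grade(pfx)   # approximate US school grade level
print(f"Flesch Reading Ease: {fre:.1f}, grade level: {grade:.1f}")
if grade > 6:
    print("Above the sixth-grade target; consider regenerating or simplifying.")
```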

Bachmann, R., Nexmann, A., Mudannayake, J., Jensen, J., Sundland, S. L., Lundemann, M. J.

medRxiv preprint · Oct 13 2025
Introduction: Emergency Department (ED) overcrowding, often exacerbated by prolonged patient length of stay (LOS), is a global challenge. Patients presenting with suspected fractures--many of whom are triaged as low-acuity--contribute significantly to the ED burden. Radiographer-led discharge (RLD), supported by artificial intelligence (AI), presents a potential strategy to streamline care, reduce LOS, and maintain diagnostic safety. Methods: This multi-centre, retrospective study evaluates whether diagnostic radiographers, assisted by the radiological AI decision support tool RBfracture 2.6, can safely discharge patients with no acute skeletal or joint injury. Fifteen radiographers from three countries will independently assess 340 retrospective radiographic examinations (300 consecutive and 40 enriched with rare findings). Referral notes and AI predictions are available to the readers. Reference standards are established by consensus among three MSK radiologists/reporting radiographers. Primary outcomes include ED workload reduction (true negatives) and the false negative rate. Secondary outcomes will assess standalone AI performance and inter-country comparison. Objectives: The primary objective is to evaluate whether diagnostic radiographers from three different countries, assisted by the AI tool RBfracture 2.6, can safely reduce ED workload by discharging patients without acute skeletal or joint injuries, specifically those referred with suspected bone fracture or joint dislocation. A secondary objective is to validate the performance of RBfracture 2.6 in detecting fractures, joint dislocations, elbow effusions, knee effusions, and knee lipohemarthrosis.
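
A small sketch of how the primary outcomes could be tabulated: true negatives (as a proxy for safely dischargeable workload) and the false negative rate, derived from a confusion matrix of reader decisions against the MSK reference standard. The arrays below are simulated placeholders, and the study's actual analysis plan may differ.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder arrays over 340 examinations: reference standard (1 = acute
# skeletal/joint injury present) and a radiographer's AI-assisted call.
rng = np.random.default_rng(2)
reference = rng.integers(0, 2, 340)
radiographer = np.where(rng.random(340) < 0.9, reference, 1 - reference)

tn, fp, fn, tp = confusion_matrix(reference, radiographer).ravel()
workload_reduction = tn / len(reference)      # exams safely dischargeable
false_negative_rate = fn / (fn + tp)          # missed injuries among positives
print(f"Potential ED workload reduction: {workload_reduction:.1%}")
print(f"False negative rate: {false_negative_rate:.1%}")
```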

Zhang W, Zhang Y, Shen Y, Sun C, Chen J, Wei Y, Kang J, Chen Z, Yang J, Yang J, Su C

PubMed · Oct 12 2025
To investigate the effect of manual acupuncture manipulations (MAMs) on subcutaneous muscle tissue by developing quantitative models of "lifting and thrusting" and "twisting and rotating" based on machine learning techniques. A depth camera was used to capture the acupuncture operator's hand movements during "lifting and thrusting" and "twisting and rotating" of the needle. Simultaneously, ultrasound imaging was employed to record the muscle tissue responses of the participants. Amplitude and angular features were extracted from the operators' movement data, and muscle fascicle slope features were derived from the ultrasound images. The dynamic time warping barycenter averaging algorithm was adopted to align the dual-source data. Various machine learning techniques were applied to build quantitative models, and the performance of each model was compared. The best-performing model was further analyzed for its interpretability. Among the quantitative models built for the two types of MAMs, the random forest model demonstrated the best performance. For the quantitative model of the "lifting and thrusting" technique, the coefficient of determination (R²) was 0.825; for the "twisting and rotating" technique, R² reached 0.872. Machine learning can thus be used to effectively model and quantify the effects of MAMs on subcutaneous muscle tissue. It provides a new perspective for understanding the mechanism of acupuncture therapy and lays a foundation for optimizing acupuncture techniques and designing personalized treatment regimens in the future.
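
The pipeline can be sketched with tslearn and scikit-learn: DTW barycenter averaging to align repeated motion recordings to a common template, then a random forest regressor scored by the coefficient of determination (R²). The array shapes, feature counts, and synthetic data are placeholders for the paper's actual hand-motion and ultrasound-derived features.

```python
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

# Placeholder: repeated recordings of one manipulation (e.g. needle depth over
# time), aligned to a common template via DTW barycenter averaging.
hand_series = rng.normal(size=(20, 100, 1))            # 20 trials x 100 frames
template = dtw_barycenter_averaging(hand_series, barycenter_size=100)

# Placeholder features (amplitude/angle of hand motion) -> target (fascicle slope).
X = rng.normal(size=(200, 4))
y = X @ np.array([0.6, -0.3, 0.2, 0.1]) + rng.normal(0, 0.1, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
```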

Zhang, J., Liu, X., Zheng, S., Zhang, W., Gu, J.

bioRxiv preprint · Oct 12 2025
Accurate retinal microvascular segmentation demands a balanced combination of anatomical fidelity and hemodynamic relevance. However, existing methods fall short in preserving critical structures such as capillary junctions and bifurcations, causing fragmentation and limiting clinical applications. To address these limitations, we propose DFMS-Net, a novel dual-field segmentation framework that synergistically integrates geometric field modeling, achieved through the Spatial Pathway Extractor (SPE) and Transformer-based Topology Interaction (TTI) for preserving structural continuity, with functional field optimization, realized by the Semantic Attention Amplification (SAA) module for enhancing semantic visibility, via a unified Dual-Field Hemodynamic Attention (DFHA) mechanism. This core module enables joint enhancement of vessel continuity, accurate resolution of complex branching patterns, and recovery of low-contrast capillaries, all under physiological guidance. By co-optimizing geometric and functional cues within a unified attention learning paradigm, DFMS-Net produces segmentations that are both morphologically accurate and hemodynamically plausible. Furthermore, we propose two specialized variants built on a streamlined double-SPE attention design for vessel continuity refinement. To address the directionality-dependent nature of structural damage in retinal ischemia and glaucoma, Variant 1 is a streamlined architecture that emphasizes dual-stage directional refinement to enhance trajectory coherence and improve the detection of topological disruptions, while Variant 2 supports high-resolution analysis of capillary dropout in early diabetic retinopathy through detailed microvascular recovery. Extensive experiments on retinal (DRIVE, STARE) and coronary angiography (DCA1, CHUAC) datasets demonstrate that DFMS-Net achieves state-of-the-art performance. Moreover, its strong generalization capability offers a promising foundation for diagnosing both retinal and cardiovascular diseases. The code will be available at https://github.com/699zjl/DFMS-Net-new.