
High-Quality CEST Mapping With Lorentzian-Model Informed Neural Representation.

Chen C, Liu Y, Park SW, Li J, Chan KWY, Huang J, Morel JM, Chan RH

PubMed · May 28, 2025
Chemical Exchange Saturation Transfer (CEST) MRI has demonstrated its remarkable ability to enhance the detection of macromolecules and metabolites with low concentrations. While CEST mapping is essential for quantifying molecular information, conventional methods face critical limitations: model-based approaches are constrained by limited sensitivity and robustness depending heavily on parameter setups, while data-driven deep learning methods lack generalizability across heterogeneous datasets and acquisition protocols. To overcome these challenges, we propose a Lorentzian-model Informed Neural Representation (LINR) framework for high-quality CEST mapping. LINR employs a self-supervised neural architecture embedding the Lorentzian equation - the fundamental biophysical model of CEST signal evolution - to directly reconstruct high-sensitivity parameter maps from raw z-spectra, eliminating dependency on labeled training data. Convergence of the self-supervised training strategy is guaranteed theoretically, ensuring LINR's mathematical validity. The superior performance of LINR in capturing CEST contrasts is revealed through comprehensive evaluations based on synthetic phantoms and in-vivo experiments (including tumor and Alzheimer's disease models). The intuitive parameter-free design enables adaptive integration into diverse CEST imaging workflows, positioning LINR as a versatile tool for non-invasive molecular diagnostics and pathophysiological discovery.
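The Lorentzian line shape the framework embeds models the z-spectrum as one minus a sum of pools, Z(dw) = 1 - sum_i A_i (G_i/2)^2 / ((G_i/2)^2 + (dw - d_i)^2). A minimal sketch of a conventional least-squares Lorentzian fit for intuition (this is the biophysical model only, not the authors' neural representation; the two-pool setup and parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_pool(offsets, amplitude, width, center):
    """Single Lorentzian line: the building block of the multi-pool z-spectrum model."""
    return amplitude * (width / 2) ** 2 / ((width / 2) ** 2 + (offsets - center) ** 2)

def z_spectrum(offsets, a_water, w_water, a_cest, w_cest, c_cest):
    """Two-pool model: 1 minus a water pool (at 0 ppm) plus one CEST pool."""
    return (1.0
            - lorentzian_pool(offsets, a_water, w_water, 0.0)
            - lorentzian_pool(offsets, a_cest, w_cest, c_cest))

# Simulate a noisy z-spectrum and recover the pool parameters by least squares.
offsets = np.linspace(-6, 6, 121)           # saturation frequency offsets in ppm
true = (0.85, 1.4, 0.10, 1.0, 3.5)          # water amp/width; CEST amp/width/center (amide ~3.5 ppm)
rng = np.random.default_rng(0)
signal = z_spectrum(offsets, *true) + rng.normal(0, 0.005, offsets.size)

popt, _ = curve_fit(z_spectrum, offsets, signal,
                    p0=(0.8, 1.0, 0.05, 1.0, 3.0),
                    bounds=([0, 0.1, 0, 0.1, 2.0], [1, 5, 1, 5, 5]))
```

Voxel-wise fits like this are what the paper's self-supervised network replaces with a single learned representation over the whole image.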

Toward diffusion MRI in the diagnosis and treatment of pancreatic cancer.

Lee J, Lin T, He Y, Wu Y, Qin J

PubMed · May 28, 2025
Pancreatic cancer is a highly aggressive malignancy with rising incidence and mortality rates, often diagnosed at advanced stages. Conventional imaging methods, such as computed tomography (CT) and magnetic resonance imaging (MRI), struggle to assess tumor characteristics and vascular involvement, which are crucial for treatment planning. This paper explores the potential of diffusion magnetic resonance imaging (dMRI) in enhancing pancreatic cancer diagnosis and treatment. Diffusion-based techniques, such as diffusion-weighted imaging (DWI), diffusion tensor imaging (DTI), intravoxel incoherent motion (IVIM), and diffusion kurtosis imaging (DKI), combined with emerging AI-powered analysis, provide insights into tissue microstructure, allowing for earlier detection and improved evaluation of tumor cellularity. These methods may help assess prognosis and monitor therapy response by tracking diffusion and perfusion metrics. However, challenges remain, such as the lack of standardized protocols and robust data analysis pipelines. Ongoing research, including deep learning applications, aims to improve reliability, and dMRI shows promise in providing functional insights and improving patient outcomes. Further clinical validation is necessary to maximize its benefits.
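The core DWI quantity, the apparent diffusion coefficient (ADC), comes from the monoexponential signal model S(b) = S0 * exp(-b * ADC). A minimal sketch of a log-linear ADC fit (b-values and the ADC value are illustrative, not from the review):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Estimate ADC (mm^2/s) and S0 from the monoexponential DWI model
    S(b) = S0 * exp(-b * ADC) via a log-linear least-squares fit."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, intercept = np.polyfit(b, log_s, 1)   # log S = log S0 - ADC * b
    return -slope, np.exp(intercept)             # (ADC, S0)

# Hypothetical noiseless voxel with typical body-DWI b-values (s/mm^2).
b_vals = [0, 200, 400, 800]
s0_true, adc_true = 1000.0, 1.2e-3
signals = [s0_true * np.exp(-b * adc_true) for b in b_vals]
adc, s0_hat = fit_adc(b_vals, signals)
```

IVIM and DKI extend this model with perfusion and kurtosis terms, which is why they need more b-values and more robust fitting pipelines, one of the standardization challenges the review raises.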

Image analysis research in neuroradiology: bridging clinical and technical domains.

Pareto D, Naval-Baudin P, Pons-Escoda A, Bargalló N, Garcia-Gil M, Majós C, Rovira À

PubMed · May 28, 2025
Advancements in magnetic resonance imaging (MRI) analysis over the past decades have significantly reshaped the field of neuroradiology. The ability to extract multiple quantitative measures from each MRI scan, alongside the development of extensive data repositories, has been fundamental to the emergence of advanced methodologies such as radiomics and artificial intelligence (AI). This educational review aims to delineate the importance of image analysis, highlight key paradigm shifts, examine their implications, and identify existing constraints that must be addressed to facilitate integration into clinical practice. Particular attention is given to aiding junior neuroradiologists in navigating this complex and evolving landscape. A comprehensive review of the available analysis toolboxes was conducted, focusing on major technological advancements in MRI analysis, the evolution of data repositories, and the rise of AI and radiomics in neuroradiology. Stakeholders within the field were identified and their roles examined. Additionally, current challenges and barriers to clinical implementation were analyzed. The analysis revealed several pivotal shifts, including the transition from qualitative to quantitative imaging, the central role of large datasets in developing AI tools, and the growing importance of interdisciplinary collaboration. Key stakeholders-including academic institutions, industry partners, regulatory bodies, and clinical practitioners-were identified, each playing a distinct role in advancing the field. However, significant barriers remain, particularly regarding standardization, data sharing, regulatory approval, and integration into clinical workflows. While advancements in MRI analysis offer tremendous potential to enhance neuroradiology practice, realizing this potential requires overcoming technical, regulatory, and practical barriers. 
Education and structured support for junior neuroradiologists are essential to ensure they are well-equipped to participate in and drive future developments. A coordinated effort among stakeholders is crucial to facilitate the seamless translation of these technological innovations into everyday clinical practice.

Estimating Total Lung Volume from Pixel-Level Thickness Maps of Chest Radiographs Using Deep Learning.

Dorosti T, Schultheiss M, Schmette P, Heuchert J, Thalhammer J, Gassert FT, Sellerer T, Schick R, Taphorn K, Mechlem K, Birnbacher L, Schaff F, Pfeiffer F, Pfeiffer D

PubMed · May 28, 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose: To estimate the total lung volume (TLV) from real and synthetic frontal chest radiographs (CXR) at the pixel level using lung thickness maps generated by a U-Net deep learning model. Materials and Methods: This retrospective study included 5,959 chest CT scans from two public datasets: the Lung Nodule Analysis 2016 (LUNA16; n = 656) and the Radiological Society of North America (RSNA) Pulmonary Embolism Detection Challenge 2020 (n = 5,303). Additionally, 72 participants were selected from the Klinikum Rechts der Isar (KRI) dataset (October 2018 to December 2019), each with a corresponding chest radiograph taken within seven days. Synthetic radiographs and lung thickness maps were generated by forward projection of the CT scans and their lung segmentations. A U-Net model was trained on synthetic radiographs to predict lung thickness maps and estimate TLV. Model performance was assessed using the mean squared error (MSE), the Pearson correlation coefficient (r), and two-sided Student t-tests. Results: The study included 72 participants (45 male, 27 female; 33 healthy: mean age 62 years [range, 34-80]; 39 with chronic obstructive pulmonary disease: mean age 69 years [range, 47-91]). TLV predictions showed low error rates (MSE_Public-Synthetic = 0.16 L², MSE_KRI-Synthetic = 0.20 L², MSE_KRI-Real = 0.35 L²) and strong correlations with the CT-derived reference-standard TLV (n_Public-Synthetic = 1,191, r = 0.99, P < .001; n_KRI-Synthetic = 72, r = 0.97, P < .001; n_KRI-Real = 72, r = 0.91, P < .001). Across datasets, the U-Net model achieved its highest TLV-estimation performance on the LUNA16 test dataset, with the lowest mean squared error (MSE = 0.09 L²) and strongest correlation (r = 0.99, P < .001) compared with CT-derived TLV. Conclusion: The U-Net-generated pixel-level lung thickness maps successfully estimated TLV for both synthetic and real radiographs. ©RSNA, 2025.
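Once a per-pixel thickness map is predicted, TLV reduces to a discrete integral: each pixel contributes thickness times pixel area. A minimal sketch under assumed units (the uniform toy map and 1 mm spacing are illustrative, not the paper's data):

```python
import numpy as np

def total_lung_volume_liters(thickness_map_cm, pixel_spacing_cm):
    """Integrate a per-pixel lung thickness map (cm) into a total lung volume.
    Each pixel contributes thickness * pixel area (cm^3); 1 L = 1000 cm^3."""
    pixel_area_cm2 = pixel_spacing_cm[0] * pixel_spacing_cm[1]
    return float(np.sum(thickness_map_cm) * pixel_area_cm2 / 1000.0)

# Toy map: a 100 x 100 pixel region of uniform 10 cm thickness at 1 mm (0.1 cm)
# pixel spacing -> 100 * 100 * 10 cm * 0.01 cm^2 = 1000 cm^3 = 1 L.
thickness = np.full((100, 100), 10.0)
tlv = total_lung_volume_liters(thickness, (0.1, 0.1))
```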

Contrast-Enhanced Ultrasound for Hepatocellular Carcinoma Diagnosis: AJR Expert Panel Narrative Review.

Li L, Burgio MD, Fetzer DT, Ferraioli G, Lyshchik A, Meloni MF, Rafailidis V, Sidhu PS, Vilgrain V, Wilson SR, Zhou J

PubMed · May 28, 2025
Despite growing clinical use of contrast-enhanced ultrasound (CEUS), inconsistency remains in the modality's role in clinical pathways for hepatocellular carcinoma (HCC) diagnosis and management. This AJR Expert Panel Narrative Review provides practical insights on the use of CEUS for the diagnosis of HCC across populations, including individuals at high risk for HCC, individuals with metabolic dysfunction-associated steatotic liver disease, and remaining individuals not at high risk for HCC. Considerations addressed with respect to high-risk patients include CEUS diagnostic criteria for HCC, use of CEUS for differentiating HCC from non-HCC malignancy, use of CEUS for small (≤2 cm) lesions, use of CEUS for characterizing occult lesions on B-mode ultrasound, and use of CEUS for indeterminate lesions on CT or MRI. Representative literature addressing the use of CEUS for HCC diagnosis as well as gaps in knowledge requiring further investigation are highlighted. Throughout these discussions, the article distinguishes two broad types of ultrasound contrast agents used for liver imaging: pure blood-pool agents and a combined blood-pool and Kupffer-cell agent. Additional topics include the use of CEUS for treatment response assessment after nonradiation therapies and implications of artificial intelligence technologies. The article concludes with a series of consensus statements from the author panel.

A vessel bifurcation landmark pair dataset for abdominal CT deformable image registration (DIR) validation.

Criscuolo ER, Zhang Z, Hao Y, Yang D

PubMed · May 28, 2025
Deformable image registration (DIR) is an enabling technology in many diagnostic and therapeutic tasks. Despite this, DIR algorithms have limited clinical use, largely due to a lack of benchmark datasets for quality assurance during development. DIRs of intra-patient abdominal CTs are among the most challenging registration scenarios due to significant organ deformations and inconsistent image content. To support future algorithm development, here we introduce our first-of-its-kind abdominal CT DIR benchmark dataset, comprising large numbers of highly accurate landmark pairs on matching blood vessel bifurcations. Abdominal CT image pairs of 30 patients were acquired from several publicly available repositories as well as the authors' institution with IRB approval. The two CTs of each pair were originally acquired for the same patient but on different days. An image processing workflow was developed and applied to each CT image pair: (1) Abdominal organs were segmented with a deep learning model, and image intensity within organ masks was overwritten. (2) Matching image patches were manually identified between two CTs of each image pair. (3) Vessel bifurcation landmarks were labeled on one image of each image patch pair. (4) Image patches were deformably registered, and landmarks were projected onto the second image. (5) Landmark pair locations were refined manually or with an automated process. This workflow resulted in 1895 total landmark pairs, or 63 per case on average. Estimates of the landmark pair accuracy using digital phantoms were 0.7 mm ± 1.2 mm. The data are published in Zenodo at https://doi.org/10.5281/zenodo.14362785. Instructions for use can be found at https://github.com/deshanyang/Abdominal-DIR-QA. This dataset is a first-of-its-kind for abdominal DIR validation. The number, accuracy, and distribution of landmark pairs will allow for robust validation of DIR algorithms with precision beyond what is currently available.
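Landmark pairs like these are typically used to score a DIR algorithm via the target registration error (TRE): map each fixed-image landmark through the algorithm's displacement field and measure the residual distance to its matched moving-image landmark. A minimal sketch (the landmark coordinates and displacement fields are illustrative, not from the dataset):

```python
import numpy as np

def mean_tre_mm(fixed_pts, moving_pts, displacements):
    """Target registration error: mean Euclidean distance (mm) between fixed
    landmarks mapped by the DIR displacement field and their matched
    moving-image landmarks."""
    mapped = np.asarray(fixed_pts, float) + np.asarray(displacements, float)
    return float(np.linalg.norm(mapped - np.asarray(moving_pts, float), axis=1).mean())

fixed = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
moving = fixed + [2.0, 0.0, 0.0]             # ground truth: a uniform 2 mm shift in x
identity = np.zeros_like(fixed)              # no registration at all
perfect = np.tile([2.0, 0.0, 0.0], (2, 1))   # a DIR that recovers the shift exactly
```

With 63 bifurcation pairs per case at sub-millimeter labeling accuracy, the dataset supports TRE estimates well below the error of typical abdominal DIR algorithms.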

Deep learning radiomics fusion model to predict visceral pleural invasion of clinical stage IA lung adenocarcinoma: a multicenter study.

Zhao J, Wang T, Wang B, Satishkumar BM, Ding L, Sun X, Chen C

PubMed · May 28, 2025
To assess the predictive performance, risk stratification capabilities, and auxiliary diagnostic utility of radiomics, deep learning, and fusion models in identifying visceral pleural invasion (VPI) in lung adenocarcinoma. A total of 449 patients (female:male, 263:186; mean age, 59.8 ± 10.5 years) diagnosed with clinical stage IA lung adenocarcinoma (LAC) at two distinct hospitals were enrolled in the study and divided into a training cohort (n = 289) and an external test cohort (n = 160). Fusion models were constructed at the feature level and at the decision level, respectively. A comprehensive analysis was conducted to assess the predictive ability and prognostic value of the radiomics, deep learning, and fusion models. The diagnostic performance of radiologists of varying seniority, with and without the assistance of the optimal model, was compared. The late fusion model demonstrated superior diagnostic performance (AUC = 0.812) compared with the clinical (AUC = 0.650), radiomics (AUC = 0.710), deep learning (AUC = 0.770), and early fusion models (AUC = 0.586) in the external test cohort. Multivariate Cox regression analysis showed that the VPI status predicted by the late fusion model was independently associated with patient disease-free survival (DFS) (p = 0.044). Furthermore, model assistance significantly improved radiologist performance, particularly for junior radiologists: the AUC increased by 0.133 (p < 0.001), reaching levels comparable to the senior radiologist without model assistance (AUC: 0.745 vs. 0.730, p = 0.790). The proposed decision-level (late fusion) model significantly reduces the risk of overfitting and demonstrates excellent robustness in multicenter external validation; it can predict VPI status in LAC, aid in prognostic stratification, and assist radiologists in achieving higher diagnostic performance.
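Decision-level (late) fusion combines the models' output probabilities rather than their features, which is one reason it is less prone to overfitting than concatenating high-dimensional feature vectors. A minimal sketch with a weighted average as the fusion rule (an assumption for illustration; the per-patient probabilities below are hypothetical):

```python
import numpy as np

def late_fusion(prob_a, prob_b, weight=0.5):
    """Decision-level (late) fusion: combine two models' predicted probabilities
    with a weighted average, instead of concatenating their features (early fusion)."""
    return weight * np.asarray(prob_a) + (1.0 - weight) * np.asarray(prob_b)

radiomics_prob = np.array([0.30, 0.80, 0.55])   # hypothetical per-patient VPI probabilities
deep_prob = np.array([0.40, 0.90, 0.45])
fused = late_fusion(radiomics_prob, deep_prob)
```

Early fusion would instead train one classifier on the stacked radiomics and deep feature vectors; with limited multicenter data, that larger input space is where the overfitting risk the authors observed (AUC = 0.586) tends to arise.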

Operationalizing postmortem pathology-MRI association studies in Alzheimer's disease and related disorders with MRI-guided histology sampling.

Athalye C, Bahena A, Khandelwal P, Emrani S, Trotman W, Levorse LM, Khodakarami Z, Ohm DT, Teunissen-Bermeo E, Capp N, Sadaghiani S, Arezoumandan S, Lim SA, Prabhakaran K, Ittyerah R, Robinson JL, Schuck T, Lee EB, Tisdall MD, Das SR, Wolk DA, Irwin DJ, Yushkevich PA

PubMed · May 28, 2025
Postmortem neuropathological examination, while the gold standard for diagnosing neurodegenerative diseases, often relies on limited regional sampling that may miss critical areas affected by Alzheimer's disease and related disorders. Ultra-high resolution postmortem MRI can help identify regions that fall outside the diagnostic sampling criteria for additional histopathologic evaluation. However, there are no standardized guidelines for integrating histology and MRI in a traditional brain bank. We developed a comprehensive protocol for whole-hemisphere postmortem 7T MRI-guided histopathological sampling with whole-slide digital imaging and histopathological analysis, providing a reliable pipeline for high-volume brain banking in heterogeneous brain tissue. Our method uses patient-specific 3D-printed molds built from postmortem MRI, allowing standardized tissue processing with a permanent spatial reference frame. To facilitate pathology-MRI association studies, we created a semi-automated MRI-to-histology registration pipeline and developed a quantitative pathology scoring system using weakly supervised deep learning. We validated this protocol on a cohort of 29 brains with diagnoses on the Alzheimer's disease spectrum, revealing correlations between cortical thickness and phosphorylated tau accumulation. This pipeline has broad applicability across neuropathological research and brain banking, facilitating large-scale studies that integrate histology with neuroimaging. The innovations presented here provide a scalable and reproducible approach to studying postmortem brain pathology, with implications for advancing diagnostic and therapeutic strategies for Alzheimer's disease and related disorders.

Efficient feature extraction using light-weight CNN attention-based deep learning architectures for ultrasound fetal plane classification.

Sivasubramanian A, Sasidharan D, Sowmya V, Ravi V

PubMed · May 28, 2025
Ultrasound fetal imaging is beneficial for supporting prenatal development because it is affordable and non-intrusive. Nevertheless, fetal plane classification (FPC) remains challenging and time-consuming for obstetricians, since it depends on nuanced clinical aspects that increase the difficulty of identifying relevant features of the fetal anatomy. Thus, to assist with accurate feature extraction, a lightweight artificial intelligence architecture leveraging convolutional neural networks and attention mechanisms is proposed to classify the largest benchmark ultrasound dataset. The approach fine-tunes lightweight EfficientNet feature-extraction backbones pre-trained on ImageNet-1k to classify key fetal planes such as the brain, femur, thorax, cervix, and abdomen. Our methodology incorporates an attention mechanism to refine features and a 3-layer perceptron for classification, achieving superior performance with the highest Top-1 accuracy of 96.25%, Top-2 accuracy of 99.80%, and F1-score of 0.9576. Importantly, the model has 40x fewer trainable parameters than existing benchmark ensemble or transformer pipelines, facilitating easy deployment on edge devices to help clinical practitioners with real-time FPC. The findings are also interpreted using Grad-CAM to carry out clinical correlation, aiding doctors with diagnostics and improving treatment plans for expectant mothers.
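One common lightweight way to add attention on top of a CNN backbone is squeeze-and-excitation-style channel attention: pool each feature channel to a scalar, pass through a small bottleneck MLP, and gate the channels with a sigmoid. A minimal NumPy sketch of that pattern (the abstract does not specify the exact attention variant, so this is an illustrative assumption; shapes and weights are toy values):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention over a (C, H, W) feature map:
    global-average-pool each channel, apply a two-layer bottleneck MLP, then
    rescale the channels with a sigmoid gate in (0, 1)."""
    squeezed = features.mean(axis=(1, 2))            # (C,) per-channel descriptor
    hidden = np.maximum(0.0, w1 @ squeezed)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # per-channel weights
    return features * gate[:, None, None]

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4))   # e.g. backbone output with 8 channels
w1 = rng.normal(size=(2, 8)) * 0.1   # reduction ratio 4: 8 -> 2 -> 8
w2 = rng.normal(size=(8, 2)) * 0.1
out = channel_attention(feats, w1, w2)
```

Because the gate only rescales channels, the module adds very few parameters (here 2*8 + 8*2 = 32), which is consistent with the paper's emphasis on edge deployability.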
