
Optimizing MR-based attenuation correction in hybrid PET/MR using deep learning: validation with a flatbed insert and consistent patient positioning.

Wang H, Wang Y, Xue Q, Zhang Y, Qiao X, Lin Z, Zheng J, Zhang Z, Yang Y, Zhang M, Huang Q, Huang Y, Cao T, Wang J, Li B

PubMed · Jun 1 2025
To address the challenges of verifying MR-based attenuation correction (MRAC) in PET/MR due to CT positional mismatches and alignment issues, this study utilized a flatbed insert and arms-down positioning during PET/CT scans to achieve precise MR-CT matching for accurate MRAC evaluation. A validation dataset of 21 patients underwent whole-body [<sup>18</sup>F]FDG PET/CT followed by [<sup>18</sup>F]FDG PET/MR. A flatbed insert ensured consistent positioning, allowing direct comparison of four MRAC methods-four-tissue and five-tissue models with discrete and continuous μ-maps-against CT-based attenuation correction (CTAC). A deep learning-based framework, trained on a dataset of 300 patients, was used to generate synthesized-CTs from MR images, forming the basis for all MRAC methods. Quantitative analyses were conducted at the whole-body, region of interest, and lesion levels, with lesion-distance analysis evaluating the impact of bone proximity on standardized uptake value (SUV) quantification. Distinct differences were observed among MRAC methods in spine and femur regions. Joint histogram analysis showed MRAC-4 (continuous μ-map) closely aligned with CTAC. Lesion-distance analysis revealed MRAC-4 minimized bone-induced SUV interference (r = 0.01, p = 0.8643). However, tissues prone to bone segmentation interference, such as the spine and liver, exhibited greater SUV variability and lower reproducibility in MRAC-4 compared to MRAC-2 (2D bone segmentation, discrete μ-map) and MRAC-3 (3D bone segmentation, discrete μ-map). Using a flatbed insert, this study validated MRAC with high precision. Continuous μ-value MRAC method (MRAC-4) demonstrated superior accuracy and minimized bone-related SUV errors but faced challenges in reproducibility, particularly in bone-rich regions.
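The lesion-distance analysis described above (per-lesion SUV error against CTAC, correlated with distance to bone) can be sketched in a few lines. This is an illustrative reconstruction only: the function names and synthetic inputs are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

def suv_relative_error(suv_mrac, suv_ctac):
    """Per-lesion relative SUV error of an MRAC method against the CTAC reference (%)."""
    suv_mrac = np.asarray(suv_mrac, dtype=float)
    suv_ctac = np.asarray(suv_ctac, dtype=float)
    return 100.0 * (suv_mrac - suv_ctac) / suv_ctac

def bone_distance_correlation(errors, distances_mm):
    """Pearson r between SUV error and lesion-to-bone distance.

    A near-zero r with a large p-value (as reported for MRAC-4) suggests
    bone proximity does not systematically bias the corrected SUVs."""
    r, p = stats.pearsonr(errors, distances_mm)
    return r, p
```

In this framing, the paper's r = 0.01 (p = 0.8643) for MRAC-4 corresponds to no detectable linear dependence of SUV error on distance from bone.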

Advances and current research status of early diagnosis for gallbladder cancer.

He JJ, Xiong WL, Sun WQ, Pan QY, Xie LT, Jiang TA

PubMed · Jun 1 2025
Gallbladder cancer (GBC) is the most common malignant tumor of the biliary system, characterized by high malignancy, aggressiveness, and poor prognosis. Early diagnosis is of paramount importance in improving therapeutic outcomes. At present, the clinical diagnosis of GBC relies primarily on a combined clinical-radiological-pathological approach; however, missed diagnoses and misdiagnoses remain possible in clinical practice. We first analyzed blood-based biomarkers, such as carcinoembryonic antigen and carbohydrate antigen 19-9. Subsequently, we evaluated the diagnostic performance of various imaging modalities, including ultrasound (US), endoscopic ultrasound (EUS), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography/computed tomography (PET/CT), as well as pathological examination, emphasizing their strengths and limitations in detecting early-stage GBC. Furthermore, we explored the potential of emerging technologies, particularly artificial intelligence (AI) and liquid biopsy, to revolutionize GBC diagnosis. AI algorithms have demonstrated improved image analysis capabilities, while liquid biopsy offers the promise of non-invasive, real-time monitoring. However, translating these advancements into clinical practice necessitates further validation and standardization. The review highlights the advantages and limitations of current diagnostic approaches and underscores the need for innovative strategies to enhance the diagnostic accuracy of GBC. In addition, we emphasize the importance of multidisciplinary collaboration to improve early diagnosis of GBC and, ultimately, patient outcomes. This review endeavours to impart fresh perspectives and insights into the early diagnosis of GBC.

Artificial intelligence driven plaque characterization and functional assessment from CCTA using OCT-based automation: A prospective study.

Han J, Wang Z, Chen T, Liu S, Tan J, Sun Y, Feng L, Zhang D, Ma L, Liu H, Tao H, Fang C, Yu H, Zeng M, Jia H, Yu B

PubMed · Jun 1 2025
We aimed to develop and validate an artificial intelligence (AI) model that leverages CCTA and optical coherence tomography (OCT) images for automated analysis of plaque characteristics and coronary function. A total of 100 patients who underwent invasive coronary angiography, OCT, and CCTA before discharge were included in this study. The data were randomly divided into a training set (80 %) and a test set (20 %). The training set, comprising 21,471 tomography images, was used to train a deep-learning convolutional neural network. Subsequently, the AI model was integrated with flow reserve score calculation software developed by Ruixin Medical. The results from the test set demonstrated excellent agreement between the AI model and OCT analysis for calcified plaque (McNemar test, p = 0.683), non-calcified plaque (McNemar test, p = 0.752), mixed plaque (McNemar test, p = 1.000), and low-attenuation plaque (McNemar test, p = 1.000). Additionally, there was excellent agreement for the deep learning-derived minimum lumen diameter (intraclass correlation coefficient [ICC] 0.91, p < 0.001), mean vessel diameter (ICC 0.88, p < 0.001), and percent diameter stenosis (ICC 0.82, p < 0.001). In diagnosing >50 % coronary stenosis, the diagnostic accuracy of the AI model surpassed that of conventional CCTA (AUC 0.98 vs. 0.76, p = 0.008). When compared with quantitative flow ratio (QFR), there was excellent agreement between QFR and AI-derived CT-FFR (ICC 0.745, p < 0.0001). Our AI model effectively provides automated analysis of plaque characteristics from CCTA images, with the analysis results showing strong agreement with OCT findings. Moreover, the CT-FFR automatically analyzed by the AI model exhibits high consistency with QFR derived from coronary angiography.
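The McNemar test used above to compare paired plaque classifications (AI vs. OCT reference, where a high p-value indicates no systematic disagreement) can be implemented directly. This is a minimal sketch using the continuity-corrected chi-square statistic; the function name and inputs are assumptions for illustration, not the study's statistical code.

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_test(ai_labels, ref_labels):
    """McNemar test for paired binary classifications.

    Only discordant pairs matter: b counts cases the AI calls positive but
    the reference calls negative, c the reverse. Returns (statistic, p)."""
    ai = np.asarray(ai_labels, dtype=bool)
    ref = np.asarray(ref_labels, dtype=bool)
    b = int(np.sum(ai & ~ref))   # AI positive, reference negative
    c = int(np.sum(~ai & ref))   # AI negative, reference positive
    if b + c == 0:
        return 0.0, 1.0          # no discordant pairs: perfect agreement
    stat = (abs(b - c) - 1) ** 2 / (b + c)  # continuity-corrected chi-square
    return stat, float(chi2.sf(stat, df=1))
```

With paired plaque calls as 0/1 arrays, p-values near 1.0 (as for mixed and low-attenuation plaque above) mean the discordant calls are balanced between the two readers.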

MRI and CT radiomics for the diagnosis of acute pancreatitis.

Tartari C, Porões F, Schmidt S, Abler D, Vetterli T, Depeursinge A, Dromain C, Violi NV, Jreige M

PubMed · Jun 1 2025
To evaluate the single and combined diagnostic performance of CT and MRI radiomics for the diagnosis of acute pancreatitis (AP). We prospectively enrolled 78 patients (mean age 55.7 ± 17 years, 48.7 % male) diagnosed with AP between 2020 and 2022. Patients underwent contrast-enhanced CT (CECT) within 48-72 h of symptoms and MRI ≤ 24 h after CECT. The entire pancreas was manually segmented tridimensionally by two operators on portal venous phase (PVP) CECT images, the T2-weighted imaging (WI) MR sequence, and the non-enhanced and PVP T1-WI MR sequences. A matched control group (n = 77) with normal pancreas was used. The dataset was randomly split into training and test sets, and various machine learning algorithms were compared. Receiver operating characteristic curve analysis was performed. The T2WI model exhibited significantly better diagnostic performance than CECT and the non-enhanced and venous T1WI models, with sensitivity, specificity and AUC of 73.3 % (95 % CI: 71.5-74.7), 80.1 % (78.2-83.2), and 0.834 (0.819-0.844) for T2WI (p = 0.001); 74.4 % (71.5-76.4), 58.7 % (56.3-61.1), and 0.654 (0.630-0.677) for non-enhanced T1WI; 62.1 % (60.1-64.2), 78.7 % (77.1-81), and 0.787 (0.771-0.810) for venous T1WI; and 66.4 % (64.8-50.9), 48.4 % (46-50.9), and 0.610 (0.586-0.626) for CECT, respectively. The combination of T2WI with CECT enhanced diagnostic performance compared to T2WI alone, achieving sensitivity, specificity and AUC of 81.4 % (80-80.3), 78.1 % (75.9-80.2), and 0.911 (0.902-0.920) (p = 0.001). The MRI radiomics model outperformed the CT radiomics model in diagnosing AP, and the combination of MRI with CECT showed better performance than either single model. Translating radiomics into clinical practice may improve detection of AP, particularly with MRI radiomics.
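The AUC values reported above are, equivalently, Mann-Whitney probabilities: the chance that a randomly chosen AP case receives a higher radiomics score than a randomly chosen control. A minimal sketch of that equivalence follows; the function name and scores are illustrative assumptions, not the study's evaluation code.

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """Empirical ROC AUC via the Mann-Whitney U statistic.

    Counts, over all (case, control) pairs, how often the case scores
    higher than the control; ties count half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.834 for the T2WI model, read this way, means the model ranks a random AP patient above a random control roughly 83 % of the time.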

Automated neuroradiological support systems for multiple cerebrovascular disease markers - A systematic review and meta-analysis.

Phitidis J, O'Neil AQ, Whiteley WN, Alex B, Wardlaw JM, Bernabeu MO, Hernández MV

PubMed · Jun 1 2025
Cerebrovascular diseases (CVD) can lead to stroke and dementia. Stroke is the second leading cause of death worldwide, and dementia incidence is increasing year on year. Several markers of CVD are visible on brain imaging, including: white matter hyperintensities (WMH), acute and chronic ischaemic stroke lesions (ISL), lacunes, enlarged perivascular spaces (PVS), acute and chronic haemorrhagic lesions, and cerebral microbleeds (CMB). Brain atrophy also occurs in CVD. These markers are important for patient management and intervention, since they indicate elevated risk of future stroke and dementia. We systematically reviewed automated systems designed to support radiologists reporting on these CVD imaging findings. We considered commercially available software and research publications which identify at least two CVD markers. In total, we included 29 commercial products and 13 research publications. Two distinct types of commercial support system were available: those which identify acute stroke lesions (haemorrhagic and ischaemic) from computed tomography (CT) scans, mainly for the purpose of patient triage; and those which measure WMH and atrophy regionally and longitudinally. In research, WMH and ISL were the markers most frequently analysed together, from magnetic resonance imaging (MRI) scans; lacunes and PVS were each targeted only twice, and CMB only once. For stroke, commercially available systems largely support the emergency setting, whilst research systems also consider follow-up and routine scans. The systems that quantify WMH and atrophy are focused on neurodegenerative disease support, where these CVD markers are also of significance. There are currently no openly validated systems, commercial or in research, performing a comprehensive joint analysis of all CVD markers (WMH, ISL, lacunes, PVS, haemorrhagic lesions, CMB, and atrophy).

Beyond traditional orthopaedic data analysis: AI, multimodal models and continuous monitoring.

Oettl FC, Zsidai B, Oeding JF, Hirschmann MT, Feldt R, Tischer T, Samuelsson K

PubMed · Jun 1 2025
Multimodal artificial intelligence (AI) has the potential to revolutionise healthcare by enabling the simultaneous processing and integration of various data types, including medical imaging, electronic health records, genomic information and real-time data. This review explores the current applications and future potential of multimodal AI across healthcare, with a particular focus on orthopaedic surgery. In presurgical planning, multimodal AI has demonstrated significant improvements in diagnostic accuracy and risk prediction, with studies reporting areas under the receiver operating characteristic curve indicating good to excellent performance across various orthopaedic conditions. Intraoperative applications leverage advanced imaging and tracking technologies to enhance surgical precision, while postoperative care has been advanced through continuous patient monitoring and early detection of complications. Despite these advances, significant challenges remain in data integration, standardisation and privacy protection. Technical solutions such as federated learning (which keeps model training decentralised) and edge computing (which moves data analysis on site or close to it, rather than to multipurpose datacenters) are being developed to address these concerns while maintaining compliance with regulatory frameworks. As this field continues to evolve, the integration of multimodal AI promises to advance personalised medicine, improve patient outcomes, and transform healthcare delivery through more comprehensive and nuanced analysis of patient data. Level of Evidence: Level V.
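Federated learning, mentioned above as a privacy-preserving solution, centres on aggregating locally trained model parameters without moving patient data off site. A toy FedAvg sketch follows; the function name, shapes, and inputs are assumptions for illustration, not from the review.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging (FedAvg) of one round of client updates.

    Each hospital trains locally and sends only its parameter vector;
    the server averages them, weighted by local sample counts, so raw
    patient data never leaves the site."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()
```

A full system repeats this round many times (broadcast averaged weights, retrain locally, re-aggregate), but the privacy property rests entirely on this aggregation step.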

PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer.

Ma B, Guo J, Dijk LVV, Langendijk JA, Ooijen PMAV, Both S, Sijtsema NM

PubMed · Jun 1 2025
In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA) deep learning models were introduced for predicting the recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with optimized numbers of layers and image-fusion strategies, could achieve performance comparable to the SOTA models. The HECKTOR 2022 dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers. It was randomly divided into a training set (n = 369) and an independent test set (n = 120). Furthermore, an additional dataset of 400 OPC patients, who underwent (chemo)radiotherapy at our center, was employed for external testing. Each patient's data included pre-treatment CT and PET scans, manually generated gross tumour volume (GTV) contours for primary tumors and lymph nodes, and RFP information. The present study compared the performance of DenseNet against three SOTA models developed on the HECKTOR 2022 dataset. When inputting CT, PET and GTV using the early-fusion approach (treating them as different channels of the input), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, comparable with the SOTA models. Notably, removing the GTV from the input data yielded the same internal test C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Furthermore, compared to PET-only models, late fusion (concatenation of extracted features) of CT and PET gave DenseNet81 superior C-index values of 0.68 and 0.66 in the internal and external test sets, whereas early fusion was better only in the internal test set. The basic DenseNet architecture with 81 layers demonstrated predictive performance on par with SOTA models featuring more intricate architectures in the internal test set, and better performance in the external test. The late fusion of CT and PET imaging data yielded superior performance in the external test.
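The C-index used throughout this abstract can be computed with a simple pairwise count. Below is a minimal sketch of Harrell's concordance index with illustrative inputs; it is not the HECKTOR evaluation code, and ties in event times are ignored for brevity.

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for a time-to-event prediction (e.g. RFP).

    Over all usable pairs (the patient with the shorter time had an
    observed event), count how often the model assigns the higher risk
    to the patient who failed earlier; risk ties count half."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:  # i failed first: comparable pair
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / usable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the 0.69 internal / 0.63 external values above sit in the moderately discriminative range typical for outcome prediction from imaging.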

The integration of artificial intelligence into clinical medicine: Trends, challenges, and future directions.

Aravazhi PS, Gunasekaran P, Benjamin NZY, Thai A, Chandrasekar KK, Kolanu ND, Prajjwal P, Tekuru Y, Brito LV, Inban P

PubMed · Jun 1 2025
Artificial intelligence (AI) has emerged as a transformative force in clinical medicine, changing how patients are diagnosed, treated, and managed. Machine learning (ML), deep learning (DL), and natural language processing (NLP) algorithms can analyze large, complex medical datasets with unprecedented accuracy and speed, improving diagnostic precision, treatment personalization, and patient care outcomes. For example, convolutional neural networks (CNNs) have dramatically improved the accuracy of medical imaging diagnoses, and NLP algorithms have greatly helped extract insights from unstructured data, including electronic health records (EHRs). However, numerous challenges still face AI integration into clinical workflows, including data privacy, algorithmic bias, ethical dilemmas, and the limited interpretability of "black-box" AI models. These barriers have thus far prevented the widespread application of AI in health care, and its trends, obstacles, and future implications need to be systematically explored. The purpose of this paper is therefore to assess current trends in AI applications in clinical medicine, identify the obstacles hindering adoption, and outline possible future directions. This research synthesizes evidence from peer-reviewed articles to provide a more comprehensive understanding of the role AI plays in advancing clinical practice, improving patient outcomes, and enhancing decision-making. A systematic review was conducted according to the PRISMA guidelines to explore the integration of artificial intelligence in clinical medicine, including trends, challenges, and future directions. PubMed, Cochrane Library, Web of Science, and Scopus databases were searched for peer-reviewed articles from 2014 to 2024 with keywords such as "Artificial Intelligence in Medicine," "AI in Clinical Practice," "Machine Learning in Healthcare," and "Ethical Implications of AI in Medicine." Studies focusing on AI applications in diagnostics, treatment planning, and patient care that reported measurable clinical outcomes were included; non-clinical AI applications and articles published before 2014 were excluded. Selected studies were screened for relevance, and their quality was critically appraised so that data could be synthesized reliably and rigorously. This systematic review includes the findings of 8 studies that point to the transformational role of AI in clinical medicine. AI tools, such as CNNs, achieved higher diagnostic accuracy than traditional methods, particularly in radiology and pathology. Predictive models efficiently supported risk stratification, early disease detection, and personalized medicine. Despite these improvements, significant hurdles, including data privacy, algorithmic bias, and clinician resistance to the "black-box" nature of AI, have yet to be surmounted. Explainable AI (XAI) has emerged as an attractive solution that promises to enhance interpretability and trust. Overall, AI appears promising for enhancing diagnostics, personalizing treatment, and streamlining clinical workflows by addressing systemic inefficiencies. Overcoming obstacles such as data privacy concerns, the danger of algorithmic bias, and difficulties with interpretability may pave the way for broader use, facilitate improvements in patient outcomes, and transform clinical workflows to bring sustainability to healthcare delivery.

Integrating finite element analysis and physics-informed neural networks for biomechanical modeling of the human lumbar spine.

Ahmadi M, Biswas D, Paul R, Lin M, Tang Y, Cheema TS, Engeberg ED, Hashemi J, Vrionis FD

PubMed · Jun 1 2025
Comprehending the biomechanical characteristics of the human lumbar spine is crucial for managing and preventing spinal disorders. Precise material properties derived from patient-specific CT scans are essential for simulations to accurately mimic real-life scenarios, which is invaluable in creating effective surgical plans. The integration of Finite Element Analysis (FEA) with Physics-Informed Neural Networks (PINNs) offers significant clinical benefits by automating lumbar spine segmentation and meshing. We developed a FEA model of the lumbar spine incorporating detailed anatomical and material properties derived from high-quality CT and MRI scans. The model includes vertebrae and intervertebral discs, segmented and meshed using advanced imaging and computational techniques. PINNs were implemented to integrate physical laws directly into the neural network training process, ensuring that the predictions of material properties adhered to the governing equations of mechanics. The model achieved an accuracy of 94.30% in predicting material properties such as Young's modulus (14.88 GPa for cortical bone and 1.23 MPa for intervertebral discs), Poisson's ratio (0.25 and 0.47, respectively), bulk modulus (9.87 GPa and 6.56 MPa, respectively), and shear modulus (5.96 GPa and 0.42 MPa, respectively). We developed a lumbar spine FEA model using anatomical and material properties from CT and MRI scans. Vertebrae and discs were segmented and meshed with advanced imaging techniques, while PINNs ensured material predictions followed mechanical laws. The integration of FEA and PINNs allows for accurate, automated prediction of material properties and mechanical behaviors of the lumbar spine, significantly reducing manual input and enhancing reliability. This approach ensures dependable biomechanical simulations and supports the development of personalized treatment plans and surgical strategies, ultimately improving clinical outcomes for spinal disorders. This method improves surgical planning and outcomes, contributing to better patient care and recovery in spinal disorders.
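The FEA half of the hybrid FEA-PINN approach rests on assembling a global stiffness matrix and solving K u = f. A deliberately tiny 1D axial-bar sketch of that workflow follows; it is not the authors' lumbar-spine model, and all parameter values are illustrative.

```python
import numpy as np

def axial_bar_displacements(n_elems, length, E, area, tip_force):
    """Minimal 1D finite-element solve: a uniform axial bar clamped at one
    end and loaded at the other.

    Assembles the global stiffness matrix from identical 2-node elements
    and solves K u = f. The analytical tip displacement is F L / (E A),
    which the discrete solution reproduces exactly for this problem."""
    le = length / n_elems
    k = E * area / le * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    K = np.zeros((n_elems + 1, n_elems + 1))
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += k                               # assemble overlapping blocks
    f = np.zeros(n_elems + 1)
    f[-1] = tip_force                                          # point load at the free end
    u = np.zeros(n_elems + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])                  # clamp node 0 (u = 0)
    return u
```

In the paper's setting, the same assembly-and-solve structure is applied to 3D vertebra and disc meshes, with the PINN supplying the per-tissue material constants (E, Poisson's ratio) that parameterize each element's stiffness.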

Advanced image preprocessing and context-aware spatial decomposition for enhanced breast cancer segmentation.

Kalpana G, Deepa N, Dhinakaran D

PubMed · Jun 1 2025
Breast cancer segmentation in medical imaging is hampered by noise, contrast variation, and low resolution, which make malignant sites difficult to distinguish. In this paper, we propose a new solution that integrates AIPT (Advanced Image Preprocessing Techniques) with CASDN (Context-Aware Spatial Decomposition Network) to overcome these problems. The preprocessing pipeline applies a suite of methods, including Adaptive Thresholding, Hierarchical Contrast Normalization, Contextual Feature Augmentation, Multi-Scale Region Enhancement, and Dynamic Histogram Equalization, to improve image quality. These methods smooth edges, equalize contrast, and embed contextual details, effectively suppressing noise and yielding clearer, less distorted images. Experimental outcomes demonstrate the approach's effectiveness, delivering a Dice coefficient of 0.89, an IoU of 0.85, and a Hausdorff distance of 5.2, indicating enhanced capability in segmenting significant tumor margins over other techniques. Furthermore, the improved preprocessing pipeline benefits downstream classification: convolutional neural networks achieve a classification accuracy of 85.3 % and an AUC-ROC of 0.90, a significant improvement over conventional techniques.
• Enhanced segmentation accuracy with advanced preprocessing and CASDN, achieving superior performance metrics.
• Robust multi-modality compatibility, ensuring effectiveness across mammograms, ultrasounds, and MRI scans.
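The Dice coefficient and IoU reported above are straightforward overlap ratios between predicted and ground-truth binary masks. A minimal sketch follows; the function name and toy masks are assumptions for illustration, not the paper's evaluation code.

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Overlap metrics for binary segmentation masks.

    Dice = 2|P ∩ T| / (|P| + |T|); IoU (Jaccard) = |P ∩ T| / |P ∪ T|.
    Dice is always >= IoU for non-trivial overlap, which is why the
    paper's Dice of 0.89 pairs with a lower IoU of 0.85."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return float(dice), float(iou)
```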
