
Enhancing pancreatic cancer detection in CT images through secretary wolf bird optimization and deep learning.

Mekala S, S PK

PubMed | Jun 5, 2025
The pancreas is an abdominal gland that produces hormones and aids digestion. The irregular growth of tissue in the pancreas is termed pancreatic cancer. Early identification of pancreatic tumors is important for improving survival rates and providing appropriate treatment. Accordingly, an efficient Secretary Wolf Bird Optimization (SeWBO)_Efficient DenseNet is presented for pancreatic tumor detection in Computed Tomography (CT) scans. First, the input pancreatic CT image is acquired from a database and preprocessed with a bilateral filter. The lesion is then segmented using the Parallel Reverse Attention Network (PraNet), whose hyperparameters are tuned by the proposed SeWBO. The SeWBO is designed by combining Wolf Bird Optimization (WBO) with the Secretary Bird Optimization Algorithm (SBOA). Next, features such as the Complete Local Binary Pattern (CLBP) with Discrete Wavelet Transformation (DWT), statistical features, and Shape Local Binary Texture (SLBT) are extracted. Finally, pancreatic tumor detection is performed by SeWBO_Efficient DenseNet, where Efficient DenseNet is formed by combining EfficientNet and DenseNet. The proposed SeWBO_Efficient DenseNet achieves a True Negative Rate (TNR) of 93.596%, an accuracy of 94.635%, and a True Positive Rate (TPR) of 92.579%.
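As a rough illustration of the preprocessing and texture-feature steps described above, the sketch below applies a bilateral filter to a CT slice and extracts a local binary pattern histogram. It uses the standard LBP from scikit-image as a stand-in for the paper's CLBP variant, and all parameter values are illustrative assumptions rather than the authors' settings.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def preprocess_ct_slice(slice_u8: np.ndarray) -> np.ndarray:
    """Denoise a CT slice while preserving edges (bilateral filter).
    d, sigmaColor, and sigmaSpace are illustrative values."""
    return cv2.bilateralFilter(slice_u8, d=9, sigmaColor=75, sigmaSpace=75)

def lbp_histogram(region: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Uniform LBP histogram as a simple texture descriptor for a lesion ROI
    (standard LBP, not the paper's CLBP variant)."""
    codes = local_binary_pattern(region, P=points, R=radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist
```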

Noise-induced self-supervised hybrid UNet transformer for ischemic stroke segmentation with limited data annotations.

Soh WK, Rajapakse JC

PubMed | Jun 5, 2025
We extend the Hybrid UNet Transformer (HUT) foundation model, which combines the strengths of CNN and Transformer architectures, with a noisy self-supervised approach and demonstrate it on an ischemic stroke lesion segmentation task. We introduce a self-supervised approach using a noise anchor and show that it can outperform a supervised approach when the amount of annotated data is limited. We supplement our pre-training with an additional unannotated CT perfusion dataset to validate the approach. The noisy self-supervised HUT (HUT-NSS) outperforms its supervised counterpart by a margin of 2.4% in Dice score. On the CT perfusion scans of the Ischemic Stroke Lesion Segmentation (ISLES2018) dataset, HUT-NSS gained, on average, a further 7.2% in Dice score and 28.1% in Hausdorff distance over the state-of-the-art network USSLNet. With limited annotations, HUT-NSS gained 7.87% in Dice score over USSLNet when 50% of the annotated data were used for training, 7.47% when 10% were used, and 5.34% when 1% were used. The code is available at https://github.com/vicsohntu/HUTNSS_CT.
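For reference, the Dice score behind the reported margins is the standard overlap metric for segmentation masks; a minimal implementation for binary masks (not the authors' code) looks like this:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```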

GNNs surpass transformers in tumor medical image segmentation.

Xiao H, Yang G, Li Z, Yi C

PubMed | Jun 5, 2025
To assess the suitability of Transformer-based architectures for medical image segmentation and investigate the potential advantages of Graph Neural Networks (GNNs) in this domain, we analyze the limitations of the Transformer, which models medical images as sequences of image patches and thereby limits its flexibility in capturing complex, irregular tumor structures. To address this, we propose U-GNN, a pure GNN-based U-shaped architecture for medical image segmentation. U-GNN retains the U-Net-inspired inductive bias while leveraging the topological modeling capabilities of GNNs. The architecture consists of Vision GNN blocks stacked into a U-shaped structure. We further introduce the concept of multi-order similarity and propose a zero-computation-cost approach to incorporating higher-order similarity in graph construction. Each Vision GNN block partitions the image into patch nodes, constructs multi-order similarity graphs, and aggregates node features via multi-order node information aggregation. Experiments on multi-organ and cardiac segmentation datasets show that U-GNN significantly outperforms existing CNN- and Transformer-based models, achieving a 6% improvement in Dice Similarity Coefficient (DSC) and an 18% reduction in Hausdorff Distance (HD) over state-of-the-art methods. The source code will be released upon paper acceptance.
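To make the graph-construction idea concrete, here is a minimal sketch of building a first-order k-nearest-neighbour similarity graph over patch features and deriving a naive second-order (neighbours-of-neighbours) graph. This is an assumption-laden stand-in for U-GNN's multi-order construction, which the paper claims to achieve at zero extra computational cost; the naive two-hop product below does incur compute.

```python
import numpy as np

def knn_adjacency(features: np.ndarray, k: int = 8) -> np.ndarray:
    """First-order graph: connect each patch node to its k nearest
    neighbours in feature space (cosine similarity)."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)            # exclude self-loops
    idx = np.argsort(-sim, axis=1)[:, :k]     # top-k neighbours per node
    adj = np.zeros(sim.shape, dtype=bool)
    np.put_along_axis(adj, idx, True, axis=1)
    return adj

def second_order(adj: np.ndarray) -> np.ndarray:
    """Naive second-order similarity: nodes reachable in two hops (A @ A),
    merged with the first-order edges."""
    two_hop = (adj.astype(int) @ adj.astype(int)) > 0
    return two_hop | adj
```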

Matrix completion-informed deep unfolded equilibrium models for self-supervised k-space interpolation in MRI.

Luo C, Wang H, Liu Y, Xie T, Chen G, Jin Q, Liang D, Cui ZX

PubMed | Jun 5, 2025
Self-supervised methods for magnetic resonance imaging (MRI) reconstruction have garnered significant interest due to their ability to address the challenges of slow data acquisition and scarcity of fully sampled labels. Current regularization-based self-supervised techniques merge the theoretical foundations of regularization with the representational strengths of deep learning and enable effective reconstruction under high acceleration rates, yet they often lack interpretability and firm theoretical underpinnings. In this paper, we introduce a novel self-supervised approach that provides stringent theoretical guarantees and interpretable networks while circumventing the need for fully sampled labels. Our method exploits the intrinsic relationship between convolutional neural networks and the null space within structured low-rank models, effectively integrating network parameters into an iterative reconstruction process. The network learns the gradient-descent steps of the projected gradient descent algorithm without altering its convergence properties, yielding a fully interpretable unfolded model. We design a non-expansive mapping for the network architecture, ensuring convergence to a fixed point. This well-defined framework enables complete reconstruction of missing k-space data grounded in matrix completion theory, independent of fully sampled labels. Qualitative and quantitative experiments on multi-coil MRI reconstruction demonstrate the efficacy of our self-supervised approach, showing marked improvements over existing self-supervised and traditional regularization methods and achieving results comparable to supervised learning in selected scenarios. This work advances the state of the art in MRI reconstruction while enhancing interpretability in deep learning applications for medical imaging.
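A toy, single-coil version of the projected-gradient iteration with hard data consistency might look as follows. It omits coil sensitivities, the structured low-rank model, and the learned non-expansive network (replaced here by a generic denoiser argument), so it is a sketch of the general idea rather than the authors' method.

```python
import numpy as np

def pgd_kspace_interpolation(y, mask, denoise, n_iter=50):
    """Alternate an image-space denoising step (stand-in for the learned
    network) with hard data consistency at the sampled k-space locations.
    y: measured k-space (zeros where unsampled); mask: boolean sampling mask."""
    x = np.fft.ifft2(y)                   # zero-filled initial estimate
    for _ in range(n_iter):
        x = denoise(x)                    # learned / regularizing step
        k = np.fft.fft2(x)
        k[mask] = y[mask]                 # project onto the measured data
        x = np.fft.ifft2(k)
    return x
```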

Prenatal detection of congenital heart defects using the deep learning-based image and video analysis: protocol for Clinical Artificial Intelligence in Fetal Echocardiography (CAIFE), an international multicentre multidisciplinary study.

Patey O, Hernandez-Cruz N, D'Alberti E, Salovic B, Noble JA, Papageorghiou AT

PubMed | Jun 5, 2025
Congenital heart defect (CHD) is a significant, rapidly emerging global problem in child health and a leading cause of neonatal and childhood death. Prenatal detection of CHDs with the help of ultrasound allows better perinatal management of such pregnancies, leading to reduced neonatal mortality, morbidity and developmental complications. However, reported fetal heart problem detection rates vary widely, from 34% to 85%, with some low- and middle-income countries detecting as few as 9.3% of cases before birth. Research has shown that deep learning-based or more general artificial intelligence (AI) models can support the detection of fetal CHDs more rapidly than humans performing ultrasound scans. Progress in this AI-based research depends on the availability of large, well-curated and diverse ultrasound images and videos of normal and abnormal fetal hearts. Currently, CHD detection based on AI models is not accurate enough for practical clinical use, in part due to the scarcity of ultrasound data available for machine learning (CHDs are rare and heterogeneous), the retrospective nature of published studies, the lack of multicentre and multidisciplinary collaboration, and the use of mostly standard-plane still images of the fetal heart in AI models. Our aim is to develop AI models that could support clinicians in detecting fetal CHDs in real time, particularly in nonspecialist or low-resource settings where fetal echocardiography expertise is not readily available. We have designed the Clinical Artificial Intelligence Fetal Echocardiography (CAIFE) study as an international multicentre multidisciplinary collaboration led by a clinical and an engineering team at the University of Oxford. This study involves five hospital sites for data collection across two countries (Oxford, UK (n=1), London, UK (n=3) and Southport, Australia (n=1)). We plan to curate 14 000 retrospective ultrasound scans of fetuses with normal hearts (n=13 000) and fetuses with CHDs (n=1000), as well as 2400 prospective ultrasound cardiac scans, including the proposed research-specific CAIFE 10 s video sweeps, from fetuses with normal hearts (n=2000) and fetuses diagnosed with major CHDs (n=400). This gives a total of 16 400 retrospective and prospective ultrasound scans from the participating hospital sites. We will build, train and validate computational models capable of differentiating between normal fetal hearts and those diagnosed with CHDs, and of recognising specific types of CHDs. Data will be analysed using statistical metrics, namely sensitivity, specificity and accuracy, including positive and negative predictive values for each outcome, compared with manual assessment. We will disseminate the findings through regional, national and international conferences and through peer-reviewed journals. The study was approved by the Health Research Authority, Care Research Wales and the Research Ethics Committee (Ref: 23/EM/0023; IRAS Project ID: 317510) on 8 March 2023. All collaborating hospitals have obtained the local trust research and development approvals.
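The statistical endpoints named in the protocol reduce to simple confusion-matrix arithmetic; a minimal helper (not part of the study itself) is:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, accuracy, PPV and NPV from confusion counts,
    the metrics the protocol plans to report against manual assessment."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }
```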

Enhancing image quality in fast neutron-based range verification of proton therapy using a deep learning-based prior in LM-MAP-EM reconstruction.

Setterdahl LM, Skjerdal K, Ratliff HN, Ytre-Hauge KS, Lionheart WRB, Holman S, Pettersen HES, Blangiardi F, Lathouwers D, Meric I

PubMed | Jun 5, 2025
This study investigates the use of list-mode (LM) maximum a posteriori (MAP) expectation maximization (EM) incorporating prior information predicted by a convolutional neural network for image reconstruction in fast neutron (FN)-based proton therapy range verification.
Approach. A conditional generative adversarial network (pix2pix) was trained on progressively noisier data, where detector resolution effects were introduced gradually to simulate realistic conditions. FN data were generated using Monte Carlo simulations of an 85 MeV proton pencil beam in a computed tomography (CT)-based lung cancer patient model, with range shifts emulating weight gain and loss. The network was trained to estimate the expected two-dimensional (2D) ground-truth FN production distribution from simple back-projection images. Performance was evaluated using the mean squared error (MSE), the structural similarity index (SSIM), and the correlation between shifts in predicted distributions and true range shifts.
Main results. Our results show that pix2pix performs well on noise-free data but degrades significantly when detector resolution effects are introduced. Among the LM-MAP-EM approaches tested, incorporating a mean prior estimate into the reconstruction improved performance: LM-MAP-EM with a mean prior estimate outperformed naïve LM maximum-likelihood EM (LM-MLEM) and conventional LM-MAP-EM with a smoothing quadratic energy function in terms of SSIM.
Significance. The findings suggest that deep learning techniques can enhance iterative reconstruction for range verification in proton therapy. However, the model's effectiveness is highly dependent on data quality, limiting its robustness in high-noise scenarios.
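For intuition, a sinogram-binned (rather than list-mode) one-step-late MAP-EM update with a Gaussian prior centred on a network-predicted mean image could be sketched as below. The system matrix A, the prior weight beta, and the Gaussian prior form are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def osl_map_em_step(x, A, y, prior_mean, beta=0.05, eps=1e-12):
    """One-step-late MAP-EM update (Green's OSL) with a Gaussian prior
    around prior_mean (e.g. a network-predicted mean image).
    A: system matrix (detectors x voxels); y: measured counts."""
    ratio = A.T @ (y / (A @ x + eps))                    # EM back-projected ratio
    denom = A.T @ np.ones_like(y) + beta * (x - prior_mean)
    return x * ratio / np.maximum(denom, eps)            # keep update positive
```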

Preliminary analysis of AI-based thyroid nodule evaluation in a non-subspecialist endocrinology setting.

Fernández Velasco P, Estévez Asensio L, Torres B, Ortolá A, Gómez Hoyos E, Delgado E, de Luís D, Díaz Soto G

PubMed | Jun 5, 2025
Thyroid nodules are commonly evaluated using ultrasound-based risk stratification systems, which rely on subjective descriptors. Artificial intelligence (AI) may improve assessment, but its effectiveness in non-subspecialist settings is unclear. This study evaluated the impact of an AI-based decision support system (AI-DSS) on thyroid nodule ultrasound assessments by general endocrinologists (GE) without subspecialty thyroid imaging training. A prospective cohort study was conducted on 80 patients undergoing thyroid ultrasound in GE outpatient clinics. Thyroid ultrasound was performed by GE based on clinical judgment as part of routine care. Images were retrospectively analyzed using an AI-DSS (Koios DS), independently of clinician assessments. AI-DSS results were compared with initial GE evaluations and, when patients were referred, with expert evaluations at a subspecialized thyroid nodule clinic (TNC). Agreement in ultrasound features, risk classification by the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) and American Thyroid Association guidelines, and referral recommendations was assessed. AI-DSS differed notably from GE, particularly in assessing nodule composition (solid: 80% vs. 36%, p < 0.01), echogenicity (hypoechoic: 52% vs. 16%, p < 0.01), and echogenic foci (microcalcifications: 10.7% vs. 1.3%, p < 0.05). AI-DSS classification led to a higher referral rate than GE (37.3% vs. 30.7%, not statistically significant). Agreement between AI-DSS and GE in ACR TI-RADS scoring was moderate (r = 0.337; p < 0.001) but improved when comparing GE with AI-DSS and with the TNC subspecialist (r = 0.465; p < 0.05 and r = 0.607; p < 0.05, respectively). In a non-subspecialist setting, non-adjunct use of the AI-DSS did not significantly improve risk stratification or reduce hypothetical referrals. The system tended to overestimate risk, potentially leading to unnecessary procedures. Further optimization is required for AI to function effectively in low-prevalence environments.
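The reported agreement values are plain correlation coefficients; as a hypothetical example (made-up scores, not study data), rater agreement on TI-RADS point totals can be computed as:

```python
from scipy.stats import pearsonr

# Hypothetical ACR TI-RADS point totals for ten nodules, as scored by a
# general endocrinologist (ge) and by the AI decision-support system (ai).
ge = [2, 4, 3, 7, 5, 1, 4, 6, 3, 2]
ai = [3, 5, 3, 8, 6, 2, 4, 7, 4, 3]

r, p = pearsonr(ge, ai)
print(f"agreement: r = {r:.3f}, p = {p:.4f}")
```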

Development of a deep learning model for measuring sagittal parameters on cervical spine X-ray.

Wang S, Li K, Zhang S, Zhang D, Hao Y, Zhou Y, Wang C, Zhao H, Ma Y, Zhao D, Chen J, Li X, Wang H, Li Z, Shi J, Wang X

PubMed | Jun 5, 2025
To develop a deep learning model that automatically measures curvature-related sagittal parameters on lateral cervical spine X-ray images, this retrospective study collected 700 images from three hospitals: 500 for training, 100 for internal testing, and 100 for external testing. Six parameters and 34 landmarks were measured and labeled by two doctors and averaged to form the gold standard. A convolutional neural network (CNN) model was trained on the 500 training images and tested on the 200 test images. Statistical analysis was used to evaluate labeling differences and model performance. The percentages of landmark-distance differences within 4 mm were 96.90% (Dr. A vs. Dr. B), 98.47% (Dr. A vs. model), and 97.31% (Dr. B vs. model); within 3 mm they were 94.88%, 96.43%, and 94.16%, respectively. The model's mean landmark-labeling difference was 1.17 ± 1.14 mm. The mean absolute error (MAE) of the model for the Borden method, cervical curvature index (CCI), vertebral centroid measurement of cervical lordosis (CCL), and the C0-C7, C1-C7, and C2-C7 Cobb angles in the test sets was 1.67 mm, 2.01%, 3.22°, 2.37°, 2.49°, and 2.81°, respectively; the symmetric mean absolute percentage error (SMAPE) was 20.06%, 21.68%, 20.02%, 6.68%, 5.28%, and 20.46%, respectively. The six cervical sagittal parameters measured by the model also agreed well with the gold standard (intraclass correlation coefficient 0.983; p < 0.001). Our deep learning model recognized cervical spine landmarks with high accuracy and automatically measured the related parameters, which can help radiologists improve diagnostic efficiency.
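As an illustration of how a Cobb angle follows from predicted landmarks, the helper below computes the angle between two endplate lines, each defined by a pair of landmark points. The landmark layout is an assumption, since the paper does not publish its measurement code.

```python
import numpy as np

def cobb_angle(upper: np.ndarray, lower: np.ndarray) -> float:
    """Cobb angle (degrees) between two endplate lines, each given as a pair
    of landmark points [[x1, y1], [x2, y2]] — e.g. the endplate lines of C2
    and C7 for the C2-C7 Cobb angle."""
    v1 = upper[1] - upper[0]
    v2 = lower[1] - lower[0]
    cos = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```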

CT-based radiogenomic analysis to predict high-risk colon cancer (ATTRACT): a multicentric trial.

Caruso D, Polici M, Zerunian M, Monterubbiano A, Tarallo M, Pilozzi E, Belloni L, Scafetta G, Valanzuolo D, Pugliese D, De Santis D, Vecchione A, Mercantini P, Iannicelli E, Fiori E, Laghi A

PubMed | Jun 5, 2025
Clinical staging on CT has several biases, so a radiogenomics approach could be proposed. The study aimed to test the performance of a radiogenomics approach in identifying high-risk colon cancer. ATTRACT is a multicentric trial registered on ClinicalTrials.gov (NCT06108310). Three hundred non-metastatic colon cancer patients were retrospectively enrolled and divided into high-risk and no-risk groups according to pathological staging. Radiological evaluations were performed by two abdominal radiologists. Genomic data were available for 151 patients. Baseline CT scans were used for the radiological assessment and for 3D cancer segmentation; one expert radiologist performed the volumetric segmentations on portal-phase baseline CT scans using open-source software (3DSlicer v4.10.2). The classical LASSO method, implemented with a machine-learning library, was used to select the optimal features and build Model 1 (clinical-radiological plus radiomic features, 300 patients) and Model 2 (Model 1 plus genomics, 151 patients). The performance of the clinical-radiological interpretation was assessed in terms of the area under the curve (AUC), sensitivity, specificity, and accuracy, and the average performance of Models 1 and 2 was also calculated. In total, 262/300 patients were classified as high-risk and 38/300 as no-risk. Clinical-radiological interpretation by the two radiologists achieved AUCs of 0.58-0.82 (95% CI: 0.52-0.63 and 0.76-0.85, p < 0.001, respectively), sensitivity of 67.9-93.8%, specificity of 47.4-68.4%, and accuracy of 65.3-90.7%. Model 1 yielded an AUC of 0.74 (95% CI: 0.61-0.88, p < 0.005), sensitivity of 86%, specificity of 48%, and accuracy of 81%. Model 2 reached an AUC of 0.84 (95% CI: 0.68-0.99, p < 0.005), sensitivity of 88%, specificity of 63%, and accuracy of 84%. The radiogenomics model outperformed radiological interpretation in identifying high-risk colon cancer. Question Can this radiogenomic model identify high-risk stage II and III colon cancer in a preoperative clinical setting? Findings The radiogenomics model outperformed both the radiomics model and radiological interpretation, reducing the risk of improper staging and incorrect treatment options. Clinical relevance The radiogenomics model proved superior to radiological interpretation and radiomics in identifying high-risk colon cancer and could therefore be promising for stratifying high-risk and low-risk patients.
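A LASSO-style selection step of the kind described can be sketched with scikit-learn's L1-penalised logistic regression; the exact library and hyperparameters used in ATTRACT are not stated, so these are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lasso_select(X: np.ndarray, y: np.ndarray, C: float = 0.1) -> np.ndarray:
    """L1-penalised logistic regression as a LASSO-style feature selector:
    features with non-zero coefficients are retained for the risk model.
    X: (patients x features) radiomic/clinical matrix; y: high-risk labels."""
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=C),
    )
    model.fit(X, y)
    coefs = model.named_steps["logisticregression"].coef_.ravel()
    return np.flatnonzero(coefs)          # indices of the selected features
```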

Current State of Artificial Intelligence Model Development in Obstetrics.

Devoe LD, Muhanna M, Maher J, Evans MI, Klein-Seetharaman J

PubMed | Jun 5, 2025
Publications on artificial intelligence (AI) applications have dramatically increased for most medical specialties, including obstetrics. Here, we review the most recent pertinent publications on AI programs in obstetrics, describe trends in AI applications for specific obstetric problems, and assess AI's possible effects on obstetric care. Searches were performed in PubMed (MeSH), MEDLINE, Ovid, ClinicalTrials.gov, Google Scholar, and Web of Science using a combination of keywords and text words related to "obstetrics," "pregnancy," "artificial intelligence," "machine learning," "deep learning," and "neural networks," for articles published between June 1, 2019, and May 31, 2024. A total of 1,768 articles met at least one search criterion. After eliminating reviews, duplicates, retractions, inactive research protocols, unspecified AI programs, and non-English-language articles, 207 publications remained for further review. Most studies were conducted outside of the United States, were published in nonobstetric journals, and focused on risk prediction. Study population sizes ranged widely from 10 to 953,909, and model performance also varied widely. Evidence quality was assessed by the description of model construction, predictive accuracy, and whether validation had been performed. Most studies involved patient groups differing considerably from U.S. populations, rendering their generalizability to U.S. patients uncertain. AI ultrasound applications focused on imaging issues are those most likely to influence current obstetric care. Other promising AI models include early risk screening for spontaneous preterm birth, preeclampsia, and gestational diabetes mellitus. The rate at which AI studies are being performed virtually guarantees that numerous applications will eventually be introduced into future U.S. obstetric practice. Very few of the models have been deployed in obstetric practice, however, and more high-quality studies with high predictive accuracy and generalizability are needed. Assuming these conditions are met, there will be an urgent need to educate medical students, postgraduate trainees, and practicing physicians in how to effectively and safely implement this technology.