Page 67 of 3993982 results

Harnessing deep learning to optimize induction chemotherapy choices in nasopharyngeal carcinoma.

Chen ZH, Han X, Lin L, Lin GY, Li B, Kou J, Wu CF, Ai XL, Zhou GQ, Gao MY, Lu LJ, Sun Y

PubMed · Jul 28 2025
Currently, there is no guidance for the personalized choice of induction chemotherapy (IC) regimen (TPF: docetaxel + cisplatin + 5-Fu; or GP: gemcitabine + cisplatin) in locoregionally advanced nasopharyngeal carcinoma (LA-NPC). This study aimed to develop deep learning models for IC response prediction in LA-NPC. For 1438 LA-NPC patients, pretreatment magnetic resonance imaging (MRI) scans and complete biological response (cBR) status after 3 cycles of IC were collected from two centers. All models were trained on 969 patients (TPF: 548, GP: 421), internally validated on 243 patients (TPF: 138, GP: 105), and then tested on an internal dataset of 226 patients (TPF: 125, GP: 101). MRI models for the TPF and GP cohorts were constructed to predict cBR from MRI using radiomics and a graph convolutional network (GCN); MRI-Clinical models were built on both MRI and clinical parameters. Both model types achieved high discriminative accuracy in the TPF cohort (MRI model: AUC, 0.835; MRI-Clinical model: AUC, 0.838) and the GP cohort (MRI model: AUC, 0.764; MRI-Clinical model: AUC, 0.777). The MRI-Clinical models also performed well in risk stratification: 3-year disease-free survival was better in the high-sensitivity group than in the low-sensitivity group in both the TPF and GP cohorts. An online tool guiding the personalized choice of IC regimen was developed based on the MRI-Clinical models. This radiomics- and GCN-based IC response prediction tool has robust predictive performance and may guide personalized treatment.
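The decision the abstract's online tool supports, comparing the two models' predicted response probabilities per patient, can be sketched in a few lines. The probabilities, the margin, and the tie-handling below are illustrative assumptions, not details from the paper:

```python
def choose_ic_regimen(p_cbr_tpf, p_cbr_gp, margin=0.05):
    """Pick the IC regimen whose model predicts the higher probability of
    complete biological response (cBR).

    A difference smaller than `margin` is treated as equivocal, deferring
    to clinician judgment. The margin value is illustrative only.
    """
    if abs(p_cbr_tpf - p_cbr_gp) < margin:
        return "equivocal"
    return "TPF" if p_cbr_tpf > p_cbr_gp else "GP"
```

In practice the inputs would be calibrated outputs of the two MRI-Clinical models; any such tool remains decision support, not a replacement for clinical judgment.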

Determining the scanning range of coronary computed tomography angiography based on deep learning.

Zhao YH, Fan YH, Wu XY, Qin T, Sun QT, Liang BH

PubMed · Jul 28 2025
Coronary computed tomography angiography (CCTA) is essential for diagnosing coronary artery disease, as it provides detailed images of the heart's blood vessels to identify blockages or abnormalities. Traditionally, determining the computed tomography (CT) scanning range has relied on manual methods due to limited automation in this area. This study aimed to develop and evaluate a novel deep learning approach to automate the determination of CCTA scan ranges from anteroposterior scout images. A retrospective analysis was conducted on chest CT data from 1388 patients at the radiology department of a university-affiliated hospital, collected between February 27 and March 27, 2024. A deep learning model was trained on anteroposterior scout images with annotations based on CCTA standards. The dataset was split into training (672 cases), validation (167 cases), and test (167 cases) sets to ensure robust model evaluation. On the test set, the model achieved a mean average precision of 0.995 at an IoU threshold of 0.50 (mAP50) and 0.994 averaged over IoU thresholds 0.50-0.95 (mAP50-95) for determining CCTA scan ranges. This study demonstrates that: (1) anteroposterior scout images can effectively estimate CCTA scan ranges; and (2) estimates can be dynamically adjusted to meet the needs of various medical institutions.
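The mAP50 figure reported above is built on intersection-over-union between predicted and ground-truth regions; for a scan range along the z-axis this reduces to a 1-D interval IoU. A minimal sketch (function names and the 0.5 threshold convention are generic, not from the paper):

```python
def range_iou(pred, gt):
    """Intersection-over-union of two 1-D scan ranges (z_start, z_end)."""
    lo = max(pred[0], gt[0])
    hi = min(pred[1], gt[1])
    inter = max(0.0, hi - lo)
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def hits_at_iou(preds, gts, thresh=0.5):
    """Fraction of predicted ranges matching ground truth at an IoU
    threshold -- the per-case building block behind metrics like mAP50."""
    return sum(range_iou(p, g) >= thresh for p, g in zip(preds, gts)) / len(preds)
```

Full mAP additionally averages precision over confidence-ranked predictions, but the IoU matching step is the core of it.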

The evolving role of multimodal imaging, artificial intelligence and radiomics in the radiologic assessment of immune related adverse events.

Das JP, Ma HY, DeJong D, Prendergast C, Baniasadi A, Braumuller B, Giarratana A, Khonji S, Paily J, Shobeiri P, Yeh R, Dercle L, Capaccione KM

PubMed · Jul 28 2025
Immunotherapy, in particular checkpoint blockade, has revolutionized the treatment of many advanced cancers. Imaging plays a critical role in assessing both treatment response and the development of immune toxicities. Both conventional imaging and molecular imaging techniques can be used to evaluate multisystemic immune related adverse events (irAEs), including thoracic, abdominal and neurologic irAEs. As artificial intelligence (AI) proliferates in medical imaging, radiologic assessment of irAEs will become more efficient, improving the diagnosis, prognosis, and management of patients affected by immune-related toxicities. This review addresses some of the advancements in medical imaging including the potential future role of radiomics in evaluating irAEs, which may facilitate clinical decision-making and improvements in patient care.

Continual learning in medical image analysis: A comprehensive review of recent advancements and future prospects.

Kumari P, Chauhan J, Bozorgpour A, Huang B, Azad R, Merhof D

PubMed · Jul 28 2025
Medical image analysis has witnessed remarkable advancements, even surpassing human-level performance in recent years, driven by the rapid development of advanced deep-learning algorithms. However, when the inference dataset differs even slightly from what the model saw during one-time training, model performance is greatly compromised. This situation requires restarting training with both the old and the new data, which is computationally costly, does not align with the human learning process, and raises storage and privacy concerns. Alternatively, continual learning has emerged as a crucial approach for developing unified and sustainable deep models that can handle new classes, new tasks, and drifting data in non-stationary environments across application areas. Continual learning techniques enable models to adapt and accumulate knowledge over time, which is essential for maintaining performance on evolving datasets and novel tasks. Owing to its popularity and promising performance, continual learning is an active and emerging research topic in the medical field, and hence demands a survey and taxonomy to clarify the current research landscape in medical image analysis. This systematic review provides a comprehensive overview of the state of the art in continual learning techniques applied to medical image analysis. We present an extensive survey of existing research, covering catastrophic forgetting, data drift, and stability and plasticity requirements. Further, we provide an in-depth discussion of the key components of a continual learning framework: scenarios, techniques, evaluation schemes, and metrics. Continual learning techniques fall into several categories, including rehearsal, regularization, architectural, and hybrid strategies.
We assess the popularity and applicability of continual learning categories in various medical sub-fields like radiology and histopathology. Our exploration considers unique challenges in the medical domain, including costly data annotation, temporal drift, and the crucial need for benchmarking datasets to ensure consistent model evaluation. The paper also addresses current challenges and looks ahead to potential future research directions.
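Of the continual-learning categories surveyed, rehearsal is the simplest to illustrate: keep a small replay memory of past samples and mix it into each new task's batches. A minimal reservoir-sampling buffer, as a generic sketch rather than an implementation from any surveyed paper:

```python
import random

class RehearsalBuffer:
    """Fixed-size replay memory for rehearsal-based continual learning.

    Reservoir sampling keeps a uniform random sample over everything seen
    so far, so earlier tasks stay represented without storing the stream."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0
        self.items = []
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a stored item with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw a replay mini-batch to mix with current-task data."""
        return self.rng.sample(self.items, min(k, len(self.items)))
```

For medical data, note that even this simple strategy interacts with the privacy concerns discussed above, since raw patient samples are retained.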

A novel multimodal medical image fusion model for Alzheimer's and glioma disease detection based on hybrid fusion strategies in non-subsampled shearlet transform domain.

Alabduljabbar A, Khan SU, Altherwy YN, Almarshad F, Alsuhaibani A

PubMed · Jul 27 2025
Background: Medical professionals may increase diagnostic accuracy by using multimodal medical image fusion techniques to peer inside organs and tissues. Objective: This work proposes a solution for diverse medical diagnostic challenges. Methods: We propose a dual-purpose model. First, we generate a pair of images using the intensity, hue, and saturation (IHS) transform. Next, we apply non-subsampled shearlet transform (NSST) decomposition to these images to obtain low-frequency and high-frequency coefficients. We then enhance the structure and background details of the low-frequency coefficients using a novel structure-feature modification technique. For the high-frequency coefficients, we use a layer-weighted pulse-coupled neural network (PCNN) fusion technique to capture complementary pixel-level information. Finally, we apply the inverse NSST and IHS transforms to generate the fused image. Results: The proposed approach was verified on 1350 image sets from two diseases, Alzheimer's disease and glioma, across numerous imaging modalities. Both qualitative and quantitative evaluations show that it outperforms existing cutting-edge models and provides valuable information for medical diagnosis. In the majority of cases, the proposed method performed well in terms of entropy, structural similarity index, standard deviation, average distance, and average pixel intensity, owing to the careful selection of fusion strategies in our model. In a few cases, however, NSSTSIPCA performed better in terms of intensity variations (mean absolute error and average distance). Conclusions: This work combines several fusion strategies in the NSST domain to efficiently enhance structural, anatomical, and spectral information.
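The paper's own frequency-wise rules (structure-feature modification for low frequencies, layer-weighted PCNN for high frequencies) are specific to its model, but the general pattern behind such schemes, averaging the approximation bands while keeping the larger-magnitude detail coefficient, can be sketched on plain 2-D arrays. These simple rules are common stand-ins, not the paper's actual strategies:

```python
def fuse_low(a, b):
    """Average low-frequency (approximation) coefficients of two modalities.
    A simple stand-in for the paper's structure-feature modification step."""
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fuse_high(a, b):
    """Keep the larger-magnitude high-frequency (detail) coefficient per
    position. A simple stand-in for the layer-weighted PCNN fusion rule."""
    return [[x if abs(x) >= abs(y) else y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

Averaging stabilizes the slowly varying background, while max-magnitude selection preserves edges and texture, which is why this pairing is the classic baseline in transform-domain fusion.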

Multi-Attention Stacked Ensemble for Lung Cancer Detection in CT Scans

Uzzal Saha, Surya Prakash

arXiv preprint · Jul 27 2025
In this work, we address the challenge of binary lung nodule classification (benign vs. malignant) on CT images by proposing a multi-level attention stacked ensemble of deep neural networks. Three pretrained backbones - EfficientNet V2 S, MobileViT XXS, and DenseNet201 - are each adapted with a custom classification head tailored to 96 x 96 pixel inputs. A two-stage attention mechanism learns both model-wise and class-wise importance scores from concatenated logits, and a lightweight meta-learner refines the final prediction. To mitigate class imbalance and improve generalization, we employ dynamic focal loss with empirically calculated class weights, MixUp augmentation during training, and test-time augmentation at inference. Experiments on the LIDC-IDRI dataset demonstrate exceptional performance, achieving 98.09% accuracy and 0.9961 AUC, a 35% reduction in error rate compared to state-of-the-art methods. The model exhibits balanced sensitivity (98.73%) and specificity (98.96%), with particularly strong results on challenging cases where radiologist disagreement was high. Statistical significance testing confirms the robustness of these improvements across multiple experimental runs. Our approach can serve as a robust, automated aid for radiologists in lung cancer screening.
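The focal loss mentioned above down-weights well-classified examples by a factor of (1 - p_t)^gamma, so training gradient is concentrated on hard cases. A pure-Python sketch for the binary case; gamma and the class weight alpha here are illustrative defaults, not the paper's empirically calculated values:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.75):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive (malignant) class.
    y: true label (0 or 1).
    alpha: class weight (the paper derives its weights from class
    frequencies; this value is illustrative).
    The (1 - p_t)**gamma factor shrinks the loss of easy examples."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

With gamma = 2, a confident correct prediction (p_t = 0.9) contributes roughly two orders of magnitude less loss than a borderline one (p_t = 0.6), which is exactly the imbalance-handling behavior the abstract describes.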

Performance of AI-Based software in predicting malignancy risk in breast lesions identified on targeted ultrasound.

Lima IRM, Cruz RM, de Lima Rodrigues CL, Lago BM, da Cunha RF, Damião SQ, Wanderley MC, Bitencourt AGV

PubMed · Jul 27 2025
Targeted ultrasound is commonly used to identify lesions characterized on magnetic resonance imaging (MRI) that were not recognized on initial mammography or ultrasound, and is especially valuable for guiding percutaneous biopsies. Although artificial intelligence (AI) algorithms have been used to differentiate benign from malignant breast lesions on ultrasound, their application to lesions identified on targeted ultrasound has not yet been studied. This study evaluated the performance of AI-based software in predicting malignancy risk in breast lesions identified on targeted ultrasound. This was a retrospective, cross-sectional, single-center study of patients with breast lesions identified on MRI who underwent targeted ultrasound and percutaneous ultrasound-guided biopsy. Ultrasound findings were analyzed using AI-based software and correlated with pathological results. A total of 334 lesions were evaluated, including 183 mass and 151 non-mass lesions. On histological analysis, 257 lesions (76.9%) were benign and 77 (23.1%) were malignant. Both the AI software and radiologists demonstrated high sensitivity in predicting malignancy risk. Specificity was higher when the radiologist used the AI software than with the radiologist's evaluation alone (p < 0.001). All lesions classified as BI-RADS 2 or 3 on targeted ultrasound by the radiologist or the AI software (n = 72; 21.6%) had benign pathology results. When integrated into the radiologist's evaluation, the AI software demonstrated high diagnostic accuracy and improved specificity for both mass and non-mass lesions on targeted ultrasound, supporting more accurate biopsy decisions and potentially reducing false positives without missing cancers.
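Sensitivity and specificity, the two metrics compared between the AI software and the radiologists above, come straight from the confusion matrix. A minimal sketch with labels coded 1 = malignant, 0 = benign:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) from paired binary labels (1 = malignant, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

The study's key finding maps onto these quantities directly: adding the AI software raised specificity (fewer false positives) while sensitivity stayed high (no missed cancers).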

Unpaired T1-weighted MRI synthesis from T2-weighted data using unsupervised learning.

Zhao J, Zeng N, Zhao L, Li N

PubMed · Jul 27 2025
Magnetic resonance imaging (MRI) is indispensable in modern diagnostics because it provides detailed anatomical and functional information without ionizing radiation. However, acquiring multiple imaging sequences - such as T1-weighted (T1w) and T2-weighted (T2w) scans - can prolong scan times, increase patient discomfort, and raise healthcare costs. In this study, we propose an unsupervised framework based on a contrast-sensitive domain translation network with adaptive feature normalization to translate unpaired T2w MRI images into clinically acceptable T1w images. Our method employs adversarial training along with cycle-consistency, identity, and attention-guided loss functions. These components ensure that the generated images not only preserve essential anatomical details but also exhibit high visual fidelity compared with ground-truth T1w images. Quantitative evaluation on a publicly available MRI dataset yielded a mean peak signal-to-noise ratio (PSNR) of 22.403 dB, a mean structural similarity index (SSIM) of 0.775, a root mean squared error (RMSE) of 0.078, and a mean absolute error (MAE) of 0.036. Additional analysis of pixel-intensity and grayscale distributions further supported the consistency between the generated and ground-truth images. Qualitative assessment included visual comparison to assess perceptual fidelity. These promising results suggest that a contrast-sensitive domain translation network with adaptive feature normalization can effectively generate realistic T1w images from T2w inputs, potentially reducing the need to acquire multiple sequences and thereby streamlining MRI protocols.
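Of the reported metrics, PSNR is the simplest to reproduce: it is the log-scaled ratio of the maximum possible intensity to the mean squared error between generated and reference images. A sketch on flat intensity lists, assuming intensities normalized to [0, 1]:

```python
import math

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images
    given as flat lists of intensities in [0, max_val]."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

The study's mean PSNR of 22.403 dB corresponds to an RMSE of roughly 0.076 on this normalized scale, consistent with the reported RMSE of 0.078.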

Brainwide hemodynamics predict EEG neural rhythms across sleep and wakefulness in humans

Jacob, L. P. L., Bailes, S. M., Williams, S. D., Stringer, C., Lewis, L. D.

bioRxiv preprint · Jul 26 2025
The brain exhibits rich oscillatory dynamics that play critical roles in vigilance and cognition, such as the neural rhythms that define sleep. These rhythms continuously fluctuate, signaling major changes in vigilance, but the widespread brain dynamics underlying these oscillations are difficult to investigate. Using simultaneous EEG and fast fMRI in humans who fell asleep inside the scanner, we developed a machine learning approach to investigate which fMRI regions and networks predict fluctuations in neural rhythms. We demonstrated that the rise and fall of alpha (8-12 Hz) and delta (1-4 Hz) power, two canonical EEG bands critically involved with cognition and vigilance, can be predicted from fMRI data in subjects that were not present in the training set. This approach also identified predictive information in individual brain regions across the cortex and subcortex. Finally, we developed an approach to identify shared and unique predictive information, and found that information about alpha rhythms was highly separable in two networks linked to arousal and visual systems. Conversely, delta rhythms were diffusely represented on a large spatial scale primarily across the cortex. These results demonstrate that EEG rhythms can be predicted from fMRI data, identify large-scale network patterns that underlie alpha and delta rhythms, and establish a novel framework for investigating multimodal brain dynamics.
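The key evaluation choice above, testing on subjects who were not present in the training set, corresponds to a leave-one-subject-out split. A generic sketch of the fold construction (index-based, not tied to any EEG/fMRI library):

```python
def leave_one_subject_out(subject_ids):
    """Yield (held_out, train_idx, test_idx) folds where the test set is
    every sample from one held-out subject -- the cross-subject scheme
    used when models must generalize to unseen people."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test
```

Splitting by subject rather than by sample prevents the model from exploiting subject-specific hemodynamic idiosyncrasies, which would inflate apparent prediction accuracy.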

Quantification of hepatic steatosis on post-contrast computed tomography scans using artificial intelligence tools.

Derstine BA, Holcombe SA, Chen VL, Pai MP, Sullivan JA, Wang SC, Su GL

PubMed · Jul 26 2025
Early detection of steatotic liver disease (SLD) is critically important. In clinical practice, hepatic steatosis is frequently diagnosed using computed tomography (CT) performed for unrelated clinical indications. An equation for estimating magnetic resonance proton density fat fraction (MR-PDFF) from liver attenuation on non-contrast CT exists, but no equivalent equation exists for post-contrast CT. We sought to (1) determine whether an automated workflow can accurately measure liver attenuation, (2) validate previously identified optimal thresholds for liver or liver-spleen attenuation in post-contrast studies, and (3) develop a method for estimating MR-PDFF (FF) on post-contrast CT. The fully automated TotalSegmentator 'total' machine learning model was used to segment the 3D liver and spleen from non-contrast and post-contrast CT scans. Mean attenuation was extracted from liver (L) and spleen (S) volumes and from manually placed regions of interest (ROIs) in multi-phase CT scans of two cohorts: derivation (n = 1740) and external validation (n = 1044). Non-linear regression was used to determine the optimal coefficients for three phase-specific (arterial, venous, delayed) increasing exponential decay equations relating post-contrast L to non-contrast L. MR-PDFF estimated from non-contrast CT was used as the reference standard. Mean attenuation values from manual ROIs and automated volumes were nearly perfectly correlated for both liver and spleen (r > .96, p < .001). For moderate-to-severe steatosis (L < 40 HU), liver attenuation (L) alone was a better classifier than either the liver-spleen difference (L-S) or ratio (L/S) on post-contrast CT. Fat fraction calculated from corrected post-contrast liver attenuation agreed with non-contrast FF > 15% in both the derivation and external validation cohorts, with AUROC between 0.92 and 0.97 on arterial, venous, and delayed phases. 
Automated volumetric mean attenuation of liver and spleen can be used instead of manually placed ROIs for liver fat assessments. Liver attenuation alone in post-contrast phases can be used to assess the presence of moderate-to-severe hepatic steatosis. Correction equations for liver attenuation on post-contrast phase CT scans enable reasonable quantification of liver steatosis, providing potential opportunities for utilizing clinical scans to develop large scale screening or studies in SLD.
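The correction equations and the FF mapping are not given in the abstract, so the sketch below only illustrates the two-step shape they describe: subtract an increasing-exponential-decay contrast term from post-contrast attenuation, then map the corrected attenuation to a fat fraction linearly. Every coefficient here (a, b, and the linear hu_to_ff pair) is a placeholder, not a fitted value from the study:

```python
import math

def estimate_ff_from_postcontrast(post_hu, delay_s, a=40.0, b=0.02,
                                  hu_to_ff=(-0.5, 35.0)):
    """Illustrative two-step FF estimate from post-contrast liver HU.

    Step 1: remove an enhancement term that grows with delay and
    saturates (an increasing exponential-decay form, as the paper fits
    per phase). Step 2: convert corrected HU to fat fraction (%) with a
    linear equation. All coefficients are hypothetical placeholders."""
    corrected_hu = post_hu - a * (1.0 - math.exp(-b * delay_s))
    slope, intercept = hu_to_ff
    return max(0.0, slope * corrected_hu + intercept)
```

The shape captures the clinically relevant behavior: lower corrected attenuation (fattier liver) yields a higher estimated fat fraction, and longer post-injection delays require a larger correction.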