
Radiomics in Cancer Care: Breaking Down the Evidence Behind the Buzz


Introduction

Radiomics represents a rapidly advancing field within oncology that extracts high-dimensional quantitative data from medical imaging, offering the potential to transform cancer diagnosis, prognosis, and treatment strategies. This emerging discipline is built on the central hypothesis that medical images harbor biological, prognostic, and predictive information that often remains undetectable through conventional visual assessment by radiologists. By applying advanced computational methods, radiomics has quickly gained recognition as a powerful analytical tool with applications across a wide range of malignancies.

Unlike traditional radiology, which relies predominantly on qualitative interpretation, radiomics enables a more comprehensive evaluation of tumor characteristics through the systematic extraction of imaging features. These features, which include shape, texture, intensity, and spatial distribution, collectively form what is known as the radiomics phenotype. This phenotype provides a quantitative representation of tumor biology and heterogeneity, allowing clinicians to assess the entire disease state in a non-invasive manner. In thoracic oncology and beyond, such analyses can support precision medicine by integrating imaging-derived biomarkers with clinical and molecular data.

The potential applications of radiomics are broad and clinically significant. Radiomics has demonstrated promise in early tumor characterization, prognostic modeling, risk stratification, treatment planning, therapy response monitoring, and long-term surveillance. By quantifying tumor microenvironmental factors and spatial heterogeneity, radiomics may complement traditional histopathology and genomics, offering a more holistic view of the disease process. Furthermore, because radiomics is derived from standard-of-care imaging modalities, it has the advantage of being widely applicable without imposing additional risks or invasive procedures on patients.

Despite its promise, several challenges currently limit the translation of radiomics into routine clinical practice. Technical limitations include variability in imaging acquisition protocols, differences in feature extraction algorithms, and a lack of standardization across platforms. Methodological issues such as overfitting, small sample sizes, and inadequate validation further complicate reproducibility and generalizability. For instance, while the evidence base for radiomics continues to expand, a review of 30 studies involving 4,839 patients investigating outcome prediction after conventional radiotherapy revealed that only five studies achieved external validation. This underscores the urgent need for standardized workflows, large-scale multicenter trials, and robust validation methods before radiomics can be reliably adopted as a clinical biomarker.

In summary, radiomics holds substantial potential to advance precision oncology by enabling the extraction of hidden information from routine imaging, thereby enhancing clinical decision-making. However, realizing this potential requires addressing current technical and methodological barriers, strengthening external validation efforts, and fostering integration with genomic and clinical datasets. As the field continues to evolve, radiomics may become a cornerstone of individualized cancer care, bridging the gap between imaging, molecular biology, and therapeutic outcomes.

Keywords: radiomics, oncology, medical imaging, precision medicine, biomarkers, clinical decision-making

 

Understanding Radiomics in Oncology Practice

The field of radiomics elevates medical imaging from visual inspection to mathematical analysis, transforming standard clinical images into mineable data. Coined in 2012, radiomics has witnessed exponential growth, with over 1,500 publications in 2020 alone. This approach operates on a fundamental premise: biomedical images contain disease-specific information imperceptible to the human eye yet accessible through quantitative analysis.

What is radiomics? A data-driven imaging approach

Radiomics represents a computational methodology that extracts numerous quantitative features from medical images using advanced mathematical algorithms. Unlike traditional radiology that relies on visual assessment, radiomics quantifies textural information through artificial intelligence analysis methods. This process converts images to higher-dimensional data, subsequently mining these data for improved decision support.

The radiomics workflow follows a systematic progression: image acquisition, pre-processing, segmentation of regions-of-interest, feature extraction, and finally statistical analysis to identify correlations with clinical endpoints. Essentially, radiomics does not automate diagnostic processes but rather enhances them by providing additional quantitative data.
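As a rough illustration, the workflow above can be sketched as a chain of processing stages. The function names and stub return values below are purely hypothetical placeholders for illustration, not a real radiomics library API:

```python
def acquire(study_id):
    """Load the raw image volume for a study (stub)."""
    return {"study": study_id, "voxels": [[0.0]]}

def preprocess(image):
    """Normalize intensities and resample to a common grid (stub)."""
    return image

def segment(image):
    """Delineate the region of interest (stub)."""
    return {"image": image, "roi": [(0, 0)]}

def extract_features(segmented):
    """Compute quantitative descriptors from the ROI (stub)."""
    return {"mean_intensity": 0.0, "sphericity": 1.0}

def analyze(features, endpoint):
    """Correlate extracted features with a clinical endpoint (stub)."""
    return {"endpoint": endpoint, "n_features": len(features)}

result = analyze(extract_features(segment(preprocess(acquire("case-001")))), "response")
```

Each stage consumes the previous stage's output, which is why standardization at every step matters: a change upstream propagates into every downstream feature.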

A key advantage of the radiomics approach is its applicability across multiple imaging modalities, including CT, MRI, PET, and ultrasound, allowing for integrated cross-modality analysis instead of evaluating each modality independently. Additionally, radiomics enables assessment of both inter-tumoral and intra-tumoral heterogeneity, which has been associated with reduced response to treatments such as immunotherapy.

Radiomics vs traditional imaging biomarkers

Radiomics generates numerous imaging-derived features, yet not all qualify as imaging biomarkers. A critical distinction exists: radiomic features become imaging biomarkers only after rigorous validation confirms their consistent correlation with clinical outcomes or biological phenomena. Consequently, radiomics broadly encompasses data mining, whereas imaging biomarkers constitute its clinically validated subset.

Traditional imaging biomarkers typically involve semantic features—qualitative descriptors used by radiologists such as size, shape, location, vascularity, spiculation, or necrosis. In contrast, radiomics extracts agnostic features—mathematically derived quantitative descriptors that may not be intuitively understood but potentially reveal deeper patterns within the image.

The selection between radiomics, traditional imaging biomarkers, and AI-driven methods depends primarily on clinical context, research objectives, data availability, and interpretability needs. Specifically, radiomics is suitable for smaller datasets requiring high interpretability, whereas AI-driven methods excel in data-rich environments prioritizing predictive accuracy.

Radiomics phenotype and its clinical relevance

The radiomics phenotype refers to the collective quantitative features extracted from medical images that comprehensively characterize tissue patterns, particularly their heterogeneity and spatial distribution. These features generally fall into several categories:

  1. First-order statistics derived from the histogram of voxel intensities, providing characteristics such as mean, skewness, kurtosis, entropy, and uniformity
  2. Shape-based features that mathematically describe tumor morphology, including volume, surface area, compactness, and sphericity
  3. Second-order texture features based on spatial intensity distribution, describing relationships between voxels with similar or different intensity values
  4. Higher-order features calculated after applying filters or mathematical transformations to the original image
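To make the first category concrete, here is a minimal pure-Python sketch of first-order statistics computed from a flat list of voxel intensities. The histogram binning and formulas are simplified relative to formal IBSI definitions (no bias correction, arbitrary bin count), so the numbers are illustrative only:

```python
import math
from collections import Counter

def first_order_stats(intensities, n_bins=8):
    """Simplified first-order radiomic statistics from flat voxel intensities."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    sd = math.sqrt(var)
    skewness = sum((x - mean) ** 3 for x in intensities) / (n * sd ** 3)
    kurtosis = sum((x - mean) ** 4 for x in intensities) / (n * var ** 2)
    # Histogram-based entropy and uniformity over a fixed bin count
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / n_bins or 1.0
    bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in intensities)
    probs = [c / n for c in bins.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    uniformity = sum(p * p for p in probs)
    return {"mean": mean, "skewness": skewness, "kurtosis": kurtosis,
            "entropy": entropy, "uniformity": uniformity}

stats = first_order_stats([10, 12, 11, 40, 42, 41, 11, 12])
```

The bimodal toy input (a cluster near 10 and one near 40) yields nonzero entropy, hinting at how these statistics capture heterogeneity that a single mean would hide.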

The clinical relevance of radiomics phenotypes extends across the entire patient journey. Research has demonstrated associations between radiomic features and specific biological processes. For instance, texture entropy features, cluster features, and voxel intensity variance features have been linked to immune system function, p53 pathway activity, and cell cycle regulation. Moreover, these features show predictive value for driver mutations such as EGFR and KRAS.

Radiomics phenotypes can potentially serve as “virtual biopsies” that overcome limitations of standard biopsies by providing non-invasive analysis of entire tumors rather than focal samples. This approach also facilitates longitudinal monitoring at multiple time points, offering insights into disease evolution that would be impractical with repeated invasive procedures.

Despite its promise, the clinical translation of radiomics faces several challenges, including reproducibility issues due to variations in acquisition protocols, the need for standardization, and concerns about model overfitting. The Image Biomarker Standardization Initiative (IBSI) aims to address these challenges by offering common reference definitions and benchmarking of radiomic features.

 


Radiomics Workflow: From Image to Insight

The technical implementation of radiomics follows a systematic multistep process that transforms medical images into quantifiable data. This process requires meticulous attention to standardization across each phase to ensure reproducible results. The radiomics workflow typically involves curation of clinical and imaging data through image preprocessing, tumor segmentation, feature extraction, model development, and validation.

Image acquisition and segmentation standards

Standardized imaging protocols represent the cornerstone of reliable radiomics analysis. CT acquisition parameters such as tube voltage (120 kVp), tube current (53–400 mA), and slice thickness (typically 3 mm) must be specified to optimize image quality. For MRI studies, field strength (commonly 3 T), sequence parameters, and contrast administration protocols (0.2 mL/kg at 2–3 mL/s) directly influence radiomic feature values. These parameters must be kept consistent, since numerous studies have demonstrated the strong dependency of the resulting feature measurements on the imaging protocol.

After acquisition, tumor segmentation identifies the region of interest (ROI) from which radiomic features will be extracted. This process can be performed manually, semi-automatically, or fully automatically depending on available resources and tumor type. Manual segmentation by radiologists remains common but introduces considerable observer bias; studies have shown many radiomic features are not robust against intra- and inter-observer variations. Accordingly, reproducibility should be assessed using metrics like the intraclass correlation coefficient (ICC), with features showing ICC > 0.8 considered robust.
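As a sketch of that robustness check, the pure-Python function below computes a two-way mixed-effects ICC(3,1) for a feature measured on the same lesions under two observers' segmentations. It is a simplified illustration; in practice a statistics package (e.g., pingouin in Python or the irr package in R) would be used:

```python
def icc_3_1(ratings):
    """ICC(3,1): two-way mixed, single measurement, consistency.
    ratings: one list per subject, one value per rater."""
    n = len(ratings)           # subjects (e.g., lesions)
    k = len(ratings[0])        # raters (segmentations)
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# One feature measured on five lesions under two observers' segmentations
ratings = [[1.0, 1.1], [2.0, 2.1], [3.0, 2.9], [4.0, 4.2], [5.0, 4.9]]
robust = icc_3_1(ratings) > 0.8   # keep only features clearing the threshold
```

Here the two observers agree closely, so the feature clears the ICC > 0.8 bar; features failing it would be excluded before modeling.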

When creating segmentations, practitioners must decide between 2D (single section) or 3D (multiple sections) approaches. Three-dimensional ROIs capture additional information yet require more time for manual delineation. Furthermore, segmentation can target the tumor, tumor subregions (“habitats”), or peritumoral zones depending on the research hypothesis. The peritumoral region, often defined as extending 3-5mm beyond the tumor boundary, may contain valuable information about tumor invasion or host immune response.
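A peritumoral ROI can be derived by dilating the tumor mask and subtracting the tumor itself. The toy 2D sketch below uses a pixel-distance margin for simplicity; a real pipeline would apply 3D morphological dilation (e.g., with scipy.ndimage) using the physical voxel spacing to honor a millimeter-scale margin:

```python
def peritumoral_ring(mask, margin_px):
    """Return coordinates within `margin_px` of the tumor but outside it."""
    h, w = len(mask), len(mask[0])
    tumor = {(r, c) for r in range(h) for c in range(w) if mask[r][c]}
    ring = set()
    for r in range(h):
        for c in range(w):
            if (r, c) in tumor:
                continue  # the ring excludes the tumor itself
            if any((r - tr) ** 2 + (c - tc) ** 2 <= margin_px ** 2
                   for tr, tc in tumor):
                ring.add((r, c))
    return ring

# 2x2 tumor in a 4x5 grid; margin of 1 pixel selects the orthogonal neighbors
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 0, 0]]
ring = peritumoral_ring(mask, 1)
```

Features would then be extracted from `ring` voxels separately from (or alongside) the tumor ROI.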

Radiomics features extraction using Pyradiomics

Before feature extraction, image preprocessing standardizes data by reducing noise and enhancing quality. Key preprocessing steps include intensity normalization, voxel size resampling, and discretization. For qualitative imaging modalities like non-quantitative MRI, normalization methods such as z-score, WhiteStripe, or histogram-based techniques standardize intensities. Additionally, resampling images to isotropic voxel spacing (commonly 1×1×1mm) increases reproducibility between datasets.
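Of those normalization options, z-score is the simplest: each intensity is re-expressed in standard deviations from the image mean. A minimal sketch:

```python
import math

def zscore_normalize(intensities):
    """Z-score intensity normalization: zero mean, unit standard deviation."""
    n = len(intensities)
    mu = sum(intensities) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in intensities) / n)
    return [(x - mu) / sd for x in intensities]

norm = zscore_normalize([100.0, 110.0, 120.0, 130.0])
```

After normalization, images from different scanners share a common intensity scale, which is the prerequisite for comparing texture features across them.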

PyRadiomics stands as the most widely used open-source Python package for feature extraction, adhering to Image Biomarker Standardization Initiative (IBSI) guidelines. This package extracts approximately 1,500 features per image, categorized as:

  • Shape features: Describe geometric properties like volume, surface area, and sphericity
  • First-order features: Characterize the distribution of voxel intensities (mean, median, entropy)
  • Texture features: Capture spatial relationships through matrices including GLCM, GLRLM, GLSZM, GLDM, and NGTDM
  • Derived features: Extracted from filtered images using wavelets, Laplacian of Gaussian, or other transforms

Intensity discretization represents a critical parameter in feature extraction, enhancing noise reduction and improving reproducibility. The fixed bin width method (predefined intensity intervals) often outperforms fixed bin count (predefined number of bins) in reproducibility studies.
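The two discretization strategies can be sketched in a few lines of Python (bin sizes and example intensities are arbitrary). Note how fixed bin width maps the same absolute intensity range to the same bin for every patient, one intuition for its better reproducibility:

```python
def discretize_fixed_width(intensities, bin_width):
    """Fixed bin width: edges at multiples of a predefined intensity interval."""
    lo = min(intensities)
    return [int((x - lo) // bin_width) + 1 for x in intensities]

def discretize_fixed_count(intensities, n_bins):
    """Fixed bin count: the observed range is split into n_bins equal bins."""
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / n_bins
    return [min(int((x - lo) / width), n_bins - 1) + 1 for x in intensities]

values = [0.0, 5.0, 12.0, 25.0, 50.0]
by_width = discretize_fixed_width(values, 10.0)   # bins 10 units wide
by_count = discretize_fixed_count(values, 4)      # exactly 4 bins
```

With fixed bin count, the bin boundaries shift with each image's intensity range, so the same tissue can land in different bins across patients.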

Machine learning model training and validation

Once features are extracted, statistical models predict study endpoints such as tumor type or survival time. Feature selection reduces dimensionality, removing redundant or irrelevant features. Common methods include LASSO (Least Absolute Shrinkage and Selection Operator), minimum redundancy–maximum relevance (mRMR), or recursive feature elimination.

Proper data partitioning prevents information leakage and avoids biasing the training process. The hold-out method divides data into training (model development), validation (hyperparameter tuning), and testing (final performance evaluation) sets. Alternatively, resampling techniques like k-fold cross-validation provide more robust estimates of model performance.
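The k-fold scheme can be sketched with the standard library alone; each sample appears in exactly one validation fold, and scikit-learn's KFold provides the same behavior in practice:

```python
import random

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # shuffle once, up front
    folds = [idx[i::k] for i in range(k)]     # k roughly equal folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(kfold_indices(10, 5))
```

Because the folds partition the data, no patient ever appears in both the training and validation side of the same split, which is exactly the leakage guarantee the hold-out method also aims for.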

Various machine learning algorithms serve different classification tasks—logistic regression, support vector machines, random forests, and neural networks represent common choices. Eventually, model performance evaluation typically employs metrics like the area under the receiver operating characteristic curve (AUC).

External validation—testing models on data from different institutions—remains the gold standard for assessing generalizability. Nevertheless, this ideal validation approach is not always feasible, making temporal splitting (using newer patients for validation) a practical alternative.

 

 

Common Pitfalls in Radiomics Research

Despite promising advances in radiomics technology, numerous methodological challenges threaten the reliability and clinical utility of radiomics models. These challenges manifest throughout the radiomics workflow and require careful consideration to produce robust, generalizable results.

Overfitting due to high-dimensional data

Radiomics epitomizes the “large-predictors (p) and small-number of patients (n)” dilemma, creating an environment where the likelihood of finding spurious correlations increases dramatically. In this “small n-to-p” scenario, the number of measurements vastly exceeds the number of independent samples. As the volume of data space expands exponentially with each additional dimension, data sparsity becomes inevitable. This sparsity requires notably larger patient cohorts to achieve statistical validity—a requirement often unmet in current radiomics studies.

Overfitting occurs when models capture noise and random fluctuations rather than underlying biological patterns. These models perform excellently on training data (low bias) yet fail to generalize to new datasets (high variance). The multiplicity of data likewise increases false-positive rates, further complicating model reliability.

Feature selection represents a critical step for dimensionality reduction, removing redundant or irrelevant features while preserving core information. Common techniques include LASSO (Least Absolute Shrinkage and Selection Operator), which prevents overfitting by reducing model complexity through regularization. Even so, no feature reduction method works optimally across all datasets, hence this remains an active research area.
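As an illustration of LASSO's behavior, the sketch below (assuming NumPy and scikit-learn are available) builds a synthetic "many features, modest n" dataset in which only the first two features carry signal; the L1 penalty shrinks most of the remaining coefficients to exactly zero:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))          # 100 patients, 20 candidate features
# Outcome driven only by features 0 and 1, plus small noise
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)  # indices of surviving features
```

The surviving index set is then carried forward into model building; in a real study the penalty strength `alpha` would itself be tuned inside cross-validation, never on the full dataset.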

Lack of external validation in published models

A vast majority of current radiomics studies lack robustness and external validation, primarily owing to high false-discovery rates, improper study design, and sensitivity to unwanted variations unrelated to disease phenotypes. Without external validation—testing models on independent datasets from different institutions—assessing true generalizability becomes impossible.

External validation reveals whether a model performs similarly across different populations examined with different scanners and protocols. Yet most studies rely solely on internal validation methods, which inadequately assess performance on truly unseen data. In the absence of external datasets, cross-validation offers an alternative approach, splitting data into training and validation subsets. Nevertheless, improper data partitioning often allows inadvertent information leakage between sets.

Common validation errors include: performing feature normalization and selection using the entire dataset; tuning hyperparameters without a proper held-out test set; and basing model selection on test set performance. Even when external test sets exist, they are frequently incorporated into feature selection or model development, corrupting their independence. These methodological flaws artificially inflate reported performance metrics and create unreliable models.
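A common remedy for the first of those errors, assuming scikit-learn is available, is to wrap every data-dependent step in a Pipeline so that normalization and feature selection are re-fit within each cross-validation fold and never see held-out data:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 50))                 # 80 patients, 50 radiomic features
y = (X[:, 0] + rng.normal(scale=0.5, size=80) > 0).astype(int)

pipe = Pipeline([
    ("scale", StandardScaler()),              # fit on the training fold only
    ("select", SelectKBest(f_classif, k=5)),  # selection inside the fold
    ("clf", LogisticRegression()),
])
scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
```

Fitting the scaler or selector on the full dataset before splitting would let validation data leak into training, the exact failure mode described above.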

Inconsistent feature definitions across software

The radiomics community still lacks consensus regarding feature terminology, underlying mathematics, and implementation across software programs. This absence of standardization creates a scenario where features extracted using different toolboxes cannot be used to build or validate the same model, hampering generalizability.

Comparative analyses reveal inconsistencies among radiomics software. Although good agreement exists for many features, morphology features show particularly poor concordance across programs. The computation of morphology features involves complex transformations in which volume data are converted to mesh data, introducing variations not present in statistical or textural features.

Similarly, programs utilizing different gray-level discretization approaches produce different values. Even basic parameters like whether minimum intensity starts at 0 or 1 can drastically alter feature values by introducing or eliminating partial terms in calculations. The RadiomiX, LIFEx, and syngo.via platforms share only 41 common features, consisting of merely 3 shape features, 8 intensity features, and 30 texture features.

The Image Biomarker Standardization Initiative (IBSI) attempts to address these issues by proposing definitions for 11 commonly used feature classes and creating digital phantoms with benchmark values. Yet even with IBSI-compliance, feature reproducibility between applications remains unguaranteed. For radiomics to advance toward clinical utility, standardization must improve across all implementation aspects.

 


Reproducibility and Standardization Challenges

Reproducibility emerges as a critical barrier in translating radiomics research into clinical practice. The lack of reproducibility and validation in high-throughput quantitative image analysis studies presents major challenges for advancing the field toward practical applications.

Impact of acquisition protocols on radiomics features

Scanner parameters profoundly alter the numerical values of radiomics features, creating substantial variability across studies. In CT imaging, variations in tube current markedly influence feature reproducibility—only 20 (8%) features remain reproducible when comparing 50 versus 100 mA acquisitions, yet this improves to 63 (25%) features when comparing 400 versus 500 mA. Indeed, random noise at lower tube currents increases pixel-by-pixel intensity variability, directly affecting textural measurements.

Reconstruction algorithms likewise alter feature values. One study demonstrated that only 30.14% of features maintained reproducibility (CCC ≥ 0.90) across test-retest settings. Notably, the percentage of reproducible features decreases with increasing levels of iterative reconstruction—88% of features remain reproducible at 10% ASiR (adaptive statistical iterative reconstruction), yet only 38% remain reproducible at 100% ASiR.
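The CCC threshold used in such test-retest studies can be sketched in pure Python with Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic offsets between the two measurement runs:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement runs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical feature values for the same lesions at test and retest
test_run = [1.0, 2.0, 3.0, 4.0]
retest_run = [1.1, 2.0, 2.9, 4.1]
reproducible = ccc(test_run, retest_run) >= 0.90
```

A feature whose test-retest CCC falls below 0.90 would be flagged as non-reproducible under the criterion cited above.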

Different imaging modalities exhibit varying levels of feature stability. Orhlac et al. showed that CT reconstruction kernels and slice thickness markedly influence radiomics texture features. Meanwhile, CT-derived radiomic features display intrinsic dependencies on voxel size and gray-level discretization methods, highlighting the need for careful parameter selection.

Image Biomarker Standardization Initiative (IBSI)

The IBSI addresses standardization challenges through consensus-based guidelines for translating medical images into biomarkers. The initiative operates across multiple phases:

  • Phase I: Standardization of radiomic feature computations using digital phantoms
  • Phase II: Feature standardization under general image processing using CT data
  • Phase III: Validation using multi-modality imaging datasets

Through these efforts, IBSI established reference values for 169 commonly used features, created a standard radiomics processing scheme, and developed reporting guidelines. Initial consensus on reference values was weak for 232 of 302 features (76.8%) in Phase I and 703 of 1075 features (65.4%) in Phase II. In contrast, by the final iteration, weak consensus remained for merely 0.4% of features in Phase I and 1.4% in Phase II.

The IBSI has since extended its efforts with a second initiative (IBSI 2), launched in June 2020, to standardize convolutional imaging filters such as wavelets and Laplacian of Gaussian that highlight specific image characteristics.

ComBat harmonization for multi-center studies

ComBat harmonization addresses the scanner effect that hampers multi-center radiomics studies. Originally developed for genomics data, this batch-effect correction standardizes the means (location) and variances (scale) of features across batches using an empirical Bayes approach.
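The location-scale idea can be sketched in a few lines of NumPy. This deliberately omits ComBat's empirical Bayes shrinkage of the per-batch estimates (and any preservation of biological covariates), so it is a simplification of the method rather than full ComBat:

```python
import numpy as np

def align_batches(features, batch_labels):
    """Standardize per-batch mean/variance of each feature to pooled values."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(batch_labels)
    out = features.copy()
    pooled_mean = features.mean(axis=0)
    pooled_std = features.std(axis=0)
    for b in np.unique(labels):
        rows = labels == b
        mu = features[rows].mean(axis=0)   # batch location
        sd = features[rows].std(axis=0)    # batch scale
        out[rows] = (features[rows] - mu) / sd * pooled_std + pooled_mean
    return out

# One feature measured on two scanners, with a clear scanner offset
X = np.array([[1.0], [2.0], [3.0], [11.0], [12.0], [13.0]])
batches = ["A", "A", "A", "B", "B", "B"]
harmonized = align_batches(X, batches)
```

After alignment the two scanner batches share the same mean and spread, removing the non-biological offset while preserving within-batch ordering.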

In practice, ComBat effectively removes non-biological variations between different scanner manufacturers and field strengths. One study examining T2-weighted abdominal MRI found that 76.7%-90.1% of radiomic features and 89.0%-89.3% of deep features showed major differences between field strengths and manufacturers prior to harmonization. After ComBat application, no notable differences remained based on t-tests or ANOVA tests.

Ligero et al. applied ComBat to address variance sources including manufacturer-dependent convolution kernels and slice thickness, demonstrating that the method minimized radiomics data variability regardless of differences in CT acquisition protocols. For PET imaging, ComBat successfully aligned SUVmax values between different reconstructions (p-value improved from 0.0002 to 0.6994).

The method proves beneficial even with limited data—previous studies show ComBat successfully harmonizes features with as few as 20 patients per site. Currently, ComBat implementations exist across multiple platforms (R, Python, MATLAB) and through free online applications, facilitating wide adoption.

 

 

Clinical Translation Barriers in Radiomics

Despite radiomics’ potential for enhancing cancer diagnosis and treatment, a substantial gap exists between research results and clinical implementation. Multiple barriers impede this translation, requiring structured approaches to overcome technical and practical challenges.

Low Radiomics Quality Score (RQS) in current studies

The Radiomics Quality Score (RQS) serves as a critical metric for evaluating methodological quality in radiomics research. Unfortunately, most studies fail to achieve adequate scores. A comprehensive meta-analysis of 3258 quality scores extracted from 130 review articles revealed a mean score of just 9.4 ± 6.4, corresponding to merely 26.1% of the maximum possible value. Only 7.2% of assessed studies achieved at least half of the maximum RQS. Throughout other research domains, the pattern persists—prostate MRI radiomics studies averaged 7.93 ± 5.13 points (23 ± 13%), cholangiocarcinoma studies showed median scores between 8-10 (22-28%), and a broader review found median RQS of 21.00%.

In practice, certain factors correlate with higher quality scores. Studies published in journals with impact factors above 4 demonstrated higher RQS values compared to those in lower-impact publications. Likewise, studies including more than 100 patients typically achieved superior scores. Encouragingly, quality scores correlate positively with publication date, yet they remain insufficient to support clinical translation.

Limited interpretability of black-box models

The opacity of complex radiomics algorithms presents another major barrier to clinical adoption. Physicians understandably hesitate to trust models when they cannot comprehend the underlying decision process. This concern has gained regulatory attention—in Europe, the General Data Protection Regulation establishes that individuals have the right to receive clear explanations of AI decisions affecting them.

Explainable AI techniques attempt to address this challenge. Methods such as SHAP (SHapley Additive exPlanations) values offer insights into model decisions, although post-hoc explanations like saliency maps often provide insufficient clarity about feature connections and weighting.

A fundamental trade-off exists between model interpretability and performance. High-performance models like deep neural networks, with their complex structures and numerous parameters, effectively capture intricate patterns yet remain difficult to interpret. Conversely, simpler models like linear regression offer greater interpretability but frequently deliver insufficient performance with complex medical data.

Integration with clinical workflows and EHRs

For radiomics to impact patient care, tools must seamlessly integrate into existing clinical workflows. Current PACS and specialty radiomics systems lack necessary integration points to advance both research and clinical practice. Although newer standards like HL7 FHIR and DICOMweb facilitate system integration, functionality gaps persist.

Beyond technical interoperability, effective radiomics tools must incorporate clinical context. Current state-of-the-art models typically consider only pixel values without data informing clinical context. Yet experienced clinicians interpret imaging findings within the appropriate patient context, leading to more accurate diagnoses and improved outcomes. To achieve similar capabilities, radiomics models must process contextual data from electronic health records alongside pixel data.

The urgency for improved workflow integration increases as imaging volumes grow. Currently, radiologists may need to interpret an image every 3-4 seconds over an 8-hour workday, contributing to fatigue and increased error rates. Throughout this demanding environment, radiomics tools that fail to integrate seamlessly risk becoming additional burdens rather than valuable assistants.

 

 

Radiomics in Cancer Care: Use Cases and Evidence

Concrete clinical applications of radiomics continue to emerge across various cancer types, offering robust evidence for its integration into oncology practice. The utility of radiomics spans from treatment planning to outcome prediction, with several promising applications gaining traction in recent years.

Radiomics in lung cancer radiotherapy planning

Radiomics models show substantial value in predicting tumor response during radiotherapy treatment. For locally advanced non-small cell lung cancer (NSCLC), radiomic features extracted from three-dimensional planning CT scans effectively model tumor shrinkage during chemoradiotherapy courses. Moreover, pathological response to neoadjuvant chemoradiotherapy can be predicted using diagnostic and planning CT scans that incorporate both tumor and peritumoral parenchymal features. In studies evaluating radiotherapy response, texture analysis of PET-CT data revealed that a texture feature named “contrast” successfully predicted treatment response and overall survival when comparing pre- and post-chemoradiotherapy images. Currently, radiomics studies in lung cancer radiotherapy predominantly evaluate patients undergoing treatment with cytotoxic chemo-radiotherapy.

Predicting immunotherapy response in NSCLC

The ability to predict immunotherapy response represents a critical application of radiomics. In NSCLC patients receiving immunotherapy, a combined radiomics score (COMB-Radscore) developed from PET, CT, and PET/CT images achieved impressive predictive performance with AUC values of 0.894 and 0.819 in training and testing cohorts, respectively. This score demonstrates excellent dynamic predictive capabilities with an AUC of 0.857, enabling earlier detection of potential disease progression compared to traditional size-based evaluations. Alternatively, a machine learning model based on CT radiomic features predicted tumor mutational burden (TMB) with AUC values of 0.816 in training and 0.762 in external validation datasets. Presently, this radiomics-based TMB prediction offers a cost-effective improvement for immunotherapy patient selection.

Lesion-level prediction in metastatic melanoma

Radiomics enables lesion-specific analysis in multi-metastatic disease, offering advantages over traditional patient-level assessments. In metastatic melanoma, lesion-level radiomics models have demonstrated an approximate 4.5-fold increase in predictive capacity for doxorubicin monotherapy response. Interestingly, radiomics captures intra-tumoral heterogeneity that correlates with genetic mutations, epigenetic alterations, and tumor microenvironment interactions. To illustrate, a radiomic model for metastatic melanoma patients achieved an AUC of 0.82 in internal testing, correctly identifying subjects with favorable prognosis with 85% accuracy. Furthermore, six critical radiomic features—including wavelet features reflecting variability in lesion intensity and high-gray-level zone emphasis quantifying tumor density—showed statistically significant differences between prognostic groups.

Unlike conventional approaches that analyze only one lesion, multi-lesion radiomics analysis provides comprehensive evaluation of inter-lesion heterogeneity, especially relevant for predicting treatment response. This approach shifts cancer management toward precision medicine by facilitating personalized treatment planning, therapy response prediction, and adaptive treatment adjustments.

 

 

Patient-Level vs Lesion-Level Modeling Approaches

A fundamental challenge in radiomics involves the transition from single lesion analysis to comprehensive patient-level assessment. Cancer patients frequently present with multiple lesions exhibiting varying biological characteristics, necessitating advanced approaches that accurately capture this heterogeneity.

Entropy and heterogeneity metrics for aggregation

Entropy emerges as a valuable quantitative imaging biomarker for characterizing cancer imaging phenotypes, demonstrating associations with tumor gene expression, metabolism, staging, prognosis, and treatment response. Research involving 112 patients with matched synchronous metastases revealed entropy correlation across spatial scale filters with Spearman’s Rho ranging between 0.41 and 0.59. Furthermore, multivariate analysis indicated that entropy values correlate with primary tumor type, ROI area size, metastasis site, and reference tissue measurements.

Intertumoral variability within patients can be substantial—coefficients of variation for radiomics features range from 1.6% for Inverse Difference Moment Normalized to 321% for Skewness. This heterogeneity persists even in oligo-metastatic patients and within lesions sharing the same host organ. Interestingly, spatial distribution metrics such as the minimal value and entropy distribution have demonstrated connections with patient outcomes. In immunotherapy studies, the minimal CD8 radiomics score of a patient’s least infiltrated lesion showed association with overall survival (HR=0.28, 95% CI 0.10-0.79).

Conventional subsampling techniques often underestimate heterogeneity between tumors in the same patient—one-lesion sampling completely misses this variation, highlighting the inadequacy of traditional approaches. Consequently, radiomic features at the patient level typically demonstrate superior predictive power compared to features derived from only the hottest or largest lesion.
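One simple way to move from lesion-level features to a patient-level representation, in the spirit of the minimum-across-lesions and coefficient-of-variation summaries discussed above, is to aggregate each feature over all of a patient's lesions. The helper below is hypothetical (function name, keys, and values are illustrative only):

```python
import numpy as np

def aggregate_lesion_features(values):
    """Patient-level summaries of one radiomic feature measured per lesion.

    Returns the minimum (worst-lesion summary, cf. the least-infiltrated
    lesion's CD8 score), maximum, mean, and coefficient of variation (%)
    as a measure of inter-lesion heterogeneity.
    """
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    cv = float(v.std(ddof=1) / mean * 100) if len(v) > 1 and mean != 0 else 0.0
    return {
        "min": float(v.min()),
        "max": float(v.max()),
        "mean": float(mean),
        "cv_percent": cv,
    }

# Three lesions from one patient, with a hypothetical CD8 radiomics score:
summary = aggregate_lesion_features([0.62, 0.48, 0.15])
```

Under this scheme, a high `cv_percent` flags patients whose lesions diverge biologically, exactly the cases one-lesion sampling misrepresents.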

Multiple-instance learning for patient-level prediction

Multiple-instance learning (MIL) offers an elegant solution for patient-level prediction without requiring lesion-specific annotations. Operating as a weakly supervised method, MIL allows labeling a series of images (bag) rather than individual slices (instances), thereby facilitating extraction of global features while minimizing false positive influence.

In practical implementation, each patient represents a bag containing all image slices as instances. MIL approaches employ various aggregation strategies—max pooling, convolutional pooling, or attention mechanisms—to convert instance-level features into bag-level representations. Among these, attention-based MIL stands out by determining instance weights through neural networks, effectively prioritizing clinically relevant information.
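The attention-pooling step described above can be sketched as follows. This toy example uses random (untrained) attention parameters purely to show the mechanics: per-instance scores are softmax-normalized into weights that combine slice-level features into one bag-level (patient-level) representation. All names, shapes, and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def attention_pool(instance_feats, V, w):
    """Attention-based MIL pooling over one bag.

    instance_feats: (n_instances, d) array, one row per image slice.
    V (d, hidden) and w (hidden,): attention parameters (random here,
    i.e. untrained; in practice learned end-to-end with the classifier).
    """
    scores = np.tanh(instance_feats @ V) @ w        # (n_instances,) raw scores
    scores -= scores.max()                          # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()    # softmax weights, sum to 1
    bag_feat = attn @ instance_feats                # weighted average -> (d,)
    return bag_feat, attn

d, hidden, n_slices = 8, 4, 5
V = rng.normal(size=(d, hidden))
w = rng.normal(size=hidden)
slices = rng.normal(size=(n_slices, d))             # one "bag" = one patient
bag, attn = attention_pool(slices, V, w)
```

The attention weights `attn` are what make the approach interpretable: a trained model's largest weights point to the slices that drove the bag-level prediction.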

The attention mechanism proves particularly valuable for interpretability, directing clinicians toward features triggering diagnostic recommendations. For instance, deep multiple instance learning models employing attention mechanisms have successfully predicted chemotherapeutic response in non-small cell lung cancer using pretreatment CT images.

Research demonstrates heterogeneous treatment responses in 36.1% of patients receiving tyrosine kinase inhibitors, underscoring the value of lesion-level prediction for identifying which lesions may benefit from specific treatments. Through these advanced modeling approaches, radiomics continues to evolve toward more personalized cancer assessment.

Future Directions: Toward Clinical Integration

The advancement of radiomics toward clinical utility requires integration with complementary technologies, standardized monitoring approaches, and robust regulatory frameworks.

Multimodal biomarkers: radiomics + genomics

Integrating radiomics with genomics creates a powerful synergy termed radiogenomics, offering non-invasive insights into cancer's genetic makeup. This approach can predict valuable biomarkers such as IDH mutations in glioma and EGFR alterations in lung cancer. Radiogenomics enables correlation between tumor imaging phenotypes and underlying gene expression patterns, thereby addressing a limitation of traditional biopsies, which often miss tumor heterogeneity. For instance, combined radiomics, histopathology, and genomic data demonstrated complementary power in identifying high- and low-risk NSCLC patients receiving immunotherapy. Radiogenomics can also guide biopsy site selection, reducing the sampling errors that undermine histopathological assessment.

Delta-radiomics for treatment monitoring

Delta-radiomics evaluates the relative net change in radiomic features between longitudinal images, offering deeper understanding of treatment-induced effects. This approach assesses changes that occur after therapy introduction, potentially predicting response earlier than conventional methods. In metastatic melanoma patients, combining conventional and delta radiomics improved treatment response prediction by differentiating between pseudo-progression and disease advancement. Similarly, delta radiomics signatures effectively distinguished responders from non-responders in advanced NSCLC and metastatic bladder cancer. Nevertheless, technical challenges persist—the field requires larger multicentric cohort studies with prospective designs to validate these promising findings.
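The delta-radiomics computation itself, the relative net change of each feature between a baseline and a follow-up scan, can be sketched in a few lines. Feature names and values below are purely illustrative:

```python
def delta_features(baseline, follow_up, eps=1e-9):
    """Delta-radiomics sketch: relative net change per feature,
    (follow_up - baseline) / baseline, between two longitudinal scans.

    `eps` guards against division by zero for features near 0.
    """
    return {
        name: (follow_up[name] - baseline[name]) / (baseline[name] + eps)
        for name in baseline
    }

# Hypothetical pre- and post-treatment feature values for one lesion:
pre  = {"entropy": 4.0, "skewness": 0.50, "glcm_contrast": 120.0}
post = {"entropy": 3.0, "skewness": 0.55, "glcm_contrast": 150.0}
delta = delta_features(pre, post)   # e.g. entropy fell by 25%
```

The resulting signed fractions (a 25% drop in entropy, a 25% rise in GLCM contrast here) are what delta-radiomics signatures feed into response-prediction models.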

Regulatory and ethical considerations

As radiomics applications mature, regulatory oversight becomes increasingly important. In 2023, 79% of the medical AI devices authorized by the FDA were in radiology. Yet successful clinical translation demands comprehensive frameworks addressing privacy, data security, and algorithm transparency. In Europe, the General Data Protection Regulation (GDPR) and medical device regulations are evolving to accommodate these technologies. Beyond technical considerations, ethical challenges include potential insurance discrimination based on genomic profiles and ensuring equitable access across populations. Public-private partnerships between regulatory agencies and medical societies may prove crucial for validating radiomics algorithms through real-world evidence collection.

Conclusion

Radiomics stands at the intersection of medical imaging, computational analysis, and precision oncology, offering unprecedented opportunities to enhance cancer care through quantitative feature extraction. Throughout this article, we have examined how radiomics transforms conventional images into minable data, potentially revealing biological information invisible to the human eye. The systematic workflow—from acquisition to segmentation, feature extraction, and model development—creates a framework for converting visual patterns into actionable clinical insights.

Despite its promising applications across various cancer types, radiomics faces substantial implementation barriers. Technical challenges related to acquisition protocols fundamentally affect feature values, while inconsistent definitions across software platforms hamper reproducibility. Additionally, high-dimensional data combined with relatively small patient cohorts leads to overfitting issues, undermining model reliability. The remarkably low Radiomics Quality Score in published studies further illustrates the methodological gaps that must be addressed before widespread clinical adoption becomes feasible.

Standardization efforts such as the Image Biomarker Standardization Initiative represent crucial steps toward establishing consensus on feature definitions and computational approaches. Likewise, harmonization methods like ComBat address multi-center variability by minimizing non-biological differences between scanners. These developments, though early, pave the way for more robust and generalizable radiomics models.

Emerging applications demonstrate concrete clinical value across the oncology spectrum. Radiomics models help predict tumor response during radiotherapy for lung cancer patients, assess immunotherapy effectiveness in NSCLC, and evaluate treatment options for individual lesions in metastatic melanoma. The field has thus evolved from theoretical potential to practical utility in specific clinical scenarios.

Patient-level assessment remains particularly challenging when dealing with multiple lesions. Advanced approaches such as entropy metrics and multiple-instance learning offer promising solutions by capturing intertumoral heterogeneity without requiring lesion-specific annotations. These methods acknowledge the complexity of cancer as a systemic disease with varied manifestations across different tumor sites.

The future of radiomics likely depends on integration with complementary technologies. Radiogenomics combines imaging phenotypes with genomic profiles, providing non-invasive insights into tumor biology while overcoming limitations of traditional biopsies. Delta-radiomics tracks feature changes over time, potentially detecting treatment response earlier than conventional methods. However, both approaches require larger prospective studies before routine clinical implementation.

Regulatory and ethical considerations will undoubtedly shape radiomics development as applications mature. Clear frameworks addressing algorithm transparency, data security, and equitable access must evolve alongside technical advances to ensure responsible implementation.

Radiomics therefore represents both an opportunity and a challenge for oncology practice. When properly implemented with rigorous methodology and appropriate validation, these computational approaches may supplement clinical decision-making by extracting otherwise hidden information from routine images. The coming years will determine whether radiomics fulfills its promise of enhancing cancer diagnosis, treatment selection, and response assessment through quantitative image analysis.

Key Takeaways

Radiomics transforms medical images into quantitative data that can reveal hidden biological information about tumors, but significant barriers prevent widespread clinical adoption.

• Radiomics extracts thousands of mathematical features from routine medical images to predict treatment response and outcomes non-invasively
• Most radiomics studies suffer from poor methodology, with only 26% average quality scores, and lack external validation across institutions
• Standardization remains critical: scanner protocols and software differences dramatically affect feature values, limiting reproducibility between studies
• Clinical integration requires seamless workflow integration, improved model interpretability, and robust regulatory frameworks for patient safety
• Future success depends on combining radiomics with genomics data and developing delta-radiomics for real-time treatment monitoring

The field shows promise for personalized cancer care through “virtual biopsies” that assess entire tumors rather than small tissue samples, but rigorous validation and standardization must precede clinical implementation to ensure reliable, actionable insights for oncologists.


Frequently Asked Questions (FAQs)

Q1. What are the key advantages of radiomics in cancer care? Radiomics offers non-invasive tumor characterization, enables prediction of treatment response, and provides prognostic information by extracting quantitative data from medical images. It can analyze entire tumors rather than small tissue samples, potentially revealing hidden biological information not visible to the human eye.

Q2. How does radiomics differ from traditional radiology in cancer assessment? While traditional radiology relies on visual assessment, radiomics uses advanced computational methods to extract numerous quantitative features from medical images. This approach can provide more comprehensive analysis of tumor characteristics, including spatial heterogeneity and microenvironment, potentially leading to more precise diagnoses and treatment planning.

Q3. What are the main challenges in implementing radiomics clinically? Major challenges include lack of standardization in image acquisition and feature extraction, poor reproducibility across different software and institutions, and the risk of overfitting due to high-dimensional data. Additionally, many radiomics studies suffer from low methodological quality and lack external validation, hindering clinical translation.

Q4. How is radiomics being integrated with other technologies in cancer care? Radiomics is being combined with genomics (radiogenomics) to provide non-invasive insights into tumor genetics. Delta-radiomics, which tracks changes in features over time, is being developed for treatment monitoring. These integrations aim to enhance personalized cancer care by providing more comprehensive tumor assessments.

Q5. What future developments are needed for radiomics to become widely adopted in clinical practice? For widespread clinical adoption, radiomics needs improved standardization of protocols and feature definitions, better integration with clinical workflows and electronic health records, and more interpretable models. Additionally, larger multicenter studies with prospective designs are required to validate findings, along with clear regulatory frameworks to ensure patient safety and data security.
