
Add Yardstick Metric
Guide for developing new metrics that extend the yardstick package. This skill provides best practices, code templates, and testing patterns for creating custom performance metrics.
Two Development Contexts
This skill supports two distinct development contexts:
Extension Development (Default)
Creating a new R package that extends yardstick with custom metrics.
- Use this for: New packages, standalone metrics, CRAN submissions
- Constraint: Can only use exported functions (no :::)
Source Development (Advanced)
Contributing directly to yardstick via pull requests.
- Use this for: Contributing to tidymodels/yardstick repository
- Benefit: Can use internal functions and package infrastructure
Getting Started
INSTRUCTIONS FOR CLAUDE: Run the verification script first to determine the development context:
Rscript -e 'source(Sys.glob(path.expand("~/.claude/plugins/cache/tidymodels-skills/tidymodels-dev/*/tidymodels/shared-references/scripts/verify-setup.R"))[1])'
Then follow the appropriate path based on the output:
- Output: "All checks for source development complete." → Go to Source Development Guide
- Output: "All checks for extension development complete." (no warnings) → Go to Extension Development Guide
- Output: Shows "Warning - [UUID]" messages → Go to Extension Prerequisites to resolve warnings first
Overview
Creating a custom yardstick metric provides:
- Standardization with existing metrics
- Automatic error handling for types and lengths
- Support for multiclass implementations
- NA handling
- Grouped data frame support
- Integration with metric_set()
- Optional autoplot support for visualization (curves and confusion matrices)
Development Workflow
See Development Workflow for complete details.
Fast iteration cycle (run repeatedly):
- devtools::document() - Generate documentation
- devtools::load_all() - Load your package
- devtools::test() - Run tests
Final validation (run once at end):
- devtools::check() - Full R CMD check
WARNING: Do NOT run check() during iteration. It takes 1-2 minutes and is unnecessary until you're done.
Choosing Your Metric Type
What type of data do you have?

- Regression → NUMERIC METRICS
  - Examples: MAE, RMSE, R²
- Classification → CLASS-BASED METRICS
  - Predicting classes (unordered factor) → CLASS metrics
    - Examples: Accuracy, Precision, F1 Score
  - Predicting probabilities, unordered factor → PROBABILITY metrics
    - Examples: ROC AUC, Log Loss, PR AUC
  - Predicting probabilities, ordered factor → ORDERED PROBABILITY metrics
    - Examples: Ranked Probability Score
- Survival Analysis → SURVIVAL METRICS
  - Examples: Concordance, Brier, Royston's D
- Quantile Forecasting → QUANTILE METRICS
  - Examples: WIS, Pinball, Coverage
Survival Metrics Breakdown:
- STATIC: Single overall value (e.g., Concordance)
- DYNAMIC: Value per time point (e.g., Time-dependent Brier)
- INTEGRATED: Averaged across time (e.g., Integrated Brier)
- LINEAR PRED: From Cox models (e.g., Royston's D)
Decision guide:
- Numeric metric: Truth and predictions are continuous numbers → Numeric Metrics
- Class metric: Truth and predictions are unordered factor classes → Class Metrics
- Probability metric: Truth is unordered factor, predictions are probabilities → Probability Metrics
- Ordered probability metric: Truth is ordered factor, predictions are probabilities → Ordered Probability Metrics
- Static survival metric: Truth is Surv object, single numeric prediction → Static Survival Metrics
- Dynamic survival metric: Truth is Surv object, time-dependent predictions → Dynamic Survival Metrics
- Integrated survival metric: Truth is Surv object, integrated across time → Integrated Survival Metrics
- Linear predictor survival metric: Truth is Surv object, linear predictor from Cox model → Linear Predictor Survival Metrics
- Quantile metric: Truth is numeric, predictions are quantiles → Quantile Metrics
Complete Example: Numeric Metric (MAE)
For a complete, step-by-step implementation of a numeric metric (MAE), see the comprehensive example with all required components in:
Numeric Metrics Reference
This reference includes:
- Implementation function (mae_impl) with case weights handling
- Vector interface (mae_vec) with validation and NA handling
- Data frame method with new_numeric_metric() wrapper
- Complete test suite covering correctness, NA handling, input validation, and case weights
- Working examples you can adapt for your own metrics
Quick preview of the pattern:
- _impl() function: Core calculation logic
- _vec() function: Validation and NA handling
- .data.frame() method: Integration with yardstick system
See also Extension Development Guide for the complete implementation walkthrough.
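The three-function pattern can be sketched as follows, using a hypothetical my_mse metric (the name my_mse is illustrative; check_numeric_metric(), yardstick_remove_missing(), yardstick_any_missing(), numeric_metric_summarizer(), and new_numeric_metric() are exported by yardstick):

```r
library(yardstick)
library(rlang)

# 1. _impl(): core calculation, assumes clean inputs
my_mse_impl <- function(truth, estimate, case_weights = NULL) {
  errors <- (truth - estimate)^2
  if (is.null(case_weights)) {
    mean(errors)
  } else {
    weighted.mean(errors, w = as.double(case_weights))
  }
}

# 2. _vec(): validation and NA handling
my_mse_vec <- function(truth, estimate, na_rm = TRUE, case_weights = NULL, ...) {
  check_numeric_metric(truth, estimate, case_weights)
  if (na_rm) {
    result <- yardstick_remove_missing(truth, estimate, case_weights)
    truth <- result$truth
    estimate <- result$estimate
    case_weights <- result$case_weights
  } else if (yardstick_any_missing(truth, estimate, case_weights)) {
    return(NA_real_)
  }
  my_mse_impl(truth, estimate, case_weights = case_weights)
}

# 3. Generic + data frame method, registered as a numeric metric
my_mse <- function(data, ...) {
  UseMethod("my_mse")
}
my_mse <- new_numeric_metric(my_mse, direction = "minimize")

my_mse.data.frame <- function(data, truth, estimate,
                              na_rm = TRUE, case_weights = NULL, ...) {
  numeric_metric_summarizer(
    name = "my_mse",
    fn = my_mse_vec,
    data = data,
    truth = !!enquo(truth),
    estimate = !!enquo(estimate),
    na_rm = na_rm,
    case_weights = !!enquo(case_weights)
  )
}
```

The data frame method quotes its column arguments with enquo() so users can pass bare column names, exactly like yardstick's built-in metrics.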
Implementation Guide by Metric Type
Numeric Metrics
Use for: Regression metrics where truth and predictions are continuous numbers.
Pattern: Three-function approach (_impl, _vec, data.frame method)
Complete guide: Numeric Metrics
Key points:
- Always use check_numeric_metric() for validation
- Handle case weights with weighted.mean()
- Use yardstick_remove_missing() for NA handling
- Return .estimator = "standard"
Examples: MAE, RMSE, MSE, Huber Loss, R-squared
Reference implementations:
- Simple metrics: R/num-mae.R, R/num-rmse.R, R/num-mse.R
- Parameterized metrics: R/num-huber_loss.R (has delta parameter)
- Complex metrics: R/num-ccc.R (correlation-based)
Class Metrics
Use for: Classification metrics where truth and predictions are factor classes.
Pattern: Uses confusion matrix from yardstick_table()
Complete guide: Class Metrics
Key points:
- Use yardstick_table() to create weighted confusion matrix
- Implement separate _binary and _estimator_impl functions
- Handle factor level ordering with event_level parameter
- Support multiclass with macro, micro, macro_weighted averaging
Examples: Accuracy, Precision, Recall, F1, Specificity
Reference implementations:
- Simple metrics: R/class-accuracy.R, R/class-precision.R, R/class-recall.R
- Combined metrics: R/class-f_meas.R (F1 score)
- Balanced metrics: R/class-bal_accuracy.R (handles class imbalance)
Probability Metrics
Use for: Metrics that evaluate predicted probabilities against true classes.
Pattern: Similar to class metrics but uses probability columns
Complete guide: Probability Metrics
Key points:
- Truth is factor, estimate is probabilities
- Convert factor to binary for binary metrics
- Handle multiple probability columns for multiclass
- Use check_prob_metric() for validation
Examples: ROC AUC, Log Loss, Brier Score, PR AUC
Reference implementations:
- Curve-based: R/prob-roc_auc.R, R/prob-pr_auc.R
- Scoring rules: R/prob-brier_class.R, R/prob-mn_log_loss.R
Ordered Probability Metrics
Use for: Ordinal classification metrics where class ordering matters.
Pattern: Three-function approach with cumulative probabilities
Complete guide: Ordered Probability Metrics
Key points:
- Truth must be ordered factor
- Uses cumulative probabilities to respect ordering
- Use check_ordered_prob_metric() for validation
- No averaging types (works same for any number of classes)
Examples: Ranked Probability Score (RPS)
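The cumulative-probability idea can be sketched in base R. This is a simplified, unweighted RPS, not yardstick's implementation; dividing by the number of classes minus one is one common normalization convention and an assumption here:

```r
# Ranked Probability Score (sketch).
# truth: integer class index per observation (1..K, respecting the ordering)
# prob:  n x K matrix of predicted class probabilities
rps_sketch <- function(truth, prob) {
  n <- nrow(prob)
  k <- ncol(prob)
  # one-hot encode the observed class
  obs <- matrix(0, nrow = n, ncol = k)
  obs[cbind(seq_len(n), truth)] <- 1
  # cumulative sums across the ordered classes (empirical CDFs)
  cum_prob <- t(apply(prob, 1, cumsum))
  cum_obs <- t(apply(obs, 1, cumsum))
  # squared CDF differences, normalized and averaged over observations
  mean(rowSums((cum_prob - cum_obs)^2) / (k - 1))
}
```

Because the score compares cumulative distributions, probability mass placed on a class far from the true class is penalized more than mass on an adjacent class, which is exactly why RPS suits ordinal outcomes.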
Static Survival Metrics
Use for: Overall survival metrics with single numeric predictions.
Pattern: Three-function approach with Surv objects
Complete guide: Static Survival Metrics
Key points:
- Truth is Surv object from survival package
- Estimate is single numeric per observation
- Handles right-censoring with comparable pairs
- Use check_static_survival_metric() for validation
Examples: Concordance Index (C-index)
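The comparable-pairs logic can be sketched with a naive O(n²) loop in base R. This illustrates the pairing rule for right-censored data only; it is not yardstick's actual implementation:

```r
# time:   observed time; status: 1 = event, 0 = censored
# risk:   predicted risk score (higher = expected earlier event)
concordance_sketch <- function(time, status, risk) {
  concordant <- 0
  comparable <- 0
  n <- length(time)
  for (i in seq_len(n)) {
    for (j in seq_len(n)) {
      # a pair is comparable if i had the event and was observed earlier
      if (status[i] == 1 && time[i] < time[j]) {
        comparable <- comparable + 1
        if (risk[i] > risk[j]) {
          concordant <- concordant + 1    # higher risk, earlier event
        } else if (risk[i] == risk[j]) {
          concordant <- concordant + 0.5  # tied risks count half
        }
      }
    }
  }
  concordant / comparable
}
```

Censored observations never anchor a pair (their true event time is unknown), which is how the C-index accommodates right-censoring.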
Dynamic Survival Metrics
Use for: Time-dependent survival metrics at specific evaluation times.
Pattern: Three-function approach with list-column predictions
Complete guide: Dynamic Survival Metrics
Key points:
- Truth is Surv object
- Estimate is list-column of data.frames with .eval_time, .pred_survival, .weight_censored
- Returns multiple rows (one per eval_time)
- Uses inverse probability of censoring weights (IPCW)
Examples: Time-dependent Brier Score, Time-dependent ROC AUC
Integrated Survival Metrics
Use for: Overall survival metrics integrated across evaluation times.
Pattern: Two-function approach (calls dynamic metric, then integrates)
Complete guide: Integrated Survival Metrics
Key points:
- Same input format as dynamic survival metrics
- Integrates across time using trapezoidal rule
- Normalizes by max evaluation time
- Requires at least 2 evaluation times
Examples: Integrated Brier Score, Integrated ROC AUC
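The integration step can be sketched in base R: a trapezoidal rule over the per-time metric values, normalized by the maximum evaluation time as described above (the exact normalization convention is an assumption of this sketch):

```r
# eval_times: sorted evaluation times (at least 2)
# scores:     metric value at each evaluation time (e.g., Brier scores)
integrate_metric <- function(eval_times, scores) {
  stopifnot(length(eval_times) >= 2)
  widths <- diff(eval_times)                            # time step widths
  heights <- (scores[-1] + scores[-length(scores)]) / 2 # trapezoid midpoints
  sum(widths * heights) / max(eval_times)               # normalize by max time
}
```

With a constant score the integrated value equals that constant (when the first evaluation time is 0), which is a quick sanity check for any integration routine.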
Linear Predictor Survival Metrics
Use for: Metrics for linear predictors from Cox models.
Pattern: Three-function approach with transformations
Complete guide: Linear Predictor Survival Metrics
Key points:
- Truth is Surv object
- Estimate is linear predictor values (unbounded)
- Often uses transformations (e.g., normal scores)
- Use check_linear_pred_survival_metric() for validation
Examples: Royston's D statistic, R²_D
Quantile Metrics
Use for: Quantile prediction metrics for uncertainty quantification.
Pattern: Three-function approach with quantile_pred objects
Complete guide: Quantile Metrics
Key points:
- Truth is numeric
- Estimate is hardhat::quantile_pred object
- Handles missing quantiles (impute, drop, or propagate)
- Uses fn_options for additional parameters
Examples: Weighted Interval Score (WIS), Pinball Loss
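Pinball loss at a single quantile level can be sketched in base R, standalone and without the quantile_pred structure (a sketch of the standard definition, not yardstick's implementation):

```r
# Pinball (quantile) loss at level tau:
# under-prediction is penalized by tau, over-prediction by (1 - tau)
pinball_loss <- function(truth, estimate, tau) {
  err <- truth - estimate
  mean(ifelse(err >= 0, tau * err, (tau - 1) * err))
}
```

At tau = 0.5 the pinball loss is half the MAE, and asymmetric tau values reward predictions that sit at the corresponding quantile of the outcome distribution.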
Documentation
See Roxygen Documentation for complete templates.
Required roxygen tags:
#' @family [metric category] metrics
#' @export
#' @inheritParams [similar_metric]
#' @param data A data frame
#' @param truth Unquoted column with true values
#' @param estimate Unquoted column with predictions
#' @param na_rm Remove missing values (default TRUE)
#' @param case_weights Optional case weights column
#' @return A tibble with .metric, .estimator, .estimate columns
Testing
See Testing Patterns (Extension) for comprehensive guide.
Required test categories:
1. Correctness: Metric calculates correctly
2. NA handling: Both na_rm = TRUE and FALSE
3. Input validation: Wrong types, mismatched lengths
4. Case weights: Weighted and unweighted differ
5. Edge cases: All correct, all wrong, empty data
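For example, NA-handling tests (category 2) against yardstick's own mae_vec() look like this; swap in your metric's _vec() function:

```r
library(testthat)
library(yardstick)

test_that("NA values are removed when na_rm = TRUE", {
  truth <- c(1, 2, NA, 4)
  estimate <- c(1.1, 1.9, 3.2, 4.3)
  # only the three complete pairs contribute: (0.1 + 0.1 + 0.3) / 3
  expect_equal(mae_vec(truth, estimate, na_rm = TRUE), 0.5 / 3)
})

test_that("NA propagates when na_rm = FALSE", {
  truth <- c(1, 2, NA, 4)
  estimate <- c(1.1, 1.9, 3.2, 4.3)
  expect_true(is.na(mae_vec(truth, estimate, na_rm = FALSE)))
})
```

Writing the expected value by hand (rather than re-deriving it with the metric itself) keeps the correctness test independent of the implementation under test.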
Common Patterns
Using the confusion matrix
See Confusion Matrix for complete guide.
# Get confusion matrix
xtab <- yardstick::yardstick_table(truth, estimate, case_weights)
# Extract values (for binary classification)
tp <- xtab[2, 2] # True positives: truth = second, pred = second
tn <- xtab[1, 1] # True negatives: truth = first, pred = first
fp <- xtab[1, 2] # False positives: truth = first, pred = second
fn <- xtab[2, 1] # False negatives: truth = second, pred = first
Handling case weights
See Case Weights for complete guide.
# Check and convert hardhat weights
if (!is.null(case_weights)) {
if (inherits(case_weights, c("hardhat_importance_weights",
"hardhat_frequency_weights"))) {
case_weights <- as.double(case_weights)
}
}
# Use in calculations
if (is.null(case_weights)) {
mean(values)
} else {
weighted.mean(values, w = case_weights)
}
Multiclass averaging
See Class Metrics for complete guide.
- Macro averaging: Average of per-class metrics (treats all classes equally)
- Micro averaging: Pool all observations, calculate once (treats all observations equally)
- Macro-weighted averaging: Weighted average by class prevalence
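The three schemes can be sketched in base R for per-class precision (a sketch of the definitions, not yardstick's implementation; rows of the table are truth, columns are predictions):

```r
truth <- factor(c("a", "a", "a", "b", "b", "c"), levels = c("a", "b", "c"))
pred  <- factor(c("a", "a", "b", "b", "c", "c"), levels = c("a", "b", "c"))

tab <- table(truth, pred)                        # rows = truth, cols = prediction
per_class_precision <- diag(tab) / colSums(tab)  # TP / (TP + FP) per class
class_prevalence <- rowSums(tab) / sum(tab)

macro <- mean(per_class_precision)                    # classes weighted equally
macro_weighted <- sum(per_class_precision * class_prevalence)
micro <- sum(diag(tab)) / sum(tab)                    # pool all observations
```

With imbalanced classes the three values can diverge noticeably, which is why yardstick asks you to choose the estimator explicitly.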
Advanced Topics
Combining Metrics with metric_set()
Once you've created your metric, you can combine it with other metrics using metric_set():
my_metrics <- metric_set(mae, rmse, my_custom_metric)
my_metrics(data, truth = y, estimate = y_pred)
Key benefits:
- Calculate multiple metrics at once
- More efficient (shared calculations)
- Integrates with tune package
- Works with grouped data
See Combining Metrics for complete guide.
Creating Groupwise Metrics
Groupwise metrics measure disparity in metric values across groups (useful for fairness):
accuracy_diff <- new_groupwise_metric(
fn = accuracy,
name = "accuracy_diff",
aggregate = function(x) diff(range(x$.estimate))
)
accuracy_diff_by_group <- accuracy_diff(group_column)
accuracy_diff_by_group(data, truth, estimate)
Use cases:
- Fairness analysis across demographic groups
- Performance consistency across segments
- Disparity quantification
See Groupwise Metrics for complete guide.
Package-Specific Patterns (Source Development)
If you're contributing to yardstick itself, you have access to internal functions and conventions not available in extension development.
File Naming Conventions
Yardstick uses strict naming patterns:
- Numeric: R/num-[name].R → R/num-mae.R
- Class: R/class-[name].R → R/class-accuracy.R
- Probability: R/prob-[name].R → R/prob-roc_auc.R
- Tests: tests/testthat/test-num-mae.R
Internal Functions Available
When developing yardstick itself, you can use:
- yardstick_mean() - Weighted mean with case weight handling
- finalize_estimator_internal() - Estimator selection for multiclass
- check_numeric_metric(), check_class_metric(), check_prob_metric() - Validation
- Test helpers: data_altman(), data_three_class(), data_hpc_cv1()
Documentation Templates
Yardstick uses templates in man-roxygen/:
#' @templateVar fn mae
#' @template return
#' @template event_first
Snapshot Testing
Yardstick uses testthat::expect_snapshot() extensively:
test_that("mae returns correct structure", {
expect_snapshot(mae(df, truth, estimate))
})
Complete source development guide: Source Development Guide
Package Integration
Package-level documentation
See Package Imports for complete guide.
Create R/{packagename}-package.R:
#' @keywords internal
"_PACKAGE"
#' @importFrom rlang .data := !! enquo enquos
#' @importFrom yardstick new_numeric_metric new_class_metric new_prob_metric
NULL
Exports
All metrics must be exported:
#' @export
mae <- function(data, ...) {
UseMethod("mae")
}
Best Practices
See Best Practices (Extension) for complete guide.
Key principles:
- Use base pipe |> not magrittr pipe %>%
- Prefer for-loops over purrr::map() for better error messages
- Use cli::cli_abort() for error messages
- Keep functions focused on single responsibility
- Validate early (in _vec), trust data in _impl
Troubleshooting
See Troubleshooting (Extension) for complete guide.
Common issues:
- "No visible global function definition" → Add to package imports
- "Object not found" in tests → Use devtools::load_all() before testing
- NA handling bugs → Check both na_rm = TRUE and FALSE cases
- Case weights not working → Convert hardhat weights to numeric
Next Steps
For Extension Development (creating new packages):
- Extension prerequisites: Extension Prerequisites - START HERE
For Source Development (contributing to yardstick):
- Start here: Source Development Guide
- Clone repository: See Repository Access
- Study existing metrics: Browse R/num-*.R, R/class-*.R, etc.
- Follow package conventions: File naming, internal functions, templates
- Test with internal helpers: See Testing Patterns (Source)
- Submit PR: See Source Development Guide for PR process