XGBoost (GBT)

Description

The XGBoost policy uses the XGBoost library, another popular and powerful gradient boosting framework known for its performance, regularization options, and scalability. Like the LightGBM policy, it builds an ensemble of decision trees, and it can be configured for classification, regression, or ranking tasks.

Policy Type: xgboost
Supports: scoring_policy

Configuration Example (Classifier)

scoring_policy_xgboost_classifier.yaml
policy_configs:
  scoring_policy:
    policy_type: xgboost
    # Core Parameters
    mode: "classifier"            # Model mode: "classifier" or "regressor"
    objective: "binary:logistic"  # Example objective (verify available ranking objectives if needed)
    n_estimators: 100             # Number of boosting rounds
    learning_rate: 0.2            # Step size shrinkage (eta)
    # Tree Structure Parameters
    max_depth: 16                 # Max depth per tree
    max_leaves: 0                 # Max leaves per tree (0 = no limit)
    min_child_weight: 1           # Minimum sum of instance weight needed in a child
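For orientation, the sketch below shows how the parameters above might map onto XGBoost's scikit-learn wrapper. This is an illustration only, not the policy framework's implementation: the synthetic data, the use of XGBClassifier, and the assumption of a recent xgboost release (max_leaves requires xgboost >= 1.6) are all assumptions made for the example.

# Minimal sketch: how the YAML settings above could translate to the
# xgboost scikit-learn API. Dummy data; not the framework's own code.
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(500, 10)           # 500 rows, 10 features (synthetic)
y = np.random.randint(0, 2, size=500) # binary labels (synthetic)

model = XGBClassifier(
    objective="binary:logistic",  # "objective" key from the config
    n_estimators=100,             # number of boosting rounds
    learning_rate=0.2,            # step size shrinkage (eta)
    max_depth=16,                 # max depth per tree
    max_leaves=0,                 # 0 = no limit on leaves per tree
    min_child_weight=1,           # min sum of instance weight in a child
)
model.fit(X, y)
scores = model.predict_proba(X)[:, 1]  # per-row scores, as a scoring policy would produce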

Reference