Wide & Deep (Neural Scoring)

Description

The Wide & Deep policy implements the Wide & Deep Learning model, which jointly trains a wide linear model and a deep neural network (DNN) to combine the benefits of memorization and generalization.

  • Wide Part: A generalized linear model often fed with raw sparse features and manually engineered cross-product features to effectively memorize specific feature combinations.
  • Deep Part: A standard MLP fed with dense embeddings (learned from categorical features) and potentially normalized dense features to generalize by learning complex, non-linear patterns.

The outputs are typically summed before the final prediction layer.
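To make the architecture concrete, here is a minimal sketch of a Wide & Deep scorer in PyTorch. The framework choice, the feature widths (wide_dim, deep_dim), and the sigmoid output are illustrative assumptions, not the policy's actual implementation.

    import torch
    import torch.nn as nn


    class WideAndDeep(nn.Module):
        """Minimal Wide & Deep scorer: a linear wide part plus an MLP deep part."""

        def __init__(self, wide_dim, deep_dim, deep_hidden_units=(256, 128, 64)):
            super().__init__()
            # Wide part: generalized linear model over sparse / cross-product features.
            self.wide = nn.Linear(wide_dim, 1)
            # Deep part: MLP over dense embeddings and normalized dense features.
            layers, in_dim = [], deep_dim
            for units in deep_hidden_units:
                layers += [nn.Linear(in_dim, units), nn.ReLU()]
                in_dim = units
            layers.append(nn.Linear(in_dim, 1))
            self.deep = nn.Sequential(*layers)

        def forward(self, wide_x, deep_x):
            # Sum the outputs of the two parts before the final prediction.
            logit = self.wide(wide_x) + self.deep(deep_x)
            return torch.sigmoid(logit)


    # Illustrative usage with random inputs (dimensions are placeholders).
    model = WideAndDeep(wide_dim=100, deep_dim=32)
    scores = model(torch.rand(4, 100), torch.rand(4, 32))  # shape: (4, 1)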

Policy Type: wide-deep
Supports: scoring_policy

Configuration Example

scoring_policy_wide_deep.yaml
policy_configs:
  scoring_policy:
    policy_type: wide-deep
    # Architecture
    deep_hidden_units: [256, 128, 64]  # Layer sizes for the deep MLP component
    activation_fn: "relu"              # Activation for deep layers (e.g., "relu", "sigmoid")
    # Training Control
    val_split: 0.1                     # Proportion of data for validation during training
    n_epochs: 10                       # Number of training epochs
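The snippet below is a rough illustration of how these keys could be consumed; the file-loading code, the activation lookup table, and the deep-part input width are assumptions and do not reflect the policy's actual loader.

    import yaml
    import torch.nn as nn

    # Assumed file path and loading code; only the key names and values
    # come from the example configuration above.
    with open("scoring_policy_wide_deep.yaml") as f:
        cfg = yaml.safe_load(f)["policy_configs"]["scoring_policy"]

    # activation_fn selects the nonlinearity between deep layers.
    activations = {"relu": nn.ReLU, "sigmoid": nn.Sigmoid}  # assumed mapping
    act_cls = activations[cfg["activation_fn"]]

    # deep_hidden_units defines the MLP stack of the deep component.
    layers, in_dim = [], 32  # deep-part input width is a placeholder
    for units in cfg["deep_hidden_units"]:
        layers += [nn.Linear(in_dim, units), act_cls()]
        in_dim = units
    deep_mlp = nn.Sequential(*layers)

    # val_split and n_epochs control training: e.g. with 1,000 rows,
    # int(1000 * cfg["val_split"]) = 100 rows are held out for validation,
    # and training loops over the data cfg["n_epochs"] = 10 times.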

Reference

Cheng, H.-T., et al. "Wide & Deep Learning for Recommender Systems." Proceedings of the 1st Workshop on Deep Learning for Recommender Systems (DLRS), 2016. arXiv:1606.07792.