
SASRec (Sequential)

warning

This is an article from the Shaped 1.0 documentation. The APIs have changed and information may be outdated. Go to Shaped 2.0 docs

Description

The SASRec (Self-Attentive Sequential Recommendation) policy utilizes the Transformer architecture's self-attention mechanism to model user interaction sequences. By weighing the importance of all previous items, it captures both short-term and long-range dependencies to predict the next item the user is likely to interact with.
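To make the mechanism concrete, below is a minimal PyTorch sketch of a SASRec-style model. This is illustrative only, not Shaped's implementation: the class name `SASRecSketch` and the use of `torch.nn.TransformerEncoder` are assumptions for the example. It mirrors the architecture hyperparameters from the configuration below (hidden_size, n_heads, n_layers, max_seq_length).

```python
import torch
import torch.nn as nn

class SASRecSketch(nn.Module):
    """Illustrative self-attentive next-item model (not Shaped's internal code)."""

    def __init__(self, n_items: int, hidden_size: int = 64, n_heads: int = 2,
                 n_layers: int = 2, max_seq_length: int = 50, dropout: float = 0.2):
        super().__init__()
        # Item id 0 is reserved for padding.
        self.item_emb = nn.Embedding(n_items + 1, hidden_size, padding_idx=0)
        self.pos_emb = nn.Embedding(max_seq_length, hidden_size)
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=n_heads, dim_feedforward=4 * hidden_size,
            dropout=dropout, activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, seq_len) item ids, oldest to newest, padded with 0.
        positions = torch.arange(seq.size(1), device=seq.device)
        x = self.item_emb(seq) + self.pos_emb(positions)
        # Causal mask: each position attends only to itself and earlier items,
        # which is how the model weighs all previous interactions.
        causal = nn.Transformer.generate_square_subsequent_mask(seq.size(1)).to(seq.device)
        h = self.encoder(x, mask=causal)
        # Next-item scores: dot product of the final hidden state with all item embeddings.
        return h[:, -1, :] @ self.item_emb.weight.T  # (batch, n_items + 1)

# Example: score next items for a batch of 32 length-50 sequences.
model = SASRecSketch(n_items=10_000)
scores = model(torch.randint(1, 10_001, (32, 50)))  # shape (32, 10_001)
```

Scoring via a dot product against the shared item embedding table is what lets the same policy serve as either a scoring_policy (rank candidate items) or an embedding_policy (export the sequence and item vectors).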

Policy Type: sasrec
Supports: embedding_policy, scoring_policy

Configuration Example

```yaml
policy_configs:
  scoring_policy: # Can also be used under embedding_policy
    policy_type: sasrec

    # Training Hyperparameters
    batch_size: 1000            # Samples per training batch
    n_epochs: 1                 # Number of training epochs
    negative_samples_count: 2   # Negative samples per positive for contrastive loss
    learning_rate: 0.001        # Optimizer learning rate

    # Architecture Hyperparameters
    hidden_size: 64             # Dimensionality of hidden layers/embeddings
    n_heads: 2                  # Number of self-attention heads
    n_layers: 2                 # Number of Transformer layers
    attn_dropout_prob: 0.2      # Dropout rate in attention mechanism
    hidden_act: "gelu"          # Activation function (e.g., "gelu", "relu")
    max_seq_length: 50          # Maximum input sequence length
```
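The docs do not show Shaped's training loop, so as a hedged illustration of how negative_samples_count typically enters a contrastive objective, the sketch below scores one positive next item against k uniformly sampled negatives per example with a sampled binary cross-entropy loss. All names here (`sampled_bce_loss`, `user_repr`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sampled_bce_loss(user_repr: torch.Tensor, item_emb: nn.Embedding,
                     positives: torch.Tensor, n_items: int,
                     negative_samples_count: int = 2) -> torch.Tensor:
    """Contrastive next-item loss: one positive vs. k sampled negatives (illustrative)."""
    # user_repr: (batch, hidden) sequence representations; positives: (batch,) item ids.
    pos_logits = (user_repr * item_emb(positives)).sum(dim=-1)         # (batch,)
    # Uniformly sample k negative item ids per example (production systems often
    # use popularity-weighted sampling and exclude the positive item).
    negatives = torch.randint(1, n_items + 1,
                              (positives.size(0), negative_samples_count),
                              device=positives.device)                  # (batch, k)
    neg_logits = torch.einsum("bh,bkh->bk", user_repr, item_emb(negatives))
    pos_loss = F.binary_cross_entropy_with_logits(pos_logits, torch.ones_like(pos_logits))
    neg_loss = F.binary_cross_entropy_with_logits(neg_logits, torch.zeros_like(neg_logits))
    return pos_loss + neg_loss
```

Raising negative_samples_count gives the model more contrast per positive interaction at the cost of extra compute per batch.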

Reference

Kang, W.-C., & McAuley, J. (2018). Self-Attentive Sequential Recommendation. IEEE International Conference on Data Mining (ICDM).