SVD (Matrix Factorization)

Description

The SVD (Singular Value Decomposition) policy implements matrix factorization for collaborative filtering, in the style of the iterative, SVD-inspired algorithms such as Funk SVD and SVD++. It decomposes the user-item interaction matrix into latent user and item factors and can additionally include user/item bias terms (mean-centering tendencies) to improve prediction accuracy. The model is typically trained with an optimization technique such as Stochastic Gradient Descent (SGD).
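
In the biased formulation, a rating is predicted as the global mean plus a user bias, an item bias, and the dot product of the user and item latent factors; SGD then nudges these parameters to reduce the error on observed ratings. The following is a minimal, self-contained NumPy sketch of that idea, not the policy's actual implementation; the function name train_svd and its argument names are illustrative, chosen to echo the configuration fields shown later on this page.

import numpy as np

# Minimal illustrative sketch of biased matrix factorization trained with SGD.
# Not the policy's actual implementation; argument names mirror the config fields.
def train_svd(ratings, n_users, n_items, n_factors=100, n_epochs=20,
              lr_all=0.005, reg_all=0.02, biased=True):
    rng = np.random.default_rng(0)
    P = rng.normal(0, 0.1, (n_users, n_factors))  # user latent factors
    Q = rng.normal(0, 0.1, (n_items, n_factors))  # item latent factors
    bu = np.zeros(n_users)                        # user bias terms
    bi = np.zeros(n_items)                        # item bias terms
    mu = np.mean([r for _, _, r in ratings])      # global mean rating

    for _ in range(n_epochs):
        for u, i, r in ratings:
            pred = mu + P[u] @ Q[i]
            if biased:
                pred += bu[u] + bi[i]
            err = r - pred
            if biased:
                bu[u] += lr_all * (err - reg_all * bu[u])
                bi[i] += lr_all * (err - reg_all * bi[i])
            # Simultaneous SGD step on both factor vectors with L2 regularization.
            P[u], Q[i] = (P[u] + lr_all * (err * Q[i] - reg_all * P[u]),
                          Q[i] + lr_all * (err * P[u] - reg_all * Q[i]))
    return mu, bu, bi, P, Q

# Toy usage: (user_index, item_index, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
mu, bu, bi, P, Q = train_svd(ratings, n_users=2, n_items=3, n_factors=8, n_epochs=50)
print(mu + bu[0] + bi[2] + P[0] @ Q[2])  # predicted rating of user 0 for item 2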

Policy Type: svd
Supports: embedding_policy, scoring_policy

Hyperparameter tuning

  • factors: Number of latent factors in the matrix factorization.
  • num_epochs: Number of complete passes through the training dataset.

V1 API

policy_configs:
  embedding_policy: # Can also be used under scoring_policy
    policy_type: svd
    n_factors: 100 # Number of latent factors
    n_epochs: 20   # Number of training iterations (epochs) for SGD
    biased: true   # Whether to include user/item bias terms
    lr_all: 0.005  # Learning rate for SGD
    reg_all: 0.02  # Regularization strength for SGD
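
The hyperparameter names above (n_factors, n_epochs, biased, lr_all, reg_all) match those of the SVD implementation in the scikit-surprise library. Purely as an illustration of what these values control, and without implying that this policy uses that library internally, a comparable model could be trained directly with scikit-surprise as follows:

import pandas as pd
from surprise import SVD, Dataset, Reader

# Toy explicit-feedback data: (user, item, rating) triples.
df = pd.DataFrame({
    "user":   ["u1", "u1", "u2", "u2", "u3"],
    "item":   ["i1", "i2", "i1", "i3", "i2"],
    "rating": [5.0, 3.0, 4.0, 1.0, 2.0],
})
reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(df[["user", "item", "rating"]], reader)

# Same hyperparameter values as the config above.
algo = SVD(n_factors=100, n_epochs=20, biased=True, lr_all=0.005, reg_all=0.02)
algo.fit(data.build_full_trainset())

# Predict the rating user u3 would give item i1.
print(algo.predict("u3", "i1").est)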

Usage

Use this model when:

  • You have explicit feedback data (ratings, reviews with scores)
  • You want to model user and item biases (mean-centering tendencies)
  • You prefer SGD-based optimization over ALS
  • You need a straightforward matrix factorization approach
  • You're working with medium-sized datasets

Choose a different model when:

  • You have only implicit feedback (ALS is typically better for this)
  • You need to leverage item content features (use Two-Tower or BeeFormer)
  • You want the most scalable solution (ELSA may be better)
  • You need to model sequential patterns (use sequential models)

Use cases

  • Product recommendations with review scores
  • Restaurant or service recommendations with ratings
  • Any domain with explicit user feedback
