EASE (Autoencoder)

Description

The EASE (Embarrassingly Shallow Autoencoder) policy implements a simple linear autoencoder for item-based collaborative filtering with a closed-form solution. It learns an item-item similarity matrix B by minimizing ||X - XB||^2 + λ||B||^2 subject to diag(B) = 0, where X is the user-item interaction matrix and λ is the L2 regularization strength. The constrained least-squares problem has a closed-form solution: with P = (X^T X + λI)^{-1}, the off-diagonal entries are B_ij = -P_ij / P_jj and the diagonal is zero. Because no iterative optimization is needed, B is computed once, which makes training fast. Predictions for a user are made by multiplying their interaction vector by B.
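To make the closed-form solution concrete, here is a minimal NumPy sketch of the fit step. The function name ease_fit, the regularization strength lam, and the dense matrix X are illustrative assumptions for this sketch, not part of this policy's API:

import numpy as np

def ease_fit(X: np.ndarray, lam: float = 100.0) -> np.ndarray:
    """Closed-form EASE fit: returns the item-item weight matrix B."""
    X = np.asarray(X, dtype=np.float64)
    # Item-item Gram matrix with L2 regularization added to the diagonal
    G = X.T @ X
    G[np.diag_indices_from(G)] += lam
    P = np.linalg.inv(G)
    # Off-diagonal closed form: B_ij = -P_ij / P_jj; then enforce diag(B) = 0
    B = P / (-np.diag(P))
    B[np.diag_indices_from(B)] = 0.0
    return B

# Example usage: score all items for every user in one multiply.
# X = np.random.binomial(1, 0.05, size=(1000, 200)).astype(np.float64)
# B = ease_fit(X, lam=200.0)
# scores = X @ B   # rank items per user by these scores

Note the one design consequence visible in the sketch: fitting requires inverting an item-item matrix, so cost grows with the cube of the catalog size rather than with the number of users.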

Policy Type: ease
Supports: embedding_policy, scoring_policy

Hyperparameter tuning

  • batch_size: Samples per training batch.
  • n_epochs: Number of training epochs.
  • lr: Learning rate.
  • factors: Rank (number of latent factors) for the low-rank part.

V1 API

policy_configs:
  embedding_policy: # Or scoring_policy
    policy_type: ease
    # Training Hyperparameters
    batch_size: 512 # Samples per training batch
    n_epochs: 20 # Number of training epochs
    lr: 0.1 # Learning rate
    # Model Hyperparameters
    factors: 10 # Rank (number of latent factors) for the low-rank part

Usage

Use this model when:

  • You want the simplest possible autoencoder approach
  • You need fast training with a closed-form solution
  • You have implicit feedback data
  • You're working with medium-sized item catalogs (the closed form requires inverting an item-item matrix)
  • You want a quick baseline or prototype

Choose a different model when:

  • You have very large-scale datasets (ELSA scales better)
  • You need to leverage item content features (use Two-Tower or BeeFormer)
  • You want to model sequential patterns (use sequential models)
  • You need the highest accuracy (more complex models may perform better)

Use cases

  • E-commerce sites
  • Content recommendation systems
  • Product similarity for "customers who bought this also bought"

Reference

Steck, H. (2019). "Embarrassingly Shallow Autoencoders for Sparse Data." The Web Conference (WWW '19). https://arxiv.org/abs/1905.03375