Item2Vec (Sequential)
Description
The Item2Vec policy adapts the Word2Vec algorithm (CBOW or Skip-gram) to learn item embeddings from user interaction sequences. It captures item co-occurrence patterns within a defined context window, placing items that are frequently interacted with together closer in the embedding space. It primarily models similarity based on co-occurrence rather than strict sequential order.
Policy Type: item2vec
Supports: embedding_policy, scoring_policy
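The co-occurrence learning described above can be illustrated with a minimal skip-gram sketch in plain NumPy. This is a simplified, hypothetical implementation for clarity only (function name and interface are illustrative): production policies would use an optimized library such as gensim, with negative sampling instead of the full softmax used here.

```python
import numpy as np

def train_item2vec(sequences, embedding_size=16, window_size=2,
                   min_count=1, epochs=5, lr=0.05, seed=0):
    """Minimal skip-gram Item2Vec with full-softmax updates (illustrative only)."""
    rng = np.random.default_rng(seed)

    # Build the vocabulary, dropping items rarer than min_count.
    counts = {}
    for seq in sequences:
        for item in seq:
            counts[item] = counts.get(item, 0) + 1
    vocab = sorted(i for i, c in counts.items() if c >= min_count)
    idx = {item: k for k, item in enumerate(vocab)}
    V = len(vocab)

    # Input (target) and output (context) embedding matrices.
    W_in = rng.normal(scale=0.1, size=(V, embedding_size))
    W_out = rng.normal(scale=0.1, size=(V, embedding_size))

    for _ in range(epochs):
        for seq in sequences:
            ids = [idx[i] for i in seq if i in idx]
            for pos, target in enumerate(ids):
                lo = max(0, pos - window_size)
                hi = min(len(ids), pos + window_size + 1)
                for ctx_pos in range(lo, hi):
                    if ctx_pos == pos:
                        continue
                    ctx = ids[ctx_pos]
                    # Forward pass: softmax over the whole vocabulary.
                    scores = W_out @ W_in[target]
                    scores -= scores.max()
                    probs = np.exp(scores)
                    probs /= probs.sum()
                    # Backward pass: cross-entropy gradient (probs - one-hot).
                    probs[ctx] -= 1.0
                    grad_in = W_out.T @ probs
                    W_out -= lr * np.outer(probs, W_in[target])
                    W_in[target] -= lr * grad_in

    return {item: W_in[idx[item]] for item in vocab}
```

Items sharing many contexts end up with similar input vectors, which is exactly the co-occurrence similarity the policy exploits.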
Configuration Example
scoring_policy_item2vec.yaml
```yaml
policy_configs:
  scoring_policy:
    policy_type: item2vec
    embedding_size: 512   # Dimensionality of the learned item embeddings
    window_size: 20       # Context window size (items before/after the target)
    min_count: 1          # Minimum frequency for an item to be included
    algorithm: "cbow"     # Training algorithm ('cbow' or 'skipgram')
```
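As a scoring_policy, the learned embeddings can rank candidate items by cosine similarity to an anchor item. A minimal sketch, assuming embeddings are available as a plain dict of item id to vector (the function name and interface are hypothetical, not the policy's actual API):

```python
import numpy as np

def score_candidates(item_embeddings, anchor_item, candidates):
    """Rank candidates by cosine similarity to the anchor item's embedding.

    `item_embeddings` maps item id -> 1-D numpy vector (hypothetical
    interface for illustration).
    """
    a = item_embeddings[anchor_item]
    a = a / np.linalg.norm(a)
    scores = {}
    for c in candidates:
        v = item_embeddings[c]
        scores[c] = float(a @ (v / np.linalg.norm(v)))
    # Highest-similarity candidates first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Cosine similarity is the conventional choice for Word2Vec-style embeddings because it compares direction rather than magnitude, and vector norms partly reflect item frequency.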
References
- Mikolov, T., et al. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv. (Foundational Word2Vec paper.)
- Barkan, O., & Koenigstein, N. (2016). Item2vec: Neural Item Embedding for Collaborative Filtering. arXiv.