Overview
Shaped offers a rich library of model policies that power retrieval, scoring, and ranking within your recommendation and search systems. These policies range from classic collaborative filtering techniques and rule-based heuristics to state-of-the-art deep learning architectures. Explore the available policies below to understand their capabilities and how to configure them for optimal performance.
🧩 ALS
Matrix Factorization (Alternating Least Squares) for implicit feedback collaborative filtering.
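As an illustration of the underlying computation (not Shaped's internal implementation), here is a minimal NumPy sketch of alternating least-squares updates on a toy interaction matrix; for clarity it omits the confidence weighting typically used for implicit feedback, and all dimensions and values are assumptions.

```python
import numpy as np

# Toy interaction matrix (users x items), 1 = interacted.
R = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
], dtype=float)

n_users, n_items = R.shape
k, reg = 2, 0.1                                  # latent dimensions and L2 regularization
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))     # user factors
V = rng.normal(scale=0.1, size=(n_items, k))     # item factors

for _ in range(10):
    # Fix item factors and solve a ridge regression for the user factors ...
    U = R @ V @ np.linalg.inv(V.T @ V + reg * np.eye(k))
    # ... then fix user factors and solve for the item factors.
    V = R.T @ U @ np.linalg.inv(U.T @ U + reg * np.eye(k))

scores = U @ V.T                                 # predicted affinity for every user-item pair
```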
🧩 SVD
SVD-inspired matrix factorization for collaborative filtering, typically including user and item bias terms.
💡 ELSA
Scalable Linear Shallow Autoencoder for implicit-feedback recommendation, built on a low-rank factorized structure.
✨ EASE
Embarrassingly Shallow Autoencoder with a simple closed-form solution for item-based collaborative filtering.
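EASE's closed form is short enough to sketch directly. Assuming a binary user-item matrix `X` and a single regularization strength (both illustrative), a NumPy version of the standard solution looks like this:

```python
import numpy as np

def ease_item_weights(X: np.ndarray, lam: float = 100.0) -> np.ndarray:
    """Closed-form EASE solution: item-item weight matrix with a zero diagonal."""
    G = X.T @ X + lam * np.eye(X.shape[1])   # regularized Gram matrix
    P = np.linalg.inv(G)
    B = -P / np.diag(P)                      # divide each column by its diagonal entry
    np.fill_diagonal(B, 0.0)                 # self-similarity is constrained to zero
    return B

# Scores are simply the user's interaction row times the learned weights.
X = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]], dtype=float)
B = ease_item_weights(X, lam=10.0)
scores = X @ B
```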
🌳 LightGBM
Efficient Gradient Boosting Decision Tree framework, suitable for ranking (e.g., LambdaMART).
🌳 XGBoost
Scalable Gradient Boosting Decision Tree framework, effective for classification and ranking tasks.
➕ Wide & Deep
Combines a wide linear model (memorization) and a deep neural network (generalization) for scoring.
🔗 DeepFM
Combines Factorization Machines (FM) and Deep Neural Networks (DNN) for interaction modeling.
🗼 Two-Tower
Scalable deep learning model with separate user and item networks for efficient candidate retrieval.
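A compact PyTorch sketch of the two-tower idea; the layer sizes, ID-only inputs, and dot-product similarity are assumptions for illustration, not Shaped's architecture.

```python
import torch
import torch.nn as nn

class TwoTower(nn.Module):
    """Separate user and item encoders that meet only at a dot product."""
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        self.user_tower = nn.Sequential(
            nn.Embedding(n_users, dim), nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.item_tower = nn.Sequential(
            nn.Embedding(n_items, dim), nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, user_ids, item_ids):
        u = self.user_tower(user_ids)            # (batch, dim) user embeddings
        v = self.item_tower(item_ids)            # (batch, dim) item embeddings
        return (u * v).sum(-1)                   # dot-product affinity score

model = TwoTower(n_users=1000, n_items=5000)
scores = model(torch.tensor([1, 2]), torch.tensor([10, 20]))
```

Because the item tower does not depend on the user, item embeddings can be precomputed and indexed, which is what makes this architecture suitable for efficient candidate retrieval.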
🐝 beeFormer
Fine-tunes sentence Transformers on interaction data, bridging semantic & behavioral signals.
#️⃣ Ngram
Simple sequential model that predicts the next item from counts of fixed-length interaction subsequences (n-grams).
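As an illustration only (not Shaped's implementation), a bigram version simply counts which item tends to follow which:

```python
from collections import Counter, defaultdict

# Hypothetical interaction sequences, one per user.
sequences = [["a", "b", "c"], ["a", "b", "d"], ["b", "c", "a"]]

follow_counts = defaultdict(Counter)
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        follow_counts[prev][nxt] += 1        # count item-to-item transitions (bigrams)

# Recommend the most frequent successors of the user's last item.
print(follow_counts["b"].most_common(2))     # [('c', 2), ('d', 1)]
```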
🔄 Item2Vec
Learns item embeddings based on co-occurrence within interaction sequences (Word2Vec adaptation).
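A minimal sketch using gensim's Word2Vec, treating each user's interaction sequence as a "sentence" of item IDs; the parameter values are illustrative and the `vector_size` keyword assumes gensim 4.x.

```python
from gensim.models import Word2Vec

# Each user's ordered interactions play the role of a sentence of item IDs (toy data).
sequences = [["item_1", "item_7", "item_3"],
             ["item_2", "item_7", "item_5"],
             ["item_1", "item_3", "item_5"]]

model = Word2Vec(sequences, vector_size=32, window=3, min_count=1, sg=1)  # skip-gram
similar = model.wv.most_similar("item_7", topn=2)   # items that co-occur with item_7
```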
➡️ SASRec
Self-Attentive Sequential Recommendation model using Transformers for next-item prediction.
↔️ BERT4Rec
Bidirectional Transformer (BERT) for sequential recommendation via masked item prediction.
✅ GSASRec
Generalized SASRec, an improved variant that reduces the overconfidence self-attentive sequential models develop when trained with negative sampling.
🛍️ Item-Content Similarity
Content-based policy comparing an item's features to the pooled features of items in the user's interaction history.
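A hypothetical sketch of the scoring step: mean-pool the embeddings of the user's past items and rank candidates by cosine similarity (the per-item embeddings are assumed to come from item features).

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Assumed per-item content embeddings, e.g. derived from text or categorical features.
item_embeddings = {"i1": np.array([1.0, 0.0]),
                   "i2": np.array([0.8, 0.2]),
                   "i3": np.array([0.0, 1.0])}

history = ["i1", "i2"]                                  # items the user interacted with
profile = np.mean([item_embeddings[i] for i in history], axis=0)

candidates = ["i2", "i3"]
ranked = sorted(candidates, key=lambda i: cosine(profile, item_embeddings[i]), reverse=True)
```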
👤 User-Content Similarity
Content-based policy comparing a user's profile features to the pooled profile features of users who have interacted with each item.
🖇️ User-Item Content Similarity
Content-based policy directly comparing embeddings from user attributes and item attributes.
🕒 Chronological
Rule-based policy ranking items by timestamp (newest or oldest first).
⭐ Popular
Rule-based policy ranking items by overall interaction counts (popularity).
🔥 Recently Popular
Trending policy ranking items by popularity with time decay (e.g., HackerNews style).
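The classic HackerNews-style decay fits in one line; the gravity exponent below is the commonly cited value, used here purely for illustration.

```python
def trending_score(interactions: int, age_hours: float, gravity: float = 1.8) -> float:
    """Popularity discounted by age: newer engagement outweighs stale counts."""
    return interactions / (age_hours + 2) ** gravity

# An item with 50 interactions from 2 hours ago outranks one with 200 from 2 days ago.
print(trending_score(50, 2) > trending_score(200, 48))   # True
```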
📈 Rising Popularity
Trending policy ranking items by recent increase in engagement (momentum).
🎲 Random
Rule-based policy assigning random scores, useful for baselines or exploration.
✨ Auto-Tune
Automatically finds the best model policy and hyperparameters for your data.
➕ Score Ensemble
Combines ranked lists from multiple scoring policies, often via interleaving.
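Interleaving itself is straightforward; a hypothetical round-robin merge of ranked lists, deduplicating as it goes, looks like this:

```python
from itertools import zip_longest

def interleave(*ranked_lists):
    """Round-robin merge of ranked lists, keeping each item's first appearance."""
    seen, merged = set(), []
    for group in zip_longest(*ranked_lists):
        for item in group:
            if item is not None and item not in seen:
                seen.add(item)
                merged.append(item)
    return merged

print(interleave(["a", "b", "c"], ["b", "d", "a"]))   # ['a', 'b', 'd', 'c']
```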
🚫 No-Operation
Placeholder policy that performs no action, used for testing or disabling stages.