Rank Time Configuration

Shaped offers several advanced options to fine-tune the recommendations at ranking time:

  • Filter Predicate: Customize which items are returned by filtering based on specific item attributes.
  • Diversity Factor: Enhance the variety of recommendations to ensure users encounter a broader range of relevant items.
  • Exploration Factor: Boost the visibility of new and potentially interesting content, even if it’s less familiar to users.

Filter Predicate

Shaped provides a way to filter the items returned by the Rank API based on their metadata (i.e. the columns returned by the item fetch queries). You can do this for both personalized and non-personalized ranking queries. There are two primary use cases for this:

  1. Personalized search: filtering items based on a user-defined keyword match or a metadata-specific query.
  2. Category pages: filtering items for a specific recommendation UI element (e.g. a carousel or feed). This means you can create one Shaped model and reuse it across a variety of carousels, based on prior domain knowledge about what will resonate with your customers.

In this guide we'll show you how to use the filter predicate feature to power some of these use-cases.

Supported Operations

The filter predicate language is a standard SQL expression, e.g. "category = 'sports'" or "publish_year >= 2023". Here are the currently supported operators:

* >, >=, <, <=, =
* AND, OR, NOT
* IS NULL, IS NOT NULL
* IS TRUE, IS NOT TRUE, IS FALSE, IS NOT FALSE
* IN
* LIKE, NOT LIKE
* regexp_match(column, pattern)
* CAST
* array_has(sequential_column, value)
* array_has_any(sequential_column, values)
* array_has_all(sequential_column, values)

For example, the following predicate string is acceptable:

((label IN [10, 20]) AND (note.email IS NOT NULL))
OR NOT note.created
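
To make the array operators above concrete, here is a plain-Python sketch of their semantics. These helper functions are illustrative only; they are not part of Shaped, and the predicate itself is evaluated server-side by the Rank API:

```python
# Illustrative Python equivalents of the array operators
# (hypothetical helpers, not part of Shaped itself).
def array_has(seq, value):
    """array_has(sequential_column, value): True if value appears in seq."""
    return value in seq

def array_has_any(seq, values):
    """array_has_any: True if seq contains at least one of the given values."""
    return any(v in seq for v in values)

def array_has_all(seq, values):
    """array_has_all: True if seq contains every one of the given values."""
    return all(v in seq for v in values)

item = {"category_sequence": ["sports", "news"]}
print(array_has(item["category_sequence"], "sports"))              # True
print(array_has_any(item["category_sequence"], ["music", "news"]))  # True
print(array_has_all(item["category_sequence"], ["sports", "music"]))  # False
```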

Filter Predicate Examples

Filtering by Category

shaped rank --model-name personalized_video_search --user_id 3 --limit 5 \
--filter_predicate 'category IN ["sports", "news"]'

Filtering a Sequence Category Column

shaped rank --model-name personalized_video_search --user_id 3 --limit 5 \
--filter_predicate 'array_has_any(category_sequence, ["sports", "news"])'

Filtering By Year

shaped rank --model-name personalized_video_search --user_id 3 --limit 5 \
--filter_predicate 'publish_year >= 2023'

Note that if a user_id is provided, the filtered results are personalized to that user. If no user_id is provided, the filtered results default to trending, non-personalized results.

info

Internally, Shaped uses the Lance data format to support the filter predicate; see the Lance documentation for more information.

Diversity Factor

Shaped allows you to fine-tune the diversity of your recommendations, ensuring users see a wider range of relevant items rather than just the top predicted results. This can lead to a more engaging and surprising user experience.

This guide explains how to control the diversity of recommendations using the diversity_factor parameter at inference time.

Understanding Diversity

In recommendation systems, relevance usually refers to how well an item matches a user's preferences based on their past behavior and item attributes. However, simply showing the most relevant items can lead to a phenomenon known as "filter bubbles," where users are only exposed to a narrow range of similar content.

Diversity, on the other hand, aims to introduce variety and novelty into recommendations. By incorporating diversity into your ranking strategy, you can:

  • Reduce redundancy: Avoid recommending very similar items.
  • Increase serendipity: Surface unexpected but potentially interesting items.
  • Improve user satisfaction: Provide a more engaging and less repetitive experience.

Configuring diversity_factor

Shaped utilizes the diversity_factor parameter to balance relevance and diversity in your recommendations. This parameter accepts a value between 0 and 1, where:

  • 0: Prioritizes relevance, ignoring diversity. This setting essentially ranks items solely based on their predicted relevance scores.
  • 1: Prioritizes diversity, potentially sacrificing some relevance. This setting emphasizes variety, even if it means recommending items with slightly lower predicted relevance.

Values between 0 and 1 allow you to fine-tune the trade-off between relevance and diversity according to your specific use case and desired user experience.

You can override the model's default diversity_factor at inference time using the Rank API. This provides flexibility to adjust diversity dynamically based on the context:

shaped rank --model-name my_recommendation_model --user_id "XA123F2" --diversity_factor 0.7

This example overrides the model's default diversity_factor, setting it to 0.7 for this specific ranking request to emphasize diversity more strongly.

How Shaped Handles Diversity: Maximal Marginal Relevance (MMR)

Shaped employs the Maximal Marginal Relevance (MMR) algorithm to incorporate diversity into the ranking process. MMR works by iteratively selecting items for the recommendation list, considering both:

  • Relevance: How well an item matches the user's preferences.
  • Diversity: How different the item is from those already included in the list.

This approach ensures that the final ranked list contains a mix of relevant and diverse items, striking a balance controlled by the diversity_factor parameter.
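
The mechanics of MMR can be sketched in a few lines. In the toy implementation below, each candidate is scored as `(1 - diversity_factor) * relevance - diversity_factor * max_similarity_to_selected`; the similarity function (1.0 for same category, 0.0 otherwise) is a stand-in, and Shaped's internal similarity measure will differ:

```python
# Minimal MMR sketch: iteratively pick the candidate maximizing
# (1 - diversity_factor) * relevance - diversity_factor * max_similarity,
# where max_similarity is measured against items already selected.
# The similarity here is a toy function (same category => 1.0, else 0.0).
def mmr_rank(items, diversity_factor, k):
    """items: list of (item_id, relevance, category) tuples."""
    selected = []
    candidates = list(items)
    while candidates and len(selected) < k:
        def score(c):
            max_sim = max(
                (1.0 if c[2] == s[2] else 0.0) for s in selected
            ) if selected else 0.0
            return (1 - diversity_factor) * c[1] - diversity_factor * max_sim
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [item_id for item_id, _, _ in selected]

items = [
    ("a", 0.95, "sports"), ("b", 0.90, "sports"),
    ("c", 0.85, "news"),   ("d", 0.80, "music"),
]
print(mmr_rank(items, diversity_factor=0.0, k=3))  # ['a', 'b', 'c'] (pure relevance)
print(mmr_rank(items, diversity_factor=0.9, k=3))  # ['a', 'c', 'd'] (mixed categories)
```

With diversity_factor set to 0 the ranking follows relevance alone, so the two "sports" items dominate; at 0.9 the second "sports" item is heavily penalized and the list spans three categories.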

Exploration Factor

A critical challenge in recommendation systems is balancing the need to exploit known user preferences (recommending familiar items) with the need to explore potentially interesting but less familiar content. Shaped provides flexible mechanisms to control this balance, ensuring users discover fresh content while still seeing relevant recommendations.

This guide explains how to leverage the exploration_factor parameter to fine-tune the exploration-exploitation trade-off in your recommendation strategy.

The Importance of Exploration

Exploiting known user preferences through recommendations based on past behavior is essential for providing immediate satisfaction. However, over-reliance on exploitation can lead to:

  • Filter bubbles: Users only see a narrow range of similar content, limiting discovery and potentially becoming repetitive.
  • Stagnant recommendations: New items struggle to gain traction, as they lack the interaction history needed to compete with established items.

Exploration, on the other hand, involves recommending items that users haven't interacted with before, often with less confidence based purely on predicted relevance. Exploration helps to:

  • Introduce novelty and serendipity: Break users out of filter bubbles and surface unexpected but potentially enjoyable content.
  • Uncover hidden gems: Give new or less popular items a chance to find their audience.
  • Gather valuable data: Learn about user preferences for a wider range of items, improving future recommendations.

Configuring exploration_factor

Shaped utilizes the exploration_factor parameter to govern the balance between exploration and exploitation. This parameter, ranging from 0 to 1, dictates the proportion of recommendations dedicated to exploration:

  • 0: Prioritizes exploitation, focusing on recommending items with high predicted relevance based on past interactions.
  • 1: Maximizes exploration, recommending primarily new or less familiar items to users.

Values between 0 and 1 allow you to fine-tune this trade-off, finding the optimal balance for your application and user base.
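
As a rough mental model (not Shaped's actual implementation, which is bandit-driven as described below), you can think of exploration_factor as the fraction of result slots filled from an exploration pool rather than from the top-relevance list:

```python
# Toy illustration of exploration_factor as a slot proportion:
# round(k * exploration_factor) slots come from an exploration pool,
# the remainder from the exploitation (top-relevance) list.
# This is a simplification for intuition only.
def blend(exploit_items, explore_items, exploration_factor, k):
    n_explore = round(k * exploration_factor)
    n_exploit = k - n_explore
    return exploit_items[:n_exploit] + explore_items[:n_explore]

exploit = ["top1", "top2", "top3", "top4"]
explore = ["new1", "new2", "new3"]
print(blend(exploit, explore, 0.0, 4))  # ['top1', 'top2', 'top3', 'top4']
print(blend(exploit, explore, 0.5, 4))  # ['top1', 'top2', 'new1', 'new2']
```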

Similar to the diversity_factor, you can override the model's default exploration_factor at inference time using the Rank API:

shaped rank --model-name my_recommendation_model --user_id "XA123F2" --exploration_factor 0.3

This example overrides the model's default exploration_factor, setting it to 0.3 for this specific ranking request and adjusting the emphasis on exploration for this user in this particular context.

Intelligent Exploration with Bandit Algorithms

Shaped doesn't just randomly inject new items for exploration. Under the hood, we employ sophisticated bandit algorithms to make intelligent exploration decisions. These algorithms:

  • Continuously learn from user interactions with both exploited and explored recommendations.
  • Dynamically adjust the exploration-exploitation balance for each user based on their individual interaction patterns.
  • Optimize for long-term user satisfaction by balancing immediate rewards (exploiting known preferences) with the potential for discovering even more preferable items (exploration).

This intelligent approach ensures that exploration is targeted and effective, maximizing the chances of users discovering content they'll truly enjoy.
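
To build intuition for how a bandit balances these goals, the sketch below uses Thompson sampling with Beta posteriors, a textbook bandit strategy. It is purely illustrative; Shaped's internal algorithms are not described by this code:

```python
# Textbook Thompson-sampling bandit sketch (illustrative only).
# Each item keeps a Beta(successes + 1, failures + 1) posterior over its
# click-through rate; we sample one draw per item and recommend the highest
# draw. Uncertain items get occasional high draws (exploration), while
# consistently clicked items win most draws (exploitation).
import random

class BetaBandit:
    def __init__(self, item_ids):
        self.stats = {i: [1, 1] for i in item_ids}  # [alpha, beta] priors

    def select(self):
        draws = {i: random.betavariate(a, b) for i, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, item_id, clicked):
        if clicked:
            self.stats[item_id][0] += 1  # success
        else:
            self.stats[item_id][1] += 1  # failure

random.seed(7)
bandit = BetaBandit(["a", "b", "c"])
# Simulated feedback: "b" has the highest true click rate.
true_ctr = {"a": 0.1, "b": 0.6, "c": 0.3}
for _ in range(500):
    choice = bandit.select()
    bandit.update(choice, random.random() < true_ctr[choice])
# After learning, the item with the most accumulated successes should
# typically be "b".
best = max(bandit.stats, key=lambda i: bandit.stats[i][0])
print(best)
```

The key property mirrored from the bullets above is that the amount of exploration adapts automatically: as an item's posterior narrows, the bandit stops gambling on it unless its estimated rate stays competitive.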

Conclusion

Balancing exploration and exploitation is crucial for creating engaging and effective recommendation systems. By leveraging the exploration_factor parameter and the power of bandit algorithms, Shaped empowers you to find the optimal balance for your application, ensuring users discover fresh and exciting content while still receiving relevant recommendations tailored to their preferences.