Shaped’s mission is to make AI accessible for everyone. We achieve this by building elegant software that abstracts the complexity of machine-learning tasks. Below we share the key design directions, decisions and constraints we’ve built Shaped around to achieve this vision.

Minimal, powerful, elegant APIs

At Shaped, we strive to make our APIs as easy to use as possible. We’re a developer-led team and we understand the power of abstracting complex infrastructure, like ranking systems, behind an elegant API. Over time we’ve consistently refined our endpoints so that developers can get started with the least work possible. That’s led us to where Shaped is today: a minimal set of APIs that can demonstrate growth for your business in hours.
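
To make that concrete, here is a minimal sketch of the developer flow in Python. The host, endpoint paths and payload fields below are illustrative assumptions for the sketch; refer to the API reference for the exact names.

```python
# Minimal sketch of the developer flow, assuming an HTTP JSON API.
# Endpoint paths, host and payload fields are illustrative assumptions.
import requests

API_URL = "https://api.shaped.ai/v1"          # assumed base URL
HEADERS = {"x-api-key": "<YOUR_API_KEY>"}     # assumed auth header

# 1. Create a model by pointing Shaped at your data.
requests.post(
    f"{API_URL}/models",
    headers=HEADERS,
    json={"model": {"name": "product_recommendations"}},
)

# 2. Once trained, rank items for a user.
response = requests.post(
    f"{API_URL}/models/product_recommendations/rank",
    headers=HEADERS,
    json={"user_id": "user_123", "limit": 10},
)
print(response.json())
```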

4 Stage Recommendation System

Shaped is a 4-stage recommendation pipeline (the architecture typically used by big tech):

  1. Candidate retrieval - A high-recall step that retrieves all candidate items that need to be ranked for a user. In one of our policies we perform this offline, using a lightweight collaborative filtering model that finds similarity between users and items from the interactions alone.
  2. Candidate filtering - This step removes any last-minute exclusions from the candidate set (e.g. if the user has already viewed the item or if inventory was recently set to zero). We perform this at rank time.
  3. Scoring - A scoring model then estimates the confidence that each user will interact with each item. In one of our policies this takes the context of the user and item into consideration, producing better scores than the retrieval step can.
  4. Ordering - The candidates are then sorted by score to produce the final ranking. Some indeterminism is added to the final ranking (e.g. placing random candidate items towards the top) to avoid the filter bubbles that would otherwise bias the ranking algorithm. A minimal sketch of the full pipeline follows below.

Image from https://medium.com/nvidia-merlin/recommender-systems-not-just-recommender-models-485c161c755e
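
To make the four stages concrete, here is a minimal Python sketch of the flow. The function bodies are simplified placeholders, not the models Shaped actually runs at each stage.

```python
# Illustrative sketch of the four stages; internals are placeholders.
from typing import Dict, List

def retrieve_candidates(user_id: str, catalog: List[str]) -> List[str]:
    """Stage 1: high-recall retrieval, e.g. lightweight collaborative filtering."""
    return catalog  # placeholder: return everything

def filter_candidates(candidates: List[str], seen: set, out_of_stock: set) -> List[str]:
    """Stage 2: drop items the user has already seen or that are unavailable."""
    return [c for c in candidates if c not in seen and c not in out_of_stock]

def score_candidates(user_id: str, candidates: List[str]) -> Dict[str, float]:
    """Stage 3: a scoring model estimates interaction likelihood per item."""
    return {c: 0.5 for c in candidates}  # placeholder scores

def order_candidates(scores: Dict[str, float]) -> List[str]:
    """Stage 4: sort by score (exploration noise would be injected here)."""
    return sorted(scores, key=scores.get, reverse=True)

candidates = retrieve_candidates("user_123", ["a", "b", "c"])
candidates = filter_candidates(candidates, seen={"a"}, out_of_stock=set())
print(order_candidates(score_candidates("user_123", candidates)))
```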

Shaped provides a suite of different model policies that define each stage of the pipeline above. These policies are selected and evaluated based on your use-case (defined by your user, item and interaction schema).

Several of our model policies focus on building embedding representations of your users and items. The quality of these embedding models is why our models perform well across many ranking use-cases.

As we keep building Shaped we’ll let you configure these model policies further. For example, you may have a heuristic that filters the candidate set better than a learned model could. Let us know if configuring these policies is interesting to you!

📘

Exploration vs Exploitation

In recommendation systems there is a trade-off between exploring new content that hasn't been seen and exploiting old content that the system is confident will be relevant. The ordering stage is where this trade-off is typically addressed.
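
As an illustration of this trade-off, here is a generic epsilon-greedy style re-ordering sketch. It is not Shaped's exact mechanism, just one common way an ordering stage can inject exploration.

```python
# Generic epsilon-greedy exploration sketch for the ordering stage.
import random

def order_with_exploration(scored_items, epsilon=0.1):
    """scored_items: list of (item_id, score) pairs. Returns ordered item_ids."""
    ranked = [item for item, _ in sorted(scored_items, key=lambda x: x[1], reverse=True)]
    if len(ranked) > 1 and random.random() < epsilon:
        # Exploration: promote one lower-ranked (less certain) item to the top slot.
        explored = ranked.pop(random.randrange(1, len(ranked)))
        ranked.insert(0, explored)
    return ranked

print(order_with_exploration([("a", 0.9), ("b", 0.4), ("c", 0.1)]))
```

A higher epsilon explores more aggressively; a lower epsilon sticks to what the model is already confident about.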

Data-centric AI

Shaped is designed around the philosophy that you only need to tell us where your data is for us to start training. This can be thought of as a data-centric AI approach: it works off the assumption that modeling is essentially solved, and that data ingestion and data organization are the main problems in production machine learning. This philosophy is one of the major differentiators of our API and a reason customers love us.

Understand any data type

Modern recommendation and ranking methods need all the data they can get to perform at their best. As well as understanding traditional tabular data types such as categorical (enums) and numerical (scalars) variables, Shaped understands complex data types like image, audio, language and video. It does this by processing these data types into embeddings using pre-trained understanding models. These embeddings are then fed into the ranking models to improve their understanding of the input and the performance of the final ranking.

For example, if you are building a social post recommendation model, the content of the post is crucial to understanding its relevance to a user, and these embedding models can understand that content. This is even more important when you lack interaction data for a post (e.g. a newly created post on the platform). Note, this is called the cold-start problem and is discussed later.
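
As an illustration, content like post text can be encoded into embeddings with a pre-trained model. The library and model name below are assumptions for the sketch, not necessarily what Shaped uses internally.

```python
# Illustrative only: turning unstructured content into embeddings with a
# pre-trained encoder. Library and model name are assumptions for the sketch.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # pre-trained text encoder

post_texts = [
    "Sourdough baking tips for beginners",
    "My favourite trail runs around Lake Tahoe",
]
embeddings = encoder.encode(post_texts)  # one dense vector per post
print(embeddings.shape)  # (2, 384) for this encoder
```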

The flexible understanding of any data type also builds on the data-centric AI philosophy: Shaped understands your data no matter how it's formatted.

📘

What are embeddings?

Embeddings are compressed numerical representations that encapsulate the underlying features or attributes of the data type.
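
A toy example: embeddings are dense vectors, and similar content maps to nearby vectors, which can be measured with cosine similarity. The vectors below are made up for illustration.

```python
# Toy embeddings (made-up values) compared with cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

post_about_baking   = np.array([0.9, 0.1, 0.0])
post_about_cooking  = np.array([0.8, 0.2, 0.1])
post_about_football = np.array([0.0, 0.1, 0.9])

print(cosine_similarity(post_about_baking, post_about_cooking))   # high: related topics
print(cosine_similarity(post_about_baking, post_about_football))  # low: unrelated topics
```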

Continuous machine-learning

Shaped continuously trains your ranking models to ensure they're always learning from the most recent data. This helps avoid data drift, which may be caused by seasonality or sporadic trends within your data. After training, we deploy the new model only if it evaluates better than the previous one. This ensures the best model is always used and protects against regressions introduced by newer models or newer data cleaning.
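
Here is a sketch of that deploy-if-better gate, with placeholder models and a placeholder offline metric; the names are illustrative, not Shaped internals.

```python
# Sketch of a deploy-if-better gate. Metric and model objects are placeholders.
def evaluate(model, eval_data):
    """Placeholder offline evaluation, e.g. a ranking metric on held-out data."""
    return model["offline_metric"]

def maybe_deploy(new_model, current_model, eval_data):
    """Promote the newly trained model only if it beats the serving one."""
    if evaluate(new_model, eval_data) > evaluate(current_model, eval_data):
        return new_model   # deploy the retrained model
    return current_model   # keep serving the old one; protects against regressions

serving = {"version": 41, "offline_metric": 0.312}
retrained = {"version": 42, "offline_metric": 0.305}
print(maybe_deploy(retrained, serving, eval_data=None)["version"])  # 41: no regression
```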

How we handle the cold-start problem

Ranking models learn the relevance between users and items based on the interaction data that relates them. But what happens when you have a new user or item?

This is called the cold-start problem in recommendation systems. Shaped addresses it in three ways:

  1. Using user and item context features. Having the context allows Shaped to rank results in the same way that similar users or items (that may have more interaction data) would be ranked. For users, these features may be demographic or interest data that is collected at sign-up time. For items, these features may be any metadata, descriptions, or the item content type itself.
  2. Cold-start item optimization. In the ordering step (described in the "4 stage recommender" section), we intelligently inject cold-start items to give them a chance to be seen by your users. The proportion of low interaction items that we inject can be chosen using the exploitation_factor argument within the Create Model API.
  3. Session-based recommendations. Our endpoint accepts the latest user interactions (often called sessions) and uses them to return the most relevant recommendations for the current context. By using the latest user interactions you can ensure you serve recommendations relevant to the user's most recent intent. This functionality can be used to serve recommendations to users just introduced to the system (see the sketch after this list).
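
Here is an illustrative sketch of points 2 and 3 above. The exploitation_factor argument is the one mentioned for the Create Model API; the host, endpoint paths and other payload fields are assumptions for the sketch.

```python
# Illustrative cold-start handling calls; paths and field names are assumptions.
import requests

API_URL = "https://api.shaped.ai/v1"       # assumed base URL
HEADERS = {"x-api-key": "<YOUR_API_KEY>"}  # assumed auth header

# 2. Control how many low-interaction (cold-start) items are injected at ordering.
requests.post(
    f"{API_URL}/models",
    headers=HEADERS,
    json={"model": {"name": "feed_ranking"}, "exploitation_factor": 0.8},  # placement assumed
)

# 3. Session-based: pass the latest interactions for a brand-new user.
response = requests.post(
    f"{API_URL}/models/feed_ranking/rank",
    headers=HEADERS,
    json={
        "user_id": "new_user_456",
        "interactions": [  # most recent session events
            {"item_id": "post_1", "label": "view"},
            {"item_id": "post_9", "label": "like"},
        ],
        "limit": 10,
    },
)
print(response.json())
```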

Real-Time

Recommendations need to be real-time. Personalization models perform best with the most recent user data, which is why our endpoints can accept and process data in real time. Our models consider the latest data when producing recommendations, even if it was not part of the initial training dataset. This keeps recommendations fresh and makes it feel like the algorithm is learning from your customers' interactions in real time. It's also what enables session-based recommendations (see above).
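
For example, an interaction event can be sent as it happens so the next rank call can take it into account. The endpoint path and payload fields below are assumptions for the sketch, not the exact API reference.

```python
# Sketch of streaming an interaction event as it happens.
# Endpoint path and payload fields are assumptions.
import time
import requests

API_URL = "https://api.shaped.ai/v1"       # assumed base URL
HEADERS = {"x-api-key": "<YOUR_API_KEY>"}  # assumed auth header

event = {
    "user_id": "user_123",
    "item_id": "product_789",
    "label": "add_to_cart",
    "created_at": int(time.time()),
}
requests.post(
    f"{API_URL}/datasets/events/insert",   # assumed ingestion endpoint
    headers=HEADERS,
    json={"data": [event]},
)
```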

Search & Retrieval Filtering

Yes, we support search as well! We build a reverse index of all the categorical, numerical and binary features you use to create your Shaped model. At rank time, you can then filter by any of these using our metadata query language. Many of our customers use this to give their users more control over what's being ranked (for example, size or location filters). You can also use it to retrieve different ranking results in different areas of your website or product, for example by filtering out different categories for the different recommendation carousels you may want to build.

Metadata query language

Our metadata query language is a subset of MongoDB's query and projection operators. These metadata filters can be combined with $or and $and operators for complete flexibility.

Name   Description
$eq    Matches values that are equal to a specified value.
$gt    Matches values that are greater than a specified value.
$gte   Matches values that are greater than or equal to a specified value.
$in    Matches any of the values specified in an array.
$lt    Matches values that are less than a specified value.
$lte   Matches values that are less than or equal to a specified value.
$ne    Matches all values that are not equal to a specified value.
$nin   Matches none of the values specified in an array.
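
For example, a rank-time filter combining several of these operators might look like the following. The attribute names (size, price, region, ships_worldwide) are hypothetical; the operators are the ones listed above.

```python
# Example metadata filter in MongoDB-style syntax. Attribute names are
# hypothetical; this predicate would be attached to a rank request so
# only matching items are returned.
size_and_location_filter = {
    "$and": [
        {"size": {"$in": ["M", "L"]}},
        {"price": {"$lte": 50}},
        {"$or": [
            {"region": {"$eq": "EU"}},
            {"ships_worldwide": {"$eq": True}},
        ]},
    ]
}
```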

Data ingestion load and storage

Shaped aims to store and ingest the least amount of data possible to serve you the best rankings at low latency and with minimal database load. There are two continuously scheduled jobs that we use to ingest data for your model:

  1. Train job. This job continuously trains your ranking model. It pulls recent interactions for all your users and items as often as every 4 hours. This data is removed from Shaped after training.
  2. Online feature job. This job materializes your user and item context features and interactions into our online store as often as every 15 minutes. Our online store and this materialization job are needed to enable low-latency real-time predictions using the most recent features.

Scale Limits

For each of your models, Shaped’s API supports up to:

Dimension                           Limit
Unique users                        10 million
Unique items                        10 million
Requests per month                  36 million
Requests per second                 1,000
Train frequency                     4 hours
Online store ingestion frequency    15 minutes

These scale limits will keep increasing as we continue to upgrade Shaped’s infrastructure. Please contact us if any of these constraints are limiting for you.