Shaped’s mission is to make AI accessible for everyone. We achieve this by building elegant software that abstracts the complexity of machine-learning tasks. Below we share the key design directions, decisions and constraints we’ve built Shaped around to achieve this vision.

Minimal, powerful, elegant APIs

At Shaped, we strive to make our APIs as easy to use as possible. We’re a developer-led team and we understand the power of abstracting complex infrastructure, like ranking systems, behind an elegant API. Over time we’ve consistently refined our endpoints so that developers can get started with the least work possible. That’s led us to where Shaped is today: a minimal set of elegant APIs that can demonstrate growth for your business in hours.
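
To make that concrete, here is a minimal sketch of what calling a ranking API like this can look like from application code. The endpoint path, header name, and request fields are illustrative assumptions for this sketch, not Shaped’s documented API.

```python
# Hypothetical example of fetching ranked results for a user.
# The URL, header, and payload fields below are assumptions for illustration,
# not Shaped's documented API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

response = requests.post(
    "https://api.example.com/v1/models/product_recommendations/rank",
    headers={"x-api-key": API_KEY},
    json={"user_id": "user_123", "limit": 10},
    timeout=10,
)
response.raise_for_status()

# Assumed response shape: item ids ordered by predicted relevance.
for item_id in response.json().get("ids", []):
    print(item_id)
```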

Data-centric AI

Shaped is designed around the philosophy that you only need to tell us where your data is for us to start training. This can be thought of as a data-centric AI approach. It rests on the assumption that modeling is essentially a solved problem, and that data ingestion and data organization are the main challenges in production machine learning. This philosophy is one of the major differentiators of our API and a reason customers love us.
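
As an illustration of what “just tell us where your data is” can look like, here is a hypothetical model definition expressed as a Python dictionary. The keys, connector type, and query are assumptions made up for this sketch, not Shaped’s actual schema.

```python
# Hypothetical model definition illustrating the data-centric idea:
# point at the data, and training can begin. The keys, connector type,
# and query below are assumptions, not Shaped's actual schema.
model_definition = {
    "model": {"name": "product_recommendations"},
    "connectors": [
        {"type": "postgres", "id": "prod_db", "table": "user_events"},
    ],
    "fetch": {
        "events": "SELECT user_id, item_id, created_at FROM user_events",
    },
}
```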

Understand any data type

To get the most from your data, modern recommendation and ranking methods need all the data they can get. As well as understanding traditional tabular data types such as categorical (enums) and numerical (scalars) variables, Shaped understands complex data types like image, audio, language and video. It does this by processing these data types into embeddings using pre-trained understanding models. These embeddings are then fed into the ranking models to improve their understanding of the input and the performance of the final ranking.
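
As an illustration of the general technique (not Shaped’s internal models), the snippet below uses a pre-trained text encoder from the sentence-transformers library to turn raw post content into fixed-length embedding vectors that a ranking model could consume as features.

```python
# Illustration of the general technique (not Shaped's internal models):
# a pre-trained language model turns raw text into fixed-length embeddings
# that a downstream ranking model can consume as features.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small pre-trained text encoder

posts = [
    "How to sear a steak without smoking out your kitchen",
    "A beginner's guide to training for a 10k",
]
embeddings = encoder.encode(posts)  # shape: (2, 384) float vectors

# These vectors can then sit alongside tabular features (price, category, ...)
# as inputs to the ranking model.
print(embeddings.shape)
```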

For example, if you are building a social post recommendation model, the content of the post is crucial to understanding its relevance to a user, and these embedding models can understand that content. This is even more important when you lack interaction data for the post (e.g. a newly created post on the platform). This is known as the cold-start problem and is discussed below.

The flexible understanding of any data type also builds on the data-centric AI philosophy: Shaped understands your data no matter how it’s formatted.

📘

What are embeddings?

Embeddings are compressed numerical representations that encapsulate the underlying features or attributes of the data type.

Continuous machine learning

Shaped continuously trains your ranking models to ensure that your model is always learning from the most recent data. This helps avoid data drift, which may be caused by seasonality or sporadic trends within your data. After training, we deploy the new model only if it evaluates better than the previous one. This ensures the best model is always being served and protects against regressions introduced by newer models or newer data-cleaning steps.
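
A minimal sketch of that “deploy only if better” gate, assuming a single offline ranking metric (the metric name and values here are made up for illustration):

```python
# Minimal sketch of the "deploy only if better" gate described above.
# The metric name and values are assumptions for illustration.
def should_promote(challenger_metrics: dict, champion_metrics: dict,
                   metric: str = "ndcg@10") -> bool:
    """Promote the freshly trained model only if it beats the live one."""
    return challenger_metrics[metric] > champion_metrics[metric]

if should_promote({"ndcg@10": 0.342}, {"ndcg@10": 0.331}):
    print("Deploy challenger")   # new model replaces the serving model
else:
    print("Keep champion")       # protects against regressions
```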

Handling the cold-start problem

Ranking models learn the relevance between users and items based on the interaction data that relates them together. But what happens when you have a new user or item?

This is called the cold-start problem in recommendation systems. Shaped addresses it by using user and item context features. Having the context allows Shaped to rank results in the same way that similar users or items (that may have more interaction data) would be ranked. For users, these features may be demographic or interest data that is collected at sign-up time. For items, these features may be any metadata, descriptions, or the item content type itself.
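
One way context features can help with cold start (a simplified illustration, not necessarily Shaped’s exact approach) is to embed a brand-new item’s metadata and let it borrow signal from the most similar items that do have interaction history:

```python
# Simplified cold-start illustration: score a brand-new item the way its
# nearest "warm" neighbours (items with interaction history) are scored.
import numpy as np

def cold_start_score(new_item_emb: np.ndarray,
                     warm_item_embs: np.ndarray,
                     warm_item_scores: np.ndarray,
                     k: int = 5) -> float:
    """Average the scores of the k most similar items that have interaction data."""
    # cosine similarity between the new item and every warm item
    sims = warm_item_embs @ new_item_emb / (
        np.linalg.norm(warm_item_embs, axis=1) * np.linalg.norm(new_item_emb) + 1e-9
    )
    nearest = np.argsort(sims)[-k:]
    return float(warm_item_scores[nearest].mean())
```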

4-Stage Recommendation System

Shaped uses a 4-stage recommendation pipeline (as typically used by big tech); a minimal sketch of the flow is shown after the figure below:

  1. Candidate retrieval - A high-recall step that retrieves all candidate items that need to be ranked for a user. In one of our policies we perform this offline, using a lightweight collaborative filtering model that finds similarity between users and items from just the interactions.
  2. Candidate filtering - This step applies any last-minute exclusions to the candidate set (e.g. if the user has already viewed the item or if inventory was recently set to zero). We perform this at rank-time.
  3. Scoring - A scoring model then estimates the confidence that the user will interact with each item. In one of our policies this takes the context of the user and item into consideration to provide better scores than the retrieval step can.
  4. Ordering - The final results are then sorted by score. Some indeterminism is added into the final ranking (e.g. placing random candidate items towards the top) to avoid filter bubbles that would otherwise bias the ranking algorithm.

Image from https://medium.com/nvidia-merlin/recommender-systems-not-just-recommender-models-485c161c755e
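
Putting the four stages together, here is a toy end-to-end pass (illustrative pseudologic, not Shaped’s implementation; `retrieve`, `already_seen`, and `score` are stand-ins for the retrieval model, filter rules, and scoring model):

```python
# Toy end-to-end pass through the four stages. Illustrative only:
# `retrieve`, `already_seen`, and `score` are stand-in callables.
def rank_for_user(user_id, retrieve, already_seen, score, limit=10):
    # 1. Candidate retrieval: high-recall set of items for this user
    candidates = retrieve(user_id)

    # 2. Candidate filtering: drop items the user has seen or that are unavailable
    candidates = [item for item in candidates if not already_seen(user_id, item)]

    # 3. Scoring: model confidence that the user interacts with each item
    scored = [(item, score(user_id, item)) for item in candidates]

    # 4. Ordering: sort by score (exploration noise can be injected here)
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in scored[:limit]]
```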

Shaped provides a suite of different model policies that define each stage of the pipeline above. These policies are selected and evaluated based on your use-case (defined by your user, item and interaction schema).

Several of our model policies focus on building embedding representations of your users and items. The quality of these embedding models is why our models perform well across many ranking use-cases.

As we keep building Shaped, we’ll allow you to configure these model policies further. For example, you may have a heuristic that filters the generated candidate set better than a learned model could. Let us know if configuring these policies interests you!

📘

Exploration vs Exploitation

In recommendation systems there is a trade-off between exploring new content that hasn't been seen and exploiting old content that the system is confident will be relevant. The ordering stage is where this trade-off is typically addressed.
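
A minimal epsilon-greedy sketch of how this trade-off can be handled at the ordering stage (illustrative only, not Shaped’s specific mechanism):

```python
# Minimal epsilon-greedy sketch of the explore/exploit trade-off at ordering:
# with small probability, surface an unexplored candidate near the top
# instead of relying purely on the scored order. Illustrative only.
import random

def reorder_with_exploration(ranked_items, unexplored_items, epsilon=0.1):
    ranked = list(ranked_items)
    if unexplored_items and random.random() < epsilon:
        # exploit most of the time; occasionally explore fresh content
        ranked.insert(1, random.choice(unexplored_items))
    return ranked
```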

Data ingestion load and storage

Shaped aims to store and ingest the least amount of data possible to serve you the best rankings at low latency and with minimal database load. There are two continuously scheduled jobs that we use to ingest data for your model:

  1. Train job. This job is used to continuously train your ranking model. It pulls recent interactions for all your users and items up to every 4 hours. This data is removed from Shaped after training.
  2. Online feature job. This job materializes your user and item context features and interactions into our online store up to every 15 minutes. Our online store and this materialization job are needed to enable low latency real-time predictions using the most recent features.
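
For intuition, the two cadences above map to something like the following schedule (a sketch using the APScheduler library; the job functions are hypothetical placeholders, and Shaped runs these jobs for you internally):

```python
# Sketch of the two ingestion cadences described above. The job functions
# are hypothetical placeholders; Shaped schedules and runs these internally.
from apscheduler.schedulers.blocking import BlockingScheduler

def train_job():
    """Pull recent interactions, retrain, evaluate, and maybe deploy."""

def online_feature_job():
    """Materialize fresh user/item features into the online store."""

scheduler = BlockingScheduler()
scheduler.add_job(train_job, "interval", hours=4)              # train cadence
scheduler.add_job(online_feature_job, "interval", minutes=15)  # feature cadence
scheduler.start()
```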

Scale Limits

For each of your models, Shaped’s API supports up to:

| Dimension | Limit |
| --- | --- |
| Unique users | 10 million |
| Unique items | 10 million |
| Requests per month | 36 million |
| Requests per second | 1000 |
| Train frequency | 4 hours |
| Online store ingestion frequency | 15 minutes |

These scale limits will keep increasing as we continue to upgrade Shaped’s infrastructure. Please contact us if any of these constraints are an issue for your use case.