
The basics

Shaped is a real-time retrieval engine that lets you build state-of-the-art retrieval systems. You can:

  • Import from 20+ connectors
  • Transform data with SQL or AI into materialized views for training
  • Define retrieval engines, models, and ranking pipelines as YAML files
  • Test different models, ranking strategies, and configurations in parallel
  • Index new data automatically and in real time
  • Scale up or down automatically based on usage

Shaped has three layers: data, intelligence, and query.

Data layer

This is where your data lives.

Tables

You can import data from an external source using one of our 20+ connectors. If you need a custom dataset, you can declare a custom table and add rows to it via API.

Once your data is imported, it exists in Shaped as tables. Depending on the connector you use, new data is updated in real time (streams) or every 15 minutes (batch).
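
For illustration, declaring a custom table and appending rows via the API might look like the sketch below. The endpoint paths, payload fields, base URL, and auth header are placeholders for this example, not Shaped's documented schema; check the API reference for the real shapes.

  import requests

  # All endpoints, payload fields, and the auth header below are hypothetical
  # placeholders for illustration only.
  BASE_URL = "https://api.example.com/v1"        # assumed base URL
  HEADERS = {"x-api-key": "YOUR_API_KEY"}        # assumed auth header

  # Declare a custom table with an explicit column schema (hypothetical payload).
  requests.post(
      f"{BASE_URL}/tables",
      headers=HEADERS,
      json={
          "name": "user_events",
          "schema": {"user_id": "string", "item_id": "string", "created_at": "timestamp"},
      },
  ).raise_for_status()

  # Append rows to the custom table via the API (hypothetical payload).
  requests.post(
      f"{BASE_URL}/tables/user_events/rows",
      headers=HEADERS,
      json={"rows": [{"user_id": "u_1", "item_id": "i_42", "created_at": "2024-01-01T00:00:00Z"}]},
  ).raise_for_status()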

Table views

If your tables are not in the right structure for training, you can use transforms to clean or enrich them into new views.

SQL views let you:

  • Convert data types, fix null values, or rename columns
  • Join multiple tables into one massive interactions table
  • Combine multiple sources into one table (e.g. user events from your product with billing transactions)

AI views let you:

  • Add new semantic data from existing columns
  • Create new columns like clothing_color from an image of clothes

Each view is materialized, so you can use its output to train a model (or transform it further).
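
As a sketch of the SQL-view bullets above, the example below builds an interactions view that casts types, fills nulls, and combines product events with billing transactions. The table and column names are made up, and the registration endpoint and payload are assumptions rather than the documented API.

  import requests

  BASE_URL = "https://api.example.com/v1"        # assumed base URL
  HEADERS = {"x-api-key": "YOUR_API_KEY"}        # assumed auth header

  # SQL body for the view: cast types, replace nulls, and combine two event
  # sources into a single interactions table (illustrative schema).
  INTERACTIONS_SQL = """
  SELECT
      CAST(user_id AS VARCHAR)      AS user_id,
      CAST(item_id AS VARCHAR)      AS item_id,
      COALESCE(event_type, 'view')  AS event_type,
      CAST(created_at AS TIMESTAMP) AS created_at
  FROM product_events
  UNION ALL
  SELECT
      CAST(customer_id AS VARCHAR)  AS user_id,
      CAST(sku AS VARCHAR)          AS item_id,
      'purchase'                    AS event_type,
      CAST(billed_at AS TIMESTAMP)  AS created_at
  FROM billing_transactions
  """

  # Register the SQL as a materialized view (hypothetical endpoint and payload).
  requests.post(
      f"{BASE_URL}/views",
      headers=HEADERS,
      json={"name": "interactions", "type": "sql", "query": INTERACTIONS_SQL},
  ).raise_for_status()

An AI view that derives a column like clothing_color from a product image would presumably be registered through a similar call, with the enrichment described in place of the SQL body.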

Intelligence layer

The intelligence layer is where indexing, embedding generation, and model training happen. You declare an engine, which contains the scoring and ranking logic for your retrieval system.

Engines

When you configure an engine, you can define multiple retrieval components that will be trained together. You also specify which data the engine is connected to and its deployment configuration (such as the number of pods, data tier, etc.).

Your engine is not limited to a single embedding or scoring model. You can configure multiple models, embeddings for semantic search, and a custom lexical/BM25 search engine, all in one engine. Each component has an API for fine-tuning it.

You can also run multiple engines in parallel to test which one performs best.
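
Engines are declared as YAML files; as a rough sketch, the dict below mirrors what such a definition might contain, posted through a hypothetical endpoint. Every field name here (components, deployment, pods, and so on) is an assumption for illustration, not Shaped's actual schema.

  import requests

  BASE_URL = "https://api.example.com/v1"        # assumed base URL
  HEADERS = {"x-api-key": "YOUR_API_KEY"}        # assumed auth header

  # Hypothetical engine definition: a scoring model, a semantic-embedding
  # component, and a BM25 lexical index trained together, plus deployment
  # settings. All field names are illustrative only.
  engine = {
      "name": "product_search",
      "data": {"interactions": "interactions", "items": "items_view"},
      "components": [
          {"type": "scoring_model", "objective": "click"},
          {"type": "embedding", "source_column": "description"},
          {"type": "lexical", "algorithm": "bm25", "fields": ["title", "description"]},
      ],
      "deployment": {"pods": 2, "data_tier": "standard"},
  }

  requests.post(f"{BASE_URL}/engines", headers=HEADERS, json=engine).raise_for_status()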

Query layer

The query layer is how you retrieve data using your engine. It is a REST API with a simple configuration that supports retrieval strategies of varying complexity.

At the simple end, you can do standard BM25 lexical search with a text query.

At the more complex end, you can run four-stage ranking pipelines with retrieval, filtering, scoring, and ordering steps.
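
As an illustration of both ends, the sketch below issues a plain text query and then a four-stage pipeline against the same engine. The endpoint path and request fields are assumptions for this example, not the documented query API.

  import requests

  BASE_URL = "https://api.example.com/v1"        # assumed base URL
  HEADERS = {"x-api-key": "YOUR_API_KEY"}        # assumed auth header

  # Simple end: a standard BM25 lexical search with a text query.
  simple = requests.post(
      f"{BASE_URL}/engines/product_search/query",
      headers=HEADERS,
      json={"text_query": "waterproof hiking boots", "limit": 10},
  )

  # Complex end: a four-stage pipeline with retrieval, filtering, scoring,
  # and ordering steps expressed in the request body (fields illustrative).
  ranked = requests.post(
      f"{BASE_URL}/engines/product_search/query",
      headers=HEADERS,
      json={
          "retrieve": {"strategies": ["bm25", "embedding"], "limit": 500},
          "filter": {"in_stock": True},
          "score": {"model": "scoring_model", "user_id": "u_1"},
          "order": {"by": "score", "limit": 20},
      },
  )

  print(simple.json())
  print(ranked.json())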

tip

If you want to learn more about the four-stage recommender model, check out our series: Anatomy of Modern Ranking Architectures