
voyage-3-lite Embedding Model

Voyage AI Innovations Inc

Text embedding model optimized for retrieval quality, latency, and cost. 32K context length.

Text embedding models are neural networks that transform text into numerical vectors. They are a crucial building block for semantic search/retrieval systems and retrieval-augmented generation (RAG), and they largely determine retrieval quality. voyage-3-lite is a lightweight, general-purpose embedding model optimized for latency and cost that: [1] outperforms OpenAI v3 large and small by 3.82% and 7.58% on average across evaluated domains, respectively; [2] produces 512-dimensional embeddings, 6-8x smaller than OpenAI (3072) and E5-Mistral (4096), resulting in 6-8x lower vector database costs; and [3] supports a 32K-token context length, compared to OpenAI (8K) and Cohere (512). Learn more about voyage-3-lite here.
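
A minimal sketch of how voyage-3-lite might be used for retrieval with the voyageai Python client is shown below. It assumes the voyageai package is installed and a VOYAGE_API_KEY environment variable is set; the document texts and query are illustrative only.

import numpy as np
import voyageai

vo = voyageai.Client()  # reads the API key from the VOYAGE_API_KEY environment variable

documents = [
    "Voyage AI builds embedding models for semantic search.",
    "Retrieval-augmented generation grounds LLM answers in retrieved text.",
]

# Embed documents for indexing; voyage-3-lite returns 512-dimensional vectors.
doc_result = vo.embed(documents, model="voyage-3-lite", input_type="document")

# Embed the query and score documents by dot product (a simple similarity measure).
query_result = vo.embed(
    ["What is retrieval-augmented generation?"],
    model="voyage-3-lite",
    input_type="query",
)
scores = np.dot(np.array(doc_result.embeddings), np.array(query_result.embeddings[0]))
print(documents[int(np.argmax(scores))])  # prints the most relevant document

In a full RAG pipeline, the document embeddings would typically be stored in a vector database and the top-scoring passages passed to a language model as context.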