Embedding Integrations

Use Cohere's Embeddings with the tools you love.

Elasticsearch has all the tools developers need to build next-generation search experiences with generative AI. Use Elastic if you'd like to:

  • build with a vector database
  • deploy multiple ML models
  • perform text, vector, and hybrid search
  • search with filters, facets, and aggregations
  • apply document- and field-level security
  • run on-prem, in the cloud, or serverless (preview)

Elasticsearch supports native integration with Cohere through its inference API.
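
As a minimal sketch of the client-side approach, assuming the `cohere` and `elasticsearch` Python packages, a local cluster at `localhost:9200`, and an illustrative `docs` index (the native route instead registers a Cohere inference endpoint via the `_inference` API):

```python
import cohere
from elasticsearch import Elasticsearch

co = cohere.Client("COHERE_API_KEY")         # assumes a valid Cohere API key
es = Elasticsearch("http://localhost:9200")  # assumes a local Elasticsearch cluster

# Create an index with a dense_vector field sized for embed-english-v3.0 (1024 dims)
es.indices.create(
    index="docs",
    mappings={"properties": {
        "text": {"type": "text"},
        "embedding": {"type": "dense_vector", "dims": 1024, "similarity": "cosine"},
    }},
)

# Embed documents with Cohere and index them
texts = ["Elasticsearch supports vector search.", "Cohere provides text embeddings."]
doc_vectors = co.embed(texts=texts, model="embed-english-v3.0",
                       input_type="search_document").embeddings
for i, (text, vector) in enumerate(zip(texts, doc_vectors)):
    es.index(index="docs", id=str(i), document={"text": text, "embedding": vector})
es.indices.refresh(index="docs")

# Embed the query and run a kNN search
query_vector = co.embed(texts=["vector search engines"], model="embed-english-v3.0",
                        input_type="search_query").embeddings[0]
results = es.search(index="docs", knn={"field": "embedding", "query_vector": query_vector,
                                       "k": 2, "num_candidates": 10})
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```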


RedisVL provides a powerful, dedicated Python client library for using Redis as a Vector Database. Leverage the speed and reliability of Redis along with vector-based semantic search capabilities to supercharge your application!

The following guide walks through how to integrate Cohere embeddings with Redis.
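
As a rough sketch, assuming the `redisvl` and `cohere` packages and a Redis Stack instance at `localhost:6379`; the index name, field names, and schema below are illustrative, not part of either library:

```python
import numpy as np
import cohere
from redisvl.index import SearchIndex
from redisvl.query import VectorQuery

co = cohere.Client("COHERE_API_KEY")  # assumes a valid Cohere API key

# Illustrative RedisVL schema: a text field plus a 1024-dim float32 vector field
schema = {
    "index": {"name": "docs", "prefix": "doc"},
    "fields": [
        {"name": "text", "type": "text"},
        {"name": "embedding", "type": "vector",
         "attrs": {"dims": 1024, "distance_metric": "cosine",
                   "algorithm": "flat", "datatype": "float32"}},
    ],
}
index = SearchIndex.from_dict(schema)
index.connect("redis://localhost:6379")  # assumes a local Redis Stack instance
index.create(overwrite=True)

# Embed documents with Cohere and load them into Redis as float32 bytes
texts = ["Redis is fast.", "Cohere embeddings power semantic search."]
vectors = co.embed(texts=texts, model="embed-english-v3.0",
                   input_type="search_document").embeddings
index.load([
    {"text": t, "embedding": np.array(v, dtype=np.float32).tobytes()}
    for t, v in zip(texts, vectors)
])

# Embed the query and run a vector search
query_vec = co.embed(texts=["semantic search"], model="embed-english-v3.0",
                     input_type="search_query").embeddings[0]
query = VectorQuery(vector=query_vec, vector_field_name="embedding",
                    return_fields=["text"], num_results=2)
print(index.query(query))
```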


OpenSearch is an open-source, distributed search and analytics engine that allows users to search, analyze, and visualize large volumes of data in real time. For text search, OpenSearch is well known for powering keyword-based (also called lexical) search. OpenSearch also supports vector search and integrates with Cohere through third-party ML connectors as well as Amazon Bedrock.
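
For the client-side route, here is a minimal sketch assuming the `opensearch-py` and `cohere` packages and a local cluster on port 9200; the ML connector and Bedrock routes instead register Cohere as a remote model inside OpenSearch:

```python
import cohere
from opensearchpy import OpenSearch

co = cohere.Client("COHERE_API_KEY")  # assumes a valid Cohere API key
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # assumes a local cluster

# Create a k-NN enabled index with a knn_vector field for 1024-dim embeddings
client.indices.create(
    index="docs",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {"properties": {
            "text": {"type": "text"},
            "embedding": {"type": "knn_vector", "dimension": 1024},
        }},
    },
)

# Embed and index documents
texts = ["OpenSearch supports lexical and vector search.", "Cohere models produce embeddings."]
vectors = co.embed(texts=texts, model="embed-english-v3.0",
                   input_type="search_document").embeddings
for i, (text, vector) in enumerate(zip(texts, vectors)):
    client.index(index="docs", id=str(i), body={"text": text, "embedding": vector}, refresh=True)

# Embed the query and run an approximate k-NN query
query_vec = co.embed(texts=["hybrid search"], model="embed-english-v3.0",
                     input_type="search_query").embeddings[0]
response = client.search(index="docs", body={
    "size": 2,
    "query": {"knn": {"embedding": {"vector": query_vec, "k": 2}}},
})
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```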


Chroma is an open-source vector search engine that's quick to install and start building with, using Python or JavaScript.
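
As a quick sketch, assuming the `chromadb` package and Chroma's built-in Cohere embedding function; the collection name and documents are illustrative:

```python
import chromadb
from chromadb.utils import embedding_functions

# Let Chroma call Cohere for embeddings (assumes a valid Cohere API key)
cohere_ef = embedding_functions.CohereEmbeddingFunction(
    api_key="COHERE_API_KEY", model_name="embed-english-v3.0"
)

client = chromadb.Client()  # in-memory client; use PersistentClient(path=...) to keep data
collection = client.create_collection(name="docs", embedding_function=cohere_ef)

# Documents are embedded automatically on add
collection.add(
    ids=["1", "2"],
    documents=["Chroma is easy to get started with.", "Cohere embeddings capture meaning."],
)

# Queries are embedded with the same function
results = collection.query(query_texts=["simple vector database"], n_results=1)
print(results["documents"])
```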



Qdrant is an open-source vector similarity search engine and vector database. It provides a production-ready service with a convenient API to store, search, and manage points (vectors with an additional payload). Qdrant offers extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.

Qdrant is written in Rust, which makes it fast and reliable even under high load.
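
A minimal sketch, assuming the `qdrant-client` and `cohere` packages; the collection name and payload fields are illustrative:

```python
import cohere
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

co = cohere.Client("COHERE_API_KEY")  # assumes a valid Cohere API key
client = QdrantClient(":memory:")     # in-memory instance; use a URL for a real deployment

# Create a collection sized for embed-english-v3.0 vectors (1024 dims)
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)

# Embed documents with Cohere and upsert them as points with payloads
texts = ["Qdrant is written in Rust.", "Cohere provides embedding models."]
vectors = co.embed(texts=texts, model="embed-english-v3.0",
                   input_type="search_document").embeddings
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=i, vector=v, payload={"text": t})
            for i, (t, v) in enumerate(zip(texts, vectors))],
)

# Embed the query and search; payload filters can be added to the same call
query_vec = co.embed(texts=["fast vector database"], model="embed-english-v3.0",
                     input_type="search_query").embeddings[0]
hits = client.search(collection_name="docs", query_vector=query_vec, limit=2)
for hit in hits:
    print(hit.score, hit.payload["text"])
```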


Weaviate is an open-source vector search engine that stores both objects and vectors, allowing you to combine vector search with structured filtering.

The text2vec-cohere module allows you to use Cohere embeddings directly in the Weaviate vector search engine as a vectorization module.
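
As a sketch using the v3-style Weaviate Python client (the v4 client has a different API), assuming a local Weaviate instance with the text2vec-cohere module enabled and a Cohere API key passed in the request headers; the class name is illustrative:

```python
import weaviate

# Pass the Cohere key so the text2vec-cohere module can embed on Weaviate's side
client = weaviate.Client(
    "http://localhost:8080",
    additional_headers={"X-Cohere-Api-Key": "COHERE_API_KEY"},
)

# A class vectorized by text2vec-cohere; objects are embedded automatically on import
client.schema.create_class({
    "class": "Document",
    "vectorizer": "text2vec-cohere",
    "properties": [{"name": "text", "dataType": ["text"]}],
})

client.data_object.create({"text": "Weaviate stores objects and vectors."}, "Document")
client.data_object.create({"text": "Cohere embeddings enable semantic search."}, "Document")

# nearText queries are vectorized with the same Cohere model
result = (
    client.query.get("Document", ["text"])
    .with_near_text({"concepts": ["semantic search"]})
    .with_limit(1)
    .do()
)
print(result["data"]["Get"]["Document"])
```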


The Pinecone vector database makes it easy to build high-performance vector search applications. Use Cohere to generate language embeddings, then store them in Pinecone and use them for semantic search.
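
A minimal sketch, assuming the `pinecone` and `cohere` Python packages and a serverless index; the index name, cloud, and region below are illustrative:

```python
import cohere
from pinecone import Pinecone, ServerlessSpec

co = cohere.Client("COHERE_API_KEY")       # assumes a valid Cohere API key
pc = Pinecone(api_key="PINECONE_API_KEY")  # assumes a valid Pinecone API key

# Create a serverless index sized for embed-english-v3.0 vectors (1024 dims)
pc.create_index(name="cohere-docs", dimension=1024, metric="cosine",
                spec=ServerlessSpec(cloud="aws", region="us-east-1"))
index = pc.Index("cohere-docs")

# Embed documents with Cohere and upsert them with metadata
texts = ["Pinecone is a managed vector database.", "Cohere generates language embeddings."]
vectors = co.embed(texts=texts, model="embed-english-v3.0",
                   input_type="search_document").embeddings
index.upsert(vectors=[
    {"id": str(i), "values": v, "metadata": {"text": t}}
    for i, (t, v) in enumerate(zip(texts, vectors))
])

# Embed the query and retrieve the closest matches
query_vec = co.embed(texts=["managed vector search"], model="embed-english-v3.0",
                     input_type="search_query").embeddings[0]
results = index.query(vector=query_vec, top_k=1, include_metadata=True)
for match in results.matches:
    print(match.score, match.metadata["text"])
```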


Cohere offers optimized containers that enable low-latency inference on a diverse set of hardware accelerators available on AWS, providing different cost and performance points for SageMaker customers.
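
As an illustrative sketch only: it assumes you have already subscribed to a Cohere embedding model package on SageMaker and deployed it to an endpoint (the endpoint name below is hypothetical), and the request payload shape is an assumption that should be checked against the model listing; Cohere's `cohere-aws` helper package is another option.

```python
import json
import boto3

# Assumes an already-deployed Cohere embedding endpoint; the name is hypothetical
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

# NOTE: the payload shape below is an assumption; check the model package docs
payload = {"texts": ["SageMaker hosts Cohere containers."], "input_type": "search_document"}

response = runtime.invoke_endpoint(
    EndpointName="cohere-embed-endpoint",
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read())
print(result)
```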



Milvus is a highly flexible, reliable, and blazing-fast cloud-native, open-source vector database. It powers embedding similarity search and AI applications and strives to make vector databases accessible to every organization. Milvus is a graduated-stage project of the LF AI & Data Foundation.

The following guide walks through how to integrate Cohere embeddings with Milvus.
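
A minimal sketch, assuming the `pymilvus` (2.4+) and `cohere` packages and Milvus Lite's local file mode; the collection name and fields are illustrative:

```python
import cohere
from pymilvus import MilvusClient

co = cohere.Client("COHERE_API_KEY")    # assumes a valid Cohere API key
client = MilvusClient("milvus_demo.db")  # Milvus Lite; use a server URI in production

# Create a collection sized for embed-english-v3.0 vectors (1024 dims)
client.create_collection(collection_name="docs", dimension=1024)

# Embed documents with Cohere and insert them
texts = ["Milvus is a cloud-native vector database.", "Cohere embeddings power similarity search."]
vectors = co.embed(texts=texts, model="embed-english-v3.0",
                   input_type="search_document").embeddings
client.insert(
    collection_name="docs",
    data=[{"id": i, "vector": v, "text": t} for i, (t, v) in enumerate(zip(texts, vectors))],
)

# Embed the query and search, returning the stored text
query_vec = co.embed(texts=["similarity search at scale"], model="embed-english-v3.0",
                     input_type="search_query").embeddings[0]
results = client.search(collection_name="docs", data=[query_vec], limit=1, output_fields=["text"])
print(results)
```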



Zilliz Cloud is a cloud-native vector database that stores, indexes, and searches billions of embedding vectors to power enterprise-grade similarity search, recommender systems, anomaly detection, and more. Zilliz Cloud provides a fully managed Milvus service, built by the creators of Milvus, that allows for easy integration with vectorizers from Cohere and other popular models. Purpose-built to solve the challenge of managing billions of embeddings, Zilliz Cloud makes it easy to build applications at scale.

The following guide walks through how to integrate Cohere embeddings with Zilliz.
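
Since Zilliz Cloud is fully managed Milvus, the Milvus sketch above carries over; the only change is connecting with your cluster's public endpoint and API key (the values below are placeholders), assuming the same `pymilvus` MilvusClient:

```python
from pymilvus import MilvusClient

# Connect to a Zilliz Cloud cluster instead of a local Milvus instance
# (endpoint and token below are placeholders for your cluster's values)
client = MilvusClient(
    uri="https://YOUR-CLUSTER-ENDPOINT.zillizcloud.com",
    token="YOUR_ZILLIZ_API_KEY",
)

# From here, collection creation, inserts, and searches work as in the Milvus sketch
client.create_collection(collection_name="docs", dimension=1024)
```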