LlamaIndex
Prerequisite
To use LlamaIndex and Cohere, you will need:
- The LlamaIndex package. To install it, run `pip install llama-index`.
- Cohere's SDK. To install it, run `pip install cohere`. If you run into any issues or want more details on Cohere's SDK, see this wiki.
- A Cohere API key. For more details on pricing, see this page. When you create an account with Cohere, a trial API key is created for you automatically; you can copy it from the "API Keys" section of the dashboard.
Cohere Chat with LlamaIndex
To use Cohere's chat functionality with LlamaIndex, create a Cohere model object and call its chat function.
from llama_index.llms.cohere import Cohere
from llama_index.core.llms import ChatMessage
cohere_model = Cohere(api_key="{API_KEY}")
message = ChatMessage(role="user", content="Who founded Cohere?")
resp = cohere_model.chat([message])
print(resp)
Cohere Embeddings with LlamaIndex
To use Cohere's embeddings with LlamaIndex, create a CohereEmbedding object with an embedding model from this list and call get_text_embedding.
from llama_index.embeddings.cohere import CohereEmbedding
embed_model = CohereEmbedding(
cohere_api_key="{API_KEY}",
model_name="embed-english-v3.0", # Supports all Cohere embed models
input_type="search_query", # Required for v3 models
)
# Generate Embeddings
embeddings = embed_model.get_text_embedding("Welcome to Cohere!")
# Print embeddings
print(len(embeddings))
print(embeddings[:5])
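Each embedding is a plain list of floats (1,024 dimensions for embed-english-v3.0), so you can compare texts by cosine similarity. A minimal sketch of that comparison, using short placeholder vectors rather than live API output:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for real embed-english-v3.0 output,
# which would have 1,024 dimensions
query_vec = [0.1, 0.3, 0.5]
doc_vec = [0.2, 0.25, 0.55]
print(cosine_similarity(query_vec, doc_vec))
```

A vector index is essentially this comparison run between the query embedding and every stored document embedding, keeping the closest matches.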
Cohere Rerank with LlamaIndex
To use Cohere's rerank functionality with LlamaIndex, create a CohereRerank object and use it as a node postprocessor.
from llama_index.postprocessor.cohere_rerank import CohereRerank
cohere_rerank = CohereRerank(api_key="{API_KEY}", top_n=2)
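Passing top_n=2 tells the reranker to score every retrieved node against the query and keep only the two most relevant. A rough sketch of that selection step in plain Python, using made-up relevance scores instead of a live Rerank call:

```python
# Hypothetical (node text, relevance score) pairs standing in for real
# Cohere Rerank output; the API returns scores between 0 and 1
scored_nodes = [
    ("Cohere was founded in 2019.", 0.91),
    ("The weather is sunny today.", 0.05),
    ("Cohere builds large language models.", 0.78),
]

def keep_top_n(scored, top_n=2):
    # Sort by relevance score, highest first, then keep the top_n nodes
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

for text, score in keep_top_n(scored_nodes, top_n=2):
    print(f"{score:.2f}  {text}")
```

In the query engine below, this pruning happens automatically between retrieval and generation, so the chat model only sees the most relevant context.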
Cohere Pipeline with LlamaIndex
The following example uses Cohere's chat model, embeddings, and rerank functionality to generate a response based on your data.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms.cohere import Cohere
from llama_index.embeddings.cohere import CohereEmbedding
from llama_index.postprocessor.cohere_rerank import CohereRerank
# Create the embedding model
embed_model = CohereEmbedding(
cohere_api_key="{API_KEY}",
model_name="embed-english-v3.0",
input_type="search_query",
)
# Create the service context with the Cohere LLM for generation and the Cohere embedding model
service_context = ServiceContext.from_defaults(
llm=Cohere(api_key="{API_KEY}", model="command"),
embed_model=embed_model
)
# Load the data; for this example, the data needs to be in a text file
data = SimpleDirectoryReader(input_files=["example_data_file.txt"]).load_data()
index = VectorStoreIndex.from_documents(data, service_context=service_context)
# Create a cohere reranker
cohere_rerank = CohereRerank(api_key="{API_KEY}")
# Create the query engine
query_engine = index.as_query_engine(node_postprocessors=[cohere_rerank])
# Generate the response
response = query_engine.query("Who founded Cohere?")
print(response)
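Behind the query call, the engine retrieves candidate nodes with the embedding model, reranks them with Cohere Rerank, and sends the survivors to the chat model for generation. A toy end-to-end sketch of that flow, with stand-in functions (not the actual LlamaIndex internals):

```python
def retrieve(query, documents, k=3):
    # Toy retrieval: rank documents by how many query words they share
    words = set(query.lower().split())
    scored = [(doc, len(words & set(doc.lower().split()))) for doc in documents]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in scored[:k]]

def rerank(query, docs, top_n=2):
    # Stand-in for Cohere Rerank: simply keep the first top_n candidates
    return docs[:top_n]

def generate(query, context):
    # Stand-in for the Cohere chat model: report what it would be given
    return f"Answer to '{query}' based on {len(context)} context passages"

documents = [
    "Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst.",
    "LlamaIndex is a data framework for LLM applications.",
    "The query engine retrieves, reranks, and generates.",
]

docs = retrieve("Who founded Cohere?", documents)
docs = rerank("Who founded Cohere?", docs, top_n=2)
print(generate("Who founded Cohere?", docs))
```

In the real pipeline, retrieve is the vector index lookup, rerank is the CohereRerank postprocessor, and generate is the Cohere chat model call.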