Use the Cohere platform to build natural language understanding and generation into your product with a few lines of code. Cohere’s large language models can solve a broad spectrum of natural language use cases, including classification, semantic search, paraphrasing, summarization, and content generation. Through finetuning, users can create massive models customized to their use case and trained on their data.
- Playground overview
The Playground is a good first place to experiment with the models and the various endpoints.
- Model cards
Learn about Cohere’s generation and representation models, including their performance and intended use.
The Responsible Use documentation aims to guide developers in using language models constructively and ethically. Toward this end, we've published guidelines for using our API safely, statistics regarding the environmental impact of pre-training our language models, and our processes around harm prevention.
Using the Cohere API shows how to install the Python or Node.js SDKs.
- Generate: Generate text from a model in response to an input prompt.
- Similarity: Measure the similarity score between a sentence (anchor) and multiple other sentences (targets).
- Embed: Retrieve sentence embeddings from a representation model.
- Likelihood: Calculate the likelihood score for each token in the prompt.
- Choose Best: Perform classification using likelihood scores.
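The Similarity and Embed endpoints are closely related: a similarity score between an anchor and its targets can be understood as a comparison of their embedding vectors. As a rough, self-contained illustration of that idea (the three-dimensional vectors below are invented stand-ins, not real model embeddings), cosine similarity ranks targets by how closely their embeddings point in the same direction as the anchor's:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings from a representation model.
anchor = [0.9, 0.1, 0.0]  # e.g. "I like cats"
targets = {
    "I love felines":    [0.8, 0.2, 0.1],
    "Stocks fell today": [0.0, 0.1, 0.9],
}

# Score each target against the anchor and pick the most similar one.
scores = {text: cosine_similarity(anchor, vec) for text, vec in targets.items()}
best = max(scores, key=scores.get)
```

With real embeddings the vectors would have hundreds of dimensions, but the ranking logic is the same.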
The Command Line Tool is an alternative to our web interface that allows you to log in to your Cohere account, manage API keys, and run finetunes.
Learn some of the key concepts involved in language generation and understanding.
- Tokens are words or parts of words that our models take as input or produce as output.
- Embeddings are lists of numbers that represent a word or token and capture information about its meaning and context.
- Finetuning is the process of continuing to train a model to improve its performance for a specific task.
- Prompt Engineering is the process of tuning the input to a generation model to get the model to produce a useful output for a specific use case.
- Temperature is a value that controls the outputs of a generation model by tuning the degree of randomness involved in picking output tokens.
- Likelihood is a measure of how “expected” each token is in a piece of text.
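The last two concepts interact: a generation model assigns each candidate next token a score, and temperature reshapes those scores into probabilities before one token is sampled. A minimal sketch of that mechanism (the scores below are invented for illustration, not produced by a real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw token scores into probabilities. Lower temperature
    sharpens the distribution (more deterministic picks); higher
    temperature flattens it (more random picks)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate next tokens.
logits = [4.0, 2.0, 1.0]

cold = softmax_with_temperature(logits, 0.5)  # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: other tokens get a chance

# A token's likelihood is its probability under the model; taking the log
# gives the per-token log-likelihood (always <= 0, closer to 0 = more expected).
log_likelihood_top = math.log(cold[0])
```

Note that temperature changes how spread out the probabilities are, but never their order: the most likely token stays the most likely at any temperature.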