Use the Cohere platform to build natural language understanding and generation into your product with a few lines of code. Cohere’s large language models can solve a broad spectrum of natural language use cases, including classification, semantic search, paraphrasing, summarization, and content generation. Through finetuning, users can create massive models customized to their use case and trained on their data.
- Playground overview
The Playground is the best place to start experimenting with the models and the various endpoints.
- Using the Cohere API
Learn how to install the Python or Node.js SDKs.
- Intro to Large Language Models with Cohere
A brief visual overview of large language models and some of their applications.
- Model cards
Learn about Cohere’s generation and representation models, including their performance and intended use.
Text classification is one of the most useful applications of LLMs. The text classification with Embeddings article guides you through building a sentiment analysis text classifier if you have labeled data.
LLMs can also classify text using only a small number of examples (few-shot classification). For these cases, see the question classification and sentiment analysis articles. These two articles use two different modes of the Choose Best endpoint.
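The idea behind likelihood-based classification can be sketched in a few lines: score each candidate label by how likely the model finds the text followed by that label, then pick the highest-scoring one. The log-likelihoods below are made-up toy numbers standing in for real endpoint output.

```python
# Sketch of likelihood-based classification, the idea behind Choose Best.
# The scores below are toy numbers, not real model outputs; in practice
# a likelihood endpoint returns a score for each candidate completion.

def classify_by_likelihood(scores: dict[str, float]) -> str:
    """Pick the label whose completion the model scored as most likely."""
    return max(scores, key=scores.get)

# Hypothetical log-likelihoods for "This movie was fantastic!" followed
# by each candidate label:
toy_scores = {
    "positive": -1.2,
    "negative": -4.7,
    "neutral": -3.1,
}

print(classify_by_likelihood(toy_scores))  # -> positive
```

The same selection rule works whether the scores come from a few-shot prompt or a finetuned model; only the source of the likelihoods changes.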
LLMs can write coherent text like no technology before them could. This can be used for creative copy, but also for summarization and paraphrasing. We tune the inputs using prompt engineering techniques that get the model to produce useful outputs. Important text generation parameters include top-k and top-p.
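To make top-k and top-p concrete, here is a minimal sketch of both filters applied to a toy next-token probability distribution (the probabilities are invented for illustration): top-k keeps a fixed number of the most probable tokens, while top-p keeps the smallest set whose cumulative probability reaches the threshold.

```python
# Toy illustration of top-k and top-p (nucleus) filtering over a
# next-token probability distribution. The distribution is made up.

def top_k(probs: dict[str, float], k: int) -> dict[str, float]:
    """Keep only the k most probable tokens, then renormalize."""
    kept = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

def top_p(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest set of top tokens whose cumulative probability
    reaches p, then renormalize."""
    kept, cum = {}, 0.0
    for tok, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = prob
        cum += prob
        if cum >= p:
            break
    total = sum(kept.values())
    return {tok: q / total for tok, q in kept.items()}

probs = {"the": 0.5, "a": 0.3, "dog": 0.15, "xyzzy": 0.05}
print(sorted(top_k(probs, 2)))     # -> ['a', 'the']
print(sorted(top_p(probs, 0.75)))  # -> ['a', 'the']
```

Here both filters happen to keep the same two tokens, but they behave differently on flatter distributions: top-k always keeps exactly k tokens, while top-p keeps more tokens when probability mass is spread out.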
Learn how to use embeddings to build semantic search capabilities.
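The core of semantic search is ranking documents by the similarity of their embeddings to the query's embedding. In practice the vectors come from the Embed endpoint; the sketch below uses tiny made-up 3-d vectors so it runs without an API call, ranking by cosine similarity.

```python
import math

# Semantic search sketch: rank documents by cosine similarity between
# the query embedding and each document embedding. The 3-d vectors here
# are invented stand-ins for real embeddings from the Embed endpoint.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "What is your refund policy?": [0.9, 0.1, 0.0],
    "How do I reset my password?": [0.1, 0.9, 0.1],
    "Shipping times and costs":    [0.0, 0.2, 0.9],
}

# Pretend embedding of the query "Can I get my money back?"
query_vec = [0.85, 0.2, 0.05]

best = max(docs, key=lambda d: cosine(query_vec, docs[d]))
print(best)  # -> What is your refund policy?
```

Because similarity is computed on meaning-bearing vectors rather than keywords, the query matches the refund document even though they share no words.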
The Cohere Platform endpoints are:
- Generate: Generate text from a model in response to an input prompt
- Embed: Retrieve the sentence embeddings from a representation model
- Choose Best: Perform classification by using likelihood scores
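As a rough sketch of how the three endpoints fit together, the stub client below mirrors their inputs and outputs. The method names, signatures, and canned return values are illustrative assumptions, not the real SDK interface; actual calls go through the Cohere SDK or HTTP API with your API key.

```python
from dataclasses import dataclass

# A hypothetical stand-in client sketching the shape of the three
# endpoints. Signatures and return values here are assumptions for
# illustration only; real calls go through the Cohere SDK or HTTP API.

@dataclass
class StubCohereClient:
    api_key: str

    def generate(self, prompt: str) -> str:
        # Generate: produce text in response to an input prompt.
        # Canned continuation, not model output.
        return prompt + " ..."

    def embed(self, texts: list[str]) -> list[list[float]]:
        # Embed: return one embedding vector per input text.
        # Zero vectors stand in for real embeddings.
        return [[0.0, 0.0, 0.0] for _ in texts]

    def choose_best(self, query: str, options: list[str]) -> str:
        # Choose Best: return the option the model scores as most likely.
        # The stub just returns the first option.
        return options[0]

co = StubCohereClient(api_key="YOUR_API_KEY")
print(co.generate("Once upon a time"))                        # -> Once upon a time ...
print(len(co.embed(["hello", "world"])))                      # -> 2
print(co.choose_best("Sentiment?", ["positive", "negative"])) # -> positive
```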
The Command Line Tool is an alternative to our web interface that lets you log in to your Cohere account, manage API keys, and run finetunes.
The Responsible Use documentation aims to guide developers in using language models constructively and ethically. Toward this end, we've published guidelines for using our API safely, statistics on the environmental impact of pre-training our language models, and details of our processes around harm prevention.