
Overview

Use the Cohere platform to build natural language understanding and generation into your product with a few lines of code. Cohere’s large language models can solve a broad spectrum of natural language use cases, including classification, semantic search, paraphrasing, summarization, and content generation. Through finetuning, you can create custom models tailored to your use case and trained on your own data.

The models can be accessed through the playground, SDKs, and the CLI tool.

Getting started

Guides

Text Classification

Text classification is one of the most useful applications of Large Language Models (LLMs). They can classify text using a small number of examples (few-shot classification).

See the text classification with Classify tutorial, which demonstrates the Classify endpoint.

See the text classification with Embeddings tutorial, which demonstrates the Embed endpoint.
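To make the embeddings-based approach concrete, here is a minimal sketch of few-shot classification with embeddings: each label's examples are averaged into a centroid, and a new text is assigned the label of the nearest centroid by cosine similarity. The vectors below are hand-made toy stand-ins, not real output from an embedding model.

```python
from math import sqrt

# Toy 3-dimensional "embeddings" standing in for vectors an embedding model
# would return; each label has a few labeled examples (few-shot).
EXAMPLES = {
    "positive": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "negative": [[0.1, 0.9, 0.1], [0.0, 0.8, 0.2]],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid(vectors):
    # Average the example vectors for one label.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(embedding, examples):
    # Assign the label whose example centroid is most similar to the input.
    centroids = {label: centroid(vecs) for label, vecs in examples.items()}
    return max(centroids, key=lambda label: cosine(embedding, centroids[label]))

print(classify([0.85, 0.15, 0.05], EXAMPLES))  # prints "positive"
```

In practice the example and input vectors would come from a representation model; the centroid step is one simple choice, and a k-nearest-neighbors vote over individual examples works similarly.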

Text Generation

LLMs can generate coherent text, which is useful for creative copy as well as for summarization and paraphrasing. Prompt engineering techniques shape the inputs so the model produces useful outputs. Important text generation parameters include top-k and top-p.
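The top-k and top-p parameters both restrict which tokens the model may sample next: top-k keeps only the k most likely tokens, while top-p (nucleus sampling) keeps the smallest set of tokens whose cumulative probability reaches p. A minimal sketch of the filtering step, using a made-up toy distribution:

```python
def top_k_top_p_filter(probs, k=None, p=None):
    """Keep the k most likely tokens and/or the smallest set whose
    cumulative probability reaches p, then renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if k is not None:
        ranked = ranked[:k]  # top-k: keep only the k most likely tokens
    if p is not None:
        kept, total = [], 0.0
        for token, prob in ranked:
            kept.append((token, prob))
            total += prob
            if total >= p:  # top-p: stop once cumulative mass reaches p
                break
        ranked = kept
    norm = sum(prob for _, prob in ranked)
    return {token: prob / norm for token, prob in ranked}

# Toy next-token distribution (not real model output).
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}

filtered_k = top_k_top_p_filter(probs, k=2)   # keeps "the" and "a"
filtered_p = top_k_top_p_filter(probs, p=0.9) # keeps "the", "a", "cat"
```

After filtering, the next token is drawn from the renormalized distribution, e.g. with `random.choices(list(filtered_p), weights=filtered_p.values())`. Low k or p makes output more predictable; higher values make it more varied.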

Semantic Search

Learn how to use embeddings to build semantic search capabilities.
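At its core, embedding-based semantic search embeds the query and every document as vectors, then ranks documents by similarity to the query. A minimal sketch with hand-made toy vectors standing in for real embeddings:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy document embeddings; in practice each document would be embedded
# once with a representation model and stored in an index.
doc_embeddings = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "gift cards": [0.2, 0.2, 0.9],
}

def semantic_search(query_embedding, docs, top_n=2):
    # Rank documents by cosine similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine(query_embedding, docs[d]),
                    reverse=True)
    return ranked[:top_n]

print(semantic_search([0.85, 0.2, 0.05], doc_embeddings))
# prints ['refund policy', 'shipping times']
```

Because similarity is computed between meanings rather than keywords, a query like "money back" can still surface the refund document. For large corpora, the exhaustive scan above is replaced by an approximate nearest-neighbor index.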

Finetuning

Customize Cohere models to fit your use case by finetuning our baseline models with your own data. Learn about finetuning generation models in addition to finetuning representation models.

API Reference

The Cohere Platform endpoints are:

  • Generate: Generate text from a model in response to an input prompt
  • Embed: Retrieve the sentence embeddings from a representation model
  • Classify: Perform classification by using a few examples

The Command Line Tool is an alternative to the web interface; it lets you log in to your Cohere account, manage API keys, and run finetunes.

Learn

Learn some of the key concepts involved in language generation and understanding. These include tokens, embeddings, temperature, and likelihood.
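One of these concepts, temperature, can be illustrated directly: the model's raw scores (logits) are divided by the temperature before the softmax turns them into token probabilities, so lower temperatures sharpen the distribution toward the most likely token and higher temperatures flatten it. A sketch with made-up logits:

```python
from math import exp

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature, then apply a numerically
    # stable softmax (subtracting the max before exponentiating).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens
cool = softmax_with_temperature(logits, temperature=0.5)
warm = softmax_with_temperature(logits, temperature=2.0)
# cool concentrates more probability on the top token than warm does
```

Likelihood fits the same picture: it is the probability a model assigns to a given token sequence, computed from distributions like the ones above.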

Responsible Use

The Responsible Use documentation aims to guide developers in using language models constructively and ethically. Toward this end, we've published guidelines for using our API safely, statistics on the environmental impact of pre-training our language models, and our processes for harm prevention.