
Generation

Model Description#

The model described in this card generates text completions from an input prompt. It powers the Generate, Choose Best, and Likelihood endpoints.

Model architecture: Generative Pretrained Transformer
Model release date: April 2021
Model version: 0.1
Model sizes: Shrimp, Otter, Seal, Shark, Orca

Model Card Author(s): Cohere Safety Team & Responsibility Council

Training Dataset: coheretext-filtered dataset

View the API documentation.

Performance#

Performance has been evaluated on the following research benchmarks. The metrics below are reported for the Orca model.

| Model | Benchmark | Metric | Statistic |
| --- | --- | --- | --- |
| Orca | 1 Billion Word Language Model Benchmark | Perplexity | 35.8 |
| Orca | LAMBADA Task | Last-token Accuracy | 0.74 |
| Orca | StereoSet | Stereotype Score | 51.95 |
| Orca | StereoSet | Language Modeling Score | 80.92 |
| Orca | StereoSet | ICAT Score | 77.75 |

Model performance is currently reported only on English benchmarks. Multilingual benchmarks will be reported in the future.

Intended Use Case#

Generations may be used for interactive autocomplete, augmenting human writing processes, summarization, text rephrasing, and other text-to-text tasks in non-sensitive domains.

Outputs from Choose Best can be used for classification and analysis tasks, such as selecting the most likely completion for a sentence. Token likelihoods from Likelihood might be used to make fun claims about the “randomness” of your favorite author’s writing, or to explore the statistical differences between human-written and machine-generated text (see Gehrmann et al., 2019).
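As a rough illustration of the second idea, the sketch below summarises per-token log-likelihoods with a simple mean, in the spirit of Gehrmann et al., 2019. The helper and the numbers are purely illustrative assumptions and do not reflect actual output of the Likelihood endpoint.

```python
from statistics import mean

def average_log_likelihood(token_log_likelihoods):
    """Summarise per-token log-likelihoods (such as those returned by the
    Likelihood endpoint) with a single mean. Higher (less negative) values
    indicate text the model finds more predictable (cf. Gehrmann et al., 2019)."""
    return mean(token_log_likelihoods)

# Illustrative numbers only -- in practice these would come from the API.
human_passage_lls = [-2.1, -4.7, -3.3, -5.9, -2.8]
machine_passage_lls = [-1.2, -0.9, -1.5, -1.1, -0.8]

print("human passage:  ", average_log_likelihood(human_passage_lls))
print("machine passage:", average_log_likelihood(machine_passage_lls))
```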

Example usage of Generate

Example: Generations can be used for fun applications, such as generating a unique and inspiring message for a user each morning.
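Below is a minimal sketch of that idea, assuming the Python SDK. The model identifier, prompt, parameter values, and response fields shown are illustrative; consult the API documentation for the current interface.

```python
import cohere  # assumes the Cohere Python SDK is installed

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Illustrative request: the model identifier and parameter values are
# assumptions -- tune them for your application (see Sampling parameters below).
response = co.generate(
    model="otter",
    prompt="Write a short, uplifting message to start the day:",
    max_tokens=40,
    temperature=0.9,  # higher values produce more varied completions
)

print(response.generations[0].text)
```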

Usage Notes#

Always refer to the Usage Guidelines for guidance on using the Cohere API responsibly. Additionally, please consult the following model-specific usage notes:

Model Toxicity and Bias#

Language models learn the statistical relationships present in training datasets, which may include toxic language and historical biases along racial, gender, sexual-orientation, ability, linguistic, cultural, and intersectional dimensions. We recommend that developers using the Generation model take model toxicity and bias into account and design applications carefully to avoid the following:

  • Toxic Degeneration: Despite our ongoing efforts to remove harmful text from the training corpus, models may generate toxic text. This may include obscenities, sexually explicit content, and messages which mischaracterize or stereotype groups of people based on problematic historical biases perpetuated by internet communities (see Gehman et al., 2020 for more about toxic language model degeneration). We have put safeguards in place to avoid generating harmful text, but we highly recommend that developers build additional guardrails to ensure that text presented to end users is not toxic or harmful.

Max toxicity graph

Figure: The maximum toxicity observed in N Otter unconditional generations. After around 100 generations, at least one is likely to be toxic (toxicity > 0.5). Otter's performance is similar to or better than that of other state-of-the-art language models (see Gehman et al., 2020). We used the methods described in Gehman et al., 2020 to produce this graph, and we use the same toxicity measure: PerspectiveAPI's TOXICITY score. We acknowledge the bias inherent in using automated methods to rate the toxicity of text; this visualization is provided only to depict the general trend of toxic degeneration in the Generation model. A sketch of this measurement appears after the list below.

  • Reinforcing historical social biases: Language models capture problematic associations and stereotypes prominent on the internet and society at large. They should not be used to make decisions about individuals or the groups they belong to. For example, it is dangerous to use Generation model outputs in CV ranking systems due to known biases (Nadeem et al., 2020).
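The sketch below outlines the max-toxicity measurement described in the figure caption above. Both helpers are placeholders standing in for unconditional sampling from the model and PerspectiveAPI's TOXICITY rating; neither is a real API call.

```python
import random

def sample_unconditional_generation() -> str:
    """Placeholder for drawing one unconditional sample from the model."""
    return "..."  # illustrative only

def toxicity_score(text: str) -> float:
    """Placeholder for PerspectiveAPI's TOXICITY rating in [0, 1]."""
    return random.random()  # illustrative only

def max_toxicity(n_generations: int) -> float:
    # For a given N, report the worst-case (maximum) toxicity observed across
    # N unconditional generations, following the method of Gehman et al., 2020.
    return max(
        toxicity_score(sample_unconditional_generation())
        for _ in range(n_generations)
    )

for n in (10, 100, 1000):
    print(n, max_toxicity(n))
```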

Technical Notes#

  • Language limitations: This model provides completions for English text only.
  • Sampling parameters: Generation quality is highly dependent on the sampling parameters. Please consult the documentation for details about each parameter and tune the values used for your application. Parameters may require re-tuning upon a new model release.
  • Varying text length: Choose Best performance may vary when using options spanning a wide range of lengths.

Potential for Misuse#

Guided by the NAACL Ethics Review Questions, we describe the potential for misuse of the Generation model. By documenting adverse use cases, we aim to keep our team accountable for addressing them. It is our goal to prevent adversarial actors from leveraging the model for the following malicious ends.

Note: The examples in this section are not comprehensive and are only meant to illustrate our understanding of potential harms. The examples are meant to be more model-specific and tangible than those in the Usage Guidelines. Each of these malicious use cases violates our usage guidelines and Terms of Use, and Cohere reserves the right to restrict API access at any time.

  • Astroturfing: Generated text used to provide the illusion of discourse or expression of opinion by members of the public on social media or any other channel.
  • Generation of misinformation and other harmful content: The generation of news or other articles which inform public opinion, or any content which aims to incite hate or mischaracterize a group of people.
  • Reverse-engineering generated text detection systems: Using the Likelihood endpoint to reverse-engineer and evade detection methods. Likelihood information returned by this endpoint has been shown to be useful in detecting machine-generated text (Gehrmann et al., 2019). Text that goes unidentified by humans is more easily flagged by automated methods (Ippolito et al., 2020), and these methods may be an important line of defense against malicious actors.
  • Human-outside-the-loop: The generation of text about people, places, or events without a human-in-the-loop. This includes making decisions with real-world consequences based on human-written input, or posing as a human in any context where the end user is unaware that outputs are being generated by a language model.