Representation

Model Description

The model described in this card provides embedding representations of text. It powers the Embed and Classify endpoints.

Model architecture: Masked Language Model

Model release dates: See release notes

Models: small, large, multilingual-22-12

Model Card Author(s): Cohere Safety Team & Responsibility Council

Training Dataset: coheretext-unfiltered

Safety Benchmarks

Performance has been evaluated on the following safety-related research benchmarks. These metrics are reported for the Large model.

| Model | Benchmark | Metric | Statistic |
| --- | --- | --- | --- |
| Large | StereoSet | Stereotype Score | 65.8502 |
| Large | StereoSet | Language Modeling Score | 96.9383 |
| Large | SEAT | S3: EA/AA Names | - |
| Large | SEAT | S6: Male/Female, Career | 0.3322 |
| Large | SEAT | S7: Male/Female, Math/Arts | 0.4046 |
| Large | SEAT | S8: Male/Female, Science/Arts | - |
| Large | SEAT | S10: Young/Old | - |
| Large | SEAT | Angry Black Woman Stereotype - Terms | - |
| Large | SEAT | Heilman Double Bind - Male/Female, Achievement | - |
| Large | SEAT | Heilman Double Bind - Male/Female, Likeable | - |

For StereoSet Stereotype Score, 50 is best. For Language Modeling Score, 100 is best.

For SEAT tests, a dash "-" indicates no significant evidence of bias was found. Otherwise, a number indicates the bias effect size. We are researching how to expand our safety benchmarking to the multilingual context; multilingual benchmarks will be introduced in the future.
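
For reference, the SEAT effect size follows the WEAT formulation of Caliskan et al. (2017), applied to sentence embeddings: a standardized difference in association between two target sets and two attribute sets. The sketch below is a minimal NumPy illustration of that statistic; the helper names are ours, and each of X, Y, A, and B would be a list of embedding vectors returned by the Embed endpoint.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): how much more strongly w associates with attribute set A than B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """WEAT/SEAT effect size: standardized difference in association
    between target sets X and Y with respect to attribute sets A and B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)
```

A value near zero indicates little measurable association; in the table above, a dash means the test found no statistically significant effect.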

Intended Use Case

Embeddings may be used for purposes such as estimating the semantic similarity between two sentences, choosing the sentence most likely to follow another, analyzing sentiment, extracting topics, or categorizing user feedback. Performance of embeddings will vary across use cases depending on the language, dialect, subject matter, and other qualities of the represented text.
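
As a concrete illustration of the semantic-similarity use case, the sketch below embeds two sentences and compares them with cosine similarity. It assumes the Cohere Python SDK's co.embed call; the API key placeholder and the model name are illustrative, so adjust both to your own setup.

```python
import cohere
import numpy as np

co = cohere.Client("YOUR_API_KEY")  # assumes the Cohere Python SDK

# Embed two sentences in a single call; the model name is illustrative.
response = co.embed(
    texts=[
        "The refund arrived quickly and support was helpful.",
        "Customer service resolved my billing issue within a day.",
    ],
    model="small",
)

a, b = (np.array(e) for e in response.embeddings)

# Cosine similarity: values near 1.0 suggest the sentences are close in meaning.
similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"Semantic similarity: {similarity:.3f}")
```

The same embeddings can feed a downstream classifier for sentiment analysis or feedback categorization, subject to the bias and technical notes below.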

Usage Notes

Always refer to the Usage Guidelines for guidance on using the Cohere Platform responsibly. Additionally, please consult the following model-specific usage notes:

Model Bias

There is extensive research into the social biases learned by language model embeddings (Bolukbasi et al., 2016; Manzini et al., 2019; Kurita et al., 2019; Zhao et al., 2019). We recommend that developers using the Representation model take this into account when building downstream text classification systems. Embeddings may inadvertently capture inaccurate associations between groups of people and attributes such as sentiment or toxicity. Using embeddings in downstream text classifiers may lead to biased systems that are sensitive to demographic groups mentioned in the inputs. For example, it is dangerous to use embeddings in CV ranking systems due to known gender biases in the representations (Kurita et al., 2019).
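
One lightweight check before deploying such a classifier is a counterfactual test: score templated inputs that differ only in a demographic term and measure how far the predictions move. The sketch below is illustrative only; the embed function, the classifier, the templates, and the term lists are hypothetical placeholders for your own pipeline.

```python
# Hypothetical templates and demographic terms; extend these to the groups
# relevant to your application.
TEMPLATES = [
    "{} applied for the engineering role.",
    "{} wrote this product review.",
]
GROUP_TERMS = {"group_a": ["He", "John"], "group_b": ["She", "Mary"]}

def counterfactual_gap(embed, classifier):
    """Largest gap in mean predicted score across demographic substitutions.

    `embed` maps a list of texts to an array of embeddings; `classifier`
    exposes an sklearn-style predict_proba. Both are placeholders.
    """
    gaps = []
    for template in TEMPLATES:
        group_scores = {}
        for group, terms in GROUP_TERMS.items():
            texts = [template.format(term) for term in terms]
            scores = classifier.predict_proba(embed(texts))[:, 1]
            group_scores[group] = scores.mean()
        gaps.append(max(group_scores.values()) - min(group_scores.values()))
    return max(gaps)
```

A large gap is a signal to audit the training data, the label definitions, or the decision to use embeddings for that task at all.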

Technical Notes

  • English only: The small and large models provide meaningful representations for English text only.
  • Distributional shift: Embeddings capture the state of the training data at the time it was scraped. Downstream classifiers will need to be validated or retrained upon release of new embedding models to ensure that they are still serving their intended purpose.
  • Longer texts: Embed outputs are an aggregation of contextualized word embeddings, so the embeddings of longer inputs may not capture meaning accurately across the entire sequence length (one common mitigation is sketched after this list).
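
One common mitigation for the longer-texts limitation referenced above is to split a long document into shorter passages, embed each passage, and pool the results. This is a general practice rather than an officially recommended workflow; the co.embed call and the model name are assumptions to adapt to your own setup.

```python
import cohere
import numpy as np

co = cohere.Client("YOUR_API_KEY")  # assumes the Cohere Python SDK

def embed_long_text(text, max_words=128):
    """Split a long text into word-bounded chunks, embed each chunk,
    and mean-pool the chunk embeddings into a single vector."""
    words = text.split()
    chunks = [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
    response = co.embed(texts=chunks, model="small")  # model name is illustrative
    return np.mean(np.array(response.embeddings), axis=0)
```

Mean pooling is a crude aggregate, so validate whether it preserves enough signal for your downstream task before relying on it.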

Potential for Misuse

Guided by the NAACL Ethics Review Questions, we describe below the model-specific concerns around misuse of the Representation model. By documenting adverse use cases, we aim to encourage Cohere and its customers to prevent adversarial actors from leveraging our models for the following malicious ends.

Note: The examples in this section are not comprehensive; they are only meant to illustrate our understanding of potential harms, and to be more model-specific and tangible than those in the Usage Guidelines. Each of these malicious use cases violates our Usage Guidelines and Terms of Use, and Cohere reserves the right to restrict API access at any time.

  • Extraction of identity and demographic information: Using embeddings to classify the group identity or demographics of text authors or persons mentioned in a text. Group identification and private information should be consensually provided by individuals and not inferred by any automatic system.
  • Building purposefully opaque text classification systems: Algorithmic decisions that significantly affect people should be explainable to the persons affected; however, text classifications made using representations may not be explainable. A malicious actor may take advantage of this opacity to shield themselves from accountability for algorithmic decisions that may have disparate impact across demographic groups (Campolo and Crawford, 2020).
  • Human-outside-the-loop: Building downstream classifiers that serve as automated decision-making systems with real-world consequences for people, where those decisions are made without a human in the loop.