Harm Prevention

We aim to mitigate misuse of our models through the following measures:

  • Responsible AI Research: We’ve established a dedicated safety team that conducts research and development to build safer language models, and we’re investing in both technical (e.g., usage monitoring) and non-technical (e.g., a dedicated team reviewing use cases) measures to mitigate potential harms.
  • Cohere Responsibility Council: We’ve established an external advisory council made up of experts who work with us to ensure that the technology we’re building is deployed safely for everyone.
  • No online learning: The models that power these endpoints do not learn from user inputs. This prevents adversarial actors from poisoning the underlying models with harmful content.