Harm Prevention

We aim to mitigate adverse uses of our models through the following measures:

  • No online learning: The models used to power these endpoints do not learn from user inputs. This prevents them from being poisoned with harmful content by adversarial actors.
  • Responsible AI Research: We’ve established a dedicated safety team that conducts research and development on building safer language models, and we’re investing in both technical and non-technical measures to mitigate potential harms.
  • Cohere Responsibility Council: We’ve established an external advisory council of experts who work with us to ensure that the technology we’re building is deployed safely for everyone.