Release Notes September 29th 2023
We're Releasing co.chat() and the Chat + RAG Playground
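A minimal sketch of what a chat call with RAG-style documents might look like via the Python SDK. The client call requires an API key, so it is shown commented out; the document payload shape (free-form string fields such as `title` and `snippet`) is an assumption for illustration.

```python
# Hypothetical documents to ground the chat response (RAG-style).
# Field names here are illustrative, not a fixed schema.
documents = [
    {"title": "Release notes", "snippet": "co.chat() is now available."},
    {"title": "Playground", "snippet": "Try Chat + RAG in the playground."},
]

# With the Python SDK (not executed here, requires an API key):
# co = cohere.Client(api_key)
# response = co.chat(message="What did Cohere release?", documents=documents)
# print(response.text)

print(len(documents))  # 2 grounding documents in this sketch
```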
Release Notes August 8th 2023
Command Model Updated
The Command model has been updated. Expect improvements in reasoning and conversational capabilities.
Release Notes June 28th 2023
Command Model Updated
The Command model has been updated. Expect improved code and conversational capabilities, as well as reasoning skills on various tasks.
New Maximum Number of Input Documents for Rerank
We have updated how the maximum number of documents is calculated for co.rerank. The endpoint will return an error if len(documents) * max_chunks_per_doc > 10,000, where max_chunks_per_doc defaults to 10.
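The limit above can be checked client-side before sending a request. The helper functions below are a sketch (they are not part of the SDK); only the 10,000-chunk constraint and the default of 10 chunks per document come from the release note.

```python
# The rerank endpoint rejects requests where
# len(documents) * max_chunks_per_doc exceeds 10,000.
MAX_TOTAL_CHUNKS = 10_000

def max_documents(max_chunks_per_doc: int = 10) -> int:
    """Largest document count accepted for a given chunk setting."""
    return MAX_TOTAL_CHUNKS // max_chunks_per_doc

def within_limit(num_documents: int, max_chunks_per_doc: int = 10) -> bool:
    """True if a rerank request of this size would pass the server-side check."""
    return num_documents * max_chunks_per_doc <= MAX_TOTAL_CHUNKS

print(max_documents())     # 1000 documents at the default of 10 chunks each
print(within_limit(1001))  # False: 1001 * 10 exceeds 10,000
```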
Model Names Are Changing!
We are updating the names of our models to bring consistency and simplicity to our product offerings. As of today, you will be able to call Cohere’s models via our API and SDK with the new model names, and all of our documentation has been updated to reflect the new naming convention.
Multilingual Support for co.classify
The co.classify endpoint now supports the use of Cohere's multilingual embedding model. The multilingual-22-12 model is now a valid model input in the co.classify call.
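A hypothetical sketch of a classify request using the multilingual model. The client call itself needs an API key, so it is shown commented out; only the request payload is built and checked locally, and the example texts and labels are invented for illustration.

```python
# Hypothetical classify payload: a few labeled examples plus inputs to
# classify, with the new multilingual model selected by name.
payload = {
    "model": "multilingual-22-12",  # now a valid model for co.classify
    "inputs": ["Confirme mon vol", "Cancelar mi reserva"],
    "examples": [
        {"text": "Please confirm my flight", "label": "confirm"},
        {"text": "I want to cancel my booking", "label": "cancel"},
        {"text": "Confirmez ma réservation", "label": "confirm"},
        {"text": "Anular el billete", "label": "cancel"},
    ],
}

# With the Python SDK (not executed here, requires an API key):
# co = cohere.Client(api_key)
# response = co.classify(**payload)

print(payload["model"])
```

Note that the examples and the inputs need not share a language: the multilingual model embeds them all into one space.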
Command Model Nightly Available!
Nightly versions of our Command models are now available. This means that every week, you can expect the performance of command-nightly to improve as we continually retrain it.
Multilingual Text Understanding Model + Language Detection!
Cohere's multilingual text understanding model is now available! The multilingual-22-12 model can be used to semantically search within a single language, as well as across languages. Compared to keyword search, where you often need separate tokenizers and indices to handle different languages, the deployment of the multilingual model for search is trivial: no language-specific handling is needed — everything can be done by a single model within a single index.
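The single-index idea above can be illustrated with a toy ranking loop. In practice the vectors would come from embedding documents and queries with multilingual-22-12; here tiny hand-made placeholder vectors stand in so the cross-lingual ranking logic itself can run locally.

```python
import math

# One index holding documents in multiple languages, each with a
# placeholder embedding (real embeddings would come from the model).
index = {
    "The weather is sunny today": [0.9, 0.1, 0.0],
    "Il fait beau aujourd'hui": [0.85, 0.15, 0.05],  # French, same meaning
    "Stock prices fell sharply": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index, top_k=2):
    # One similarity function over one index, regardless of document language.
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query embedding for "nice weather" (placeholder vector): both weather
# sentences rank above the unrelated finance sentence.
print(search([0.88, 0.12, 0.02], index))
```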
Model Sizing Update + Improvements
Effective December 2, 2022, we will be consolidating our generative models, serving only our Medium (focused on speed) and X-Large (focused on quality) models. We will also be discontinuing support for our Medium embedding model.