Classify
This endpoint classifies text into one of several classes. It uses a few examples to create a classifier from a generative model. In the background, it constructs a few-shot classification prompt and uses it to classify the input texts you pass to it.
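As a rough illustration of this idea, the sketch below shows how labeled examples could be assembled into a few-shot prompt. The actual prompt format used by the endpoint is not exposed; the layout, example texts, and output indicator here are assumptions for illustration only.

```python
# Hypothetical sketch of how a few-shot classification prompt could be built.
# This only illustrates the concept of turning labeled examples into a prompt;
# it is not the endpoint's internal format.

def build_prompt(task_description, examples, output_indicator, query):
    parts = [task_description, ""]
    for ex in examples:                       # each example: {"text": ..., "label": ...}
        parts.append(ex["text"])
        parts.append(f"{output_indicator} {ex['label']}")
        parts.append("")
    parts.append(query)
    parts.append(output_indicator)            # the model completes the label after this
    return "\n".join(parts)

prompt = build_prompt(
    "Classify these movie reviews as positive reviews or negative reviews",
    [
        {"text": "this movie was great", "label": "positive review"},
        {"text": "this movie was bad", "label": "negative review"},
    ],
    "This review is",
    "one of the best films of the year",
)
print(prompt)
```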
Usage
Sample Response

{
  "results": [
    {
      "text": "this movie was great",
      "prediction": "positive review",
      "confidences": [
        { "option": "positive review", "confidence": 0.45 },
        { "option": "negative review", "confidence": 0.33 },
        { "option": "neutral review", "confidence": 0.22 }
      ]
    },
    {
      "text": "this movie was bad",
      "prediction": "negative review",
      "confidences": [
        { "option": "positive review", "confidence": 0.13 },
        { "option": "negative review", "confidence": 0.57 },
        { "option": "neutral review", "confidence": 0.30 }
      ]
    }
  ]
}
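For reference, a small helper like the sketch below (assuming the response body has already been parsed from JSON into a Python dict) shows how the fields of this response fit together.

```python
# Sketch: extract the predicted class and its confidence for each classified
# text from a Classify response shaped like the sample above.
def summarize_results(response: dict) -> list:
    summary = []
    for result in response["results"]:
        scores = {c["option"]: c["confidence"] for c in result["confidences"]}
        summary.append((result["text"], result["prediction"], scores[result["prediction"]]))
    return summary

# e.g. [('this movie was great', 'positive review', 0.45), ...]
```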
Request:

taskDescription (optional)
string A brief description providing context on the type of classification the model should perform (e.g. "Classify these movie reviews as positive reviews or negative reviews").
inputs
array of strings The texts to be classified.

examples
array of objects An array of examples to provide context to the model. Each example is a text string plus its label/class. Each unique label/class requires at least 5 examples associated with it. The values should be structured as [{label: ..., text: ...}, ...].
outputIndicator (optional)
string The output indicator part of the prompt. This is a string appended at the end of every example and input text. See Prompt Engineering for more details.
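Putting these parameters together, a request might look like the sketch below. The endpoint URL and authentication header are placeholders, and the example texts are invented; only the parameter names follow the descriptions above.

```python
import requests

# Hypothetical request to the Classify endpoint. URL and auth header are
# placeholders; JSON field names match the parameters documented above.
API_URL = "https://api.example.com/classify"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                       # placeholder key

body = {
    "taskDescription": "Classify these movie reviews as positive reviews or negative reviews",
    "outputIndicator": "This review is",
    "inputs": ["this movie was great", "this movie was bad"],
    # Each unique label needs at least 5 associated examples.
    "examples": [
        {"label": "positive review", "text": "the film was a joy to watch"},
        {"label": "positive review", "text": "loved every minute of it"},
        {"label": "positive review", "text": "a masterpiece of modern cinema"},
        {"label": "positive review", "text": "I would watch it again"},
        {"label": "positive review", "text": "wonderful acting and direction"},
        {"label": "negative review", "text": "the plot made no sense"},
        {"label": "negative review", "text": "I walked out halfway through"},
        {"label": "negative review", "text": "a waste of two hours"},
        {"label": "negative review", "text": "the acting was painfully flat"},
        {"label": "negative review", "text": "I would not recommend it"},
    ],
}

response = requests.post(
    API_URL,
    json=body,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.json())
```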
Response:

text
string The input text that was classified.
prediction
string The predicted class for the associated query.
confidences
list of objects An array containing each class and its confidence score according to the classifier. The score is computed as follows:
- Obtain the token likelihoods at the end of each constructed prompt, up to the point where the label is fully formed
- Average these likelihoods over the number of tokens in the label
- Normalize the resulting scores across all classes via a softmax
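A minimal numerical sketch of this scoring procedure, assuming we already have per-token likelihood values for each candidate label (the values below are invented for illustration):

```python
import math

def confidence_scores(label_token_likelihoods):
    """Sketch of the steps above: average each label's per-token likelihoods,
    then softmax-normalize the averages across all candidate labels."""
    averages = {
        label: sum(tokens) / len(tokens)
        for label, tokens in label_token_likelihoods.items()
    }
    exp = {label: math.exp(avg) for label, avg in averages.items()}
    total = sum(exp.values())
    return {label: value / total for label, value in exp.items()}

# Placeholder per-token likelihood values for three candidate labels.
print(confidence_scores({
    "positive review": [-0.4, -0.6, -0.5],
    "negative review": [-1.1, -0.9, -1.0],
    "neutral review":  [-1.5, -1.3, -1.6],
}))
```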