Create moderation
POST https://api.fastapi.ai/v1/moderations
Classifies whether text and/or image inputs are potentially harmful. Learn more in the moderation guide.
Request body
input string or array Required
Input (or inputs) to classify. Can be a single string, an array of strings, or an array of multi-modal input objects similar to other models (see the example after this parameter list).
string
A string of text to classify for moderation.
array
An array of strings to classify for moderation.
array
An array of multi-modal inputs to the moderation model.
object
An object describing an image to classify.
type string Required
Always image_url.
image_url object Required
Contains either an image URL or a data URL for a base64 encoded image.
url string Required
Either a URL of the image or the base64 encoded image data.
object
An object describing text to classify.
type string Required
Always text.
text string Required
A string of text to classify.
model string Optional Defaults to omni-moderation-latest
The content moderation model you would like to use. Learn more in the moderation guide, and learn about available models here.
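As a quick illustration of the request body, the hedged sketch below passes the input parameter as an array of strings (the placeholder strings are ours); each input should receive its own entry in the results array of the response.

# illustrative request: classify two strings in one call
curl https://api.fastapi.ai/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $FAST_API_KEY" \
  -d '{
    "input": [
      "...first text to classify goes here...",
      "...second text to classify goes here..."
    ]
  }'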
Returns
A moderation object.
The moderation object
Represents whether a given input is potentially harmful.
id string
The unique identifier for the moderation request.
model string
The model used to generate the moderation results.
results array
A list of moderation objects.
flagged boolean
Whether any of the below categories are flagged.
categories object
A list of the categories, and whether they are flagged or not.
hate boolean
Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.
hate/threatening boolean
Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
harassment boolean
Content that expresses, incites, or promotes harassing language towards any target.
harassment/threatening boolean
Harassment content that also includes violence or serious harm towards any target.
illicit boolean or null
Content that includes instructions or advice that facilitate the planning or execution of wrongdoing, or that gives advice or instruction on how to commit illicit acts. For example, "how to shoplift" would fit this category.
illicit/violent boolean or null
Content that includes instructions or advice that facilitate the planning or execution of wrongdoing that also includes violence, or that gives advice or instruction on the procurement of any weapon.
self-harm boolean
Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
self-harm/intent boolean
Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
self-harm/instructions boolean
Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
sexual boolean
Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
sexual/minors boolean
Sexual content that includes an individual who is under 18 years old.
violence boolean
Content that depicts death, violence, or physical injury.
violence/graphic boolean
Content that depicts death, violence, or physical injury in graphic detail.
category_scores object
A list of the categories along with their scores as predicted by the model.
hate number
The score for the category 'hate'.
hate/threatening number
The score for the category 'hate/threatening'.
harassment number
The score for the category 'harassment'.
harassment/threatening number
The score for the category 'harassment/threatening'.
illicit number
The score for the category 'illicit'.
illicit/violent number
The score for the category 'illicit/violent'.
self-harm number
The score for the category 'self-harm'.
self-harm/intent number
The score for the category 'self-harm/intent'.
self-harm/instructions number
The score for the category 'self-harm/instructions'.
sexual number
The score for the category 'sexual'.
sexual/minors number
The score for the category 'sexual/minors'.
violence number
The score for the category 'violence'.
violence/graphic number
The score for the category 'violence/graphic'.
category_applied_input_types object
A list of the categories along with the input type(s) that the score applies to.
hate array
The applied input type(s) for the category 'hate'.
hate/threatening array
The applied input type(s) for the category 'hate/threatening'.
harassment array
The applied input type(s) for the category 'harassment'.
harassment/threatening array
The applied input type(s) for the category 'harassment/threatening'.
illicit array
The applied input type(s) for the category 'illicit'.
illicit/violent array
The applied input type(s) for the category 'illicit/violent'.
self-harm array
The applied input type(s) for the category 'self-harm'.
self-harm/intent array
The applied input type(s) for the category 'self-harm/intent'.
self-harm/instructions array
The applied input type(s) for the category 'self-harm/instructions'.
sexual array
The applied input type(s) for the category 'sexual'.
sexual/minors array
The applied input type(s) for the category 'sexual/minors'.
violence array
The applied input type(s) for the category 'violence'.
violence/graphic array
The applied input type(s) for the category 'violence/graphic'.
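As a hedged sketch of consuming these fields (the field names come from the reference above; the jq pipeline and the 0.5 threshold are our own illustration, not part of the API), the command below classifies one text input and then extracts the overall verdict, the names of the flagged categories, and any category whose score exceeds the threshold:

# illustrative only: classify one input, then post-process the response with jq
curl https://api.fastapi.ai/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $FAST_API_KEY" \
  -d '{ "input": "...text to classify goes here..." }' \
  | jq '{
      flagged: .results[0].flagged,
      flagged_categories: [.results[0].categories | to_entries[] | select(.value) | .key],
      high_scores: [.results[0].category_scores | to_entries[] | select(.value > 0.5) | .key]
    }'

A full example of the moderation object that such a pipeline operates on is shown below.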
{
"id": "modr-0d9740456c391e43c445bf0f010940c7",
"model": "omni-moderation-latest",
"results": [
{
"flagged": true,
"categories": {
"harassment": true,
"harassment/threatening": true,
"sexual": false,
"hate": false,
"hate/threatening": false,
"illicit": false,
"illicit/violent": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"self-harm": false,
"sexual/minors": false,
"violence": true,
"violence/graphic": true
},
"category_scores": {
"harassment": 0.8189693396524255,
"harassment/threatening": 0.804985420696006,
"sexual": 1.573112165348997e-6,
"hate": 0.007562942636942845,
"hate/threatening": 0.004208854591835476,
"illicit": 0.030535955153511665,
"illicit/violent": 0.008925306722380033,
"self-harm/intent": 0.00023023930975076432,
"self-harm/instructions": 0.0002293869201073356,
"self-harm": 0.012598046106750154,
"sexual/minors": 2.212566909570261e-8,
"violence": 0.9999992735124786,
"violence/graphic": 0.843064871157054
},
"category_applied_input_types": {
"harassment": [
"text"
],
"harassment/threatening": [
"text"
],
"sexual": [
"text",
"image"
],
"hate": [
"text"
],
"hate/threatening": [
"text"
],
"illicit": [
"text"
],
"illicit/violent": [
"text"
],
"self-harm/intent": [
"text",
"image"
],
"self-harm/instructions": [
"text",
"image"
],
"self-harm": [
"text",
"image"
],
"sexual/minors": [
"text"
],
"violence": [
"text",
"image"
],
"violence/graphic": [
"text",
"image"
]
}
}
]
}

Example
Request
curl https://api.fastapi.ai/v1/moderations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $FAST_API_KEY" \
-d '{
"input": "I want to kill them."
}'

curl https://api.fastapi.ai/v1/moderations \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $FAST_API_KEY" \
-d '{
"model": "omni-moderation-latest",
"input": [
{ "type": "text", "text": "...text to classify goes here..." },
{
"type": "image_url",
"image_url": {
"url": "https://example.com/image.png"
}
}
]
}'

Response
{
"id": "modr-AB8CjOTu2jiq12hp1AQPfeqFWaORR",
"model": "text-moderation-007",
"results": [
{
"flagged": true,
"categories": {
"sexual": false,
"hate": false,
"harassment": true,
"self-harm": false,
"sexual/minors": false,
"hate/threatening": false,
"violence/graphic": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"harassment/threatening": true,
"violence": true
},
"category_scores": {
"sexual": 0.000011726012417057063,
"hate": 0.22706663608551025,
"harassment": 0.5215635299682617,
"self-harm": 2.227119921371923e-6,
"sexual/minors": 7.107352217872176e-8,
"hate/threatening": 0.023547329008579254,
"violence/graphic": 0.00003391829886822961,
"self-harm/intent": 1.646940972932498e-6,
"self-harm/instructions": 1.1198755256458526e-9,
"harassment/threatening": 0.5694745779037476,
"violence": 0.9971134662628174
}
}
]
}

{
"id": "modr-0d9740456c391e43c445bf0f010940c7",
"model": "omni-moderation-latest",
"results": [
{
"flagged": true,
"categories": {
"harassment": true,
"harassment/threatening": true,
"sexual": false,
"hate": false,
"hate/threatening": false,
"illicit": false,
"illicit/violent": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"self-harm": false,
"sexual/minors": false,
"violence": true,
"violence/graphic": true
},
"category_scores": {
"harassment": 0.8189693396524255,
"harassment/threatening": 0.804985420696006,
"sexual": 1.573112165348997e-6,
"hate": 0.007562942636942845,
"hate/threatening": 0.004208854591835476,
"illicit": 0.030535955153511665,
"illicit/violent": 0.008925306722380033,
"self-harm/intent": 0.00023023930975076432,
"self-harm/instructions": 0.0002293869201073356,
"self-harm": 0.012598046106750154,
"sexual/minors": 2.212566909570261e-8,
"violence": 0.9999992735124786,
"violence/graphic": 0.843064871157054
},
"category_applied_input_types": {
"harassment": [
"text"
],
"harassment/threatening": [
"text"
],
"sexual": [
"text",
"image"
],
"hate": [
"text"
],
"hate/threatening": [
"text"
],
"illicit": [
"text"
],
"illicit/violent": [
"text"
],
"self-harm/intent": [
"text",
"image"
],
"self-harm/instructions": [
"text",
"image"
],
"self-harm": [
"text",
"image"
],
"sexual/minors": [
"text"
],
"violence": [
"text",
"image"
],
"violence/graphic": [
"text",
"image"
]
}
}
]
}
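For the multi-modal request above, one hedged option (assuming jq is installed; the filter is our own illustration, not part of the API) is to select the entries of category_applied_input_types whose value includes "image", which shows which categories the image input contributed to:

# illustrative only: re-run the multi-modal request and filter the response with jq
curl https://api.fastapi.ai/v1/moderations \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $FAST_API_KEY" \
  -d '{
    "model": "omni-moderation-latest",
    "input": [
      { "type": "text", "text": "...text to classify goes here..." },
      { "type": "image_url", "image_url": { "url": "https://example.com/image.png" } }
    ]
  }' \
  | jq '.results[0].category_applied_input_types
        | to_entries[]
        | select(.value | index("image") != null)
        | .key'

Against the example response above, this would print the categories listed with "image", such as "sexual" and "violence".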