Get started

The Safe Media Pro API classifies your text and images to detect whether they contain hate speech. You can also customize your own rules, keywords, and levels.

To use this API, you need an API key. Please contact us at safemediaproofficial@gmail.com to request one.

API Endpoint
/api/classify

REST API

To get a Safe Media Pro classification, make a POST request to the following URL:
/api/classify

BODY PARAMETERS

Field Type Description
text String The text you want to classify.
image String The image you want to classify, encoded as a base64 data URI.
class_dict Dictionary Your custom categories and sub-categories, each with an optional list of keywords, in the format shown below. Use this when you want matches reported under a particular category.
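The image value is expected as a base64 data URI, as in the Parameters example below. Here is a minimal sketch of building that value in Python; the file name and the image/png MIME type are placeholders for your own image:

import base64

# Read a local image and encode it as a base64 data URI, matching the
# "image" value shown in the Parameters example below.
with open("example.png", "rb") as f:  # placeholder file name
    encoded = base64.b64encode(f.read()).decode("ascii")

image_value = "data:image/png;base64," + encoded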
                            
                                
Parameters (the base64 string in this example encodes an image of two boys playing football):
{
    "text": "Earth is oval in shape",
    "image": "data:image/png;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/...",
    "class_dict": {
        "category_name_example": {
            "sub_cat_name_example": [],
            "colors": ["yellow", "red"]
        },
        "American": {
            "language": ["american", "canadian", "mexican"],
            "resident": ["russian", "german", "french"]
        }
    }
}
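Here is a minimal sketch of sending the request above with Python's requests library. The host placeholder and the x-api-key header name are assumptions; use the base URL and authentication scheme provided with your API key.

import requests

API_URL = "https://<your-api-host>/api/classify"  # placeholder host
HEADERS = {"x-api-key": "<your-api-key>"}         # assumed header name

payload = {
    "text": "Earth is oval in shape",
    "image": image_value,  # the base64 data URI built in the sketch above
    "class_dict": {
        "American": {
            "language": ["american", "canadian", "mexican"],
            "resident": ["russian", "german", "french"]
        }
    }
}

response = requests.post(API_URL, json=payload, headers=HEADERS)
print(response.status_code, response.json())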
                            
                                
Response:
{
    "text": "Earth is oval in shape",
    "hate_speech_type": "Neutral",
    "hate_speech_subtype": "Neutral",
    "image_category": "Neutral",
    "image_description": "The image you've provided contains a depiction of two boys playing football.",
    "severity": "low",
    "language": "en-EN"
}

Errors

The Classify API returns the following status codes:

Status Code Meaning
422 Unprocessable Entity
200 Successful Response
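A short sketch of handling these status codes, continuing the request example above. The fields read on success come from the Response example; the body of a 422 response may vary, so it is printed as-is here.

if response.status_code == 200:
    result = response.json()
    print(result["hate_speech_type"], result["severity"])
elif response.status_code == 422:
    # The request body could not be processed; inspect the response for details.
    print("Unprocessable Entity:", response.text)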