SafetyGates

API Documentation

Content moderation with sub-millisecond response times

Quick Start

Send text, get a safety verdict. It's that simple.

cURL

curl -X POST https://sg-api.cyclecore.ai/v1/classify \
  -H "Content-Type: application/json" \
  -H "X-API-Key: demo" \
  -d '{"text": "Have a great day!"}'

Response

{
  "safe": true,
  "confidence": 0.92,
  "categories": {
    "toxic": 0.08,
    "spam": 0.03,
    "hate": 0.05,
    "nsfw": 0.02,
    "harassment": 0.11
  },
  "latency_us": 1234.5
}

Node.js

npm install safetygates

const SafetyGates = require('safetygates');
const client = new SafetyGates('YOUR_API_KEY');

const result = await client.classify('Have a great day!');
console.log(result.safe);       // true
console.log(result.confidence); // 0.92

// Or use the convenience method
if (await client.isSafe(userMessage)) {
  // Allow the message
}

Python

import requests

response = requests.post(
    'https://sg-api.cyclecore.ai/v1/classify',
    headers={'X-API-Key': 'YOUR_API_KEY'},
    json={'text': 'Have a great day!'}
)
result = response.json()
print(result['safe'])  # True

Authentication

Include your API key in the X-API-Key header:

X-API-Key: sg_live_xxxxxxxxxxxxx

Or use Bearer token format:

Authorization: Bearer sg_live_xxxxxxxxxxxxx
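
For example, with the built-in fetch in Node 18+ (a minimal sketch; either header style authenticates the request):

const response = await fetch('https://sg-api.cyclecore.ai/v1/classify', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': 'sg_live_xxxxxxxxxxxxx',
    // Equivalent alternative:
    // 'Authorization': 'Bearer sg_live_xxxxxxxxxxxxx',
  },
  body: JSON.stringify({ text: 'Have a great day!' }),
});
const result = await response.json();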

Demo Key

Use demo as your API key to test without signing up. Limited to 1,000 requests/day.

Endpoints

Base URL: https://sg-api.cyclecore.ai

POST /v1/classify

Classify text for safety. Returns a verdict with category scores.

Request Body

Parameter    Type     Description
text         string   Required. Text to classify.
strictness   string   Optional. strict, balanced (default), or permissive. Paid tiers only.

Response

Field        Type     Description
safe         boolean  true if the content is safe
confidence   float    Confidence score (0-1)
categories   object   Scores for each category
lang         string   Detected language (ISO 639-1 code)
strictness   string   Strictness level used
latency_us   float    Processing time in microseconds

Example

{
  "safe": false,
  "confidence": 0.87,
  "categories": {
    "toxic": 0.87,
    "spam": 0.05,
    "hate": 0.23,
    "nsfw": 0.12,
    "harassment": 0.65
  },
  "lang": "en",
  "strictness": "balanced",
  "latency_us": 1456.7
}

POST /v1/classify/batch

Classify multiple texts in a single request (up to 10,000).

Request Body

Parameter    Type     Description
texts        array    Required. Array of texts to classify (up to 10,000).
strictness   string   Optional. strict, balanced (default), or permissive. Paid tiers only.

Response

{
  "results": [
    { "safe": true, "categories": { "toxic": 0.08, "spam": 0.02 }, "lang": "en" },
    { "safe": false, "categories": { "toxic": 0.91, "hate": 0.34 }, "lang": "en" }
  ],
  "total_latency_us": 2345.6,
  "items_per_second": 852.4
}
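
Without the SDK, a batch call is a single POST to /v1/classify/batch. A minimal sketch using Node 18+'s built-in fetch (variable names are illustrative):

const res = await fetch('https://sg-api.cyclecore.ai/v1/classify/batch', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': 'YOUR_API_KEY',
  },
  body: JSON.stringify({ texts: ['Message 1', 'Message 2'] }),
});
// One result per input text, as in the example response above.
const { results } = await res.json();
const flagged = results.filter((r) => !r.safe);
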
GET /health

Health check endpoint. No authentication required.

{ "status": "ok", "gates_loaded": 25 }

Error Codes

Code   Description
400    Bad request: invalid parameters
401    Unauthorized: missing or invalid API key
429    Rate limit exceeded
500    Internal server error
503    Service unavailable

Error Response

{
  "detail": "API key required. Use X-API-Key header or Authorization: Bearer."
}
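
On 429 (and 503), a client can back off and retry. A minimal sketch; the retry count and delays are illustrative assumptions, not documented API behavior:

async function classifyWithRetry(text, apiKey, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch('https://sg-api.cyclecore.ai/v1/classify', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
      body: JSON.stringify({ text }),
    });
    if (res.ok) return res.json();
    if (res.status !== 429 && res.status !== 503) {
      // Error responses carry a "detail" field (see above).
      const { detail } = await res.json();
      throw new Error(`SafetyGates error ${res.status}: ${detail}`);
    }
    // Exponential backoff: 1s, 2s, 4s, ... (illustrative policy)
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** i));
  }
  throw new Error('SafetyGates: retries exhausted');
}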

Rate Limits

Tier         Price     Requests/Month
Free         $0        1,000
Starter      $19/mo    100,000
Pro          $49/mo    1,000,000
Enterprise   Custom    Unlimited

Batch requests count as 1 request regardless of batch size.
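
Because batching is so cheap against the quota, chunking large workloads through /v1/classify/batch stretches any tier. A sketch using the Node SDK; the 10,000-item chunk size matches the batch limit above:

const SafetyGates = require('safetygates');
const client = new SafetyGates('YOUR_API_KEY');

// One request against the monthly quota per 10,000 texts.
async function classifyAll(texts, chunkSize = 10000) {
  const results = [];
  for (let i = 0; i < texts.length; i += chunkSize) {
    const batch = await client.classifyBatch(texts.slice(i, i + chunkSize));
    results.push(...batch.results);
  }
  return results;
}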

Strictness Presets

Paid tiers can adjust moderation sensitivity using the strictness parameter.

Preset       Behavior                         Use Case
strict       Lower threshold, more flags      Child safety, regulated industries
balanced     Default behavior                 General use
permissive   Higher threshold, fewer flags    Creative writing, mature content

The free tier always uses balanced; requests that specify another value fall back to balanced.

Example

curl -X POST https://sg-api.cyclecore.ai/v1/classify \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Your content here", "strictness": "strict"}'

SDKs

Node.js

npm install safetygates

Python (coming soon)

pip install safetygates

SDK Example

const SafetyGates = require('safetygates');
const client = new SafetyGates('YOUR_API_KEY');

// Single classification
const result = await client.classify('Check this message');
if (!result.safe) {
  console.log('Blocked:', result.categories);
}

// Batch classification
const batch = await client.classifyBatch([
  'Message 1',
  'Message 2',
  'Message 3'
]);
console.log(`Processed ${batch.results.length} messages`);

// Quick safety check
if (await client.isSafe(userInput)) {
  // Process the message
}