API Documentation

Everything you need to integrate CleanFlux-AI into your application. Get started in minutes with our simple REST API.

Authentication

All API requests require authentication using a Bearer token. Include your API key in the Authorization header of every request.

Authorization: Bearer YOUR_API_KEY
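As a sketch of what this looks like outside the SDKs, here is how the Authorization and Content-Type headers might be attached to a raw request with Python's standard library. The CLEANFLUX_API_KEY environment variable is an assumption borrowed from the SDK examples below:

```python
import json
import os
import urllib.request

# Build an authenticated POST request to the /clean endpoint.
# CLEANFLUX_API_KEY is assumed to hold your API key.
def build_request(text: str) -> urllib.request.Request:
    api_key = os.environ.get("CLEANFLUX_API_KEY", "YOUR_API_KEY")
    return urllib.request.Request(
        "https://api.cleanflux.ai/clean",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request is then a matter of passing it to `urllib.request.urlopen` (or any HTTP client of your choice).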


Quick Start

Make your first API call in under a minute. Here's a simple example to clean some text:

curl -X POST https://api.cleanflux.ai/clean \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello   world!!!"}'

Rate Limits

Rate limits vary by plan. Requests beyond your limit return an HTTP 429 (Too Many Requests) status code.

Plan      Requests/Month   Requests/Minute
Free      1,000            10
Starter   50,000           60
Pro       250,000          200
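When you hit the per-minute limit, a common pattern is to retry with exponential backoff. A minimal sketch, assuming `send` is any callable that performs the request and returns a (status_code, body) pair:

```python
import time

# Retry a request-sending callable when it signals HTTP 429,
# doubling the delay between attempts. `send` is a placeholder
# for your actual HTTP call.
def with_backoff(send, max_retries=3, base_delay=1.0):
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))
    return status, body
```

Honoring a `Retry-After` header, if the API returns one, would be a further refinement.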

Endpoints

POST /clean

Clean and normalize text with optional AI enhancement

Parameters

text (string, required): The text to clean
mode (string): "rule-based" (default) or "ai-powered"
options (object): Additional cleaning options
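To give a feel for what rule-based cleaning involves, here is an illustrative sketch that collapses repeated whitespace and runs of repeated punctuation. This is only a toy approximation; the service's actual rule set is not specified here:

```python
import re

# Toy approximation of rule-based cleaning:
# collapse whitespace runs and repeated !/? punctuation.
def clean_rule_based(text: str) -> str:
    text = re.sub(r"\s+", " ", text).strip()   # "Hello   world" -> "Hello world"
    text = re.sub(r"([!?])\1+", r"\1", text)   # "!!!" -> "!"
    return text
```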

POST /normalize

Normalize text case, whitespace, and Unicode characters

Parameters

text (string, required): The text to normalize
case (string): "lower", "upper", "title", or "sentence"
unicode (boolean): Normalize Unicode characters
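A minimal local sketch of what the `case` and `unicode` parameters do, assuming "sentence" means capitalizing only the first character and Unicode normalization is NFKC-style (the service's exact normalization form is not documented here):

```python
import unicodedata

# Illustrative mirror of the /normalize parameters.
def normalize(text: str, case: str = "lower", unicode: bool = False) -> str:
    if unicode:
        # NFKC folds compatibility characters, e.g. the "fi" ligature.
        text = unicodedata.normalize("NFKC", text)
    if case == "lower":
        return text.lower()
    if case == "upper":
        return text.upper()
    if case == "title":
        return text.title()
    if case == "sentence":
        return text[:1].upper() + text[1:].lower()
    return text
```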

POST /extract-urls

Extract and analyze URLs from text

Parameters

text (string, required): The text to extract URLs from
validate (boolean): Validate extracted URLs
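The shape of the extraction result can be sketched locally with a simple regex plus `urllib.parse`; the fields mirror the /extract-urls response shown further down, but the regex is only a rough stand-in for whatever the service uses:

```python
import re
from urllib.parse import urlparse

# Pull http(s) URLs out of free text and report fields
# matching the /extract-urls response shape.
def extract_urls(text: str):
    results = []
    for url in re.findall(r"https?://[^\s]+", text):
        parsed = urlparse(url)
        results.append({
            "url": url,
            "domain": parsed.netloc,
            "protocol": parsed.scheme,
            "isValid": bool(parsed.netloc),
        })
    return results
```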

POST /remove-profanity

Detect and filter profanity from text

Parameters

text (string, required): The text to filter
mode (string): "mask" (default) or "remove"
maskChar (string): Character to use for masking (default: *)
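The "mask" mode can be sketched as replacing each flagged word with the mask character repeated to the word's length. The word list here is a placeholder; the service maintains its own detection list:

```python
import re

# Toy sketch of mode="mask": each listed word becomes
# mask_char repeated to its length, case-insensitively.
def mask_profanity(text: str, words, mask_char: str = "*") -> str:
    for word in words:
        pattern = re.compile(re.escape(word), re.IGNORECASE)
        text = pattern.sub(lambda m: mask_char * len(m.group()), text)
    return text
```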

POST /metadata

Extract text metadata and analytics

Parameters

text (string, required): The text to analyze
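The basic counts in the /metadata response can be approximated locally as follows. Tokenization here (splitting hyphenated words, sentence boundaries on ./!/?) is an assumption; readability scoring and topWords are omitted:

```python
import re

# Approximate the basic counts from the /metadata response.
def text_metadata(text: str) -> dict:
    words = re.findall(r"[A-Za-z0-9']+", text)  # hyphens split words
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "wordCount": len(words),
        "charCount": len(text),
        "sentenceCount": len(sentences),
        "avgWordLength": round(sum(len(w) for w in words) / len(words), 1)
                         if words else 0,
    }
```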

Code Examples

cURL

# Clean text with rule-based mode
curl -X POST https://api.cleanflux.ai/clean \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Hello!!!   Visit https://example.com 😀",
    "mode": "rule-based"
  }'

# Clean with AI-powered mode
curl -X POST https://api.cleanflux.ai/clean \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "this txt is rly messd up n needs fixin",
    "mode": "ai-powered"
  }'

# Extract URLs
curl -X POST https://api.cleanflux.ai/extract-urls \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Check out https://cleanflux.ai and https://docs.cleanflux.ai"}'

# Remove profanity
curl -X POST https://api.cleanflux.ai/remove-profanity \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "This is some text with bad words", "mode": "mask"}'

JavaScript / TypeScript

import CleanFlux from "cleanflux-ai";

// Initialize the client
const client = new CleanFlux({
  apiKey: process.env.CLEANFLUX_API_KEY
});

// Rule-based cleaning (fast, included in free tier)
const cleaned = await client.clean({
  text: "Hello!!!   Visit https://example.com 😀",
  mode: "rule-based"
});
console.log(cleaned.output);
// → "Hello! Visit https://example.com"

// AI-powered cleaning (context-aware, premium feature)
const aiCleaned = await client.clean({
  text: "this txt is rly messd up n needs fixin",
  mode: "ai-powered"
});
console.log(aiCleaned.output);
// → "This text is really messed up and needs fixing."

// Normalize text
const normalized = await client.normalize({
  text: "HELLO WORLD",
  case: "sentence"
});
console.log(normalized.output);
// → "Hello world"

// Extract URLs
const urls = await client.extractUrls({
  text: "Visit https://cleanflux.ai for more info"
});
console.log(urls.urls);
// → [{ url: "https://cleanflux.ai", domain: "cleanflux.ai", ... }]

// Remove profanity
const filtered = await client.removeProfanity({
  text: "Some text with profanity",
  mode: "mask"
});

// Get text metadata
const metadata = await client.metadata({
  text: "CleanFlux-AI is a powerful text processing API."
});
console.log(metadata);
// → { wordCount: 8, charCount: 47, ... }

Python

from cleanflux import CleanFlux

# Initialize the client
client = CleanFlux(api_key="your-api-key")

# Rule-based cleaning
result = client.clean(
    text="Hello!!!   Visit https://example.com 😀",
    mode="rule-based"
)
print(result.output)
# → "Hello! Visit https://example.com"

# AI-powered cleaning
ai_result = client.clean(
    text="this txt is rly messd up n needs fixin",
    mode="ai-powered"
)
print(ai_result.output)
# → "This text is really messed up and needs fixing."

# Normalize text
normalized = client.normalize(
    text="HELLO WORLD",
    case="sentence"
)
print(normalized.output)
# → "Hello world"

# Extract URLs
urls = client.extract_urls(
    text="Visit https://cleanflux.ai for more info"
)
print(urls.urls)
# → [{"url": "https://cleanflux.ai", "domain": "cleanflux.ai", ...}]

# Remove profanity
filtered = client.remove_profanity(
    text="Some text with profanity",
    mode="mask"
)

# Get text metadata
metadata = client.metadata(
    text="CleanFlux-AI is a powerful text processing API."
)
print(metadata)
# → {"wordCount": 8, "charCount": 47, ...}

Response Formats

/clean Response

{
  "success": true,
  "output": "Hello! Visit https://example.com",
  "metadata": {
    "originalLength": 42,
    "cleanedLength": 33,
    "mode": "rule-based",
    "processingTime": "3ms"
  }
}
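If you are calling the API without an SDK, unpacking the envelope above is straightforward. A small sketch, assuming `success` is false when a request fails (the exact error shape is not shown in this section):

```python
import json

# Unpack a /clean response body like the example above.
def handle_clean_response(body: str) -> str:
    payload = json.loads(body)
    if not payload.get("success"):
        raise RuntimeError("cleaning failed")
    return payload["output"]
```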

/extract-urls Response

{
  "success": true,
  "urls": [
    {
      "url": "https://cleanflux.ai",
      "domain": "cleanflux.ai",
      "protocol": "https",
      "isValid": true
    }
  ],
  "count": 1
}

/metadata Response

{
  "success": true,
  "metadata": {
    "wordCount": 8,
    "charCount": 47,
    "sentenceCount": 1,
    "avgWordLength": 5.1,
    "readabilityScore": 72,
    "topWords": [
      { "word": "cleanflux", "count": 1 },
      { "word": "powerful", "count": 1 }
    ]
  }
}

Need more details?

View our complete API reference for detailed documentation on all endpoints, parameters, error codes, and advanced features.
