API Reference
Complete reference for all Citadelis API endpoints.
Base URL
https://citadelis.eu/api/v1
Authentication
All API requests require an API key sent in the Authorization header:
Authorization: Bearer citadel_sk_your_key_here
Get your API key from Settings > API Keys.
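For example, here is a minimal sketch of an authenticated call from Python. It assumes the third-party requests library is installed; the key value is a placeholder.

```python
import requests

# Base URL and key prefix as documented above; replace the key with your own.
BASE_URL = "https://citadelis.eu/api/v1"
API_KEY = "citadel_sk_your_key_here"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Any endpoint is called the same way; /models is a simple authenticated GET.
resp = requests.get(f"{BASE_URL}/models", headers=headers)
resp.raise_for_status()
print(resp.json())
```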
POST /chat/completions
Creates a chat completion for the provided messages. OpenAI-compatible endpoint.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | No | Model ID to use. Defaults to Meta-Llama-3_3-70B-Instruct |
| messages | array | Yes | Array of message objects with role and content |
| temperature | number | No | Sampling temperature (0-2). Default: 0.7 |
| max_tokens | number | No | Maximum tokens to generate |
| stream | boolean | No | Enable streaming responses. Default: false |
| anonymize | boolean | No | Citadelis extension: Enable PII anonymization |
Message Object
| Parameter | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | "system", "user", or "assistant" |
| content | string | Yes | The content of the message |
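Because the endpoint is OpenAI-compatible, it should also be callable through the official OpenAI Python SDK by overriding the base URL. The sketch below assumes the openai package (v1+) is installed and uses a placeholder key; the raw HTTP form of the same request is shown in the curl example that follows.

```python
from openai import OpenAI  # assumes the openai Python SDK (v1+) is installed

client = OpenAI(
    base_url="https://citadelis.eu/api/v1",
    api_key="citadel_sk_your_key_here",  # placeholder key
)

# Non-streaming request, mirroring the curl example below.
completion = client.chat.completions.create(
    model="Meta-Llama-3_3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    temperature=0.7,
)
print(completion.choices[0].message.content)

# With stream=true, tokens arrive incrementally as chunks.
stream = client.chat.completions.create(
    model="Meta-Llama-3_3-70B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```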
Example Request
curl https://citadelis.eu/api/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer citadel_sk_xxx" \
-d '{
"model": "Meta-Llama-3_3-70B-Instruct",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
"temperature": 0.7
}'
Example Response
{
"id": "chatcmpl-abc123xyz",
"object": "chat.completion",
"created": 1704067200,
"model": "Meta-Llama-3_3-70B-Instruct",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 25,
"completion_tokens": 10,
"total_tokens": 35
}
}
Response with Anonymization
When anonymize: true is set, the response includes an additional citadelis.anonymization object:
{
"id": "chatcmpl-abc123xyz",
"object": "chat.completion",
"model": "Meta-Llama-3_3-70B-Instruct",
"choices": [...],
"usage": {...},
"citadelis": {
"anonymization": {
"enabled": true,
"mapping": {
"{{EMAIL_1}}": "john@example.com",
"{{NAME_1}}": "John Doe"
},
"pii_count": 2,
"deanonymized_content": "Hello John Doe, I'll send details to john@example.com"
}
}
}
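When calling the endpoint with anonymize: true from code, the extra block can be read straight from the JSON response. This is a sketch using the requests library with a placeholder key and message; the field names follow the response shown above.

```python
import requests

resp = requests.post(
    "https://citadelis.eu/api/v1/chat/completions",
    headers={"Authorization": "Bearer citadel_sk_your_key_here"},  # placeholder key
    json={
        "model": "Meta-Llama-3_3-70B-Instruct",
        "messages": [{"role": "user", "content": "Email john@example.com about the meeting"}],
        "anonymize": True,
    },
)
data = resp.json()

# The model only sees placeholder tokens; the original values live in the mapping.
anonymization = data["citadelis"]["anonymization"]
print("PII items replaced:", anonymization["pii_count"])
print("Mapping:", anonymization["mapping"])
print("Deanonymized reply:", anonymization["deanonymized_content"])
```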
GET /models
Lists all available models.
Example Request
curl https://citadelis.eu/api/v1/models \
-H "Authorization: Bearer citadel_sk_xxx"Example Response
{
"object": "list",
"data": [
{
"id": "Meta-Llama-3_3-70B-Instruct",
"object": "model",
"created": 1700000000,
"owned_by": "meta",
"citadelis": {
"name": "Llama 3.3 70B Instruct",
"description": "Most performant model - Recommended",
"context_length": 128000,
"category": "open-source",
"server_location": "eu",
"ai_provider": "ovh"
}
},
{
"id": "openai/gpt-4.1",
"object": "model",
"owned_by": "openai",
"citadelis": {
"name": "GPT-4.1",
"context_length": 1047576,
"category": "premium",
"server_location": "us",
"ai_provider": "openrouter"
}
}
]
}
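The per-model citadelis block is useful for picking a model programmatically. Below is a sketch (again using requests and a placeholder key) that lists models and keeps only those hosted in the EU, based on the metadata fields shown above.

```python
import requests

resp = requests.get(
    "https://citadelis.eu/api/v1/models",
    headers={"Authorization": "Bearer citadel_sk_your_key_here"},  # placeholder key
)
models = resp.json()["data"]

# The citadelis block carries metadata such as context_length and server_location.
eu_models = [m for m in models if m.get("citadelis", {}).get("server_location") == "eu"]
for m in eu_models:
    print(m["id"], "-", m["citadelis"]["context_length"], "tokens context")
```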
POST /anonymize
Anonymizes text by detecting PII and replacing it with placeholder tokens. Optionally forwards the anonymized text to an LLM.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | Text to anonymize |
| existing_mapping | object | No | Existing token-to-value mapping to extend |
| process_with_llm | boolean | No | Send the anonymized text to an LLM and return its response |
| model | string | No | Model to use (if process_with_llm is true) |
| system_prompt | string | No | System prompt for LLM |
| temperature | number | No | LLM temperature |
| max_tokens | number | No | Max tokens for LLM response |
Example: Simple Anonymization
curl https://citadelis.eu/api/v1/anonymize \
-H "Content-Type: application/json" \
-H "Authorization: Bearer citadel_sk_xxx" \
-d '{
"text": "Contact me at john.doe@email.com or call +33 6 12 34 56 78"
}'
Response
{
"anonymized_text": "Contact me at {{EMAIL_1}} or call {{PHONE_1}}",
"mapping": {
"{{EMAIL_1}}": "john.doe@email.com",
"{{PHONE_1}}": "+33 6 12 34 56 78"
},
"pii_count": 2,
"pii_types": ["EMAIL", "PHONE"]
}
Example: With LLM Processing
curl https://citadelis.eu/api/v1/anonymize \
-H "Content-Type: application/json" \
-H "Authorization: Bearer citadel_sk_xxx" \
-d '{
"text": "Write an email to john.doe@email.com about the project deadline",
"process_with_llm": true,
"model": "Meta-Llama-3_3-70B-Instruct"
}'
Response with LLM
{
"anonymized_text": "Write an email to {{EMAIL_1}} about the project deadline",
"mapping": {
"{{EMAIL_1}}": "john.doe@email.com"
},
"pii_count": 1,
"pii_types": ["EMAIL"],
"llm_response": {
"raw_content": "Subject: Project Deadline Reminder\n\nDear {{EMAIL_1}},\n\nI wanted to reach out...",
"deanonymized_content": "Subject: Project Deadline Reminder\n\nDear john.doe@email.com,\n\nI wanted to reach out...",
"model": "Meta-Llama-3_3-70B-Instruct",
"usage": {
"prompt_tokens": 45,
"completion_tokens": 150,
"total_tokens": 195
}
}
}
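The same flow from Python: anonymize the text, let the model work on the masked version, and read back the deanonymized result. A sketch using requests with a placeholder key; field names follow the response above.

```python
import requests

resp = requests.post(
    "https://citadelis.eu/api/v1/anonymize",
    headers={"Authorization": "Bearer citadel_sk_your_key_here"},  # placeholder key
    json={
        "text": "Write an email to john.doe@email.com about the project deadline",
        "process_with_llm": True,
        "model": "Meta-Llama-3_3-70B-Instruct",
    },
)
data = resp.json()

# The LLM only ever sees the anonymized text; deanonymized_content has the tokens restored.
print("Sent to the model:", data["anonymized_text"])
print("Final output:", data["llm_response"]["deanonymized_content"])
```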
Restore (Deanonymize)
Send a PUT request to /anonymize to restore anonymized text to its original values:
curl -X PUT https://citadelis.eu/api/v1/anonymize \
-H "Content-Type: application/json" \
-H "Authorization: Bearer citadel_sk_xxx" \
-d '{
"text": "Hello {{NAME_1}}, your email is {{EMAIL_1}}",
"mapping": {
"{{NAME_1}}": "John Doe",
"{{EMAIL_1}}": "john@example.com"
}
}'
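Putting both calls together, here is a sketch of the full round trip from Python: anonymize with POST, keep the mapping, then restore with PUT. The requests library and key are assumptions; the response shape of the PUT call is not documented in this reference, so the sketch just prints whatever comes back.

```python
import requests

BASE_URL = "https://citadelis.eu/api/v1"
HEADERS = {"Authorization": "Bearer citadel_sk_your_key_here"}  # placeholder key

# Step 1: anonymize and keep the mapping for later.
anon = requests.post(
    f"{BASE_URL}/anonymize",
    headers=HEADERS,
    json={"text": "Hello John Doe, my email is john@example.com"},
).json()

# Step 2: restore the placeholders using the mapping returned in step 1.
restored = requests.put(
    f"{BASE_URL}/anonymize",
    headers=HEADERS,
    json={"text": anon["anonymized_text"], "mapping": anon["mapping"]},
)
print(restored.json())
```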
Error Codes
| Status | Code | Description |
|---|---|---|
| 400 | invalid_request | Missing or invalid request parameters |
| 400 | invalid_model | The specified model does not exist |
| 401 | missing_api_key | No API key provided |
| 401 | invalid_api_key | API key is invalid or revoked |
| 403 | insufficient_scope | API key does not have required permissions |
| 429 | rate_limit_exceeded | Too many requests per minute |
| 429 | budget_exceeded | Monthly budget limit reached |
| 500 | internal_error | Internal server error |
Error Response Format
{
"error": {
"message": "Rate limit exceeded. 61/60 requests per minute.",
"type": "api_error",
"code": "rate_limit_exceeded",
"param": null
}
}
Rate Limits
| Plan | Requests/min | Daily Limit |
|---|---|---|
| Free | 30 | 50 requests |
| Pro | 60 | Unlimited (budget-based) |
| Enterprise | 120 | Unlimited |
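Client code should inspect the error object and back off when it hits the per-minute limit. Below is a sketch of a simple retry loop built on the error codes above; the requests library, the helper name, and the exponential backoff policy are illustrative choices, not part of the API.

```python
import time
import requests

def post_with_retry(url, headers, payload, max_retries=3):
    """POST to the API and retry when the rate limit (HTTP 429) is hit."""
    for attempt in range(max_retries + 1):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        error = resp.json().get("error", {})
        if error.get("code") == "budget_exceeded":
            # Retrying will not help until the monthly budget resets.
            raise RuntimeError(error.get("message", "budget exceeded"))
        # rate_limit_exceeded: wait and try again (illustrative exponential backoff).
        time.sleep(2 ** attempt)
    raise RuntimeError("rate limit still exceeded after retries")
```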