PeakPrivacy API
Use our Chat Completion API to send anonymized requests to LLMs.
POST Create Chat Completions
```sh
https://api.peakprivacy.ch/v1/ai/completions
```
HEADERS
Key | Value |
---|---|
Api-token | Your api_token from https://peakprivacy.ch/api-tokens |
X-Requested-With | XMLHttpRequest |
Accept | application/json |
REQUEST BODY SCHEMA: application/json
Key | Value |
---|---|
model | string ID of the model to use. Supported models: gpt-4-1106-preview, gpt-4, gpt-3.5-turbo-1106, mistral-tiny, mistral-small, mistral-medium, mistral-swiss |
messages | Array of objects The prompt(s) to generate completions for, encoded as a list of objects with role and content. The role of the first message must be user or system; see the messages schema and the sketch below. |
temperature | number or null [ 0 .. 1 ] Default: 0.7 What sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. |
top_p | number or null [ 0 .. 1 ] Default: 1 Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
max_tokens | integer or null >= 0 Default: null The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. |
random_seed | integer or null Default: null The seed to use for random sampling. If set, different calls will generate deterministic results. |
safe_prompt | boolean or null Default: false Whether to inject a safety prompt before all conversations. |
anonymize | boolean or null Default: true Whether to anonymize the prompt. May slightly increase response time because the anonymization and de-anonymization algorithms have to run. |
stream | boolean or null Default: false Whether to receive the response as a stream or as a single object. NOTE: anonymize=true forces stream to false automatically. |
messages SCHEMA:
Key | Value |
---|---|
role | string One of: system, user, assistant. Specifies the role of the message's author |
content | string The message content for this turn of the conversation |
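For multi-turn conversations the full history is sent in messages on every call. A minimal sketch (the conversation content is illustrative; only the role values and the first-message constraint come from the schema above):

```python
# Illustrative messages array for a multi-turn conversation.
# The first message must have role "user" or "system"; assistant turns
# are the replies previously returned by the API.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize our meeting notes."},
    {"role": "assistant", "content": "Sure - please paste the notes."},
    {"role": "user", "content": "Here they are: ..."},
]
```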
BODY raw
```json
{
    "model": "gpt-4-1106-preview",
    "messages": [
        {
            "role": "user",
            "content": "content"
        }
    ],
    "temperature": null,
    "top_p": null,
    "max_tokens": null,
    "random_seed": null,
    "safe_prompt": null,
    "anonymize": true
}
```
Example request
```sh
curl --location --request POST 'https://api.peakprivacy.ch/v1/ai/completions' \
--header 'Api-token: api_token' \
--header 'X-Requested-With: XMLHttpRequest' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-4-1106-preview",
    "messages": [
        {
            "role": "user",
            "content": "content"
        }
    ],
    "temperature": null,
    "top_p": null,
    "max_tokens": null,
    "random_seed": null,
    "safe_prompt": null,
    "anonymize": true
}'
```
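The same request can also be sent programmatically. A minimal Python sketch using the third-party requests library (the token value and the prompt are placeholders, not part of the API):

```python
import requests

API_TOKEN = "api_token"  # placeholder: your token from https://peakprivacy.ch/api-tokens

response = requests.post(
    "https://api.peakprivacy.ch/v1/ai/completions",
    headers={
        "Api-token": API_TOKEN,
        "X-Requested-With": "XMLHttpRequest",
        "Accept": "application/json",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4-1106-preview",
        "messages": [{"role": "user", "content": "Hello"}],
        "anonymize": True,
    },
)
response.raise_for_status()
print(response.json())
```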
Possible Responses
200 - Success
```json
{
    "id": "1df8f5af-c96d-42ba-938a-3f5c4f62e633",
    "object": "chat.completion",
    "created": 1705942239,
    "model": "gpt-3.5-turbo-1106",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Hello! How can I assist you today?",
                "role": "assistant"
            }
        }
    ],
    "usage": {
        "completion_tokens": 9,
        "prompt_tokens": 8,
        "total_tokens": 17
    }
}
```
RESPONSE SCHEMA: application/json
Key | Value |
---|---|
id | string |
object | string |
created | integer |
model | string |
choices | Array of objects |
usage | object |
choices SCHEMA: application/json
Key | Value |
---|---|
finish_reason | string |
index | integer |
message | object |
message SCHEMA: application/json
Key | Value |
---|---|
role | string |
content | string |
usage SCHEMA: application/json
Key | Value |
---|---|
completion_tokens | integer |
prompt_tokens | integer |
total_tokens | integer |
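Putting the schema together: the assistant's reply is at choices[0].message.content and the token counts are in the usage object. A minimal sketch, assuming response is the successful call from the Python example above:

```python
data = response.json()  # parsed 200 response body from the request above

# The assistant's reply is in the first (index 0) choice
answer = data["choices"][0]["message"]["content"]
finish_reason = data["choices"][0]["finish_reason"]

# Token accounting from the usage object
total = data["usage"]["total_tokens"]

print(f"{answer} (finish_reason={finish_reason}, total_tokens={total})")
```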
400 - Bad Request; Validation Error
```json
{
    "message": "Validation error",
    "errors": {
        "model": [
            "The model field is required."
        ]
    }
}
```
401 - Unauthenticated; Form Token Expired
```json
{
    "error": "Unauthenticated"
}
```
403 - Forbidden; Subscription amount limit has been reached
```json
{
    "error": "Subscription amount limit has been reached"
}
```
response SCHEMA: application/json
Key | Value |
---|---|
error | string |
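One way a client can map these status codes to errors, as a minimal sketch (the helper name and exception choices are illustrative, not part of the API):

```python
import requests

def create_completion(payload: dict, api_token: str) -> dict:
    """Illustrative helper: POST a completion request and surface API errors."""
    resp = requests.post(
        "https://api.peakprivacy.ch/v1/ai/completions",
        headers={
            "Api-token": api_token,
            "X-Requested-With": "XMLHttpRequest",
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        json=payload,
    )
    if resp.status_code == 400:
        # Validation error: body contains "message" and per-field "errors"
        raise ValueError(resp.json()["errors"])
    if resp.status_code == 401:
        raise PermissionError("Unauthenticated: check your Api-token header")
    if resp.status_code == 403:
        raise PermissionError("Subscription amount limit has been reached")
    resp.raise_for_status()  # any other non-2xx status
    return resp.json()
```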
GET Get supported models list
```sh
https://api.peakprivacy.ch/v1/ai/models
```
HEADERS
Key | Value |
---|---|
Api-token | Your api_token from https://peakprivacy.ch/api-tokens |
X-Requested-With | XMLHttpRequest |
Accept | application/json |
Example request
```sh
curl --location --request GET 'https://api.peakprivacy.ch/v1/ai/models' \
--header 'Api-token: api_token' \
--header 'X-Requested-With: XMLHttpRequest' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json'
```
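A minimal Python sketch of the same call, again using the requests library with a placeholder token:

```python
import requests

resp = requests.get(
    "https://api.peakprivacy.ch/v1/ai/models",
    headers={
        "Api-token": "api_token",  # placeholder token
        "X-Requested-With": "XMLHttpRequest",
        "Accept": "application/json",
    },
)
resp.raise_for_status()

# Print the IDs of all supported models
for model in resp.json()["data"]:
    print(model["id"])
```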
Response samples
200 - Success
```json
{
    "object": "list",
    "data": [
        {
            "id": "gpt-4-1106-preview",
            "object": "model",
            "created": 0,
            "owned_by": "OpenAI"
        },
        ...
    ]
}
```
RESPONSE SCHEMA: application/json
Key | Value |
---|---|
object | string |
data | Array of objects |
data SCHEMA: application/json
Key | Value |
---|---|
id | string |
object | string |
created | integer |
owned_by | string |
401 - Unauthenticated; Form Token Expired
```json
{
    "error": "Unauthenticated"
}
```
response SCHEMA: application/json
Key | Value |
---|---|
error | string |