Peak Privacy Chat

  • Employees use AI with company data in a secure and controlled manner

  • Faster responses shorten waiting times

  • Attractive cost model based on consumption


Use your own secure company chat

Whether you're an SME, organisation or foundation, using a chatbot is convenient and makes dealing with text more efficient. With the Peak Privacy Chat, your employees can chat based on your own data and provide information faster and in a more targeted manner.


Our process

1. Free strategy session

Our process starts with an assessment of your corporate strategy for the use of AI. Together, we then identify potential applications and, based on a demo, jointly define objectives.

2. Data Discovery Workshop

The "Data Discovery" workshop starts by analysing your available data and clarifying your requirements. We then create a solid briefing for the integration.

3. Integration

We integrate the service with the greatest potential seamlessly and efficiently, then continuously optimise it based on feedback. This allows us to recognise individual needs and develop them further in the next step.


500 CHF / month

+ one-off setup fee of CHF 2000*

Subscription model includes:

  • 3 GB storage for your own data (RAG system)

  • Access for 50 users

  • Incl. 1 million tokens per month; additional usage is billed by consumption (see costs for consumption)

Services included:

  • 4h "Data Discovery" workshop

  • Integration of 1 custom RAG with setup service

  • 8h support incl. onboarding for independent administration

Available LLMs:

  • Mistral (8x7B, 7B, Swiss)

  • OpenAI (GPT 3.5, 4)



Your own RAG system with our API

The RAG system accesses a database that we create together; from it, the AI extracts relevant information for text generation.
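A request against such a RAG-backed chat API could look like the following sketch. The field names ("model", "messages", "use_rag") are illustrative assumptions for this example, not the documented Peak Privacy API:

```python
# Sketch of a request payload for a RAG-backed chat endpoint.
# Field names are illustrative assumptions, not the documented API.
def build_chat_request(question: str, model: str = "mistral-swiss") -> dict:
    """Assemble a chat request that asks the RAG system for a grounded answer."""
    return {
        "model": model,  # any model from the list above
        "messages": [{"role": "user", "content": question}],
        "use_rag": True,  # retrieve context from the shared database
    }

payload = build_chat_request("What is our refund policy?")
```

The payload is then sent to the chat endpoint over HTTPS; the response contains the generated answer together with the retrieved context.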


Costs for consumption

| Model              | Input (1k tokens) | Output (1k tokens) | Resources | Costs | Size        |
|--------------------|-------------------|--------------------|-----------|-------|-------------|
| mistral-swiss      | CHF 0.004         | CHF 0.006          | 🌱🌱🌱🌱🌱 | 💰💰   | 7B (+ opt.) |
| mistral-tiny       | CHF 0.00042       | CHF 0.00126        | 🌱🌱🌱🌱   | 💰     | 7B          |
| mistral-small      | CHF 0.0012        | CHF 0.0036         | 🌱🌱🌱     | 💰💰💰  | 7B (8x7)    |
| mistral-medium     | CHF 0.00375       | CHF 0.01125        | 🌱        | 💰💰💰💰💰 | 40B      |
| gpt-3.5-turbo-1106 | CHF 0.0015        | CHF 0.003          | 🌱🌱🌱     | 💰💰💰  | 175B        |
| gpt-4-1106-preview | CHF 0.015         | CHF 0.045          | 🌱🌱      | 💰💰💰💰 | 220B (est.) |
| gpt-4              | CHF 0.045         | CHF 0.09           | 🌱        | 💰💰💰💰💰 | 220B (est.) |

Prices are subject to change at any time and are published here.
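The per-1k-token prices from the table translate directly into a simple cost estimate per request:

```python
# Per-1k-token prices in CHF, taken from the consumption table above.
PRICES = {
    "mistral-swiss":      (0.004,   0.006),
    "mistral-tiny":       (0.00042, 0.00126),
    "mistral-small":      (0.0012,  0.0036),
    "mistral-medium":     (0.00375, 0.01125),
    "gpt-3.5-turbo-1106": (0.0015,  0.003),
    "gpt-4-1106-preview": (0.015,   0.045),
    "gpt-4":              (0.045,   0.09),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in CHF for one request, billed per 1,000 tokens."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# Example: 2,000 input tokens and 500 output tokens on gpt-4
cost = request_cost("gpt-4", 2000, 500)  # CHF 0.135
```

The same request on mistral-tiny would cost roughly a hundredth of that, which is why model selection matters (see "Consumption, speed and costs" below).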



Your own RAG system (Extractive AI)

The RAG system accesses a large database or data set that we create together in advance. This database can come from various sources, such as the Internet, specialised databases, company data or other publicly available sources.

The database can contain structured data, such as facts and statistics, or unstructured data, such as texts and articles. The artificial intelligence in the RAG system then uses this database to retrieve relevant information and integrate it into the generated text.

→ Content is extracted from the database
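The extractive step described above can be sketched in a few lines: pick the database entries most relevant to a question and hand them to the model as context. This toy version scores by keyword overlap; a production RAG system would use vector embeddings instead.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9äöü]+", text.lower()))

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents with the largest keyword overlap."""
    q = tokens(question)
    ranked = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Opening hours: Monday to Friday, 9 to 17.",
    "Support requests are answered within one business day.",
    "The cafeteria menu changes weekly.",
]
context = retrieve("When are the opening hours?", docs, top_k=1)
```

The retrieved passages are then prepended to the prompt, so the model answers from your data rather than from its training corpus alone.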


Consumption, speed and costs

The use of different models, such as OpenAI and Mistral, has a direct impact on consumption, speed, and costs. OpenAI models are powerful and versatile, but they can be resource-intensive, which can lead to longer processing times and higher costs. In contrast, Mistral is known for its high speed and low consumption.

By selecting the right model for each task, companies can save costs, utilize resources efficiently, and achieve optimal performance. This approach makes it possible to utilize the advantages of different models and take individual requirements and budgets into account.
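Such task-based model selection can be as simple as a routing rule that sends short, simple prompts to a small, cheap model and reserves larger models for long or complex requests. The thresholds below are arbitrary assumptions for the sketch, not a recommendation:

```python
# Illustrative routing rule; thresholds are assumptions for this sketch.
def pick_model(prompt: str) -> str:
    """Route a prompt to the cheapest model that is plausibly sufficient."""
    n_words = len(prompt.split())
    if n_words <= 30 and "analyse" not in prompt.lower():
        return "mistral-tiny"        # fast, low consumption
    if n_words <= 200:
        return "mistral-small"       # middle ground
    return "gpt-4-1106-preview"      # most capable, highest cost

model = pick_model("Summarise this sentence in five words.")  # "mistral-tiny"
```

In practice the routing criteria would be tuned per use case, but the principle stands: every request answered by a smaller model saves both money and energy.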


Ethics and AI

Ethics in artificial intelligence (AI) is of great importance as AI systems are increasingly present in our everyday lives.

Similar to RAG technology, it is important to ensure that AI systems are not only effective but also ethical. Transparency, accountability and the avoidance of discrimination are crucial aspects of ensuring that AI systems are used responsibly.

Find out more at Ethics.


Anonymisation

Anonymisation is particularly important when we use models outside of Switzerland, such as GPT from OpenAI. Our models hosted in Switzerland ensure that your data never leaves the country. Our service is aimed at teams that place great value on data security and privacy.

At the heart of our technology is the integration of Microsoft's Presidio, a leading tool for effective data anonymisation and de-anonymisation.
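The principle behind such anonymisation can be illustrated with a toy sketch: identifiers are replaced with placeholders before the text is sent to an external model, and restored in the model's answer afterwards. This stdlib example only handles e-mail addresses; Presidio covers many more entity types (names, phone numbers, IBANs, and so on):

```python
import re

# Toy pseudonymisation sketch; production systems use Microsoft's Presidio.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace e-mail addresses with placeholders; return text and mapping."""
    mapping: dict[str, str] = {}
    def repl(match: re.Match) -> str:
        placeholder = f"<EMAIL_{len(mapping)}>"
        mapping[placeholder] = match.group(0)
        return placeholder
    return EMAIL.sub(repl, text), mapping

def deanonymise(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

safe, mapping = anonymise("Contact hans.muster@example.ch for details.")
restored = deanonymise(safe, mapping)
```

Only the placeholder version ever leaves the system; the mapping stays local, so the external model never sees the personal data.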



Recommendation for using smaller language models for more sustainable AI

In the ongoing quest for more efficient and sustainable solutions in the field of Artificial Intelligence (AI), selecting the right technologies is crucial. A recent study by researchers from MIT, NYU, and Northeastern University, published under the title "From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference" on arXiv (arXiv:2310.03003v1), provides valuable insights into the energy consumption of Large Language Models (LLMs).

The study highlights that the energy consumption of LLMs increases with the number of their parameters. Particularly when comparing different sizes of the LLaMA model – 7B, 13B, and 65B – it was found that larger models exhibit significantly higher energy consumption. These findings underscore the importance of selecting model sizes with regard to energy consumption and associated environmental impacts.

Based on these insights, we recommend using smaller models whenever possible and feasible. By opting for models with fewer parameters, companies and developers can not only reduce energy consumption and CO2 emissions but also save costs and enhance the efficiency of their AI applications. This recommendation is particularly relevant for applications where the slightly increased accuracy of larger models does not provide a decisive advantage.

We encourage all stakeholders in the AI community to critically examine the energy consumption of their models and pursue more sustainable practices wherever possible. The full study provides further details and is a significant contribution to understanding and reducing resource consumption in the field of Artificial Intelligence.

Source: Samsi, S., Zhao, D., McDonald, J., Li, B., Michaleas, A., Jones, M., Bergeron, W., Kepner, J., Tiwari, D., Gadepally, V., "From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference", arXiv:2310.03003v1, [cs.CL], October 4, 2023.