Peak Privacy Chat
- Employees use AI with company data in a secure and controlled manner
- Faster responses shorten waiting times
- Attractive cost model based on consumption
Use your own secure company chat
Whether you are an SME, an organisation or a foundation, a chatbot is convenient and makes working with text more efficient. With Peak Privacy Chat, your employees can chat based on your own data and provide information faster and in a more targeted way.
Our process
- Our process starts with an assessment of your corporate strategy for the use of AI. Together we then identify potential applications and, based on a demo, jointly define objectives.
- The "Data Discovery" workshop begins with an analysis of your available data and a clarification of your requirements. From this we create a solid briefing for the integration.
- We integrate the service with the greatest potential seamlessly and efficiently and continuously optimise it based on your feedback. This allows us to recognise individual needs and develop them further in the next step.
- 3 GB storage for your own data (RAG system)
- Access for 50 users
- incl. 1 million tokens per month; additional usage is billed by consumption per GB (see "Costs for consumption")
- 4h "Data Discovery" workshop
- Integration of 1 custom RAG with setup service
- 8h support incl. onboarding for independent administration
- Mistral (8x7B, 7B, Swiss)
- OpenAI (GPT-3.5, GPT-4)
*Prices subject to change
Timeline
The RAG system accesses a database that we create together; from it, the AI extracts relevant information for text generation.
Costs for consumption
Prices can be changed at any time and will be published here.
The RAG system accesses a large database or data set that we create together in advance. This database can come from various sources, such as the Internet, specialised databases, company data or other publicly available sources.
The database can contain structured data, such as facts and statistics, or unstructured data, such as texts and articles. The artificial intelligence in the RAG system then uses this database to retrieve relevant information and integrate it into the generated text.
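To make this concrete, the following is a minimal sketch of the retrieval step, assuming a toy in-memory document store and naive keyword scoring; the names, documents and scoring are illustrative and not our actual implementation (a production system would typically use vector embeddings).

# Minimal RAG sketch: retrieve the best-matching passage from a small
# in-memory knowledge base and prepend it to the prompt. All names and
# documents here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. "product sheet", "company wiki"
    text: str

KNOWLEDGE_BASE = [
    Document("product sheet", "The plan includes 3 GB of storage for your own data."),
    Document("company wiki", "Support is available Monday to Friday, 8 am to 5 pm."),
]

def retrieve(query: str, docs: list[Document], k: int = 1) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.text.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[Document]) -> str:
    """Prepend the retrieved passages so the model answers from company data."""
    passages = "\n".join(f"[{d.source}] {d.text}" for d in context)
    return f"Answer using only the context below.\n\nContext:\n{passages}\n\nQuestion: {query}"

query = "How much storage does the plan include?"
print(build_prompt(query, retrieve(query, KNOWLEDGE_BASE)))  # this prompt is then sent to the LLM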
The use of different models such as OpenAI and Mistral has a direct impact on consumption, speed and costs. OpenAI models are powerful and versatile, but they can be resource-intensive, which can lead to longer processing times and higher costs. In contrast, Mistral is known for its high speed and low consumption.
By selecting the right model for each task, companies can save costs, utilise resources efficiently and achieve optimum performance. This approach makes it possible to utilise the advantages of different models and take individual requirements and budgets into account.
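As a sketch of what such model selection can look like in code, the routing rule below sends short, simple requests to a small Mistral model and reserves GPT-4 for complex tasks; the model identifiers and the length threshold are assumptions for illustration, not our production configuration.

# Illustrative model router: the cheap, fast model by default, the larger
# and costlier model only when the task demands it.
SMALL_MODEL = "mistral-7b"  # fast, low consumption
LARGE_MODEL = "gpt-4"       # more capable, higher cost per token

def pick_model(prompt: str, needs_complex_reasoning: bool = False) -> str:
    """Route by a rough complexity heuristic: task type and prompt length
    (the 300-word threshold is a hypothetical value)."""
    if needs_complex_reasoning or len(prompt.split()) > 300:
        return LARGE_MODEL
    return SMALL_MODEL

print(pick_model("Summarise this paragraph in one sentence."))  # -> mistral-7b
print(pick_model("Compare these two contracts clause by clause.", needs_complex_reasoning=True))  # -> gpt-4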
Ethics in artificial intelligence (AI) is of great importance, as AI systems are increasingly present in our everyday lives. As with RAG technology, it is important to ensure that AI systems are not only effective but also ethical. Transparency, accountability and the avoidance of discrimination are crucial to ensuring that AI systems are used responsibly.
Find out more on our Ethics page.
Anonymisation is particularly important when we use models outside of Switzerland, such as GPT from OpenAI. Our models hosted in Switzerland ensure that your data never leaves the country. Our service is aimed at teams that place great value on data security and privacy.
At the heart of our technology is the integration of Microsoft's Presidio, a leading tool for effective data anonymisation and de-anonymisation.
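A minimal sketch of this anonymisation step, using Presidio's documented analyzer and anonymizer engines (the example text is made up, and installing presidio-analyzer, presidio-anonymizer and an English spaCy model is assumed):

# Detect personal data in a text and replace it with placeholders before
# the text leaves the country, e.g. before it is sent to GPT.
# Requires: pip install presidio-analyzer presidio-anonymizer
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "Contact Anna Muster at anna.muster@example.com"
findings = analyzer.analyze(text=text, language="en")  # detect PII entities
result = anonymizer.anonymize(text=text, analyzer_results=findings)
print(result.text)  # e.g. "Contact <PERSON> at <EMAIL_ADDRESS>"

The reverse step, restoring the original values in the model's answer, is covered by Presidio's de-anonymisation support mentioned above.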
In the ongoing quest for more efficient and sustainable solutions in the field of Artificial Intelligence (AI), selecting the right technologies is crucial. A recent study by researchers from MIT, NYU and Northeastern University, published under the title "From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference" on arXiv (arXiv:2310.03003v1), provides valuable insights into the energy consumption of Large Language Models (LLMs).
The study highlights that the energy consumption of LLMs increases with the number of their parameters. In particular, a comparison of different sizes of the LLaMA model (7B, 13B and 65B) found that larger models exhibit significantly higher energy consumption. These findings underscore the importance of considering energy consumption and the associated environmental impact when selecting a model size.
Based on these insights, we recommend using smaller models wherever feasible. By opting for models with fewer parameters, companies and developers can not only reduce energy consumption and CO2 emissions but also save costs and increase the efficiency of their AI applications. This recommendation is particularly relevant for applications where the slightly higher accuracy of larger models does not provide a decisive advantage.
We encourage all stakeholders in the AI community to critically examine the energy consumption of their models and pursue more sustainable practices wherever possible. The full study provides further details and is a significant contribution to understanding and reducing resource consumption in the field of Artificial Intelligence.
Source: Samsi, S., Zhao, D., McDonald, J., Li, B., Michaleas, A., Jones, M., Bergeron, W., Kepner, J., Tiwari, D., Gadepally, V., "From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference", arXiv:2310.03003v1 [cs.CL], October 4, 2023.