User Guide
Welcome to the User Guide! On the following pages, we walk you through the key features of the Alan user interface so that you can get the most out of your AI assistant.
For a quick overview of Alan's most important features, see our explanatory videos under Video Tutorials.
For a comprehensive introduction, work through the User Guide from start to finish. If you have questions on a specific topic, you can instead jump directly to the relevant page:
- In the section Chats, you will learn everything about the basic chat functionalities of Alan.
- The section Knowledge Databases explains how you can integrate specific knowledge into Alan.
- Under Experts, you will find all the information you need to use and create customized AI assistants for specific use cases.
- In the section Sharing, you will learn how to make knowledge databases and experts accessible to your colleagues.
- The Advanced Usage section covers keyboard shortcuts and Alan's action bar, which help you work with Alan even more efficiently.
Important Note: Dealing with Hallucinations
With Alan, we give you easy access to powerful large language models (LLMs) such as the Comma LLMs, which can make your daily tasks more efficient and effective. These models are advanced AI systems that can understand, generate, and respond to human language. Trained on extensive text data, they learn to recognize the patterns and nuances of language, making them a flexible tool for a wide variety of queries.
Before you begin, it is important to understand that LLMs, as advanced as they are, can occasionally generate information that is inaccurate or even misleading, a phenomenon often referred to as hallucination. Although we have implemented various mechanisms to improve accuracy and significantly reduce the likelihood of hallucinations, some residual uncertainty remains. We continuously work to improve the accuracy of our models and keep them at the state of the art, but according to current research, hallucinations in generative AI cannot yet be completely ruled out. It is therefore advisable to evaluate Alan's responses critically and, where possible, verify them, especially for decision-critical tasks.
How can hallucinations be minimized?
To minimize hallucinations, you can take the following measures:
- Precise Prompts: The clearer and more specific your inputs, the more likely the model is to give an accurate response.
- Specific Information: When you chat with files or use knowledge databases, the LLM has access to specific source material, which significantly increases the accuracy of its responses.
- Validation and Verification: A human should review the information generated by an LLM, especially for decision-critical content.