Creating and Managing Experts
Experts can be created and managed in Alan in the settings menu under "Experts".
There you will find an overview of all experts that you have created or that have been shared with you.
When you open an expert, you see the expert's configuration and can copy, edit and share the expert, or directly start a chat with it.
Creating
To create a new expert, click on the "New" button. Now you can enter all the necessary information to configure the expert's appearance and behavior.
Finally, click on "Save" to create the expert.
Editing
After creating an expert, you can continuously edit and optimize all its configurations.
For example, you can improve the system prompt based on your chat experiences or add a new knowledge base.
INFO
All changes to experts affect only new chats, not existing ones. To apply the changes, start a new chat with the expert.
If you have created the expert, you can also share it with your colleagues.
Deleting
To delete an expert in Alan, open the expert, scroll to the bottom, and click on "Delete expert".
Please note the following:
- Ensure that a shared expert is no longer used within your organization.
- Deleted experts cannot be restored.
- You can only delete experts that you have created yourself.
Configuration
Creating an expert requires understanding and configuring various settings. Each setting contributes to defining the appearance and behavior of the expert.
By carefully selecting and adjusting these settings, you can ensure that your expert is tailored to your needs and delivers optimal results.
Adjust all settings that affect the appearance of your expert so that it is easy to use for you and other potential users.
Adjust the settings that affect the behavior of your expert only where your specific use case requires it. For all settings that you do not adjust, Alan provides sensible defaults.
Below is a detailed explanation of each setting:
Expert Icon
The icon is the first thing users see and should therefore reflect the function of the expert. Choose an icon that thematically matches the area in which the expert specializes.
Name
The name should be concise and descriptive so that users can recognize at a glance which tasks or subject areas the expert is suitable for. A clear name facilitates usage and ensures that users immediately understand what to expect from the expert.
Description
In the description, provide an overview of the expert's capabilities, their area of application, and any special features. A good description helps your colleagues quickly understand what the expert can do and in which scenarios it is best used. For an optimal presentation, refer to the text length of existing experts.
Greeting
The greeting text is displayed as the first message in all chats with the expert. You can use this, for instance, to provide tips on how to use the expert or encourage users to start the expert chat in a specific way by asking a question.
The greeting text is only a display element and is not part of the chat messages. Therefore, it does not affect the behavior of the language model.
System Prompt
The system prompt serves as an instruction to the AI model to initialize or control the desired behavior. This can include the type of communication, the depth of analysis, or a persona. Additionally, relevant information can be conveyed in the system prompt, but make sure its scope remains manageable.
INFO
Sometimes it helps to write the system prompt in English, although both German and English are generally supported.
Examples of System Prompts
Communication Style: The system prompt can be used to set the desired tone and style of the responses.
- Example: Always respond in a friendly and helpful manner. Try to explain technical problems in an understandable way.
- Effect: The model will respond in a friendly tone and explain technical information simply.
Analysis Depth: You can specify how detailed the analyses or explanations of the model should be.
- Example: Provide detailed technical explanations and step-by-step instructions when troubleshooting.
- Effect: The model will give more detailed and thorough answers when dealing with technical issues and their solutions.
Role and Persona: The system prompt can instruct the model to take on a specific role or persona.
- Example: You are an IT support expert specializing in network issues. Provide specific and technical solutions.
- Effect: The model will provide answers reflecting the expertise of an IT support specialist.
Specific Formatting: The system prompt can be used to guide the model to specific answer formats.
- Example: When answering a question about software installation, list the steps in a numbered format.
- Effect: The model will provide answers in a clearly structured, numbered format.
Technical Formats: The system prompt can be used to generate responses in specific technical formats, for example as JSON or YAML.
- Example: Return the answer in the following JSON format: {"Name": "<Extracted Name>", "Date": "<Extracted Date>", "Address": "<Extracted Address>"}. Ensure all answers are correctly formatted.
- Effect: The model will structure responses in the defined JSON format.
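To make this concrete, here is a minimal Python sketch of how an application might validate such a JSON response. The response string below is a stand-in for an actual model call, not output from Alan itself:

```python
import json

# Stand-in for a model response produced under the JSON-format system prompt above.
response = (
    '{"Name": "Alan Turing", "Date": "May 23, 2024", '
    '"Address": "P\u00fctzchens Chaussee 202-204a, 53229 Bonn"}'
)

# json.loads raises an error if the model strayed from valid JSON.
data = json.loads(response)

# Check that every required field is present before using the result.
required = {"Name", "Date", "Address"}
missing = required - data.keys()
if missing:
    raise ValueError(f"Model response is missing fields: {missing}")

print(data["Name"])  # → Alan Turing
```

Even with a strict system prompt, language models occasionally deviate from the requested format, so defensive parsing like this is worthwhile whenever the output is processed automatically.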
Initial Conversation
The initial conversation calibrates the model for specific use cases by providing it with context and examples of expected answers. This helps the model better understand and respond to specific scenarios.
Examples
Providing Context: The initial conversation gives the model the necessary context to correctly interpret requests and respond appropriately.
- Example: Start with a greeting and a brief introduction to give the model the context of your interaction.
- Question: "Hello, I have a problem with my network configuration. Can you help me?"
- Answer: "Of course, I can help you with the network configuration. What exactly isn't working?"
Example Dialogues: By providing example conversations, you can train the model on specific response patterns.
- Example:
- Question: "How do I install the new software version?"
- Answer: "To install the new software version, follow these steps: 1. Download the installation file. 2. Open the file and follow the on-screen instructions."
Answer Formatting: You can instruct the model to give answers in a specific format to ensure consistency.
- Example:
- Question: "Can you provide the next steps in JSON format?"
- Answer: { "Step 1": "Download the installation file.", "Step 2": "Open the file.", "Step 3": "Follow the on-screen instructions." }
Information Extraction: The initial conversation can include instructions for extracting specific information to deliver structured answers.
- Example:
- Question: "Extract the information in JSON format from the following text: 'Alan Turing placed an order on May 23, 2024. The delivery is to be made to Pützchens Chaussee 202-204a in 53229 Bonn.'"
- Answer: { "Name": "Alan Turing", "Date": "May 23, 2024", "Address": "Pützchens Chaussee 202-204a, 53229 Bonn" }
With examples like these in the initial conversation, the model understands exactly what types of answers and interactions are expected and can deliver the best possible results.
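Conceptually, an initial conversation like the ones above is a list of role-tagged messages that precedes the user's real request. The schema below follows the common system/user/assistant convention; whether Alan stores conversations in exactly this form is an assumption, and the sketch only illustrates how the calibration examples fit together:

```python
# Hypothetical sketch of an initial conversation as role-tagged messages.
# The system message sets the persona; the user/assistant pair is a
# calibration example demonstrating the expected response pattern.
initial_conversation = [
    {"role": "system",
     "content": "You are an IT support expert specializing in network issues."},
    {"role": "user",
     "content": "How do I install the new software version?"},
    {"role": "assistant",
     "content": ("To install the new software version, follow these steps: "
                 "1. Download the installation file. "
                 "2. Open the file and follow the on-screen instructions.")},
]

# A user's real question is appended after the calibration examples,
# so the model answers it in the style the examples established.
initial_conversation.append(
    {"role": "user", "content": "How do I update my network drivers?"}
)

for message in initial_conversation:
    print(f"{message['role']}: {message['content'][:50]}")
```

The key point is ordering: calibration examples come first, so by the time the model sees the real question, the desired tone and format have already been demonstrated.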
Prompt Suggestions
This is an optional setting to provide typical requests for the expert. These suggestions can be particularly useful for users without much experience with LLMs and help to answer frequently asked questions more quickly.
Knowledge Bases
You can determine which data sources the expert should access. This directly influences the quality and relevance of the expert's answers. The preselected knowledge bases can be changed by users during the chat.
INFO
When sharing an expert with preselected knowledge bases, make sure that all colleagues to whom you grant access to the expert also have access to the preselected knowledge bases.
Model
Select the underlying language model and adjust its parameters to optimize the expert's performance. Different models and parameters can affect the accuracy and response speed of the expert. Use the Comma LLM L for more demanding tasks and the Comma LLM S for quick responses, for example, in combination with knowledge bases.
Model Parameters
In addition to the model itself, individual model parameters can be adjusted. These parameters require a basic understanding of how large language models work and usually do not need to be changed.
Temperature
This parameter determines how deterministic the model is. A low temperature (e.g., 0.2) leads to more deterministic results, as the most likely next token is almost always chosen. A higher temperature (e.g., 0.8) increases randomness and can result in more varied or creative outputs. For tasks like factual question answering (Q&A), you should use a lower temperature value (between 0.1 and 0.3) to encourage more precise and concise answers. For creative tasks like brainstorming, a higher temperature value (between 0.7 and 1.0) can be beneficial.
Top P
This parameter also controls how deterministic the model is. A low Top-P value (e.g., 0.2) leads to more confident answers, as only the tokens that form the highest probability mass are considered. A high Top-P value (e.g., 0.9) allows the model to consider more possible words, including less likely options, resulting in more varied outputs. For precise and factual answers, you should keep the Top-P value low, while for more varied answers, you can set a higher value.
INFO
Change either the temperature or Top P, but not both parameters simultaneously.
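To illustrate what these two parameters actually do, here is a toy Python sketch of temperature scaling and nucleus (top-p) sampling. Real inference engines operate on tensors over large vocabularies, but the arithmetic is the same:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_p=1.0, rng=random):
    """Toy sketch: pick the next token from raw model scores (`logits`)."""
    # Temperature scales the logits before the softmax: low values sharpen
    # the distribution (more deterministic), high values flatten it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}

    # Top-p keeps only the most likely tokens whose cumulative probability
    # reaches `top_p` (the "nucleus"), then renormalizes over that set.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for tok, p in ranked:
        nucleus.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break
    mass = sum(p for _, p in nucleus)
    tokens, weights = zip(*[(tok, p / mass) for tok, p in nucleus])
    return rng.choices(tokens, weights=weights)[0]

logits = {"the": 3.0, "a": 2.0, "banana": 0.1}
# Low temperature and low top_p: the nucleus collapses to the single most
# likely token, so the choice is effectively deterministic.
print(sample_next_token(logits, temperature=0.2, top_p=0.2))  # → the
```

This also shows why the INFO note advises changing only one of the two: both parameters narrow or widen the same candidate distribution, so tuning them together makes the combined effect hard to predict.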