Agent - Chatting
The main chat interface is the default landing page when you log in to Agent, so you can begin interacting with the AI immediately. As the heart of your interaction with the AI, it has a clean, clear layout that minimizes distractions and keeps the focus on the conversation.
Starting a New Chat
Default landing: As this is the default landing page, you can start chatting immediately after logging in.
Start over with defaults: To start a new conversation, click Chat in the left-hand menu. This begins a new conversation with the default Model, no Instruction selected, and no Files selected.
Start a new chat with current settings: Clicking the “Reset Chat” button in the top right corner starts a new conversation but retains the currently selected Model, Instructions, and Files. Note that any prompt you were working on in the Chat Input box is also retained.
Chat Window Features
Conversation Display: The main area of the chat interface displays the ongoing conversation. Messages from you and responses from the AI are shown in a threaded format, making it easy to follow the dialogue.
The Chat Input Box: Located at the bottom of the interface, it is where you can type your questions or commands. The interface supports natural language input, making it easy to communicate with the AI as you would with a human.
Conversation History: You can scroll back through the conversation to review previous messages. This is useful for maintaining context and ensuring continuity in your discussions.
Response Details
At the top of each of the AI’s replies, you will see a bar containing several items:
Answer: the final response to your prompt
References: If any Files or Sources were used to produce the response, they are indicated here. Clicking the References icon opens a detailed list of the files used. If you've used the Web source, the References tooltip and dialog also include the Web documents used to generate the response.
Steps: If /reason was used (see https://ayfie-dev.atlassian.net/wiki/spaces/USER/pages/4281008161 ), this shows the reasoning steps used by the Agent. If Sources were used, this shows the information retrieved from them.
Model name: This is the name of the model which was used to generate the response.
Instruction name: If an Instruction was used, this will show the name of the Instruction used to generate the response.
Menu button: The menu expands several more options:
Copy: Copies the response as Markdown to preserve formatting.
Exclude from context/Include in context: Excludes this message from the ongoing chat context used by the model when generating new responses, or re-includes a message that was previously excluded.
Convert to document: Generates a Word .docx document containing the response body.
Convert to Presentation: Generates a PowerPoint .pptx document containing the response body.
File Upload and Management
Drag and Drop Files: The chat interface allows you to upload files directly into the conversation. Some file formats can be dragged and dropped directly into the chat input; others need to be dropped into the Resources panel and processed. See Uploading Files for more ways to upload files to Agent.
Selecting Files or File Groups: Checking the boxes next to files and file groups in the Resources panel will include them as context in the conversation. Clicking an already checked box will uncheck it.
Chat Options and Settings
Model Selection: You can select from various AI models available in your Agent instance. Each model may offer different capabilities, allowing you to tailor the Agent's responses to your specific needs based on the complexity or nature of the task at hand. In some types of Agent deployments, the choice of available models can be customized by the application administrator.
Instructions (formerly Personalities): See Instructions
Suggestions: Enabling suggestions will display text suggestions based on your previous prompts as you’re entering new prompts.
Specialized Actions
A specialized action can be selected via the actions button or by typing '/'. Once selected, it is automatically inserted at the beginning of the input prompt.
Please note that these commands are optional and will only be available in your environment if enabled in the app configuration by an administrator.
/analyze - perform analyses and generate visualizations to uncover insights from your data, using either an uploaded file or direct data input. See https://ayfie-dev.atlassian.net/wiki/spaces/USER/pages/4090855427 for more details.
/generate-image - create custom images or graphics.
/reason - decomposes complex questions into subproblems, delivering precise answers and a well-rounded conclusion. See https://ayfie-dev.atlassian.net/wiki/spaces/USER/pages/edit-v2/4281008161?draftShareId=9cc3f027-39e6-4835-95c4-c95dc1580b7c for more details.
/translate - translate files from one language to another. See https://ayfie-dev.atlassian.net/wiki/spaces/USER/pages/edit-v2/4251090963?draftShareId=5d867ab1-7245-42c6-a0e9-c549f471163c for more details.
Speech to Text
Activate the Speech-to-Text feature by pressing the microphone icon next to the text input field:
Permissions: Ensure microphone permissions are granted on mobile devices.
Language support: Supports English and Norwegian. Change the language in Settings.
Browser compatibility: Works with Chrome, Safari, and Edge. Not supported on Firefox.
Stopping recognition: Speech recognition stops automatically after 5 seconds of silence or when you click the microphone button again.
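The browser compatibility note above is consistent with how browser speech recognition is typically exposed to web applications. Assuming Agent relies on the standard Web Speech API (an assumption — the actual implementation is not documented here, and the function below is purely illustrative), a feature check might look like:

```javascript
// Illustrative check for browser speech recognition support, assuming the
// Web Speech API is used. Firefox does not expose this interface, which
// matches the compatibility note above. "win" stands in for window.
function speechRecognitionSupported(win) {
  return Boolean(win.SpeechRecognition || win.webkitSpeechRecognition);
}

// A Chrome-like window exposes webkitSpeechRecognition; a Firefox-like
// window exposes neither, so the check returns false there.
const chromeLike = { webkitSpeechRecognition: function () {} };
const firefoxLike = {};
console.log(speechRecognitionSupported(chromeLike));  // true
console.log(speechRecognitionSupported(firefoxLike)); // false
```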
Speech-to-Speech
Speech-to-Speech is currently only supported in the Azure Marketplace and Standalone versions of Agent, not in Agent integrated with Index.
Important: Speech-to-Speech does not support using image files to provide context. Any selected images will be ignored.
The Voice Chat mode allows you to converse with Agent in a seamless speech-to-speech experience (please note: Speech-to-Speech uses a separate AI model, independent of the one you’ve selected for text chat).
On its first activation, you need to allow microphone access.
The Agent will then immediately start listening to the audio input. When speech is detected, it will wait for a short period of silence before it begins generating a response.
Both the input and output transcriptions will be shown in the chatting window, and audio output will be played through speakers.
When a response is being generated and played, further audio input will be ignored until the playback ends. After that, Agent will resume listening to input.
In the Voice Chat mode, the input control allows you to pause response playback and/or to mute microphone input.
Sources/Resources panel
The panel is visible to the right of the screen. If desired, it can be collapsed for a clearer layout. Here, you can see your external sources, uploaded files, created File Groups and check/uncheck any that you’d like the AI to use as context for your current conversation. You can also drag and drop new files into the panel to upload them to Agent.
Resources panel (open) | Resources panel (collapsed)
Resource selection
Users can select resources using checkboxes. The current selection is displayed in the input field: one resource shows its name, while multiple selections show only the number of chosen items. Groups can be expanded, and their state reflects whether all, some, or none of the resources are selected. The selection can be cleared with the X button.
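The group indicator described above is a three-way state derived from the members' checkboxes. A minimal sketch of that logic (the function and state names are illustrative, not Agent's actual code):

```javascript
// Derive a group checkbox state from the selection flags of its member
// resources: "all" when every member is checked, "none" when no member
// is checked, and "some" for a partial selection. Illustrative only.
function groupState(selectedFlags) {
  const count = selectedFlags.filter(Boolean).length;
  if (count === 0) return "none";
  if (count === selectedFlags.length) return "all";
  return "some";
}

console.log(groupState([true, true, true])); // "all"
console.log(groupState([true, false]));      // "some"
console.log(groupState([false, false]));     // "none"
```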
Sorting options
Resources can be sorted by title, upload date, or recent. The “recent” list shows the ten most recently used resources from previous chats.
Search resources
Clicking the Search resources button opens a search dialog. Results prioritize resources starting with typed characters, followed by those containing them. Matching fragments are highlighted in bold. Selecting a resource from the results automatically adds it to the chat and updates the view.
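The ranking described above (prefix matches first, then substring matches) can be sketched as follows. This is an illustrative reconstruction of the described behavior, not Agent's actual search code:

```javascript
// Rank resource titles for a query: titles starting with the query come
// first, then titles merely containing it; non-matching titles are
// dropped. Matching is case-insensitive. Illustrative sketch only.
function searchResources(titles, query) {
  const q = query.toLowerCase();
  const prefix = [];
  const contains = [];
  for (const title of titles) {
    const t = title.toLowerCase();
    if (t.startsWith(q)) prefix.push(title);
    else if (t.includes(q)) contains.push(title);
  }
  return prefix.concat(contains);
}

const titles = ["Report 2024", "Annual report", "Notes"];
console.log(searchResources(titles, "rep"));
// ["Report 2024", "Annual report"]
```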