- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Configure the Azure OpenAI Models](#configure-the-azure-openai-models)

# Introduction
This guide describes how to configure Azure to generate the prerequisites required to enable the Ayfie Personal Assistant feature. It is assumed that one already has an Azure subscription and access to Azure OpenAI as described in [Azure OpenAI Service Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview).

# Prerequisites
The prerequisites are to provide the following entities:
- **Deployment Name**
- **API Address**
- **API Key**

for each of the following 3 Azure OpenAI models:
- **Main Model**
- **High Quality Model**
- **Embeddings Model**

For historical reasons, Personal Assistant supports the concept of a Main Model and a High Quality Model. This is to make it possible for customers to trade performance for cost. However, as the currently best performing model is also the most inexpensive one, the recommended setup is the *full* mode option (see section Personal Assistant in the [Installation Guide](https://ayfie-dev.atlassian.net/wiki/spaces/SAGA/pages/2400714758/Ayfie+Locator+Installation+Guide)) in combination with *gpt-4*, version *1106-Preview*, as the Main Model. Practically speaking, this means that one uses identical configuration for the Main Model and the High Quality Model (see instructions below).

The last listed model is for creating embeddings. Embeddings are numerical representations of words that are learned from large amounts of text data. Currently, there is only one supported model.
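
To illustrate what an embedding looks like in practice, here is a minimal sketch using the *openai* Python package against an Azure OpenAI resource. The endpoint, key and API version shown are placeholder assumptions, not values taken from this guide:

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder values; substitute the API Address, API Key and embeddings
# Deployment Name collected in the sections below.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2023-05-15",  # example GA api-version; adjust if needed
)

response = client.embeddings.create(
    model="text-embedding-ada-002",  # the embeddings Deployment Name
    input="Ayfie Personal Assistant",
)
vector = response.data[0].embedding  # a plain list of floats (1536 dimensions for ada-002)
print(len(vector), vector[:3])
```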

Each of the 3 models requires an API address and an API key. However, unless one chooses to spread the models across geographical regions, all 3 models will be reached via the same API address and API key.

Given the limited options for each setting, the configuration of the prerequisites in Azure, described in the next section, has a very predictable outcome (a sketch after the list below shows where each value is used):
- Main Model Deployment Name: **gpt-4**
- High Quality Model Deployment Name: **gpt-4** or **gpt-4-32k**
- Embeddings Model Deployment Name: **text-embedding-ada-002**
- API Address: the same one for all three
- API Key: the same one for all three
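
To show where each of these values ends up, here is a minimal sketch using the *openai* Python package; the resource name, key and API version are placeholder assumptions:

```python
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # API Address (Endpoint)
    api_key="<KEY 1 or KEY 2>",                                  # API Key
    api_version="2023-05-15",  # example GA api-version; adjust if needed
)

# The Deployment Name is passed as the "model" argument of each request.
reply = client.chat.completions.create(
    model="gpt-4",  # Main Model (and High Quality Model) Deployment Name
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```

The same endpoint, key and API version are reused for the embeddings deployment; only the *model* argument changes.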

# Configure the Azure OpenAI Models
For more information, consult [Azure OpenAI Service Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview).

Follow these steps to set up the deployments for the 3 Azure OpenAI models (a small verification sketch follows the steps):
- Log in to the Azure portal at [portal.azure.com](https://portal.azure.com/)
- Make sure the account that is logged in has at least one subscription
- Go to *Azure OpenAI*
  - Click *Create* to create a Resource
    - In *Project Details*, select *Subscription*
    - In *Project Details*, select *Resource Group*
    - In *Instance Details*, select *Region*. Not all models are available in all regions; consult [Azure OpenAI Service models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) for availability. EU country customers should for legal reasons select a region that is within the EU. To avoid any data quota conflict with a current or a future deployment of the standalone version of the Personal Assistant, it is recommended not to select the regions *Sweden Central*, *UK South* and *Canada East*.
    - In *Instance Details*, set the *Name*
    - In *Instance Details*, select *Pricing Tier*
    - Click *Next* to go to the *Network* tab
    - In *Type*, select *All networks, including internet can access this resource.*
    - Click *Next* to go to the *Tags* tab
    - Click *Next* to go to the *Review + submit* tab
    - Click *Create*
  - When the Resource has been created, select it in *Azure OpenAI*
  - Click *Keys and Endpoint* in the left menu
    - **Copy the value of *Endpoint*, it will be required later as the API Address**
    - **Copy the value of *KEY 1*, it will be required later as the API Key** (optionally *KEY 2*, both keys are valid)
  - Click *Model deployments* in the left menu
  - Click *Manage Deployments* (this will open a new portal)
  - Click *Create new deployment* to create Main model
    - Select the model (recommended *gpt-4*)
    - Select the Model Version (recommended *1106-Preview*)
    - Set the *Deployment Name* (must be same as model name)
      - **Copy the *Deployment Name*, it will be required later**
    - In *Advanced Options*, set *Tokens per Minute Rate Limit (thousands)* to maximum value.
  - Click *Create new deployment* to create High Quality model (only required if one wishes to use a different deployment for the High Quality Model than for the Main Model)
    - Select the model
    - Select the Model Version
    - Set the *Deployment Name* (must be same as model name)
      - **Copy the *Deployment Name*, it will be required later**
    - In *Advanced Options*, set *Tokens per Minute Rate Limit (thousands)* to maximum value.
  - Click *Create new deployment* to create Embeddings model
    - Select the model (must be *text-embedding-ada-002*)
    - Set the *Deployment Name* (must be same as model name)
      - **Copy the *Deployment Name*, it will be required later**
    - In *Advanced Options*, set *Tokens per Minute Rate Limit (thousands)* to maximum value
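
Once the three deployments have been created, the sketch below can serve as a quick smoke test that the copied Endpoint, API key and Deployment Names work together. It assumes the *openai* Python package; the placeholder values are hypothetical and must be replaced with the values copied in the steps above:

```python
from openai import AzureOpenAI  # pip install openai

# Replace the placeholders with the values copied in the steps above.
ENDPOINT = "https://<your-resource>.openai.azure.com/"  # API Address (Endpoint)
API_KEY = "<KEY 1>"                                     # API Key
MAIN_DEPLOYMENT = "gpt-4"                               # Main Model Deployment Name
HIGH_QUALITY_DEPLOYMENT = "gpt-4"                       # High Quality Model Deployment Name
EMBEDDINGS_DEPLOYMENT = "text-embedding-ada-002"        # Embeddings Model Deployment Name

client = AzureOpenAI(azure_endpoint=ENDPOINT, api_key=API_KEY, api_version="2023-05-15")

# The chat deployments should answer a trivial prompt.
for name in {MAIN_DEPLOYMENT, HIGH_QUALITY_DEPLOYMENT}:
    reply = client.chat.completions.create(
        model=name,
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=5,
    )
    print(name, "->", reply.choices[0].message.content)

# The embeddings deployment should return a numeric vector.
vector = client.embeddings.create(model=EMBEDDINGS_DEPLOYMENT, input="ping").data[0].embedding
print(EMBEDDINGS_DEPLOYMENT, "->", len(vector), "dimensions")
```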

...