Complete chat with OpenAI

You can complete the chat via OpenAI and use the result in your bot conversation.

Step 1. Add and set up the OpenAI integration.

Navigate to the organization integrations section and add the OpenAI integration.

Step 2. Create an assistant at OpenAI here.

You can add instructions, upload static files, and test its behavior directly on the OpenAI side. It is also possible to set up automatic updates of the needed assistant using the OpenAI integration automations, learn more here.
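If you prefer to create or update the assistant programmatically instead of using the OpenAI dashboard, the OpenAI Assistants API can be used. A sketch of the request body for POST https://api.openai.com/v1/assistants (sent with the OpenAI-Beta: assistants=v2 header) might look like this; the model name and instructions are placeholders:

{
  "model": "gpt-4o",
  "name": "Support assistant",
  "instructions": "You are a helpful assistant that answers questions about our product.",
  "tools": [{ "type": "code_interpreter" }]
}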

Step 3. Add bot variable of type Chat Completion:


Select the previously configured integration and the previously created Assistant.

Set the initial and follow-up parameters. In this example, both use the same value.

Initial parameters:

{
    "thread": {
        "messages": [
            {
                "role": "user",
                "content": "{{chat_prompt}}"
            }
        ]
    }
}

Follow-up parameters:

{
    "thread": {
        "messages": [
            {
                "role": "user",
                "content": "{{chat_prompt}}"
            }
        ]
    }
}

Adjust the parameters according to the documentation. Bot variable syntax is allowed in the parameters, so they can be dynamic.
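For example (a minimal sketch, where user_name and user_question are hypothetical bot variables), several variables can be combined in a single message:

{
    "thread": {
        "messages": [
            {
                "role": "user",
                "content": "My name is {{user_name}}. {{user_question}}"
            }
        ]
    }
}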

The result of such a bot variable is the completed chat that can be used in the bot conversation, for example, in the response.

In our example, the variable will create a response from a bot based on the user input, which is represented by the chat_prompt variable.

Step 4. Add a step for the user input:

Note that the code name for this step is chat_prompt - this automatically creates a bot variable with the same name.

Step 5. Add a step response with the chat completion:


This response uses the completed_chat_assistant variable created in Step 3.
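For reference, the response text of such a step can be as simple as the variable itself:

{{completed_chat_assistant}}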

Streaming

Because it can take the AI some time to generate a response, you can improve the user experience by using the dedicated Variable Stream bot chat text type. It displays the AI response in real time as it is generated.

Function calling

OpenAI allows function calling as a helper tool for assistant-based chat completion. This can be integrated with the bot variables: using scripting, you can, for example, call external services and use the returned content in the chat completion. You can add one or more functions to the OpenAI assistant:

{
  "name": "getCurrentWeather",
  "description": "Determine weather in my location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state e.g. San Francisco, CA"
      }
    },
    "required": [
      "location"
    ]
  }
}
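For instance, a second (hypothetical) function that takes no parameters could be defined alongside it:

{
  "name": "getUserName",
  "description": "Get the name of the current user",
  "parameters": {
    "type": "object",
    "properties": {}
  }
}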

Enable the function script in the chat completion variable settings:

function (name, arguments, getVariableValue, callback) {
  switch (name) {
    case "getCurrentWeather":
      // Build the weather API request from the location argument provided by the assistant
      var url = new quriobot.URL("http://api.weatherapi.com/v1/current.json", true);
      url.query.q = arguments.location;
      url.query.key = "YOUR_API_KEY";
      // Call the external service and pass the textual weather condition back to the thread run
      quriobot.ajax(url.toString(), function (responseText) {
        var response = JSON.parse(responseText);
        var weather = response?.current?.condition?.text;
        callback(weather);
      });
      break;
    default:
      // No matching function: return null
      callback(null);
  }
}

The function script receives the name and the arguments of the function whose results are requested. It is called each time the thread run asks for function-calling results to be submitted. As a helper, there is a getVariableValue(name, callback) function that lets you read bot variables if they are needed for the function result. In our example, we check the requested function name and, if it is getCurrentWeather, make an AJAX call to the weather API service and return the textual representation of the weather condition in the requested location; the assistant then includes this information in its reply.

You can also have multiple functions processed by the same script by having logic branches depending on the provided name argument.
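As a sketch of both points (branching on the provided name and reading a bot variable via getVariableValue), a script that handles the hypothetical getUserName function from above alongside getCurrentWeather could look like this; the user_name variable is just an example:

function (name, arguments, getVariableValue, callback) {
  switch (name) {
    case "getCurrentWeather":
      // ... same as in the example above ...
      break;
    case "getUserName":
      // Read the user_name bot variable and return its value as the function result
      getVariableValue("user_name", function (value) {
        callback(value);
      });
      break;
    default:
      // Unknown function: return null so the run can continue
      callback(null);
  }
}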

File uploads

You can use files uploaded via File upload for chat completions. Currently, the following use cases are supported:

  • Assistant completions that use Code Interpreter. To use the file(s), add the file_ids parameter and use the {{variable.value}} syntax:
{
    "thread": {
        "messages": [
            {
                "role": "user",
                "content": "{{hi_and_welcome}}",
                "file_ids": "{{upload.value}}"
            }
        ]
    }
}
  • Non-Assistant chat completions with Vision. To use the uploaded image for vision, use the image_url message type and the {{variable.value}} syntax:
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "{{hi_and_welcome}}"},
        {
          "type": "image_url",
          "image_url": {
            "url": "{{upload.value}}"
          }
        }
      ]
    }
  ]
}
