
Webex Assistant Skills Reference Guide

This guide describes the structure of the requests sent to your Skill by the Assistant Skills service and the responses your Skill is expected to return.

Request Payload

After decrypting the message received from Assistant Skills, the request body will look as follows:

{
  "text": "hello",
  "context": {
    "orgId": "00000000-0000-0000-0000-000000000000",
    "userId": "00000000-1111-1111-1111-000000000000",
    "userType": null,
    "developerDeviceId": "00000000-2222-2222-2222-000000000000",
    "supportedDirectives": [
      "speak",
      "ui-hint",
      "clear-web-view",
      "asr-hint",
      "reply",
      "display-web-view",
      "listen",
      "sleep",
      "assistant-event"
    ]
  },
  "params": {
    "target_dialogue_state": null,
    "time_zone": "America/Los_Angeles",
    "timestamp": 1638833645,
    "language": "en",
    "locale": "en_US",
    "dynamic_resource": {},
    "allowed_intents": []
  },
  "frame": {},
  "challenge": "4f1cf832c0affcfb31afc841ca9fab2cd8243811131d37966ce2beba65d9a99d"
}
  • text: This is the Actor's query. This can sometimes be empty when the Actor asks to speak to the Skill without a query.
  • context: Contains relevant and potentially useful information about the user making the query.
    • orgId: A UUID of the organization to which the Actor is registered.
    • userId: A UUID of the Actor, if available.
    • userType: A string indicating the type of account making the request, if available.
    • developerDeviceId: A UUID of the device used to make the query.
    • supportedDirectives: A list of directives that are allowed from your Skill. More directive details are available below.
  • params: Contains information specific to MindMeld-powered applications, but some values may be useful for your Skill. Additional information can be found in the MindMeld documentation.
    • target_dialogue_state: The name of the dialogue handler that you want to reach in the next turn. One particular value, skill_intro, must be supported: if target_dialogue_state is skill_intro, the Skill should return an introductory message.
    • time_zone: The name of an IANA time zone.
    • timestamp: A valid Unix timestamp.
    • language: A valid ISO 639 two-letter code for the language used to make the query.
    • locale: A valid locale code, represented as an ISO 639-1 language code and an ISO 3166 alpha-2 country code separated by an underscore (for example, en_US).
    • dynamic_resource: A dictionary containing data used to influence the language classifiers by adding resource data for the given turn (see dynamic gazetteer documentation).
    • allowed_intents: A list of intents that you can set to force the language processor to choose from.
  • frame: A mutable object that contains information that is preserved during multiple interactions with the Skill.
  • challenge: A uniquely generated string that the Assistant Skills service includes in each request and is used to verify that the Skill was able to decrypt the payload. The challenge is then expected to be sent back in the response to the service. If the challenge is missing or incorrect, the service will return an error to Assistant NLP.
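Putting the fields above together, a Skill's request handler can be sketched as follows. This is a minimal illustration, not official Webex SDK code: `handle_request` is a hypothetical function that assumes the payload has already been decrypted, and it implements the echo behavior shown in the examples in this guide, including the required skill_intro handling and the challenge echo.

```python
# Minimal sketch of handling a decrypted Assistant Skills request.
# `handle_request` is a hypothetical helper name; the payload structure
# matches the decrypted request body shown above.

def handle_request(payload: dict) -> dict:
    text = payload.get("text", "")
    params = payload.get("params", {})
    challenge = payload["challenge"]  # must be echoed back in the response

    # skill_intro is a special dialogue state: return an introductory message.
    if params.get("target_dialogue_state") == "skill_intro":
        reply = "This is the echo skill. Say something and I will echo it back."
    elif text:
        reply = text  # echo the Actor's query back
    else:
        reply = "I didn't catch that."

    return {
        "directives": [
            {"name": "reply", "type": "view", "payload": {"text": reply}},
            {"name": "speak", "type": "action", "payload": {"text": reply}},
            {"name": "listen", "type": "action", "payload": {}},
        ],
        "challenge": challenge,
    }
```

Note that the challenge is copied from the request verbatim; returning a missing or incorrect challenge causes the service to reject the response.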
Response Payload

Next, we'll go over the expected structure of the response from your Skill. The response is a JSON object that must include directives and challenge.

  • directives: A list of directive objects that will instruct the Assistant on what actions to perform and what views to display.
  • challenge: A uniquely generated string originally sent from Assistant Skills that must be returned in order to authenticate the response.
{
  "directives": [
    {
      "name": "reply",
      "type": "view",
      "payload": {
        "text": "This is the echo skill. Say something and I will echo it back."
      }
    },
    {
      "name": "speak",
      "type": "action",
      "payload": {
        "text": "This is the echo skill. Say something and I will echo it back."
      }
    },
    {
      "name": "listen",
      "type": "action",
      "payload": {}
    }
  ],
  "challenge": "520d1c22c9c33ed169997d62da855e60145320767f49a502270fee0e7fb59ff1"
}
Assistant Directives

A directive is a single instruction, expressed in JSON, sent from the Skill or Assistant NLP to a Webex Client. Each directive either performs an action or displays a view to the Actor. The Skill is expected to send a list of directives to the Webex Client in response to each query, and the directives are executed in the order they are received.

Each individual directive JSON object has at least two fields, name and type, and an optional third, payload.

  • name: the identifier of the directive. The name is used by the Webex Client to determine which directive is being invoked.
  • type: Either view or action. This value is defined in advance by each directive.
  • payload: the specification of what the directive will do. If needed, each directive payload has an expected structure of key value pairs.

This is an example of a reply directive.

{
  "name": "reply",
  "type": "view",
  "payload": {
    "text": "Hello, World!"
  }
}

When creating a set of directives from your Skill, you must end the set with either a listen or sleep directive. This allows Webex Assistant to either handle follow-up queries or end the interaction with the user.
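One way to make this terminator rule hard to violate is a small guard that appends the appropriate directive when it is missing. This is a hedged sketch; `finalize_directives` is a hypothetical helper, not part of any Webex library.

```python
# Sketch of a guard enforcing the terminator rule: every directive set
# must end with either listen or sleep. Helper name is illustrative.

def finalize_directives(directives: list[dict], end_turn: bool = False) -> list[dict]:
    """Append listen (or sleep, when ending the turn) if the set lacks one."""
    if directives and directives[-1]["name"] in ("listen", "sleep"):
        return directives
    terminator = "sleep" if end_turn else "listen"
    return directives + [{"name": terminator, "type": "action", "payload": {}}]
```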

Assistant Directive Examples

The following are the supported directives that a Skill can send to the Webex Client:

reply

Type: view

Display a text message. The payload contains a text field.

Sample Payload
{
  "text": "Hello, World!"
}
speak

Type: action

Read a given text aloud.

Sample Payload
{
  "text": "Hello, World!"
}
listen

Type: action

Listen for a user's response.

Note: payload is not required.

sleep

Type: action

Dismiss Webex Assistant and end the interaction with the Skill. The payload contains an optional integer delay specifying how long to wait before dismissing the Assistant. The delay can be used to keep a view on screen for a duration before it is removed.

Sample Payload
{
  "delay": 0
}
ui-hint

Type: view

Display text messages containing suggested responses for the user. This text is styled slightly differently in the user interface. The payload contains a text field containing an array of hints, and an optional prompt field denoting the prompt to display.

Sample Payload
{
  "text": ["Hello", "What can you do?"],
  "prompt": "Try saying"
}
asr-hint

Type: action

Send a list of words to the Automatic Speech Recognition (ASR) service to help with speech-to-text recognition. This is intended to help bootstrap the ASR service if commonly used words for your Skill are consistently being mistranscribed. The payload contains a text field containing an array of strings.

Sample Payload
{
  "text": ["Hello", "Echo"]
}
display-web-view

Type: action

Displays the specified URL in a web view on the Webex Client. The payload contains an optional title field, which is displayed on the web view, and a required url field for the web page to be displayed. Displayed web views remain on screen until dismissed with a clear-web-view directive from your Skill or by the user interacting with the Webex Client.

Sample Payload
{
  "title": "Google",
  "url": "https://google.com"
}
clear-web-view

Type: action

Dismisses any web views displayed on Webex Client.

Note: payload is not required.
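The display-web-view and clear-web-view directives typically appear in separate turns: one response shows the page and listens, and a later response clears it and ends the interaction. The sketch below illustrates that two-turn flow with hypothetical helper names, using the directive shapes documented above.

```python
# Sketch of a two-turn web view flow. Function names are illustrative;
# directive objects follow the shapes documented in this guide.

def show_page_turn(challenge: str) -> dict:
    """First turn: display a web view, then listen for a follow-up."""
    return {
        "directives": [
            {"name": "display-web-view", "type": "action",
             "payload": {"title": "Google", "url": "https://google.com"}},
            {"name": "listen", "type": "action", "payload": {}},
        ],
        "challenge": challenge,
    }

def dismiss_turn(challenge: str) -> dict:
    """Later turn: clear the web view and end the interaction."""
    return {
        "directives": [
            {"name": "clear-web-view", "type": "action", "payload": {}},
            {"name": "sleep", "type": "action", "payload": {"delay": 0}},
        ],
        "challenge": challenge,
    }
```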

assistant-event

Type: action

A generic event sent to Webex Assistant and forwarded to the Webex Client. This directive can be used in combination with Webex Client Macros. The payload contains a required name field and an optional inner payload field, which must be an object if included.

Sample Payload
{
  "name": "test",
  "payload": { "foo": "bar" }
}