Alexa Slot Value Synonyms

In the previous articles we built a Hello World Alexa skill. Like any other hello world program, the skill does only one job: we can ask a single question and receive a single answer back.

Although you can build a skill using that kind of interaction, sooner or later you will feel the need to get some custom data from a user. For example, you may want the user to say a number or the name of a city, and have the skill behave differently based on the answer.

A slot type is a list of values that Amazon Lex uses to train the machine learning model to recognize values for a slot. For example, you can define a slot type called 'Genres.' Each value in the slot type is the name of a genre: 'comedy,' 'adventure,' 'documentary,' and so on. You can also define synonyms for a slot type value. Slot values are used as training data, and the slot is resolved to the value provided by the user if it is similar to the slot values and synonyms. This is the default behavior, and Amazon Lex maintains a list of possible resolutions for a slot.

Let’s see how we can do that.

Slots

To capture a user’s input we need to use so-called slots in intent utterances. To do that, we wrap a slot name in curly braces ({}).

For example, we would like to ask the user to provide a number, so we define an “AnswerIntent” with the following utterances:

  • Number {numberAnswer}
  • The number is {numberAnswer}
  • {numberAnswer}

Now, if the user says “The number is five hundred” we would like to get the number and use it somehow in our code.

At this point we have defined the slot, but we also need to choose a “Slot Type” for it. The slot type helps Alexa understand what the user is saying.

That slot type can be either custom or built-in.

Built-in slot types

In our example we need a number, so we can use the built-in “AMAZON.NUMBER” slot type.

That is how the “AnswerIntent” looks in the skill builder.
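In the JSON view of the interaction model, the intent would look roughly like this (a sketch; the intent and slot names match the example above):

```json
{
  "name": "AnswerIntent",
  "slots": [
    {
      "name": "numberAnswer",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "number {numberAnswer}",
    "the number is {numberAnswer}",
    "{numberAnswer}"
  ]
}
```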

Besides that type, the skill builder provides a vast number of other types, and the list keeps growing.

For example, there are slot types for working with dates and times, as well as some other interesting ones, such as slot types containing lists of airports or animals.

We can use a built-in type and extend it with additional values.

Here is just a small portion of them.
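  • AMAZON.NUMBER
  • AMAZON.DATE
  • AMAZON.TIME
  • AMAZON.DURATION
  • AMAZON.Airport
  • AMAZON.Animal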

Custom slot types

Let’s say you want to get a more specific response from your users, but there is no built-in slot type for your needs. Then you can build your own custom slot type and provide a list of possible values.

Go ahead and, in the skill builder, click the “Add” slot type link and choose a name for your custom slot type.

Then you can specify all the possible values for that type, and even add synonyms if you need them.
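In the JSON view of the interaction model, a custom slot type with synonyms looks roughly like this (the type name and values below are only an illustration, following the genres example from earlier):

```json
{
  "name": "GenreType",
  "values": [
    {
      "name": {
        "value": "comedy",
        "synonyms": ["funny movie", "something funny"]
      }
    },
    {
      "name": {
        "value": "documentary",
        "synonyms": ["docu", "non-fiction film"]
      }
    }
  ]
}
```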

After that, you are free to use the slot type in your slots. Create a new slot in your sample utterances and attach the custom type to it.

Getting values from the slots

We have figured out how to define slots in the skill builder. Half of the job is done.

Now it’s time to access the response from our codebase.

We can achieve that in two steps.

First, we need to tell Alexa to start listening for the user’s input. To do that we need an intent handler with a reprompt call in it.
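A minimal sketch of such a handler with the ASK SDK for Node.js (the intent name and speech text are assumptions for this example):

```javascript
// Hypothetical intent that asks the user for a number.
const AskForNumberIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'AskForNumberIntent';
  },
  handle(handlerInput) {
    const speechText = 'Please tell me a number.';
    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(speechText) // keeps the session open and waits for the user's answer
      .getResponse();
  },
};
```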

At this moment Alexa waits for the user’s input. If the user says something matching the sample utterances from the “AnswerIntent” example above, Alexa triggers the “AnswerIntentHandler”, where we can fetch the slot value.

The slots live deep inside the handlerInput.requestEnvelope.request.intent.slots object. We can fetch a slot by its name with slots['<name-of-the-slot>'] and then access its value with slots['<name-of-the-slot>'].value.

Let’s see it in an example:
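Something along these lines (a sketch; the response wording is an assumption, and the slot name matches the utterances above):

```javascript
const AnswerIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'AnswerIntent';
  },
  handle(handlerInput) {
    // Fetch the slots object and read the value of the numberAnswer slot.
    const slots = handlerInput.requestEnvelope.request.intent.slots;
    const numberAnswer = slots['numberAnswer'].value;

    return handlerInput.responseBuilder
      .speak(`The number is ${numberAnswer}`)
      .getResponse();
  },
};
```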

Here we fetch the slots object into a slots constant. Then we get the value of the numberAnswer slot.

That’s it. We’ve got it.

Now, to demonstrate that Alexa understood the user correctly, we tell her to say the number back to the user.

This is how the dialog might look.
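An illustrative exchange (the invocation name is a placeholder):

```
User:  Alexa, open <your skill>
Alexa: Please tell me a number.
User:  The number is five hundred
Alexa: The number is 500
```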

Wrapping up

We have a new tool in the toolbelt. Now, when we need to get some input from the user, we can use slots. We have learned that there are two kinds of slot types: custom and built-in. We can use both to build better Alexa skills.

You can find a complete example on GitHub.

Recently I published my first skill for Amazon’s Alexa voice service, called BART Control. This skill used a variety of technologies and public APIs to become useful. Specifically, I developed the skill with Node.js and the AWS Lambda service. However, that is only a high-level view of what was done to make the Amazon Alexa skill possible. What must be done to get a functional skill that works on Amazon Alexa powered devices?

We’re going to see how to create a simple Amazon Alexa skill using Node.js and Lambda that works on various Alexa powered devices such as the Amazon Echo.

To be clear, I will not be showing you how to create the BART Control skill that I released, as it is closed source. Instead we’ll be working on an even simpler project, just to get your feet wet. We’ll explore more complicated skills in the future. The skill we’ll create will tell us something interesting upon request.

The Requirements

While we don’t need an Amazon Alexa powered device such as an Amazon Echo to make this tutorial a success, it certainly is nice to have. My Amazon Echo is great!

Here are the few requirements that you must satisfy before continuing:

  • Node.js 4.0 or higher
  • An AWS account

Lambda is a part of AWS. While you don’t need Lambda to create a skill, Amazon has made it very convenient to use for this purpose. There is a free tier to AWS Lambda, so it should be relatively cheap, if not free. Lambda supports a variety of languages, but we’ll be using Node.js.

Building a Simple Node.js Amazon Alexa Skill

We need to create a new Node.js project. As a Node.js developer, you’re probably most familiar with the Express framework, as it is a common choice amongst developers. Lambda does not use Express, but instead uses its own design.

Before we get too far ahead of ourselves, create a new directory somewhere on your computer. I’m calling mine alexa-skill-the-polyglot and putting it on my desktop. Inside this project directory we need to create the following directories and files:
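One way to do it from the Terminal (this layout matches the src/index.js and src/data.js files used below):

```
mkdir -p alexa-skill-the-polyglot/src
cd alexa-skill-the-polyglot
touch src/index.js src/data.js
```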

If your Command Prompt (Windows) or Terminal (Mac and Linux) doesn’t have the mkdir and touch commands, go ahead and create the directories and files manually.

We can’t start developing the Alexa skill yet. First we need to download the Alexa SDK for Node.js. To do this, execute the following from your Terminal or Command Prompt:
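At the time this tutorial applies to, the SDK was published on npm as alexa-sdk:

```
npm install alexa-sdk --save
```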

The SDK makes development incredibly easy in comparison to what it was previously.

Before we start coding the handler file for our Lambda function, let’s come up with a dataset to be used. Inside the src/data.js file, add the following:
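Something along these lines works; the fact strings below are placeholders, matched to the two-and-three response counts described next:

```javascript
// src/data.js
// Two possible responses for Java, three for Ionic Framework.
module.exports = {
    "java": [
        "Java is a statically typed, object oriented programming language.",
        "Java was originally developed at Sun Microsystems."
    ],
    "ionic framework": [
        "Ionic Framework lets you build mobile apps with web technologies.",
        "Ionic Framework applications run on top of Apache Cordova.",
        "Ionic Framework uses Angular."
    ]
};
```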

The above dataset is very simple. We will have two different scenarios. If the user asks about Java we have two possible responses. If the user asks about Ionic Framework, we have three possible responses. The goal here is to randomize these responses based on the technology the user requests information about.

Now we can take a look at the core logic file.

Open the project’s src/index.js file and include the following code. Don’t worry, we’re going to break everything down after.
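What follows is a sketch using the alexa-sdk package; the intent names (AboutIntent, LanguageIntent), the slot name (Language), and the response wording are assumptions based on the walkthrough below:

```javascript
// src/index.js
var Alexa = require('alexa-sdk');
var data = require('./data');

var handlers = {

    // Triggered when the skill is opened: "Alexa, open The Polyglot"
    'LaunchRequest': function () {
        this.emit(':ask',
            'Ask me to tell you something about Java or Ionic Framework.',
            'Try asking me about Java or Ionic Framework.');
    },

    // Responds with information about the author of the skill
    'AboutIntent': function () {
        var message = 'The Polyglot was created by Nic Raboy.';
        this.emit(':tellWithCard', message, 'About', message);
    },

    // Responds with a random fact about the requested technology
    'LanguageIntent': function () {
        var language = this.event.request.intent.slots.Language.value;
        if (language && data[language.toLowerCase()]) {
            var facts = data[language.toLowerCase()];
            var fact = facts[Math.floor(Math.random() * facts.length)];
            this.emit(':tellWithCard', fact, language, fact);
        } else {
            this.emit(':tell', 'I do not know anything about that technology.');
        }
    }

};

exports.handler = function (event, context, callback) {
    var alexa = Alexa.handler(event, context);
    alexa.appId = 'YOUR_ALEXA_APPLICATION_ID';
    alexa.registerHandlers(handlers);
    alexa.execute();
};
```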

So what exactly is happening in the above code?

The first thing we’re doing is including the Alexa Skills Kit and the dataset that we plan to use within our application. The handlers object is where all the magic happens.

Alexa has a few different lifecycle events. You can manage when a session is started, when a session ends, and when the skill is launched. The LaunchRequest event is when a skill is specifically opened. For example:

Alexa, open The Polyglot

The above command will open the skill The Polyglot and trigger the LaunchRequest event. The other lifecycle events trigger based on usage. For example, a session may not end immediately. There are scenarios where Alexa may ask for more information and keep the session open until a response is given.

In any case, our simple skill will not make use of the session events.

In our LaunchRequest event, we tell the user what they can do and how they can get help. The ask function will keep the session open until a response is given. Based on the request given, a different set of commands will be executed further down in our code.

This brings us to the other handlers in our code.

If you’re not too familiar with how Lambda does things, you have intents that perform actions. You can have as many as you want, but they are triggered based on what Alexa detects in your phrases.

For example:
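Here is the AboutIntent handler from the sketch above:

```javascript
'AboutIntent': function () {
    var message = 'The Polyglot was created by Nic Raboy.';
    this.emit(':tellWithCard', message, 'About', message);
},
```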

If the AboutIntent is triggered, Alexa will respond with a card in the mobile application as well as a spoken answer. We don’t know what triggers the AboutIntent yet, but we know that it is a possible option.
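Next is the LanguageIntent handler, again from the sketch above:

```javascript
'LanguageIntent': function () {
    var language = this.event.request.intent.slots.Language.value;
    if (language && data[language.toLowerCase()]) {
        var facts = data[language.toLowerCase()];
        var fact = facts[Math.floor(Math.random() * facts.length)];
        this.emit(':tellWithCard', fact, language, fact);
    } else {
        this.emit(':tell', 'I do not know anything about that technology.');
    }
},
```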

The intent above is a bit more complicated. In the above intent we are expecting a parameter to be passed. These parameters are known as slot values and they are more variable to the user’s request.

If the user provides java as the parameter, the randomization function will get a Java response from our data file. This applies to ionic framework as well. If neither was provided, we default to some kind of error response.

In a production scenario you probably want to do a little better than just have two possible parameter options. For example, what if the user says ionic rather than ionic framework? Or what happens if Alexa interprets the user input as tonic rather than ionic? These are scenarios that you have to account for.

This brings us to the handler function that Lambda uses:
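From the same sketch:

```javascript
exports.handler = function (event, context, callback) {
    var alexa = Alexa.handler(event, context);
    alexa.appId = 'YOUR_ALEXA_APPLICATION_ID';
    alexa.registerHandlers(handlers);
    alexa.execute();
};
```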

In this function we initialize everything. We define the application id, register all the handlers we just created, and execute them. Lambda recognizes this handler function as index.handler because our file is called index.js.

Let’s come up with those phrases that can trigger the intents in our application.

Creating a List of Sample Phrases Called Utterances

Amazon recommends you have as many phrases as you can possibly think up. During the deployment process to the Alexa Skill Store and AWS Lambda, you’ll need an utterances list.

For our example project, we might have the following utterances for the AboutIntent that we had created:
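For example, something like the following (the exact phrases are only illustrations):

```
AboutIntent who created this skill
AboutIntent who made this skill
AboutIntent who is the author of this skill
AboutIntent tell me about the author
```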

I listed four possible phrases, but I’m sure you can imagine that there are so many more possibilities. Try to think of everything the user might ask in order to trigger the AboutIntent code.

Things are a bit different when it comes to our other intent, LanguageIntent, because there is an optional parameter. Utterances for this intent might look like the following:
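Again, illustrative phrases only:

```
LanguageIntent tell me about {Language}
LanguageIntent tell me something about {Language}
LanguageIntent what do you know about {Language}
LanguageIntent teach me something about {Language}
```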

Notice my use of the {Language} placeholder. Alexa will fill in the gap and treat whatever falls into that slot as a parameter. That same parameter name will be used in the JavaScript code.

Again, it is very important to come up with every possible phrase. Not coming up with enough phrase possibilities will leave a poor user experience.

Can you believe the development portion is done? Now we can focus on the deployment of our Amazon Alexa skill.

Deploying the Skill to AWS Lambda

Deployment is a two part process. Not only do we need to deploy to AWS Lambda, but we also need to deploy to the Amazon Alexa Skill store.

Starting with the AWS Lambda portion, log into your AWS account and choose Lambda.

At this point we work towards creating a new Lambda function. As of right now, it is important that you choose Northern Virginia as your region. It is the only region that supports Alexa with Lambda. With that said, let’s go through the process.

Choose Create a Lambda function, but don’t select a blueprint from the list. Go ahead and continue. Choose Alexa Skills Kit as your Lambda trigger and proceed ahead.

On the next screen you’ll want to give your function a name and select to upload a ZIP archive of the project. The ZIP that you upload should only contain the two files that we created as well as the node_modules directory and nothing else. We also want to set the role to lambda_basic_execution.

After you upload we can proceed to linking the Lambda function to an Alexa skill.

To create an Alexa skill you’ll want to log into the Amazon Developer Dashboard, which is not part of AWS. It is part of the Amazon App Store for mobile applications.

Once you are signed in, you’ll want to select Alexa from the tab list followed by Alexa Skills Kit.

From this area you’ll want to choose Add a New Skill and start the process. On the Skill Information page you’ll be able to obtain an application id after you save. This is the id that should be used in your application code.

In the Interaction Model we need to define an intent schema. For our sample application, it will look something like this:
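A schema along these lines matches the intents and slot used in the code; the custom slot type name LANGUAGES is an assumption:

```json
{
  "intents": [
    {
      "intent": "AboutIntent"
    },
    {
      "intent": "LanguageIntent",
      "slots": [
        {
          "name": "Language",
          "type": "LANGUAGES"
        }
      ]
    }
  ]
}
```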

The slot type for our LanguageIntent is a list that we must define. Create a custom slot type and include the following possible entries:
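These match the keys used in the data file:

```
java
ionic framework
```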

Go ahead and paste the sample utterances that we had created previously.

In the Configuration section, use the ARN from the Lambda function that we created. It can be found in the AWS dashboard. Go ahead and fill out all other sections to the best of your ability.

Conclusion

You just saw how to create a very simple skill for Amazon’s Alexa voice assistant. While our skill only had two possible intent actions, one of the intents had parameters while the other did not. This skill was developed with Node.js and hosted on AWS Lambda. The user is able to ask about the author of the skill as well as information about Java or Ionic Framework.

Nic Raboy

Nic Raboy is an advocate of modern web and mobile development technologies. He has experience in Java, JavaScript, Golang and a variety of frameworks such as Angular, NativeScript, and Apache Cordova. Nic writes about his development experiences related to making web and mobile development easier to understand.
