We already have intelligent assistants, but how about voice-activated cars and kitchen appliances? This is the future. With Amazon Lex and A.I. technologies such as natural language understanding and automatic speech recognition, we will be able to do many things like this in the future. But how did Amazon do it? Senior Product Manager Sudheera Vanguri explains.

 

 

Sudheera Vanguri, Senior Product Manager at Amazon Web Services, works closely with A.I. on an Amazon campus that focuses on data and databases. She is responsible for AWS consoles such as Amazon Lex, Amazon RDS, and Amazon DynamoDB. She holds a bachelor’s degree in Electrical Engineering and a master’s degree in Business Administration.

Amazon Web Services “offers a collection of cloud computing services that make up an on-demand computing platform.” The point is that a developer can build, for example, a chatbot without needing to know much about the underlying technology.

One of these services is Amazon Lex. Amazon has put the intelligence of Alexa into the cloud so that any developer can take it and plug it into their device. Here are four key points from Sudheera’s presentation on Intro to A.I.

 

A.I. is here with us now

Artificial Intelligence isn’t just some utopian thing we have seen in the movies. It’s actually all around us, just in much humbler forms. When you go to sites like Amazon or Netflix, you see recommendation engines. Those are one form of personalization tool that uses A.I. Others include retail product catalogs and Amazon Go.

Amazon Go was created to help people shop faster. It’s an application based on computer vision, and its purpose is no-line checkout. With it, you can walk into a shop, pick up your things, get the receipt right away without waiting, and walk out. Shopping can be as easy as that.

 


 

A.I. is not machine learning or deep learning

A.I. doesn’t mean the same thing as machine learning or deep learning. Rather, machine learning is a subset of A.I., and deep learning is a subset of machine learning. Machine learning consists mainly of algorithms and data, while deep learning does similar things with much larger amounts of data. Both of these are narrower, more data-driven structures than A.I. as a whole.

To understand exactly what machine learning is, think about how you learn to play an instrument. At the beginning of the learning process, you learn the finger patterns. Once you know those, you can start changing the tempo and melody. This is similar to the process we are trying to teach machines: by training on data, they develop intelligence.
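As a rough illustration of “algorithms plus data” (a minimal sketch, not code from the talk; the practice-hours data below is made up), here is how training and prediction look with the scikit-learn library:

```python
# Minimal machine-learning sketch: an algorithm learns from labeled examples.
# The data is invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Training data: hours practiced per week -> passed the recital (1) or not (0)
hours_practiced = [[1], [2], [3], [8], [10], [12]]
passed_recital = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(hours_practiced, passed_recital)  # "training" = fitting the algorithm to data

# The trained model can now generalize to an example it has never seen.
print(model.predict([[9]]))  # likely [1] -- predicted to pass
```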

To look further at deep learning, consider what happens in the brain when it recognizes a face. Mainly it’s about layers of neurons that go through different stages, from recognizing that a figure is a face to actually knowing that it’s person A’s face. Deep learning is a layered network of algorithms with which we try to simulate this.
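As a very rough sketch of that layered idea (illustrative only, with random weights rather than anything trained), data can be pictured as flowing through a stack of simple transformations:

```python
# Toy "layered network": each layer is a weighted sum plus a non-linearity.
# The weights are random, so this shows the structure, not a real face recognizer.
import numpy as np

def layer(inputs, weights):
    return np.maximum(0, inputs @ weights)  # ReLU non-linearity

rng = np.random.default_rng(0)
pixels = rng.random(16)                       # pretend this is a tiny image

hidden1 = layer(pixels, rng.random((16, 8)))  # early layers: edges and shapes
hidden2 = layer(hidden1, rng.random((8, 4)))  # later layers: "is this a face?"
output = layer(hidden2, rng.random((4, 2)))   # final layer: "whose face is it?"
print(output)
```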

 

Voice recognition lets us interact with machines

Now we get to the main subject: A.I. itself. A.I. puts knowledge around the data, and it is therefore more cognitive than the other two. It’s a field of computer science aiming to simulate human intelligence artificially. It has the cognitive ability to understand languages and communication patterns and to process information. An A.I. machine can also recognize voices, tones of voice, faces, emotions, and even when someone smiles.

The latest product with this kind of voice recognition technology inside it is the Amazon Echo. It has Amazon Alexa inside it, which uses NLU (natural language understanding) and ASR (automatic speech recognition). This advanced machine takes commands and understands human speech, speaking habits, and even sarcasm.
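The same kind of natural language understanding is what Amazon Lex exposes to developers. As a minimal sketch (assuming the AWS SDK for Python and a Lex bot that already exists; the bot name, alias, and user ID below are placeholders), sending a user’s text to a bot looks roughly like this:

```python
# Hedged sketch: send one line of user text to an existing Amazon Lex bot and
# read back what the service understood. Bot name, alias, and user ID are
# placeholders -- substitute a bot you have actually built in the Lex console.
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="OrderFlowersBot",   # placeholder bot name
    botAlias="prod",             # placeholder alias
    userId="demo-user-1",        # any stable ID for the conversation
    inputText="I would like to order three roses",
)

# Lex returns the intent it recognized and a reply to show the user.
print(response.get("intentName"), "-", response.get("message"))
```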

 


 

A.I. in the future

As scary as artificial intelligence sounds, it brings us lots of benefits. With Amazon Lex, we will be able to voice-activate cars, among other things. It can also help disabled people live more normal lives: instead of being physically unable to, say, open a door, they can voice-activate it.

Now let’s think about something we use every day. A kitchen, for example. How helpful – and cool – would it be to have voice-activated kitchen appliances? Artificial intelligence can also help companies decide what direction to take in the future based on data from the past.

 

Questions from the audience

 

How do you know that enough training data has gone into the system?

It’s a continuous training model, and a lot of testing goes into deciding when the training data is sufficient for the model to behave in a way that’s close to how a human would behave. I didn’t build Alexa, so I don’t know how many years of data were provided to it, but with the advent of big data, you can expect that billions of records were provided.
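One simple way to picture “sufficient” training data (a sketch of the general idea, not how Alexa is actually evaluated) is to train on larger and larger slices of data and watch when held-out accuracy stops improving:

```python
# Sketch: judge data sufficiency by training on growing subsets and checking
# accuracy on held-out data. Uses a small public dataset purely for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (20, 40, 80, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, "training examples -> held-out accuracy:", round(model.score(X_test, y_test), 3))

# When adding more data no longer improves held-out accuracy, the data is
# "sufficient" for that model and task.
```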

Think about this: all of the services exposed on the AWS cloud have been dogfooded on Amazon retail. Think of the recommendation engines you see on Amazon. All of those work on billions of product catalog entries around the world, and once that is sufficient and has been tested enough, it’s exposed on AWS for general consumption.

Amazon Lex has been tested on Amazon Alexa, and now we expose it to regular developers. What developers can do with this service is provide the training data and then continuously monitor and keep improving the model without having to write a single line of code. All the code is abstracted away for them, so it’s a fully managed service.
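As a minimal sketch of what “providing training data” can mean in practice (assuming the original Lex model-building API via the AWS SDK for Python; the intent name and utterances are placeholders, not examples from the talk), the developer supplies sample phrases and the managed service builds the language model:

```python
# Hedged sketch: the developer supplies example utterances (the training data)
# for an intent; Amazon Lex builds and hosts the language model behind the scenes.
# The intent name and phrases are placeholders.
import boto3

lex_models = boto3.client("lex-models", region_name="us-east-1")

lex_models.put_intent(
    name="BookHotel",
    sampleUtterances=[                # the "training data"
        "Book a hotel",
        "I want to make a hotel reservation",
        "Reserve a room for two nights",
    ],
    fulfillmentActivity={"type": "ReturnIntent"},
)

# From here the developer can watch how real users phrase requests and keep
# adding utterances to improve the model -- no model code required.
```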

 


 

What’s your experience with customers’ concerns about data privacy?

Data privacy goes hand in hand with advanced technology. In the case of voice activation, and especially smart home and IoT devices, the amount of data that goes into these cloud services is huge. There is a lot of sensitive customer data.

One example was when a burglar was caught on Amazon Alexa, and that produced evidence in court. That was surprising. Especially surprising to me was that every single thing a customer speaks into the Echo is available to us. It’s all done in order to improve the model. As you speak a command, it’s converted into text in the background, but that voice sample is also available to us.

We’ve not received feedback from people saying that they’re concerned about it. We’ve mostly had feedback about how to use it or how to empower the product with A.I., and not so much about privacy.

 

How do you balance privacy with personalization?

Essentially, it’s about presenting to the user what is useful to them without making it creepy. Think of an email inbox: if you were shown ads related to all of your personal emails, that would cross the line. As a Product Manager, you have to think about the customer and identify where to draw the line.

You can use some of the methodologies that Product Managers use, for example A/B testing and usability testing, to figure out where to draw the line and to show that particular user only what they’re interested in, without sharing it with everyone. I think that is taken care of very well in the larger companies. That is where I would draw the line as a Product Manager.
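For the A/B testing part, a minimal sketch (with made-up numbers) of comparing a personalized experience against a control to see whether the difference is real might look like this:

```python
# Rough A/B test sketch: group A sees a personalized experience, group B does not;
# compare click-through rates with a two-proportion z-test. Numbers are invented.
from statistics import NormalDist

clicks_a, users_a = 120, 1000   # personalized experience
clicks_b, users_b = 90, 1000    # non-personalized control

p_a, p_b = clicks_a / users_a, clicks_b / users_b
p_pool = (clicks_a + clicks_b) / (users_a + users_b)
se = (p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b)) ** 0.5
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"uplift: {p_a - p_b:.1%}, z = {z:.2f}, p-value = {p_value:.3f}")
# A small p-value suggests the difference is real and not just noise.
```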

 

So what do we learn from this? That in the future cars will move with voice recognition, the oven will turn on with just a few words, and that we shouldn’t commit crimes if there’s an Amazon Echo in the room. It also means there will be more opportunities for Product Managers.

 

Have any comments? Tweet us @ProductSchool

We teach product management courses in San Francisco, Santa Clara, Los Angeles, and New York. To learn more about our upcoming courses and how to apply click over to our course page.