Voice and AI event [VIDEO] and notes

April 28, 2017

Having recently won a Marriott Hotels accelerator with our new voice recognition product ‘Dazzle’ (watch this space!), we hosted an event last month to share our experience of building voice user interfaces (VUIs) and to discuss how companies could begin to think about their user experience and interactions with voice as an interface.

We invited a targeted group of forward-thinking practitioners from London’s travel and tourism scene and held the event in partnership with visitlondon.com in their offices overlooking the Tower of London and the Thames. See some video highlights below and read on for the full story.

Our CCO, Charlie Cadbury, opened the session by musing over a world where your voice assistant would greet you with a joke. He explained what had led up to the voice interface becoming popular, why now is such an important moment, and why he believes uptake is likely to be rapid.

Charlie was followed by David McKay from VisitLondon.com, who shared his insights into how VisitLondon.com is looking to use voice technology to enhance tourists’ London experience. We had developed a proof-of-concept demonstration showing how a voice interface could work for visitlondon.com, and went on to demonstrate discovering and booking London shows using Alexa, all through the voice interface.

As ever with live demos, the odd technical hitch proved that technology still has some way to go in adapting to humans’ varying tone, pitch, intonation and all the other ways we interact with each other. This was the point which Rupert Redington and Harry Harold picked up on in the next section. The duo led the bulk of the rest of the afternoon: a workshop which saw our 30 or so attendees split into groups to learn how to create conversational interfaces, a set of skills quite different from coding traditional, visual, digital channels.

The core message was that three important features should be present when designing voice interactions. The ‘bot’ should be conversational: not command-and-control, but taking its turn to speak, with context for the situation. It should be resilient: graceful at failure, asking again, and showing that it is listening. Finally, it should be characterful: trustworthy and as useful as possible, giving more with a friendly tone.
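To make the three traits concrete, here is a minimal, hypothetical sketch (in Python, not any production assistant SDK) of a single conversational turn that tries to embody them: it keeps context, fails gracefully while showing it was listening, and replies in a friendly voice. The intent phrases and replies are made up for illustration.

```python
def handle_turn(utterance, known_intents, context):
    """Return a reply that aims to be conversational, resilient and characterful.

    known_intents maps a trigger phrase to a friendly reply;
    context is a dict carried between turns (the conversational part).
    """
    normalised = utterance.strip().lower()
    for phrase, reply in known_intents.items():
        if phrase in normalised:
            context["last_intent"] = phrase  # remember where we are in the conversation
            return reply
    # Resilient: fail gracefully, show we were listening, and invite another go.
    return (f"Sorry, I didn't quite catch that. I heard '{utterance.strip()}' - "
            "could you say it another way?")

# Illustrative usage
intents = {"book a show": "Great - which night would you like to go?"}
context = {}
print(handle_turn("I'd like to book a show", intents, context))
print(handle_turn("mumble mumble", intents, context))
```

The fallback line is doing most of the “resilient” work: echoing back what was heard signals attention, and the open question hands the turn back to the user rather than dead-ending.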

For the exercise, each group chose an existing or hypothetical business and put forward different scenarios for the technology to deal with. Classic user personas were dreamed up to give personality to customers, and possible users were grouped. It was important to remember that this is not yet intelligent software: it still matches questions against responses in its database and can’t yet react intelligently as AI might allow it to in the future, something we will certainly be keeping an eye on.

The final scenarios were presented, and a key question raised was how to keep adding value: is the sky the limit? For example, if as a guest at a hotel you booked a restaurant for dinner using Alexa, should Alexa then offer to book your taxi, or recommend a cocktail bar for afterwards, or suggest activities? Is this something brands could, or should, take advantage of in the future?

This was a hugely exciting event for us, one which showed both the thirst for knowledge and how far voice recognition has come. With the big players fighting it out for market dominance, stay tuned to developments at LOLA Tech as we continue to assess the market and develop our own software. Finally, a huge thanks to all those who came.

We’d be delighted to give you a demonstration of some of our work and share our learning to date, so please get in touch to arrange.