The 2019 NEXT pre-conference series is giving listeners an inside look into companies such as IBM, Voice Metrics, Ipsos, and Pulse Labs. Join insight leaders on June 13–14 in Chicago for NEXT, where you can discover how technology and innovation are changing the market research industry. In this episode, Jamin Brazil interviews Dylan Zwick, Chief Product Officer at Pulse Labs.

Find Dylan Online:

LinkedIn

Website: www.pulselabs.ai


[00:01]

Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research podcast. This is a special episode connected to the Insights Association’s NEXT conference in Chicago, coming up this June 13th and 14th. My guest today is Dylan Zwick. Dylan, I said your last name right?

[00:22]

That was correct. Yeah, Dylan Zwick. I’m always dead last in the alphabetical order.

[00:27]

That’s funny. I was always first in photos because I am 5’8’’, so we have that opposite positioning. Dylan is the co-founder and Chief Product Officer of Pulse Labs. Pulse Labs is a solution that enables users to launch and gather consumer opinions via voice devices such as Alexa and Google Home. Dylan, thanks for being on the Happy Market Research podcast today!

[00:49]

A pleasure to be here. Thank you so much for having me.

[00:53]

You are speaking at this year’s NEXT event on voice. When did you first realize that voice was important?

[00:57]

So I first realized that voice was going to be big back in 2016 when I bought my first Echo. I started playing around with Alexa and realized that what had been the dream of science fiction for decades, you know, the ability to speak and actually have a conversation with a computer, was becoming science fact; it was becoming reality. And so I played around with building my own Alexa applications and started exploring the tools that were out there for developers and designers of Alexa applications, and for voice applications more generally, and realized that this was going to be a huge space and also that I really wanted to be a part of it. So that’s what got me initially involved.

[01:54]

Yeah, I mean, Alexa in and of itself is really interesting. One of the things that I think is… If you pull back, YouTube right now, I forget what the data is, something like 60% of the Internet is there. It’s a massive amount. And if you look at the bet that Google placed when they did that acquisition, they consolidated the different product lines into a single thing, and then they centered the KPI on one point of focus, which was the number of daily videos uploaded. And that created so much focus from an R&D perspective that that was all anybody cared about. It wasn’t predicated on revenue or eyeballs or anything like that. That was it. And then subsequently, of course, that was the tail that wagged the dog. Amazon is actually doing the exact same thing with respect to Alexa. I mean, my kids… My 12-year-old can create an Alexa skill. It is crazy how easy and accessible they have made the development side of this.

[02:52]

Yeah, that’s been a huge focus for Amazon and the Alexa team: to open up as many tools as they can for developing applications, or, as they call them, skills, on Alexa, and to encourage as many independent developers as possible to build skills there. So you have a ton of skills that have just been built by independent developers, and then also a bunch of skills that have been built by brands or professional agencies. And there are even companies out there that are focused exclusively on building Alexa skills. And then, yeah, as you mentioned, they’re also very interested in providing tools that make this as easy as possible. So you even have a Blueprints tool that essentially lets you quickly create a standard but personalized skill without the need for any programming background at all. I focused on Alexa in what I just said, but Google is also pursuing a similar strategy, in that you can also build applications on Google Assistant. They’re called actions, and they’re really trying to build out, expand, and encourage that ecosystem as well. And, to be honest, Bixby and Cortana are also very interested in that. So all the major voice players are really trying to provide a platform for as many content creators to participate on as they can.
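For a sense of how small the developer-facing surface is when building a custom skill, here is a minimal sketch using Amazon’s ASK SDK for Python. The intent name, invocation flow, and spoken text are made up for illustration; this is not a skill discussed on the show.

```python
# A minimal custom Alexa skill using Amazon's ASK SDK for Python (ask-sdk-core).
# The intent name "FactIntent" and the spoken text are hypothetical examples.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name


class LaunchHandler(AbstractRequestHandler):
    """Runs when the user opens the skill, e.g. 'Alexa, open my example skill'."""

    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # speak() is what Alexa says; ask() keeps the session open and reprompts.
        return (handler_input.response_builder
                .speak("Hi! Ask me for a fun fact.")
                .ask("Just say: tell me a fact.")
                .response)


class FactHandler(AbstractRequestHandler):
    """Handles a custom intent defined in the skill's interaction model."""

    def can_handle(self, handler_input):
        return is_intent_name("FactIntent")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Voice assistants were inspired in part by science fiction.")
                .set_should_end_session(True)
                .response)


# Wire the handlers together; on AWS Lambda this becomes the function entry point.
sb = SkillBuilder()
sb.add_request_handler(LaunchHandler())
sb.add_request_handler(FactHandler())
lambda_handler = sb.lambda_handler()
```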

[04:30]

So the Bixby thing was interesting, right? They launched… I think it was last year. It was Samsung’s voice device. And Cortana. It is interesting to me that Cortana and Siri haven’t had a more dominant role in voice so far, especially considering the head start that Siri had. Do you think that developers are going to need, or should I say, brands are going to need to deploy across all of the major players? Or is it… Let me roll it back a little bit further. I don’t know how old you are, but I was in Silicon Valley during the whole rise of the dot-com era, and there were probably 12 search engines.

[05:13]

I remember.

[05:14]

…like Infoseek and Go.com. You had this… exactly! So you had to really pay attention, if you wanted visibility on the Web, to where the users were. Do you think ultimately it is going to be one ring to rule them all?

[05:29]

So right now, it’s certainly a duopoly. Most of the market share is being taken by Amazon and Google. And so if you are a brand and you are building an application for voice, most of the time brands are interested in just building on both of the platforms, and it tends to be pretty easy to port applications built on one to the other. Once you’ve built, for example, a Google action or an Alexa skill, translating that over to the other platform, while not trivial, is a whole bunch easier than building a new one from scratch. So because of the market share that both of the major smart speaker players have, most brands, when they build a voice application, are interested in building on both. And it’s becoming easier to port, so it’s becoming less expensive to do both at the same time, so most are interested in doing that.

In terms of the other voice assistants, that is, Cortana, Siri, and Bixby, they’re all making interesting plays, but they are mostly not competing directly with Amazon and Google in the smart speaker market. Cortana is actually positioning itself as much more of an enterprise voice offering. So the idea would be that Cortana becomes your voice assistant in the office, the business side of your voice ecosystem. And then Bixby has an offering that is very tied to your phone and your Samsung products, so it’s really tied to what people are doing on their smartphones. But yeah, I would say we will see how the future shakes out in terms of who is going to be dominant. I don’t think it will be one single player, but I also don’t think it will be five or six.

[07:42]

That sounds like your framework is really centered around use cases and the context of the interaction. So I have my Samsung TV, of course, and similarly, I have got my Alexa sitting there, but it’s actually funny: I set up Bixby on my Samsung TV, but I still use Alexa with it. From an interaction perspective, it is kind of funny.

(Oh, sorry, Alexa, stop!)

[08:10]

Ha! I’ve had that happen many times. So yeah, what you’re getting at is really the fundamental goal. The reason that these huge tech companies are so interested and invested in those platforms is not because they really want to dominate the smart speaker market. It’s not that the clock radio market was so important to them that they’re just going to go in there and crush it. What they really want is this: they view voice assistants as being the operating system of the Internet of Things. So you’re not just going to be talking to your smart speaker, or even just to your phone, but also to your car, to your television, as you brought up, to your refrigerator… I mean, you’re going to be talking to all of these different electronic appliances, it’s going to be ubiquitous, and the primary means, or at least one of the primary means, of interaction will be via voice. So that is essentially the big dream there. Whether it’s going to be something that’s dominated by one particular company, or whether there’s going to be some underlying framework that isn’t owned by anybody but that everybody builds on, so that maybe you’ll be able to access Alexa or Google Assistant or whatever voice assistant you want from any of these touch points, will be interesting to watch as it develops.

[09:49]

I mean, what I’m finding so fascinating is the way that we interact with voice. Alexa, for example, has skills, and I forget what you said Google Home’s equivalent was.

[10:05]

Actions.

[10:06]

Actions, right. So it’s very much a human interaction; that is, it’s that part of the UX experience. So I could see a scenario where I could address Cortana, to your point, in a business context, and then, similarly, I could use Bixby for maybe my refrigerator or my appliances or what have you. And then maybe at a personal level, I just want to go ahead and interact with the lady whose name I won’t say. That is really interesting.

And I love how you started out talking about science fact because, being a geek, a Star Wars nerd, and a Star Trek fan, when I look at the way both of those universes projected the future, it turns out that Star Trek was right, with this voice AI always being part of you.

[11:01]

Yeah, and as a side note there, Jeff Bezos is famously a Trekkie, a big Star Trek fan, and from what I understand, the Star Trek Enterprise computer was actually part of the inspiration for Alexa. So it may not be entirely coincidental that they seem similar.

[11:36]

Oh, that’s so interesting! All right. So you’ve worked with many firms, including today’s top firms, on voice applications. What has been the most exciting aspect of that? And then also, what do you see as one of the larger challenges at this early stage?

[11:51]

Yeah, absolutely. So the most exciting thing about it is that voice has the potential to be, essentially, the lowest-friction form of interaction between a person and a computer, and also the most natural and intuitive one. Speech and conversation are something that we learn and understand almost innately. There are parts of the human brain that are specifically wired to communicate this way, and so a voice interface, if done right, is going to be the most intuitive and easiest type of interface that anybody is going to be able to use. And you have actually seen that with the… I remember six or seven years ago, a friend of mine’s young child walked up to a television and started touching it, and the television did not respond. And the child thought the television was broken because he had become so used to touch interfaces. And beyond touch, we’re seeing kids who talk to their smart speakers. So it’s going to be expected that any sort of technology you interact with, you’re going to be able to talk with. And if you can’t, it’s going to seem broken.

But the big promise there is that, as I said, it’s a super low-friction way of interacting with technology, and it’s also a form of interaction that can take place when you are otherwise occupied. So a couple of examples: the big ones that have been so successful right now are things like requesting music to be played, just saying a few words, “Play Despacito,” or asking for the weather, or any of those quick functions that you want to do every day and that can be made really easy and low friction. And what I think you’re going to see is that audio consumption is increasing rapidly. People are listening to music and podcasts and radio shows more and more on digital devices. And so the ability to interact with what you’re listening to via voice is extremely promising, because usually when you are listening, you are doing something else. So if you are a marketer, and let’s say you have an audio advertisement playing on Spotify or a podcast or something like that, the ability for the listener who wants to know more to just say, “Alexa, tell me more,” and have that instantly send them an email with more about what was being advertised, and then take them right back to what they were listening to, I think has enormous potential and power.

Another scenario, another context, is driving. If you are driving, your hands should be occupied with the wheel, so it’s not hands-free in the usual sense, and that makes it an inherently audio scenario. The ability to, for example, order food on your way home for pickup from a drive-through via voice in your car, I think, has enormous potential to transform a lot of those flows.

What challenges are there today? I would say the biggest challenge is discoverability. It can be hard to know what is currently there and available, and to remember which skills you want to invoke to do what. So that has been an issue. And then there are other scenarios, other interactions, where voice is the best way to input information but not necessarily the best way to get information back. Asking for a list of the 10 most popular movies from last year via voice is probably the easiest way to request that information, but then having something come back and say, “The most popular movie was X. The second most popular movie was Y,” etc., might not be the best way to get that information back. Something like a list might make more sense as a visual response. And I think that the combination of voice and audio with visual, so opening voice up as one medium through which you can communicate, is opening up a lot of new possibilities. I think that multimodal is going to be a major part of voice applications and the ways that we use voice over the next few years.

[17:02]

Yes, for sure. And I think from a researcher’s point of view, thinking about the opportunity for ethnography to be done, even though you do not have the video component tethered to it, an always-on feedback option is really, really powerful. If you are thinking about CPG-type products, whether they are software or services or real, physical things that we interact with, then you can always provide feedback as long as you have that particular device handy.

[17:34]

Exactly, and so…

[17:35]

And you could do that, to your point, while multitasking. So you have the new Alexa Auto, which is really interesting (I think they’re doing a limited release right now). It’s the in-car version of the Echo Dot, or whatever. So the closer you can bring the feedback to the actual experience, the better the data and the less the time degradation of the insight.

[18:07]

Exactly.

[18:08]

Right. And if you think about being in the shower, and Head & Shoulders wants to do a new product test, and I’ve got my device in the bathroom, I can actually provide feedback on that experience while I am in the shower, where before that was just always impossible. You could never garner that kind of information.

[18:26]

Exactly. So something like the ability to quickly provide a net promoter score or a rating, and then some quick feedback or data about a particular experience, can be done with very low friction via voice. You could have something on your Head & Shoulders bottle that said: “To provide feedback or a rating, just say this particular thing to Alexa and then answer two questions,” which is something I think people would be much more likely to do than to go to a website and fill that out. Or you could have something at the bottom of a receipt saying you can use Alexa to provide this feedback, and then maybe we even send you an email coupon or something like that.

And then you mentioned CPG. Another big, exciting possibility here is that most CPG purchases are replenishment and reordering. Traditionally, packaging has been mostly geared toward standing out, convincing the consumer to make a particular purchase while it’s there on the shelf, and competing with other similar products. However, with this huge shift we are seeing toward purchasing online, I think packaging might be somewhat rethought as a way of convincing consumers not necessarily to make a purchase but to reorder. For example, let’s say you’ve got a roll of paper towels, and when you’re done with the roll, the actual cardboard tube itself says: “To reorder this, just say Alexa XYZ”. It could be just a quick two-turn interaction or something like that, and you would have a replenishment of what you just finished on its way. I think that has enormous potential for tons of consumer packaged goods.
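To make that “quick two-turn interaction” concrete, here is a rough sketch of how such a reorder flow could be wired up with the ASK SDK for Python. The intent name “ReorderIntent”, the “product” slot, and the order_replenishment() call are hypothetical stand-ins, not any retailer’s real API.

```python
# A sketch of the packaging-driven reorder flow described above, as a two-turn
# Alexa interaction built with the ASK SDK for Python. "ReorderIntent", the
# "product" slot, and order_replenishment() are hypothetical stand-ins.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


def order_replenishment(product):
    """Placeholder for whatever commerce backend would actually place the order."""
    print(f"Replenishment order placed for {product}")


class ReorderIntentHandler(AbstractRequestHandler):
    """Turn 1: the user says something like 'reorder my paper towels'."""

    def can_handle(self, handler_input):
        return is_intent_name("ReorderIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        product = (slots["product"].value if slots and slots.get("product") else None) or "paper towels"
        # Remember what we're confirming, then ask a single yes/no question.
        handler_input.attributes_manager.session_attributes["pending"] = product
        return (handler_input.response_builder
                .speak(f"You'd like to reorder {product}. Should I place the order?")
                .ask("Should I place the order?")
                .response)


class ConfirmHandler(AbstractRequestHandler):
    """Turn 2: the user says 'yes' and the replenishment order goes out."""

    def can_handle(self, handler_input):
        session = handler_input.attributes_manager.session_attributes
        return is_intent_name("AMAZON.YesIntent")(handler_input) and "pending" in session

    def handle(self, handler_input):
        product = handler_input.attributes_manager.session_attributes.pop("pending")
        order_replenishment(product)
        return (handler_input.response_builder
                .speak(f"Done. Your {product} are on the way.")
                .set_should_end_session(True)
                .response)


sb = SkillBuilder()
sb.add_request_handler(ReorderIntentHandler())
sb.add_request_handler(ConfirmHandler())
lambda_handler = sb.lambda_handler()
```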

[20:30]

Oh my gosh, totally. I have never heard that example before. Thank you for sharing it; that literally blew my mind. That is going to be the headline quote of the episode, by the way, because one of my big challenges with moving to a voice consumer journey is that it’s an invisible journey. So the opportunity for a brand like Scott to intercept me at the point of purchase is quite literally zero. It’s all about my brand affinity, and at the end of the day, paper towels are paper towels for me; maybe not for other people, but I don’t particularly care as long as they do what I want them to do. And so, if you can get that brand into the speaker via the paper towel ring or whatever, now all of a sudden you do have an opportunity to create that connection with the consumer.

[21:20]

Exactly.

[21:24]

And this is what’s interesting: you could actually spawn the transaction because it’s a voice-based trigger.

[21:31]

You could be right there at the moment when they are thinking, “OK, I need to reorder.” You could just be instantly there, and it’s the simplest transaction it could be. It’s basically: this thing that I have, that I’m out of, I want to reorder a replacement of exactly this thing. “To do that, say exactly this.” And it will happen.

[21:55]

I think we should scrap everything we’re working on. And that is the direction…

[22:01]

Ha! This is what I thought at the end of 2016. I thought: “All right. I got to scrap what I’m working on and pursue voice.” Because exactly, it’s things like this that got me and continue to get me super excited about it.

[22:18]

So, for our listeners, www.pulselabs.ai is Pulse Labs’ website, and if you go visit, there are really two paths. One is the customer side, that is, somebody who may want to leverage the platform to gather consumer opinions through voice. And the other is to sign up as a panelist to provide feedback. So I want to talk a little bit about your platform. What types of insights are being captured in your voice surveys?

[22:48]

So right now, we have primarily been focused on usability testing, mostly for designers and developers of skills and applications. If you are building an Alexa skill or a Google action, and you want to get a gauge on how usable it is, whether people are understanding it, whether one particular approach makes more sense than another, you can use our platform to quickly and easily test with real-world users. And we are able to do all of our testing directly on devices. So you can test on any Alexa-enabled device, and we can also run the tests on any Google Assistant-enabled device, and we provide a level of data on those interactions that is simply unavailable anywhere else in the market today. So it is designed so that, if you’re building something on voice, you can get real user feedback, really deep, detailed feedback on exactly how people are using your application. Pulse Labs provides the platform and the panel for gathering that feedback.

[23:57]

So I have not come across a business exactly like yours in our space. Did you do any pivots? Was your start different from where you are right now?

[24:07]

Major pivots? We have not done any. Small pivots? Absolutely. Changes in approach or changes in focus? I would say absolutely. But major pivots in what our product offering is and what our vision is? No. Our vision from the very beginning has been to provide real-world, real-people user research to anybody (brands, developers, designers, agencies), basically anybody who wants a presence on voice and wants to understand how real people are using voice, how real people are interacting with voice, and how they can effectively build their presence there.

[24:59]

In 2023, $80 billion is the projected amount that will be spent through voice devices in a voice consumer journey context. What do you think research will look like at that point in time, as we see such a migration of consumer spend to that environment?

[25:17]

I think that research is going to be based around questions like: How do you make this as easy as possible for users? How do you make it as convenient as possible, so they have easy access whenever and wherever they need it? But also, if you are a brand, how do you remain top of mind here? Essentially, how do you set yourself up so that if a customer just wants to order paper towels or something like that, it’s your paper towels they are ordering? And that is part of the big play for the voice platforms: they want to have some control and say over who gets that top position. With Google AdWords, it’s always a fight to be on the first page. With voice, it is going to be a fight to be the top one, the number one, the one that is recommended and provided. There is going to be a lot of research, a lot of understanding, devoted to how to make yourself number one, and then how much being number one is worth.

[26:31]

Yes, that’s really interesting, especially in the context of how many generic brands are now owned by Amazon and Google. This speaks to the overall importance of ensuring that you are “the Kleenex” of your brand category.

[26:44]

Yes, exactly, exactly.

[26:48]

All right, so the NEXT conference is coming up, and you are going to be talking about voice. I know you don’t want to tip your hand here, but what is one practical take-away that our listeners can glean from your upcoming talk?

[27:01]

So the practical take-away would be: if you are a marketer or a brand and you want to build something on voice, what you want to focus on are one or two very key use cases that voice can do better than what is currently available, that are valuable to you, and then execute on those. Too often, we tend to see brands think, “Okay, we’re going to experiment with this. Let’s put together some application,” and it might be a frequently-asked-questions application, or maybe they’ll just say, “Let’s take the API that feeds our whole product line on our website and just connect it to Alexa.” Usually those approaches don’t work so well.

So the important thing is to think of things like what we just talked about, such as the ability to reorder paper towels at the point when you’re done using your current batch, and make that seamless and easy. Those are the sorts of approaches that are most successful and where we will see the best ROI.

[28:19]

Yes, that’s great. I think of the example of Kmart in Australia. I heard this through the Voicebot podcast, which I’m sure you listen to, and I’m going to try to distill the information a bit. They were talking about how Kmart actually had tremendous success. I guess there’s some legislation there around not being able to purchase a product through voice yet, but the way Kmart became dominant in a voice framework is that they provided proximity to the actual product. So if the consumer wants to buy something, they would ask, “Is it in stock?” or “Where is it near me?” and that is how they would get directed to the specific store. It is an interesting story for me in that it’s about how the brand is empowering the consumer, getting close to them, and adding value. Another one, I think it is Chrysler, has an automatic start feature on one of their automobiles, and it’s actually one of the top 100 Alexa skills. So it could be cold outside, and you can just tell your voice device, “Hey, start my car,” and it will start and warm up the car for you before you get in. The more brands start adopting this technology, the better they’re going to be positioned when this action stuff actually scales.

[29:42]

Yes, exactly. Exactly.

[29:46]

Well, I can’t wait to hear your talk. My guest today has been Dylan Zwick, co-founder and Chief Product Officer of Pulse Labs. Thanks so much for being on the Happy Market Research podcast, Dylan.

[29:57]

Thank you very much for having me. It’s been a pleasure. Thank you. Thanks a lot.

[30:02]

For all of you who are listening, if you’re not signed up for the Insights Association’s NEXT conference, I would highly recommend you do that. Again, that is June 13th and 14th in Chicago. You can also find information on our website, https://happymr.com/next2019. I’ll be including links to Dylan’s information and his company’s information in the show notes. I really hope to see you at the NEXT conference. Have a wonderful rest of your day!