NEXT 2019

NEXT 2019 Pre-Conference Series – Stuart Crane & Paul Cornwell – Voice Metrics

The 2019 NEXT pre-conference series is giving listeners an inside look into companies such as IBM, Voice Metrics, Ipsos, and Pulse Labs. Join insight leaders on June 13 – 14 in Chicago for NEXT, where you can discover how technology and innovation are changing the market research industry. In this episode, Jamin Brazil interviews Stuart Crane, Founder and CEO of Voice Metrics, and Paul Cornwell, CTO of Voice Metrics.

Find Stuart and Paul Online:

Stuart’s LinkedIn

Paul’s LinkedIn



Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research podcast. This is a special episode connected to the upcoming Insights Association’s NEXT conference, which is being held in Chicago on June 13th and 14th. I do a lot of these conferences, both inside and adjacent to the market research industry, and I think this particular NEXT conference is a must-attend if you’re interested in learning about what’s coming up “next.” Maybe that’s how they came up with the name. My guests today are Stuart Crane, the founder and CEO of Voice Metrics, which helps companies leverage voice, as well as Paul Cornwell. Did I say your last name right, Paul?


Yeah, you got it.


Voice Metrics’ CTO. Guys, thanks very much for joining me on the Happy Market Research podcast today!


–Glad to be here.

–Yes, thanks for having us, Jamin!


You guys are speaking at the NEXT conference on how to integrate voice into the total customer experience. I’m really curious, given your backgrounds: when did you first recognize that voice was important?


Voice, I’ve been interested in for quite some time, back in the day when I would listen to cassettes and CDs in the car. I was really interested in voice recognition: recognizing voice with Dragon Dictate and that sort of thing. But when I realized it was really going to be big was when I got an Amazon Echo, I think it was for Christmas 2015. Just being able to talk to this cylinder and have it talk back to you, start songs, and still respond while music is playing. Obviously, Siri was out there, but now it’s basically an ambient voice conversation. It just blew my mind! And then I found out that you can actually write software for it; you can write programs for the Amazon Echo. Back then, it was just called Echo. Now it’s obviously “Alexa,” and it’s a big ecosystem and everything. So I really recognized that being able to talk to devices, with the full capabilities of computers behind them, is going to transform things. Not that it’s going to take away the capabilities of mobile or anything like that, but it will supplement them in such a great way. I started looking at ways we could program voice, got involved very early in Alexa’s software development ecosystem, and took it from there.


All right, great. So Paul?


Yeah. So I came from an AI and machine learning background prior to getting into voice, and that was my segue into voice and where the interest came from. Actually, before I met Stuart, I was pretty hot and heavy for Alexa and the idea of building these interactive experiences. I was looking a lot at Lex and Alexa on the Amazon side, and it seemed like a natural segue coming from that AI background: thinking about how these devices and experiences could be more conversational, and the technology had just caught up to where my head was. With the opportunity with Stuart, who had this vision at the very beginning of what we’ve built, everything just seemed to align.


So I’m going to go ahead and share, and I apologize, I don’t mean to hijack the point, but for me, I recognized voice was really important with my daughter and the iPhone I got her when she was 11 years old. We were on a three-hour drive, so I started making small talk, and I asked her about her best friends. Her top three surprised me: one of them was Siri. I wasn’t sure if she was making a joke, but we dove into it, and she said: “Oh, you know, Siri, she’s always there. She’s talking to me.” In the context of an 11-year-old’s world perception, she really did not understand the concept of an AI or a bot. For her, it’s a voice that has a name and communicates with her. Sometimes it doesn’t make any sense, in fact, maybe a lot of the time, especially in the early days, but now fast-forward to where we are. I also have younger kids, a 2- and a 3-year-old, and one of their favorite things to do is interact with Alexa, playing the hide-and-seek game. I don’t know if you guys have done that or not.





It’s just this construct where you can’t have a tangible game or UX, and we’re thinking about what that looks like in a voice context. For me, as I fast-forward to two or three years from now, I don’t know exactly what voice is going to look like, but it feels like the opportunities for us are significant.


Yeah, absolutely. We were out in San Francisco speaking a couple of weeks ago, and what we noticed just walking around the streets is that half the people, probably more than half, maybe 80%, have their AirPods or headphones on. So once those have the voice assistants built right into them, which is starting to happen, obviously Siri is built into the AirPods, it’s going to be huge. It’s just all over. It’s everywhere.


So you’ve worked with a couple of market research agencies on voice surveys. What do you see as really exciting in that space? And what do you see as a material challenge at this point?


Yes, that’s a good question.

The companies we’re starting to work with are approaching this in a very exploratory way, which I’m sure we’ll see at the NEXT conference. People want to see how they can use the voice ecosystems, whether it’s Alexa, Google Assistant, or Siri at some point, to get data, get information, get feedback, and to create surveys and take them. So the agencies we’re working with right now are taking our survey platform, which is called Survey Line, and building surveys much the way you would build a SurveyMonkey survey in the web app. They’re showing them to their clients, maybe big consumer product manufacturers or just product companies that have panels of testers out there, and basically helping them say: “Some of the things you’re doing to collect data and do market research and consumer research can now potentially be done by voice.” They’re looking at cases where they may have people coming into homes and doing surveys by hand, and they want to lower the cost of doing a survey and also consider the convenience factor for the panelist. One thing we’re finding right now is that the agencies are looking at doing very interactive surveys that have a real voice behind them. So you actually have a voice actor talking the person through a product: “Pick up the product.” “Hold it in your hand.” “How does it feel in your hand?” And it’s delivered through the voice assistant that way. They’re building some of the longer interactions. Some of the challenges we’re working on right now are around cadence, pausing, and staging, because sometimes you might want to pause and say: “Well, do this for a little while, and then come back and tell us what you thought about that.” Those things aren’t as intuitive on a voice assistant, because it wants to work just back and forth, back and forth.
We’ve got some things we’re modifying to make it work in an environment where the market research agencies want a hands-free experience. They don’t want the person to go to a phone or a laptop or any kind of tactile interface at all. They want it hands-free, and that’s what’s perfect for a voice survey. But in some situations they do something with a product and then come back and report on it. So some of the challenges, like I said, are related to cadence, pauses, and delays, and just getting that interaction as natural as possible, knowing that you’re still dealing with what is essentially a computer. As you know, IVR has been around since the 80s. So we’re taking what IVR did and saying: “Hey, this could be done on a voice assistant,” and done even better, because you have full programming capabilities, you have a real voice behind it, and so forth.


Stuart, I want to get to an example, if you guys have one, of a voice-based survey. But before we do, Paul, I have a question in the context of AI. It’s a term we have heard a lot in market research over the last few years, and actually nailing it down, in terms of how it applies and improves an outcome, has been a little bit squishy in our space. Can you talk to us a little bit about the role of AI in a voice context?


Yes, absolutely. I think, out of the box, Alexa and Google Assistant do a lot of things very well. A lot of the reason they’re improving over time is the machine learning and artificial intelligence that Amazon and Google are leveraging themselves. But we have found that there is still a gap. What we have tried to build, and I think what successful developers of voice solutions are doing, is a sort of contextual AI of their own. Using surveys as an example, we’ve created our own secret sauce to make the survey experience much smoother for the user, because out of the box you run into a lot of situations with Alexa skills and Google actions where she doesn’t understand exactly what you’re trying to do. And if what you say, or what she heard, doesn’t exactly match what’s been predefined in those skills and actions, the experience can fall down. So, coming at it from a pure voice-developer standpoint, artificial intelligence, which can be a buzzy word, we hear that term all the time, just means having a layer of algorithms and logic that can make sense of what the user is actually trying to do, figure out the intended action, and give them that result. That’s how we approach it, and I hope that answers it.
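Stripped to its essence, that kind of intent-recovery layer can be sketched in a few lines. This is purely illustrative (the function and synonym table are hypothetical, not Voice Metrics’ code): it maps whatever the assistant heard onto the nearest predefined answer instead of failing on an inexact match.

```python
from difflib import get_close_matches

# Hypothetical synonym table mapping casual speech to canonical answers.
SYNONYMS = {"yeah": "yes", "yep": "yes", "nope": "no", "nah": "no"}

def resolve_answer(heard, expected):
    """Map what the assistant heard to one of the expected answers.

    Returns the matched answer, or None if nothing is close enough.
    """
    text = heard.strip().lower()
    text = SYNONYMS.get(text, text)
    if text in expected:
        return text
    # Fuzzy fallback: tolerate small transcription errors.
    close = get_close_matches(text, expected, n=1, cutoff=0.75)
    return close[0] if close else None

print(resolve_answer("Yep", ["yes", "no"]))  # yes
print(resolve_answer("abut once a week",
                     ["about once a week", "daily", "rarely"]))
```

In a real skill, this sits between the platform’s speech-to-text output and the survey logic, so a near-miss triggers a graceful match or a reprompt rather than an error.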


Got it. Yeah, that makes a lot of sense. Again, going back to the simplistic example of the hide-and-seek game on Alexa: in that framework, it doesn’t feel 100% human, but it also doesn’t feel like, going back to Stuart’s point, IVR in the 80s. It’s definitely a lot closer to the human interaction experience. All right, Stuart. Do we have an example?


Yeah, yeah. What I’m going to show now is an example we put together for the Voice of the Car Summit a few weeks ago, which brought together people involved in bringing voice assistants into the car. Obviously, there are a lot of companies involved in that: the car companies themselves, the voice assistants, and so forth. This example relates to a survey that would be done after you go to a retailer or a restaurant, in this case a fast-food restaurant. Everybody’s familiar with the receipt you get at a movie theater, retailer, or restaurant that says: “Oh, thanks for coming. Please take our survey,” with a long web URL you have to type into your phone, or what have you. Who does that? Who is going to take the survey and type all that in from a printed receipt? The idea is that we can just ask our voice assistant to take it. The receipt would say: “Just start the McDonald’s survey,” or whatever it happens to be. In this case, we did a demo of a McDonald’s fast-food survey. So I’m going to say the wake word, and it’s going to start the survey, just as someone would do in the car as they’re pulling out of the drive-through. Let’s give it a shot!

–Alexa, start the McDonald’s survey.

–Hello, thanks for visiting McDonald’s today. We want to hear about your experience. This survey should take less than a minute, so let’s get started!

How often do you visit this McDonald’s location?

–About once a week.

–For the next few questions, rate your satisfaction on a scale of 1 to 5, where 5 is excellent.

Rate the speed of your food delivery.


–Rate the quality of the food.


–Rate the friendliness of the McDonald’s staff.


–Finally, yes or no. Would you recommend this McDonald’s to your friends and family? –Yes.

–That’s great. We’re happy that you’ll recommend us! Thanks again for stopping at McDonald’s and taking our survey. Next time, try our new Mushroom and Swiss signature burger.


You had to throw in the advertisement at the end, of course. But I really like this. There are a couple of things that popped for me. I’ll start at the beginning. First of all, thinking about programming that survey, is that hard? I know that creating Alexa skills in general, at a basic level, is relatively…


-I will let Paul hit that question.

-Yeah, it’s hard to make it extremely flexible, so we built a platform to do that. I think anyone could probably build an Alexa skill or a Google action around a very specific set of questions and get responses. To take it to the next level, we wanted to build something truly self-service, something we call a platform. I would say the challenges were mostly in supporting the different question types and collecting responses in a way that matches what the survey creator was trying to get. If they’re looking for a rating, we have a lot of validation around that. If, instead of 1 through 5, the person said “6,” we’ve got to make sure we come back and gently tell them: “Okay, that’s not a valid answer,” and then maybe play the question again, things like that. It’s really about making the experience as conversational as it can be. From a programming side, it was about building the platform to support basically any type of question-and-answer back-and-forth that someone wants, and we tried to make it as conversational as possible.
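That reprompt logic can be sketched roughly as follows. This is a minimal illustration assuming a 1-to-5 scale; the names are hypothetical, not the actual Survey Line implementation.

```python
# Spoken numbers often arrive as words rather than digits.
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def validate_rating(heard, low=1, high=5):
    """Return (rating, reprompt). rating is None when the answer is invalid."""
    text = heard.strip().lower()
    value = NUMBER_WORDS.get(text)
    if value is None and text.isdigit():
        value = int(text)
    if value is not None and low <= value <= high:
        return value, None
    # Out of range or unrecognized: reprompt gently and replay the question.
    reprompt = ("Okay, that's not a valid answer. Please pick a "
                f"number from {low} to {high}. Let's try again.")
    return None, reprompt

print(validate_rating("four"))  # (4, None)
print(validate_rating("6"))     # a gentle reprompt instead of an error
```

The same pattern, validate, then either accept or replay, applies to every question type; only the validation rule changes.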


I do think it would be really funny if you did an outtake version where the correction was something like: “Hey, jackass, it only goes to five.”


Yes, that would be good.


That really feeds the point about the impact of user experience in the context of feedback. You really have an opportunity to reinforce brand inside of consumer feedback nowadays. In truth, we always did; I think we’re just starting to pay attention to it more as an industry now. But having that friendly voice is such a better experience, to your earlier example, than having to manually put a URL into a web browser, which is just like filing taxes.


Yes, for sure. We think there are multiple benefits to it. We just think it’s another way; obviously, there are other ways to take surveys. But one of the things we really like is the creativity we’ve seen from some of our customers, who are doing things like having a user do a feedback session while they’re experiencing the product, which is difficult to do any other way. Voice lets you do that. So maybe while you’re trying a shampoo, or whatever, and you’ve got an Echo in the bathroom, you could actually be answering questions: How does it feel? How does it lather? Things like that.

We’re seeing some creative stuff, and we just love that. That’s why we try to build it to be as open as possible.


In-context feedback, I think, is the part that’s going to be interesting for market researchers, and I mean that in a broad way, not a narrow one: anybody who’s interested in consumer feedback is going to find it tremendously valuable, because the in-the-moment comment is the most valuable feedback, versus the degradation of feedback caused by a delay between the experience and the Q&A.




What kinds of content or insights are you actually capturing beyond the obvious answers to the questions?


It’s all centered on the responses. As you’re probably aware, Alexa and Google don’t give anyone raw access to the audio itself. You can’t set up a skill and then get the audio file of exactly what the user said; “I want to hear their voice” doesn’t work like that, for privacy and other good reasons. They do a great job with speech-to-text, so the platform relies heavily on Alexa’s and Google’s natural language processing and speech-to-text capabilities. We support different question types: asking the user for a rating from 1 to 10 or 1 to 5, or whatever they want to set up; yes/no; multiple choice, of course; and free-form, which is wide open, so if you just want to ask the user for some comments, we have that capability. We’ve also just added a new question type. We call it “mobile phone,” but it’s basically the ability to collect contact information from the user. The way we’re implementing it right now, if the user wants to supply that, they get a text on their phone, and that makes the connection with the brand or whoever is conducting the survey. So we’re looking at different ways to provide value there. But as far as the actual insights go, we focus on providing as accurate a set of data as we can per survey, and then our customers glean the insights they want from that data.
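The question types Paul lists could be modeled along these lines. The schema is an assumption for illustration, not the real Survey Line data model.

```python
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    kind: str               # "rating" | "yes_no" | "choice" | "free"
    choices: tuple = ()     # only used when kind == "choice"
    scale: tuple = (1, 5)   # only used when kind == "rating"

# A tiny survey resembling the McDonald's demo above.
survey = [
    Question("How often do you visit this location?", "choice",
             choices=("daily", "about once a week", "rarely")),
    Question("Rate the speed of your food delivery.", "rating"),
    Question("Would you recommend us to friends and family?", "yes_no"),
    Question("Any other comments about your visit?", "free"),
]

for q in survey:
    print(f"[{q.kind}] {q.prompt}")
```

A survey builder UI would emit a list like this, and the skill would walk it question by question, applying the right validation for each `kind`.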


Is the data set like a CSV file?


Exactly. So right now, that’s how it happens.


So it’s really easy to integrate into whatever platform they’re using for their analytics. Is there any additional metadata you’re gathering, like in a traditional web-based platform? You know, you’ve got a host of stuff like timestamps and browser version, maybe even location?


Yes. Well, we can get what device they’re using, whether it’s Google Home or Alexa, and within that, which type of device it is. We can basically only get whatever we’re given by the voice platform, Alexa or the Google Assistant. But there is some metadata; Paul and I work with that, and we provide it to some of the clients.
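The metadata flow Stuart describes might look something like this. The payload shape below is hypothetical; real Alexa and Google request bodies differ, but the principle, that you can only read what the platform hands you, is the same.

```python
import json

# Hypothetical request payload, standing in for what a voice platform sends.
raw = json.dumps({
    "platform": "alexa",
    "context": {"device": {"type": "Echo Show"}},
    "request": {"timestamp": "2019-05-30T14:02:11Z"},
})

def extract_metadata(payload):
    """Pull out the platform-provided metadata fields available to a skill."""
    data = json.loads(payload)
    return {
        "platform": data.get("platform", "unknown"),
        "device_type": data.get("context", {}).get("device", {}).get("type"),
        "timestamp": data.get("request", {}).get("timestamp"),
    }

print(extract_metadata(raw))
```

Whatever fields the platform omits simply come back as `None`; the skill cannot recover metadata the platform chooses not to share.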


Got it. Let me walk through the scenario. I literally just purchased my breakfast this morning from McDonald’s. That is maybe not an endorsement of my health, but I do like McDonald’s a lot. So I finished going through the checkout. How do I get that survey? What’s the trigger event? Is it in the car? Is it later? How does that survey get served up to me, so to speak?


Yeah. That’s basically going to be the challenge going forward, Jamin. It’s what we call the “voice call to action,” and that call to action could take many different forms. Like in the example we gave, it could be on the receipt, and it just says: “Launch the McDonald’s survey,” or whatever engagement or voice action they want to start. It could be printed on a product: “Tell us what you think. Just say XYZ to your voice assistant,” and we would rebrand it to that product or that company or whatever they want it to say. So, going forward, the challenge is: how do you implement that call to action? We’re working now with a company that does direct marketing. They have huge brands like Wells Fargo, companies that do massive amounts of direct marketing, and they’re adding voice response into it. Somebody gets something in the mail, and instead of “Go to our website” or “Call our 1-800 number,” it’s “Just interact with us through voice,” and it would launch a voice interaction, which could be a survey asking a few questions, and based on how they answered those questions, it could do different things and contact them that way.

But I think it’s going to be tricky, because it’s going to take time for companies to decide: “Where do we want to put this call to action, and what should it say?” That’s something we can help with to a certain degree, but we’re not the experts on that so much as they are.


It seems like that’s a big partnership opportunity that you’re talking about.

I’m thinking about the market research space: we have a host of very large sample providers, whether it’s Dynata or others, and I don’t know how big that industry is, somewhere between $2 billion and $4 billion. If they have a voice-enabled device as a variable inside their profiles, then maybe that’s the trigger event that could happen. There are a lot of ifs in this scenario, but assume there was an app that was tracking geo.


So just think of emails and how many people take SurveyMonkey or Qualtrics or Zoho surveys, or whatever. Most of the time, they’re asking people to take their survey by sending out an e-mail, or it’s on social media or somewhere. That could be supplemented, maybe not replaced, but supplemented with “Do you want to do it with your voice assistant? Just say ‘Launch bla-bla’” or “Start the XYZ survey,” or whatever it is. Now, obviously, surveys need to be friendlier in a voice context, so you can’t take every SurveyMonkey or web-based survey and just copy and paste it into a voice survey, because there are nuances, and cadence, like we talked about before, that are necessary. It’s sometimes better on a screen to sort things or see a lot of multiple-choice answers, and that doesn’t lend itself to voice. But the call to action could work in a way similar to how SurveyMonkey and online surveys are done.


So it’s projected that by 2023, about $80 billion in purchases will be made through voice. That is, for me, a massive number, and I’m seeing it in my own user behavior: I buy stuff through my Alexa device, more of the CPG type of stuff. Google and Amazon are both very aggressive in gobbling up the generic brands. I know that has been well documented. Generic paper towels, for example, I believe are now an Amazon brand.


Yes, Amazon Basics.


So, in a voice-based consumer journey, which is invisible, I don’t have any opportunity to intercept the consumer if I’m Scotts, for example. Why isn’t voice a bigger deal right now for the CPG space? Or if it is, are they just operating in secret? As a consumer, I’m just not seeing a lot of investment, and as a practitioner, not a lot of research. There’s not a lot of noise in the space about investment being made in this invisible consumer journey.


It’s a good question. Because we’re in the voice industry, we do see a lot of internal investment from companies that are building things now, but they don’t want to just rush them out to market. I was in the healthcare space for a long time, and they actually want to get voice capabilities for patients, for doctors, and so forth. But the brands, like you said, are taking a slower approach, doing a lot of internal testing and building things. They’re also looking at “How do we get on there?” Because Amazon and Google have basically a native interface, and as soon as you start talking, Google and Amazon know who you are. But until you open a skill or an action or some interface with that brand, the brand still doesn’t know who you are, until you somehow give them permission. So it’s much more difficult for anyone beyond Amazon and Google to get that.

So that’s why we’re building in things like collecting contact information right through it, and doing look-ups with codes, so you could just put a code in. But there is a lot of investment going on by brands and also ad agencies. The agencies are basically asking: “How can we get into voice?” It’s slow for a couple of reasons: mainly because they’re just trying to figure out how it all works, but also because they want to be careful not to roll out something half-baked.


I did some analysis earlier this year on voice ratings. I was using ratings as a surrogate for app utilization in voice. Unfortunately, there isn’t a direct corollary: the number of ratings, for example, doesn’t equal the utilization of the app. But having said that, it’s still really interesting to see which apps are being used in a voice-based context, just on frequency. One of the things I thought was really cool was that, I believe it was GM, that has an auto-start voice skill for the vehicle. Again, I’m intuiting here. I live in California, but if it’s cold outside, I’d like to start my vehicle ahead of time so it can warm up. And that’s the extent of the skill, which is very highly rated. But they were the only automobile manufacturer, even including Tesla, that had any voice-based app in the top 100, I think is what I pulled. So it feels like transparency in terms of what apps are being used and by whom could be a big opportunity, whether for a company like yours or even a company like Nielsen, for communicating to the industry what is trending from a user-experience perspective.


Yes, exactly. I think it all gets back to that “voice call to action”: people need to know what to say to their voice assistants, what to ask of their voice assistants, and they need a prompt.

It’s going to take time, because people know how to say: “What’s the weather like outside?” or “What’s the sports score?” They can turn their lights on and off; I have all that set up in my smart home, and it’s a great way to play songs. That’s the biggest use case of all for the smart speaker. But I think once brands and companies and different entities start doing a voice call to action, where they say: “Well, here’s our website. But if you want to reach us by voice, say this,” you’ll see it take effect. And it’s going to take some time, like you said: 2023, $80 billion. I think by 2023 you’ll just see a lot more calls to action: “Hey, engage us by voice!”


Yeah, that’s right. The whole user journey has to trickle down to just the knowledge of how to interact, because it’s invisible. You don’t have the user prompts that you would otherwise have. I go to the Star Trek example, where you had the computer, there was constant interaction with it, and they would give commands to the computer to transfer controls to wherever. Are you seeing that as one of the maturing use cases, or potential use cases, where there’s a voice-based Instagram feed and the person asks to transfer it to his or her phone, or something along those lines?


Not yet. It’s just too much of a reach for somebody to know to do that. But I think once there is a good use case, it actually gets habitual. It’s all about habit! Turning on lights and doing IoT things: if people do them often, they build the habit, and then you get that. But if somebody doesn’t know to do something, to your point before, there’s really no visual interface in most cases. If you have an Echo Show, you have some visual. Actually, that brings up a good point, Jamin: I do give it commands when I see things on my Echo Show, when it prompts me with an article or something to do. Right now it needs that prompting or call to action. As more companies put them out there, you’ll see more use cases, and then people won’t even need to be prompted; they’ll just use them. But we’re still in pretty early stages on people doing things like transferring content. It will come, but it’s going to take some time.


So what is one practical takeaway that our listeners can glean right now from your upcoming talk at NEXT?


I think the biggest thing is that feedback and surveys, or just getting anything from a consumer, an end user, an audience, is doable by voice with your own branding, and it’s just now becoming possible. What we’re going to show at the NEXT conference is a platform that allows you to create a survey, like SurveyMonkey, but branded for yourself, with your own voice. It’s not the Alexa voice or the Google Home voice; it’s your own voice throughout the whole thing, like I demonstrated, and you get the data, you get to ask what you want to ask, and the user is happy with the experience. That’s what we’re going to show, and it’s evolving; it’s still very early stages. As we improve our platform, we’re leveraging the capabilities that Amazon Alexa keeps improving. And then there’s obviously Cortana and Bixby and some others, and Siri. Once Siri can be programmed, it will have that as well.


If somebody wants to get in contact with you, how would they do that?


Yes, the best way is just go to our site, which is


Got it. My guests today have been Stuart Crane and Paul Cornwell of Voice Metrics. Thank you both for joining me on the Happy Market Research podcast today.


–Thanks a lot. We enjoyed it.

–Great. Thanks for having us, Jamin.


Everyone else, for more information on the Insights Association’s NEXT conference, and to hear speakers like these fantastic gentlemen and others, please join us in Chicago June 13th and 14th. You can also find more information on our website. Have a great rest of your day, and I hope to see you there!

NEXT 2019 Pre-Conference Series – Frank Kelly – Ipsos

The 2019 NEXT pre-conference series is giving listeners an inside look into companies such as IBM, Voice Metrics, Ipsos, and Pulse Labs. Join insight leaders on June 13 – 14 in Chicago for NEXT, where you can discover how technology and innovation are changing the market research industry. In this episode, Jamin Brazil interviews Frank Kelly, Global Head of Operational Product at Ipsos.

Find Frank Online:




Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research podcast. This is a special episode connected to the Insights Association’s NEXT conference, which is being held in Chicago on June 13th and 14th.

My guest today is Frank Kelly, head of Innovation and NPD at Ipsos. Now, Frank, I do have a question. I have always said Ipsos. I think it’s pronounced “Ipso”, is that correct?


If you are French.




For everybody else, “Ipsos” works fine.


Perfect. So Ipsos is a global market research and consulting firm headquartered in Paris. Founded in 1975, Ipsos is publicly traded and ranked among the world’s largest market research agencies, actually number three, I believe, with offices in 88 countries and over 16,000 employees. Prior to joining Ipsos, Frank held senior leadership roles at Nielsen, Greenfield Online, TNS and Lightspeed/GMI. Frank, thanks so much for joining me on the Happy Market Research podcast.


Ah, thank you very much.


You’re speaking at this year’s NEXT event on how to integrate voice into your total customer experience. My second episode of this podcast was centered on voice. I’m so excited to see a conference, the first conference that I know of in our space, that is really centralizing the conversation around this particular medium. How did you first come to realize that voice was important?


If I go back maybe 8 or 10 years, when I started seeing people use voice-to-text, it dawned on me: as soon as people started using features like that, there was bound to be an application in research, because they were showing a preference for a way to communicate, and we have to accommodate those preferences in the way we capture data. But I guess the real big thing was when Siri was introduced by Apple, I think it was 2011. That really seemed to show the promise of what you can do with voice communication. And it showed that, with work, voice could eventually become a major component of how we collect research data.


Yes, for sure. You know, it’s interesting 2011. I think it’s September 2016 that Alexa launched. I think I have the year right. Does that sound right to you?


That sounds right. Yes.


So Apple had a big head start in this space, and yet they’ve certainly taken a back seat from a growth perspective, with Google Home now being the fastest-growing voice-based platform. And obviously, Alexa. I think Alexa is still dominant.


Yes, well, again, a lot of people are using Siri on their other devices. So it’s not just the voice assistant devices that we should think about, because research could probably be captured more commonly through the voice assistants on the phone.


Right. Yes, yes, for sure. I was talking to my kids. I have a bunch of kids, some older ones and some younger ones. The younger ones are using Alexa right now to play hide and seek, and this is all just organic. It’s just really funny. Oh, Alexa, stop! Sorry. She’s now going to be called “A”. No, no, stop!

So the other side of it is, my older kids were talking about how it would be really cool if they could get an audio feed that highlighted stuff coming out of their Instagram accounts, which they could then somehow magically connect right to their phones to get the visual pieces there as well. It is almost like a morning notification or update or what have you. So it’s funny how hide-and-go-seek, which is very much a tangible, visual-based game, can be and has been recreated in a voice-only context. The other piece that I think is interesting… You mentioned Siri in 2011. I got my daughter an iPhone. She was 10 years old at the time, which I know is early, but don’t judge. We were driving, and I asked her: “Let’s talk about your best friends.” That sort of dad conversation, right? She said it was Hannah, another girl named Emma, and Siri. She is actually pretty clever, my daughter. At least, I’d like to think so. But anyway, she told me in the car, and she was all in, that Siri was a real human being who just happens to be constantly available to answer the questions she has. So of the top people in her life, one was an AI.


Wow, that’s an interesting story. It is true, there are people interacting with these devices constantly, many times during the day, and they’ve customized the voice to their particular accent, to their gender, to their favorite sports player, whatever it might be.




And it becomes a pretty indispensable component of their day-to-day life.


Yeah, for sure. And I think that as we pull back the user experience, and I’m not asserting your age on the podcast, but I didn’t grow up with social media by any stretch of the imagination. Libraries, not the Internet, were my source of information growing up, so my context is completely different from my kids’. My teens have grown up with access to social media for pretty much their whole waking lives. And now my youngest ones, my two- and three-year-old, are interacting in yet a whole different way with technology and access to entertainment and information. As leaders, we need to make sure that, at the industry level, we have the humility to understand that there’s a massive evolution happening in these other two generations that we just don’t exactly connect with.


Yes, that concept of digital natives came out a while back, and the youngest generation is even more native, as we call it. But it also suggests that across the other generations, there are some that are much less amenable to these technologies. And so it means that when we create research services, we have to cater to quite a range of capabilities in the technology space.

In many cases we have to use multiple modes of data collection, each of which has its challenges. A mode may be well suited for one generation and not so well suited for another. So you have to try a secondary methodology and make the two methodologies fit together somehow in order to capture representative data.


It’s kind of an analogy: we went from paper to Internet, this whole transition in market research. Obviously, there are still some surveys that are paper-based, but the Internet has predominantly taken that space over. And then, later on, in 2006, came the release of the iPhone. So from 2006 to 2010 you again saw a migration. I think the majority of surveys now, over 50%, are taken on smart devices. Are you seeing voice as part of that narrative?


I could talk about several different waves of innovation and how researchers have followed the adoption curve. To go back to those eras, I worked in postal and CATI and face-to-face. Most of those have transitioned to the online world, and then mobile came in, moving from computers to other devices, tablets for a while as well, though they seem to be phasing out. The majority of collection is now on mobile, indeed. And within these devices, with new technologies making it possible to move to voice, this will be an equal challenge and an equal opportunity, shall we say, to capture data in a new way. And like the other transitions, we made some big mistakes. When we first moved online, everybody’s heard the story: we just took CATI surveys and telephone surveys and adapted them to the online world, but didn’t really make use of the unique characteristics available online to the extent we could have. The same story was really true for mobile; it took a very long time to rethink how our surveys could best fit the mobile environment, and it’s really just getting to that point now. I’d like to think that at some point we learn from all these lessons and say: “All right, there’s going to be a sizable portion of data collection in the future which will be voice-based. Let’s plan for that. Let’s figure out how to leverage that new methodology to its fullest from the very beginning and not drag our feet,” and so forth.

I am definitely seeing enough of these transitions to see that this is going to be a big one. And it’s going to be an important one that’s going to provide a lot of great opportunities. And ultimately, there are going to be companies that do a good job of figuring out how to work in that new space and how to leverage the new capabilities of voice for data collection and there will be others that don’t, and suffer as a result of it.


Yeah, gosh, your point about mobile is really interesting to me. I hadn’t actually considered that, but 2006 is about 13 years ago now, and what you just said is very accurate. Actually, I don’t even know if we, as an industry, have completely adopted it. We just fielded a survey, or I should say we helped fulfill sample for a survey, that was not mobile compatible, hilariously enough, and we saw a massive dropout rate. We wondered: “Why in hell is the incidence so bad?” It turns out, when we checked it on our smartphones, sure enough, we couldn’t take the survey on a smartphone.


There’s still probably about 25% of surveys that can’t be taken on a mobile device. Something like that. It’s still a problem, you know?


Yeah, only if you want representation.


Now, the problem is that the people who only do it on a computer are not necessarily representative, because there are whole groups of people that almost never work on a computer.


Ha! That is exactly right. Such a great point. You know, there’s another interesting point there too, and I know we’re a little bit off topic, but Rogier Verhulst, Head of Insights at LinkedIn, told me the day of email solicitations is dead. He’s making more of a point than stating a truth. His point is that there are entire working parts of the organization that literally don’t check email anymore, right? They are using Skype, but they are also using Slack and other platforms.


Yes, Slack.


Exactly. Right. Exactly. So if you’re exclusively soliciting feedback through email, which worked great before, then you may be missing access to a subset of the population.


Yeah, well, on that point, we see ourselves soliciting responses from respondents in at least a handful of different ways. There’s in-stream, which is in social media, in the LinkedIn stream for instance, where you are recruiting people to a survey in real time. There’s augmented reality. And there’s quite a bit of this being done in social media and in voice, as well as through the typical email and other communication methods.


So what is the biggest challenge for market researchers to get consumer opinions in a voice context?


Well, I would say that we can’t underplay the technology challenge of the AI component. What we really need is a conversation with respondents, a conversation between a computer and a person. And to do that, the computer has to understand what the person is saying and use that understanding to ask further probing questions. It’s one thing to say: “Okay, can we take these basic closed-end questions and get a computer voice assistant to understand them?” And I’d say: “Yes, we are pretty well there on that part.” But that goes back to the earlier point. That’s just repeating the mistake of taking what worked online and trying it on the next mode, when there’s an opportunity to do better than that by really moving to a more conversational mode in the voice context. And to do that, you have to be able to understand what people are saying. I think great progress is being made in this area: you can train a database at a category level; you can take social media data, for example, and use it to train a research database, which on its own is much smaller and not adequate for training on the terminology used and so forth. But we’re not really there yet for large-scale conversations, shall we say. That’s where the ultimate goal is for me. As we progress with AI and with natural language processing, we’ll get better and better over time at making sense of what people are telling us and, based on that, asking the right questions to follow up.


So in 2023 it is projected that about $80 billion in commerce will be done over voice, which is obviously a material channel, whether it’s Walmart or Amazon or Google, etc. And that’s around the corner, so to speak; when you’re 48 like I am, it feels pretty close. Why do you think voice isn’t a bigger deal in the context of insights right now?


Well, voice is getting a lot of mentions at the moment. It’s got a lot of buzz. And, like I say, augmented reality, which I mentioned earlier, is probably an even bigger industry than voice is today, and yet it’s not that important from a research perspective. Part of it is that I think research companies are pretty good at adopting and using new technologies, but we’re not that great at developing them. So we need the other industries that come up with these new technologies to come into our industry and help us figure out how best to deploy them. The big industries for voice are things like automotive, security, financial, and retail. They are the early movers. Research is a bit smaller, and I think the technology companies will focus on research as an opportunity in the near future. But the challenge is that the technology is a bit too complex for most research technology companies to try and tackle and optimize for research purposes.


To your point, do you see that as more of a partnership opportunity for a company like Ipsos? Or is it a hybrid where you’re going to be developing your own suite of solutions?


Well, the industry tools are going to be pretty good. I don’t know that many research companies are realistically going to be able to partner with the likes of Apple and Google, but those companies are making good tools available to be used. We have already been doing things within Amazon and so forth where you can get a skill picked up by those applications so that you can get surveys in there. And at the NEXT conference, my co-presenters and I will be showing several examples of real projects that we’ve run this way. I think there are tools that we can leverage, building blocks that people will build upon, because again, we need wide availability of these tools, whether within Siri, or the Google Assistant, and so forth. We need this technology to be deployed out there, and then we just have to make use of it.


Yeah, right. The analogy for me: look at Facebook Ads, which have A/B testing baked in, or Google Analytics, which gives you a nice view of what’s happening in your website or apps. One of the things that market research is doing a good job of right now at the brand level is tethering that data to stated consumer opinion data, and then creating more context for the business insight. It sounds like what you’re saying is voice is going to follow similar suit: you’ve got these building blocks, whether it’s AWS or what have you, that will empower the bulk of the platforms, and then specialty pieces, whether it’s an injection of the lady whose name starts with “A”, who I don’t want in on the interview, following up with: “What do you think about that last purchase?” That sort of structure.


Yes. I would say that we don’t need to create everything from scratch. We just need these tools. Everybody has a mobile phone. Many people have voice assistants. We just need to leverage the technology that already exists within them. And then we can layer on other technology. In the case of the voice assistant, you’re not getting the actual voice, you are getting the transcription of the voice, but I think voice analytics is on the rise, and that’s going to be a big type of voice data capture that will be very important to the industry. Because I do think that the sentiment, emotion, voice tonality and all those different things that you can layer on there can further help you understand your research participants. So down the road, I could see us doing segmentations based on voice, based on what we teased out of a voice in terms of personality types, and they would be equally as valid as an answer in a questionnaire.


I think you actually already answered this in a number of different ways, but I’m going to really try to kind of hone in on this practical point. What is one practical take-away that participants can get out of your talk at NEXT?


I’m sort of a fieldwork expert, if you will; I have been doing it for a long time. And I know that researchers are going to ask a lot of questions about representativity, how you blend sample, how you adjust the data for different collection modes and those types of things. So I am going to try to at least give some initial answers to some of those researcher-type questions, in addition to the use cases; we hope to have several good use cases, like I said, that will illustrate the best ways to leverage the functionality.

One thing I didn’t really explain is that in a diary situation, there is a lot of repetitiveness in the data collection. You don’t want to be asked the same question again, and again, and again. Say you have a series of five questions you need to answer in a diary several times a day, every time you do something like the laundry. Using AI techniques, you can just give one answer that covers all five questions at once. The voice assistant will listen to it, think about it, recognize that you have answered all the questions, and just say: “Thank you.” So there are some practical uses like that, which do make the research experience better for the participants, and that’s really what we’re focusing on in the near term.
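To make that diary idea concrete, here is a minimal illustrative sketch. This is not any vendor’s actual implementation; the question names and keyword lists are hypothetical, and naive keyword matching stands in for the real natural language processing a production assistant would use:

```python
# Illustrative sketch: map one free-form diary utterance onto several
# structured diary questions at once. Keyword matching is a hypothetical
# stand-in for a real NLP/intent model.

DIARY_QUESTIONS = {
    "activity": ["laundry", "dishes", "cooking"],
    "product_used": ["tide", "dawn", "generic"],
    "satisfaction": ["great", "fine", "terrible"],
}

def parse_diary_utterance(utterance):
    """Return the diary fields that a single utterance answers."""
    words = utterance.lower().split()
    answers = {}
    for question, keywords in DIARY_QUESTIONS.items():
        for keyword in keywords:
            if keyword in words:
                answers[question] = keyword
                break
    return answers

answers = parse_diary_utterance("I did the laundry with Tide and it went great")
# All three matched questions are answered in one pass, so the assistant
# can simply say "Thank you" instead of re-asking each question.
```

The point is only the shape of the interaction: one spoken sentence fills several survey fields, which is what removes the repetitiveness the speaker describes.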


Yeah, I think so. I’m really excited about the point you just made, which is that we care about the respondent experience. Research is being done right now at a scale that is unprecedented. Even if you look back 3 years, it’s crazy, and if you look back 10 or 20 years, it is mind-boggling that we’re doing as many surveys as we are. It feels like everybody is doing surveys. I got my tires changed the other day, and guess what? The local tire shop, which is not a national chain, sent me a follow-up NPS survey, which was hilarious. So they are really starting to care. Seeing market research as an extension of the brand is important, and so we need to be better stewards of the respondent. I go back to the eHarmony example. I don’t know if you are familiar with that product.


Yes, not directly but yes.


Right. Prior to getting married I did use them, by the way, but it was… it was arduous, because it was right when they were relatively new, and it was about a 400-question survey. It was really, really tough, like a multi-day thing. They were very early in the space, early leaders in sentiment and such, and they were eventually able to reduce it to a subset of open ends with just a few closed-end questions, and then, from the open ends, populate the 400 variables they needed in order to optimize matches. The interesting part for me is that they did this in a text-based environment. Now think about a 30-minute survey; it is really hard to complete. I would argue almost nobody could actually do it, maybe 0.25%. But you could have a conversation, have that be meaningful, and then ultimately populate the data set in a way that can be analyzed by researchers.


Yeah, that’s very much my point of view. I think about qualitative and quant. They are coming together, and I think it’s a good thing, because qual people are good at listening, and quant people are really experts at asking questions. But they tend to ask questions with a set of closed-end responses. What we really need is good questions with open-ended responses, where people can say whatever they want, and when people say things we find particularly interesting, maybe we go back and do a return to sample through a voice survey or some other method. It might make it tougher for data analytics because it’s less structured data, but ultimately, again, it’s moving more toward a conversation, person to computer, where each conversation eventually becomes unique, driven by the participant, not by the researcher. That is when we really know we have come full circle from the original days of face-to-face and CATI, where it was a conversation as well. But then we put a lot of technology and methodologies in between that, in some cases, made things easier and more efficient for us, but didn’t necessarily preserve the deep insights that we were trying to get. So ultimately, if we can find a way to do things fast and at scale with deep insights, then we have really succeeded. And I think voice is going to be one of the ways that helps us get there.


Oh my God. Yeah, for sure. I’ve been seeing this as a growing trend. I was just at IIeX in Austin, and a lot of qualitative technology companies have been entering our world over the last couple of years. Really, if you think about it, a survey is just a surrogate for a conversation; it enables a conversation at scale. And the reason we have closed-ended questions is that we’re too lazy to analyze open-ended data. Through AI and natural language processing, and what we’re now able to do in analysis, we can really, for the first time, have that conversation at scale, so that we can get qualitative insights at quantitative numbers.


Yes. And so what you can do is, when you ask that question, you can get a set of responses and then do probing; you can teach the AI what to ask.

If they say this, then ask that. When you train the database, it gets smarter and smarter at asking the next question, to get at the deeper meaning of what respondents have in mind when they answer a question a certain way. And that’s just very hard to do in a quant survey today with typical survey technology.
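The “if they say this, then ask that” logic above can be sketched in a few lines. This is a hedged illustration only: the topics, probes, and keyword matching are all hypothetical, standing in for the trained model the speaker describes:

```python
# Illustrative sketch of rule-based probing: pick a follow-up question
# based on what the respondent just said. A trained system would score
# intents with an NLP model rather than match keywords.

FOLLOW_UPS = {
    "price": "What would you consider a fair price?",
    "quality": "What specifically felt low quality to you?",
    "service": "Can you tell me more about that interaction?",
}

DEFAULT_PROBE = "Why do you feel that way?"

def next_question(response):
    """Choose a probing follow-up based on the respondent's open end."""
    text = response.lower()
    for topic, probe in FOLLOW_UPS.items():
        if topic in text:
            return probe
    return DEFAULT_PROBE

print(next_question("I stopped buying it because of the price"))
```

In a real conversational survey, the lookup table would be replaced by a model trained on prior responses, which is what lets the dialogue “get smarter” over time.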


So looking forward, not too far, but relatively near term, what do you think is next for voice-powered surveys?


Well, let’s see. The real challenge, of course, is that surveys are too long. You really can’t do a 20-minute voice survey; it’s just not going to work. It has to be more like a 3- or 5-minute survey, because people just don’t want to talk to voice assistants in these long dialogues. So I do believe there are more and more opportunities to break up those long surveys, like you were sort of suggesting, into pieces: ask different people different parts of the questions, infer some of the answers, and so forth. We have to be much more creative in how we approach surveys and asking people questions. Voice could be a very good component of that, but it has to be narrowed down to specific situations where you’re hoping to get people’s deep thinking involved, where they open up in that kind of environment to answer a tough question, not just tick a simple box. I think a place will be found for voice surveys very soon as just one of the tools in the box to capture research insights, and then over time we will find more uses. The initial uses will be return to sample, like I mentioned, or diaries with a very repetitive nature, or very short surveys that could go out to a very large portion of the population, from which you call out a smaller group that you want to study in more depth, maybe moving them back to an online survey. So I think there is going to be a lot more switching of modes, surveys broken up into pieces, and a much wider range of interviewing techniques. Back to your IIeX example, the tools around big qual have certainly been coming out at a rapid pace over the last few years. There’s a lot of great stuff there to do more and more survey interviewing, and I think we will be leveraging it in a voice environment fairly soon.


My guest today has been Frank Kelly, Head of Innovation and NPD at Ipsos. Frank, thanks so much for joining me on the Happy Market Research podcast.


Hey, thanks very much.


For more information on the Insights Association’s NEXT conference, again that’s June 13th and 14th of this year, please visit the conference website. I hope to see you there. Have a great rest of your day!

NEXT 2019 Pre-Conference Series – Ellen Kolstø – IBM

The 2019 NEXT pre-conference series is giving listeners an inside look into companies such as IBM, Voice Metrics, Ipsos, and Pulse Labs. Join insight leaders on June 13 – 14 in Chicago for NEXT, where you can discover how technology and innovation are changing the market research industry. In this episode, Jamin Brazil interviews Ellen Kolstø, Design Principal at IBM.

Find Ellen Online:




Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research podcast. My guest today is Ellen Kolstø, Design Principal at IBM Q. International Business Machines Corporation, or IBM, is an American multinational information technology company headquartered in New York, with operations in over 170 countries. In 2016, IBM launched the IBM Q Experience, an online platform that gives the general public access to a set of IBM’s cloud-based quantum computing processors. Ellen has hosted lectures at the University of Texas on design for artificial intelligence and has served in senior roles on both the agency and services side for companies including JWT, Young & Rubicam, Leo Burnett and BrainJuicer. Ellen, thanks for being on the Happy Market Research podcast today!


Happy to be here! Thank you.


Tell me a little bit about your background. This is kind of helpful for us because it level sets, and gives us a little bit of context of who you are.


Yeah, always a great question. So I started life in the agency environment as a strategic planner, coming up through the world of account planning. I like to say it came over on the Mayflower, sometime in the 80s, from the British. I grew up in that culture where it was very much about understanding customers, working with them, and doing the research yourself so that you could translate it into creative strategy for communications. So I started in that world and did that for quite a while. Then I felt that, over time, the balance of the research being conducted shifted over to clients themselves; they were taking on more of it in their own realms, and agencies were doing a little less. So I found it very attractive to move into the realm of market research, where I could spend all my time conducting research, which is my favorite thing. That is when I moved into that world and into BrainJuicer, now known as System1. I liked that environment as well because we did a lot of really innovative types of research using technology, so it combined these two worlds that I’ve been playing in. Most recently, we did a lot of online ethnography and also online communities. You had a lot of tools to use, and consumers would come with you for weeks and months in some cases as they worked through different experiences with you, so that you could maximize products. And it was really fun, whether it was a long-term engagement working with them on their relationship to cookies and unboxing experiences, or how they selected their phone service, and all the fun that went along with that. So I did that for a few years, and then I had this interesting opportunity where someone said: “Hey, IBM is looking for people with deep research experience in what we call ‘user research’ in technology.”
They were looking for that for Watson, specifically in the realm of AI. They were building up that team because Watson was new three years ago; it was just getting started, especially the design team, which is the group that creates the user interface and all of the tooling that our customers use to create AI themselves. I decided to go talk to them, and it was a really great experience. I ended up in a completely different realm: total technology, business to business, an enterprise environment, but a completely new and exciting space. I was very energized by that. And that is how I ended up making my way to IBM through some of these other areas.


Where did you grow up as a kid?


I grew up in Houston, Texas, of all places. I had actually spent my career moving around, working in San Francisco and Chicago and Boston and all these other places. Then I decided to come back to Texas and work in Austin at an agency, and came back to my roots here. I really love Texas because it’s an amalgamation of a lot of things in one giant state. You’ve got big corporations. You’ve got rural areas. You’ve got tech corridors in Austin, agencies in Dallas. There’s just a lot offered here. But yes, I grew up in Texas and decided to come back to the Wild West, if you will.


So I did some digging in preparation for this episode. In 2015, on LinkedIn, you published a long-form blog titled “Customers as Mentors”. You opened with what is probably one of the best quotes I’ve ever heard, and one I’d never heard before, which is pretty unusual: “The purpose of business is to create a customer who creates customers.” And I thought: “That is exactly right!” So I know you recently spoke in Austin at IIeX, and you’re going to be speaking at the NEXT conference coming up in Chicago on June 13th and 14th. What are some of your favorite examples of how AI is helping us better create customer advocates?


Well, that’s an interesting question. Part of my point in that blog was that it’s really great when good companies start to look at their own customers as potential mentors for new customers. You’ve got all these customers you have a relationship with, who’ve been through the journey of adopting your product, especially in categories where there can be a lot of work to adopting it, and technology is very much a space like that. So if you pair them up with brand-new customers and get them started together, wouldn’t that be a great thing to do? I think some companies have looked into that, but it’s still ripe for growth. It’s interesting when you bring AI into that, because AI, as a machine, obviously has a different perspective. It’s a human-generated perspective, because we make these machines, for now. But the role I think AI can play is that it’s almost becoming that mentor itself. You’re seeing that in a lot of the spaces where AI comes in, the chatbot space, the conversational-system space. Let’s say it’s midnight, and for whatever reason you decided to download that new piece of software, and you’re not sure how to do it, and you need help. That’s the time when you may turn to a machine, and AI can help you get through that process, go through that journey of downloading the software correctly. So it ends up creating machine mentors, where what I was talking about were human mentors. But these machine mentors can be just as useful and helpful, because they’re available 24/7, and ideally, if it’s done well, they know the questions you’re going to ask. That doesn’t always happen right now, but it is the vision: to be able to get the help when you need it, how you need it.


I know you’re going to be a little bit biased here, but who do you see in the space leveraging AI for driving customer experience particularly well?


Well, that’s a great question. I am biased, and it’s some of the folks that we’ve worked with, I will say. Since I was using that example of downloading software, I would point to Autodesk, the company that makes AutoCAD and all of that software that helps architects and a lot of people doing rendering. They have a very advanced system that allows you to do a lot of things and get a lot of answers directly through that system, and they have worked long and hard to get a system that’s very thoughtful, very focused on the key questions that customers need, and able to really help them. Now, the focus is different in market research. In many cases, we are not looking at AI right now as a direct interface to us. It’s more that it’s a tool to help us, in analytics or insights engines, to understand large-scale data if you’re a market researcher. At this point, we are not using bots to field for us. Ha! Maybe somebody is. Maybe somebody is trying, but I think we still want to be the ones asking the questions. Obviously, you could argue that surveys are an automated form of that, but it’s a different type of research data collection. At this point, I think AI is in the realm of being a tool in market research, and I would say that is definitely the best place for it to be right now.


I have spent about a third of my career doing qual and the balance quant. Research is really just a conversation at scale. You don’t need to do research when you only have one customer, because you’re talking to that customer, hopefully. But as soon as you are IBM, you have a lot of customers, and you can’t actually understand customer sentiment or put the customer at the center of the conversation unless you conduct research and facilitate that conversation. What’s interesting about AI to me, and you probably saw this at IIeX, is that a lot more companies entering market research are leveraging AI for qual, which is allowing bigger base sizes than historically possible. And when you think about my career, way back in the mid-to-late 90s, we would do things like collages. You have probably seen these kinds of projects.




And then, we would basically try to put the respondent collages together into a master collage, which is really funny if you knew my art. I never got a repeat customer on that one. I don’t think I delighted customers there. My point is that now we can actually conduct these kinds of exercises and have the machine put them together. The AI puts them together in a way that is actually meaningful and connects to the audience. Are you seeing that sort of application in market research looking forward? Is that one of the growth areas?


Well, it is. It’s funny that the presentation I made at IIeX was actually around caution with AI.


Oh, interesting!


Understanding where the models are at this stage of the game is not to say that, as I said, you can’t use them or have them be a part of products and services; they can be very helpful. But I’ve spent the last three years watching our customers build AI into their own systems, and seeing the tremendous amount of work it takes to build a really solid, stable model that is reliable and as balanced as possible. I mean, bias is what it is, so it’s going to exist, but you can get as close as you can. It’s a tremendous amount of effort and work. It’s not something you stand up quickly. It also requires, in some cases, hundreds of thousands to millions of data points for something to be really reliable. Think about it: if you start as a child, you don’t really know the difference between a cupcake and your dog. You’re not really familiar as a little kid, but you start to see that thing over and over and over, all these elements, and that’s how you learn. AI is the same way. So you can’t expect, after an image comes up five times in some cases, that AI can correctly identify every time that it’s a Porsche. There are so many elements to a Porsche to get right, from the shape to the texture to the colors to the different elements on the vehicle to the logo. It’s got to pick apart all those things, put them back together, and identify that as a Porsche. And that’s kind of the value, or the promise, of neural networks, right? But it takes a lot of work for a model to get that right. And so I was illuminating at that conference, under the hood, how the sausage is made, which is partly what I will be doing at NEXT too, just to arm market researchers with an understanding that I think the smart move right now is to use AI but use it with caution, and double check what you’re getting! Don’t expect that it’s a black box that magically spits out the right answer, or that its first pass at the data is going to be better than what you could do.
It may not be, and it takes a while for it to learn from other people, to run enough times, to get things right. And we are at the point where you just have to make sure that your own human intelligence is a part of the mix. It’s not magic. It is very much augmented intelligence, which is what we like to say at IBM. It’s going to add to what you’re doing, but it’s not at this stage going to replace you or what you are able to do.


Yes, I just had a conversation yesterday with Aggie Kush (he had a lot of titles; he was the Head of Insights for BSkyB). He finished his PhD on machine learning. One of the things that he identified going through his thesis, and I think it was actually core to it, is that AI in and of itself can reinforce biases that we have, maybe even a gender bias, because it’s recognizing these patterns and then basically playing off the pattern recognition. So, left unfettered, it may not produce the outcome, whether social or otherwise, that we might want, meaning that we really have to pay attention to the models and the actual implications of what the machines are telling us.


Yes, you play right into an example I gave at that presentation, which was a study that was done in 2015 around Google Search. Google Search is a great example of AI in use with a large trained model. All of us, when we do searches, are training that model, right? And this isn’t a dig on Google because, in fact, the way this worked out made perfect sense with what you’re saying. But a university looked at their search, and they focused on this one instance: searching on CEO. In 2015, 27% of CEOs globally were female. Yet when you searched on CEO in Google, female CEOs only came up 11% of the time, which would tell you: “Oh, hey, my model is biased.” Now, Google rightfully came back and said: “Hey, this is based on what people are putting out, whether it is ads, whether it is articles, whatever images they are using; that’s where this is pulling from.” And the university, I believe it was the University of Washington, came back and said: “Well, that may be true, but we also believe that whatever people are clicking on is training your model.” So if people are clicking on female images only 11% of the time, then the model thinks that that’s the amount of time people want to see female CEO images. And it will continue to under-represent. So it’s exactly the point you made. And it is unintentional bias, because that’s the other thing I’ve heard a lot of discussion around: this idea that machines will be able to be unbiased because they’re machines, and they will avoid the unconscious bias that humans have. Well, no, actually, humans are part of the training process. And so that unconscious bias was absolutely present in that example. Nobody, I believe, was consciously trying to say: “I’m going to search every time until it changes its model.” No, it just happened to be the way it went. And now you have got bias in that model.
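The click-feedback loop described here can be sketched in a few lines of Python. This is a minimal, hypothetical simulation, not Google's actual system: we assume a model serves female-CEO images at some rate, users click them at a lower relative rate due to bias, and the model retrains itself on the observed click share each round. All the numbers are illustrative.

```python
# Hypothetical simulation of a click-trained model reinforcing user bias.
# The model starts serving female-CEO images at the true base rate (27%),
# but because users click those images at a lower relative rate, each
# retraining round pushes representation further down.

def click_share(r, female_click_rate=0.4, male_click_rate=1.0):
    """Share of all clicks landing on female-CEO images when they are
    served at rate r but clicked at a lower relative rate (assumed 0.4)."""
    f = r * female_click_rate
    m = (1 - r) * male_click_rate
    return f / (f + m)

def run_feedback_loop(r0=0.27, rounds=5):
    """Retrain the serving rate on each round's observed click share."""
    rates = [r0]
    for _ in range(rounds):
        rates.append(click_share(rates[-1]))
    return rates

rates = run_feedback_loop()
# The serving rate falls on every round: the model "learns" the bias
# and under-representation compounds instead of correcting itself.
```

Running this, representation drifts steadily below the true 27% base rate within a handful of retraining rounds, which is the unintentional-bias dynamic the study pointed to.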
And that is the other reason I say to always double check what kind of models companies are working with: how much work are they doing to troubleshoot these kinds of issues? Are they really looking back at their models and saying: “Oh, we know the types of people that are using our software, whatever we are offering that has AI in it, and we’re going to go back and double check and see how that’s augmenting our model”? Because AI models are never done. You don’t create one and walk away. You are constantly working on it and seeing how it changes because it’s a constantly changing, amorphous thing. So that is what I get on my soapbox about: how do you use it? I still believe it has tremendous promise, and it will always have tremendous promise. But you want to make sure to use your own intelligence in all of this as well. And don’t underestimate your own intuition at certain points.


Do you think there’s some overlap? Because we moved away from the institutional tracker. I mean, not wholesale, but it’s become a smaller and smaller piece of the corporate budget. You know what I’m talking about, right? The quarter-million-dollar or million-dollar…


Okay. Yes, I worked for a lot of them.


Yes. So those are going away, but at the same time, as to what you’re talking about, I have never heard it cast exactly like that. But these machine learning AI systems are in a lot of ways uncovering the direction of the consumer, which is really one of the big intents of measurement from the trackers. Do you think there is an analogy there?


Potentially. Depending on how people are interacting with AI in the tracker and who is answering the questions, I think there will always be an opportunity to double check what you are getting back as a result. Different from a survey without AI in it, where there is an answer, you click on it and it’s done, AI is always training, and because it’s always training, yes, things can change. And so you are going to want to know how that might change. So, sure, it’s certainly something to keep an eye on.


I think it’s a bad idea now that I hear you answer that question. Okay, so how can modern insight pros use AI?


With caution. Ha! I say that because, again, I believe there’s a lot of value. Like I said, where I get most excited in market research is with predictive analytics. I think there’s just a tremendous amount of opportunity. We always struggled with media mix modeling. We were always trying to model things to understand what people were going to do. And we never had a really great way to get even an idea of where people were headed. And predictive analytics, especially where AI can aggregate a ton of data, look across many things, and start to make connections, will be invaluable. And I think we will get a much more accurate understanding of what could happen if we were to run certain media mixes: what do we think the outcomes could be? I think that’s where it’s got a tremendous amount of promise, and I’d be very excited to see how that moves forward.


Yes, I did a fair amount of modeling in my early career. The way that I was taught to do it, which is to say there are lots of ways to do it, is you ask a question in your survey, something like “probability of purchasing a TV,” and then that level-sets against actual TV purchases over that period of time, so it gives you a baseline. And then you ask a similar question about a new product that your customers are interested in measuring, and then perform a regression. And then all of a sudden you’ve got that, or a Van Westendorp or some other kind of methodology that is leveraged in order to come up with the prediction… well, Van Westendorp is a little bit different. My broader point is: do you see marketing research as a discipline starting to use and leverage AI in order to do these market predictive models versus the traditional, old-school stats point of view?
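The calibration approach described here can be sketched roughly in Python. This is a hypothetical illustration of the idea, not any specific firm's model: stated purchase intent for an existing product is regressed against actual purchase rates to establish a baseline, and that fit is then applied to the stated intent for a new product. All data values below are made up for the example.

```python
# Rough sketch of intent-to-purchase calibration (hypothetical numbers):
# regress actual purchase rates on stated intent for a known product,
# then apply the fitted line to a new product's stated intent.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Calibration data: mean stated intent (1-5 scale) per survey wave
# vs. the actual purchase rate observed over the same period.
stated_intent = [2.1, 2.8, 3.4, 4.0, 4.6]
actual_rate = [0.05, 0.09, 0.14, 0.19, 0.25]

a, b = fit_line(stated_intent, actual_rate)

# Translate the new product's stated intent into a predicted rate.
new_product_intent = 3.7
predicted_rate = a + b * new_product_intent
```

The fitted slope captures how much stated intent typically overstates or understates real behavior, which is what "level-setting against actual purchases" accomplishes in the method described above.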


I would say it is probably more valuable in that space, for sure. We worked on so many regression models, and I still couldn’t tell you if I really knew whether any of that was going to play out. It was hard. There is a famous quote… Oh, gosh, I’m not going to get this right. Something about “I know half my advertising works. I just don’t know which half.”


And then we categorize that half we don’t know under branding.


Right! Exactly! And it’s never going to be a completely exact science. I think predicting behaviors is very hard. But statistics alone still were not quite enough of an indicator of what was really happening out there. AI has the ability to look at a lot of things, and because it can also look at unstructured data, you have this unique opportunity where it can look across more than just the statistics. Now, it can look across conversations and different things that can be fed into the whole pie to try to get a better understanding of what could potentially happen. That’s where AI’s promise has always been, and it now has so much more data to draw from to try and find answers to very complicated questions.


AI is part of the tool kit, right? And let’s say that you’re entering into an insights role inside of an organization, in marketing research or some other way. Well, actually, let’s focus on market research: what skills do you think that person should be cultivating in order to successfully drive insights inside the firm, basically informing executive-level business decisions?


Yes. There’s a lot of different things. The first one that came to mind, because it is the one that I constantly run up against, is flexibility. You have to be willing to roll with what comes along, not only with all of the changing technology and the different things that come up, but it can also be very difficult sometimes to leave your opinions at the door and say: “OK, well, let me look at this a little bit differently.” Insights, when you get to the executive level, need to be pretty battle-tested, right? You want to make sure that you feel pretty good about them, which means you have to at some point vet them in various different ways to know that you have something that you collectively feel is going to stand the test of time, especially in the enterprise, where big, big, big decisions get made, right? And so you have to be flexible with the tools you use and the kind of data you’re looking at. You have to be willing to look across a whole bunch of different types of data, trying different methods. I don’t think you can do “plug and play” anymore. I mean, this is back to your point about all of those longitudinal studies, and all these tracking studies, where there was one way to do it. You did that every time and you reported that number at the end of the year. And now, there’s so much innovation and change. I think staying on top of it is challenging. But I think being willing to be flexible and reinvent at various times is going to be a really important skill set.

I am also going to go back to, and this feeds into flexibility a bit, creativity, which is also super important. And it’s a funny thing because I think what really helps that is being able to draw from things that aren’t all related to what you’re doing, or even, in some cases, to your domain, right? It’s looking at what completely different companies or competitors are doing, or even people completely outside of your industry, and trying to see how you can maybe utilize some of those elements in what you are doing to come up with new ways to think about things. Every industry is getting so incredibly competitive, certainly saturated with a lot of known insights. Getting something new and different requires a whole other level of flexibility and creativity and inventiveness that you are just constantly having to hone, and it’s not easy to do because you’ll get myopic in your workflow and then go: “When was the last time I even read anything on a new technique in this area?” But it’s something to keep in mind.


This is such an interesting point to me. When I started my career, it used to be the case that it was adequate to conduct a consumer survey and then analyze, PowerPoint, and then storytell, right? But it was all in the context of that study. Now it feels like that’s wildly inadequate, right? You need to really hone in on providing the context, whether market, business, or social, of that particular insight because the context informs so much of the implication of the data. And so one of the things that I’m seeing more and more in research reports is that maybe 25% is spent on the setup, the context, and the implications at the business level. So it’s almost like we’re moving a little bit broader, and then also going deeper with the insights.


Wow! It’s so funny that you mention that because context is a big, big thing with me. I completely agree. It is telling stories, and it’s telling stories with the details where you can really start to see what’s happening. And I think on the technology side, especially with usability, there has been a tendency toward scores and very almost quant-like representations of the learning. And I have pushed to put a lot more context even around that kind of thing. Just because somebody is navigating through a website does not mean there aren’t a lot of interesting things, especially if you are sitting there watching them, that can tell you about their thought process or why, on that day, they ended up in certain parts of the experience. And that is where it gets interesting. It’s also true that your insights are better remembered with context. Without context, they are “somebody wants that.” But when you can go back and replay a story to somebody else about the context of why they want it, it gets institutionalized, it gets internalized, it gets retold, and it’s that whole fireside-chat kind of phenomenon. I’m a big believer in context. I would almost say that the context is 90% of it. And I completely agree with your point.


What I described is actually incorporating a lot more data into the narrative that you build out. And the master storytellers are doing that. Now the content on the slides, and the actual story that they tell, is re-tellable. So it’s actually a hell of a lot less content that winds up getting displayed, and the story is profoundly simplified to its core essence. So it’s really interesting; it’s a much harder job today than it was before, I think. It is one of the reasons we have to leverage any tools that we can in order to help us.


Yes, and that’s where, again, unstructured data comes in, right? It is all of that kind of conversation. It’s interesting how AI will be able to help us with that. I think insights engines will get a lot better, and they will start to be able to serve up that context in ways that we can’t, because we can’t possibly get through all that data. That will be super exciting when it happens, because all of that context is what we want to hang on to.


Yes, insights and context. That would be an interesting business to start, I think.


No kidding! That would be great, right?


I think I have done about 1,000 interviews, and I’ve told this story before on the show, so I apologize to the listeners for the redundancy, but it’s worth mentioning. I did a quant study, relatively short, and then at the end, I asked: “Please do me a favor, and take a 15-second video, or some period of time, of your environment.” And one lady, I’ll never forget it, took a video of a number of kids running around like chickens with their heads cut off, as my mother would say. And I was thinking to myself: “All of a sudden this draws everything into question about the insights that she was providing in that survey for me.” You know what I mean? That context around her providing that insight was really important, and in that case it was potentially moving a multimillion-dollar ad buy. So it seems like maybe they’d want to know that? I don’t know. Anyway.


Absolutely! I did mobile ethnography at BrainJuicer, like I mentioned, where we had customers videoing various things: unboxing experiences, as I mentioned, and all sorts of things. And you saw the context of their world there, right? There was one really funny one I was doing on a cookie that was being introduced. The husband was more excited about the cookie than the wife. And the wife was the one in the study, and he kept creeping into the video and taking it, and she eventually had to hide the box from him. But it was an interesting dynamic to see. And the cookie was targeted to women because it had a certain dietary benefit. But it was like: “Who cares? See, this guy loves it.” So, yes, there are so many stories that can be told by being in that environment; that’s obviously the power of ethnography and the power of storytelling.


Yeah, which links to where you started, that is, the power of AI because it’s so hard to do that at scale.


Yes. It is hard to do that, yes. Yeah, that is the promise of it, and it will get there for sure, and it will change everything. I still firmly believe that even as it starts to be able to go through a lot more of that data and comb through it and give insights, humans are still going to be very, very much in the mix with it in terms of building off of it. You know how you have probably collaborated with another researcher before, and you kind of riff off each other to come up with the ultimate viewpoint on something, or the ultimate insight? I believe that is how the relationship with AI will move forward.


Oh, I completely agree. This whole fear around AI removing jobs: maybe in 50 years, but not in the next 20 years, at least not from my vantage point. It’s all about partnership. I liked your augmented intelligence point of view.


Yeah, I agree. I just don’t see that happening.


So on a future look, how are we going to be different as an industry in five years?


Oh man. Well, let me get my AI together, and I will tell you. Ha! Where’s my predictive analytics? I will give you one viewpoint I’ve been thinking a lot about, and this is because I am in technology now and more so in this space. I think UX research and market research are going to morph, because I am already seeing, in the realm of usability and user experience, a lot of researchers in that space saying: “God, we need to understand more about the market. We need to do more up-front qual.” And then, when I was at IIeX, they had several sessions on usability, which was pretty funny, because some of us from the team went to that conference and said: “Wow, they introduced usability like it was a new technique.” I think it’s pushing into the realm of market research to say: “Hey, nothing is stopping you from wanting to dig deeper into the online experiences of your customers even though you might be at the brand level, right?” So I think we’re going to see all of this come together as one big realm of customer research, and I think it should, because customers will engage with you all over the place. And why wouldn’t you have one researcher, or a team of researchers, looking across all of it, from the market to the online experiences to everything else, in a meaningful way that doesn’t separate out user experience from market research?


We have addressed this next question, but I’m gonna ask it anyway, just to see: If you were going to create a company today servicing the industry, the insights industry specifically, what problem would you address?


Yeah, I like your context one a lot. But here is what I’ve been thinking about for a while, and I don’t know if it’s controversial or not: it’s this whole idea of “is bias really a bad thing?” The reason I say that is that in research we are constantly saying you can’t be biased, we have got to be unbiased, and we all know that’s impossible. You want an unbiased sample, and this, that, and the other. The panel you have drawn from probably already has bias from a million different angles, right? We know that, as humans, bias is inherent. Certainly there is bias you absolutely want to be careful of: anything that harms anyone. But in some cases, bias is to be learned from. And if it exists, how might we learn from it and gain insights from the bias itself, rather than treat it as something we should either ignore or pretend we have removed from the equation? So a business built to understand how we can work with bias, rather than avoid or fight against it, I think could be really interesting to figure out. Even with that Google example, there’s more going on there with how people are clicking on those CEO images. What is it? Is it purely gender bias? Are there other things at play? What can be learned by unpacking some of those elements that will help us better understand the role of bias? I would also argue that, in some cases, bias is not any different from having a hypothesis. Having a hypothesis means I have a point of view on something without all the data. And I am biased in a certain direction because I think this might be what is going to happen. And then I will go into a study with that hypothesis, and I will obviously look to see if that plays out. But we all know you are looking more for that particular thing than for other things because that is where your mindset is. It’s not a bad thing. It is something we all do.
But how might we think about reframing the use of bias in a way that we can learn from it, improve the outcomes, and treat it as something that is part of the mix, not something that we should just avoid?


Yes, from a startup perspective it would be really fun, and it’s useful to think about… You are familiar with Myers-Briggs, of course, or whatever personality profile thing?


Yes, yes.


So, like, for Jamin Brazil, what biases do I have in my life that I probably honestly just don’t know about, that are just a function of culture and context?


Absolutely. Yeah.


That would be a really interesting… I don’t know how we would do that, but that does seem like something AI could address.


That would be a great Myers-Briggs. You are right. Because then that’s something you would know going into any future work: okay, this is the mindset I’m coming in with, and now what do I do to either mitigate it or, in some ways, celebrate it? Because it’s a funny thing too: I was reading a Harvard Business Review article recently that talked about how employees get reviews, and so many times reviews are a negative experience because they focus a lot on your weaknesses. “You should be doing this.” “You should be doing more of that,” instead of: “Okay, let’s celebrate what you are good at and find other things for you to do that celebrate this thing you are good at.” So it’s kind of that same idea. How could you take what might seem like a negative and say: “Well, there may be ways in which this could be extremely helpful with certain studies,” or “Having this viewpoint could really make me the best researcher for this type of research,” as opposed to: “Oh, you are biased in a certain direction, and now you’re not good for certain things”?


Yes, totally. It is such an interesting point of view. I can pick on my grandfather here, my late grandfather, so I will tread lightly. My point is that he grew up in the World War II generation. And there was just a completely incorrect set of biases that were ingrained there, not in a positive way. I am not saying he was part of some terrible group or anything like that, but it was just different, really different. He didn’t fit into a millennial culture, how is that? And yet, with no malicious intent or anything along those lines, it was just the framework that he understood and, incorrectly, agreed with. So there was an opportunity for him to get informed on that, to hear: “Hey, these are the areas where you have inherent biases,” because you can often see biases in other people, but people can’t really see them in themselves. And that’s the point: it’s hard to see the blind spots in ourselves. Something like that could be really interesting.




Sorry about this. I totally got carried away with the conversation.


No, what is interesting about your grandfather, too, is that, who knows? His perspective might be getting smaller and smaller as millennials grow. So maybe that is a perspective that’s also interesting to understand, or potentially relevant to a certain study, where there is another angle to things, you know what I mean?


Totally. At a micro level and at a macro level, you start seeing how that plays out. That’s so interesting. All right, my last question: What is your personal motto?


Ha! I guess the one that comes closest to encapsulating me is: “Always be prepared.”

I learned that a long time ago from my father, who approached everything with a lot of preparation and thoughtfulness. He had a plan for everything, and it has really served me well. Just having some level of preparation is, I think, sometimes 90% of the job, 90% of the battle, whether you’re reading secondary research ahead of a study, or you are just getting smart about an industry, or you are having a conversation with some stakeholders. Before you get started with something, you have got a good jumping-off point, which means you are not just going in shooting from the hip in many cases. I’m someone who likes to have a level of preparation. And it’s funny because in some ways AI is very much about that: building models involves a tremendous amount of preparation going into any kind of work that you are then going to do with it. But yes, that’s my thing. I like to be prepared.


I love that. I have got to end with two stories on that point. A good friend of mine, Jennifer Crawford, took a bet on me when we were at Decipher in the early days. She is the owner of a New York-based research company called Research Solutions. And I remember I co-pitched with her to Meredith about an online diary, something you’re really familiar with, and in that pitch she came in with a folder that was about a quarter-inch thick with preparation. There was a bunch of stuff in it about the meeting. And we left after 45 minutes. I don’t know if we actually opened it; maybe we got through two or three pages in the folder. And this is the only time I have ever heard a customer say: “I want to thank you so much for being so well prepared for this meeting.” And we won the business. It was a windfall for both of our firms. It was spectacular. Anyway, sorry about my reminiscing. But preparation, as it turns out, I think is really important. Oh, and the second one I want to mention is Voss Media. Voss Media, which is a big company, is inundated with papers about states of industries, etc. And they actually subscribed to an AI-based system which does the processing so that they can reduce all these vats of information into a query string and pull out the pieces that are relevant; they say they have 99% coverage on their content. So anyway, yeah, I like the preparation point. Thanks so much for sharing that.

My guest today has been Ellen Kolstø. Sorry about that hiccup. Ellen Kolstø, Design Principal at IBM Q. Thank you, Ellen, for joining me on the Happy Market Research podcast today.


Thank you. It was lovely being here.


Everyone else, this is in conjunction with the upcoming NEXT conference. You have a couple of weeks still to register. You can find information online, of course; just Google NEXT. It is located in Chicago, on June 12th and 13th, I believe. It is going to be a wonderful event. I hope to see you there. As always, I love your screenshots and feedback. Share this; it’s appreciated. Have a great rest of your day.

Ep. 218 – NEXT 2019 Pre-Conference Series – Dylan Zwick – Pulse Labs

The 2019 NEXT pre-conference series is giving listeners an inside look into companies such as IBM, Voice Metrics, Ipsos, and Pulse Labs. Join insight leaders on June 13 – 14 in Chicago for NEXT, where you can discover how technology and innovation are changing the market research industry. In this episode, Jamin Brazil interviews Dylan Zwick, Chief Product Officer at Pulse Labs.

Find Dylan Online:




Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research podcast. This is a special episode that’s connected to the Insights Association’s NEXT conference in Chicago, this June 13th and 14th. My guest today is Dylan Zwick. Dylan, did I say your last name right?


That was correct. Yeah, Dylan Zwick. I’m always dead last in the alphabetical order.


That’s funny. I was always first in photos because I am 5’8’’, so we have that opposite position. Dylan is the co-founder and Chief Product Officer of Pulse Labs. Pulse Labs is a solution that enables users to launch and gather consumer opinions via voice devices such as Alexa and Google Home. Dylan, thanks for being on the Happy Market Research podcast today!


A pleasure to be here. Thank you so much for having me.


You are speaking at this year’s NEXT event on voice. When did you first realize that voice was important?


So I first realized that voice was going to be big back in 2016 when I bought my first Echo. I started playing around with Alexa and realized that what had been the dream of science fiction for decades, you know, the ability to speak and actually have a conversation with a computer, was actually becoming science fact; it was becoming reality. And so, I played around with building my own Alexa applications and started exploring the tools that were out there for developers and designers of Alexa applications, and of voice applications more generally, and realized that this was going to be a huge space and also that I really wanted to be a part of it. So that’s what got me initially involved.


Yeah, I mean, Alexa in and of itself is really interesting. One of the things that I think is… If you pull back and look at YouTube right now, I forget what the data is, but something like 60% of the Internet is there. It’s a massive amount. And if you look at the bet that Google placed when they did that acquisition: they consolidated the different product lines into a single thing, and then they centralized the KPI to one point of focus, which was the number of daily videos uploaded. And that created so much focus from an R&D perspective that that was all anybody cared about. It wasn’t predicated on revenue or eyeballs or anything like that. That was it. And then subsequently, of course, that was the tail that wagged the dog. Amazon is actually doing the exact same thing with respect to Alexa. I mean, my kids… My 12-year-old can create an Alexa skill. It is crazy how easy and accessible they have made the development side of this.


Yeah, that’s been a huge focus for Amazon and the Alexa team: to open up as many tools as possible for developing applications, or as they call them, skills, on Alexa, and to encourage as many independent developers as they can to build skills there. So you have a ton of skills that have just been built by independent developers, and also a bunch of skills that have been built by brands or professional agencies. There are even companies out there focused exclusively on building Alexa skills. And as you mentioned, they’re also very interested in providing tools that make this as easy as possible. There’s even a blueprints tool that essentially lets you quickly create a standard but personalized skill without needing any programming background at all. I’ve focused on Alexa in what I just said, but Google is pursuing a similar strategy, in that you can also build applications on Google Assistant. They’re called actions, and Google is really trying to build out, expand, and encourage that ecosystem as well. And, to be honest, Bixby and Cortana are also very interested in this. So all the major voice players are really trying to provide a platform for as many content creators to participate on as they can.
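For readers curious what a “skill” looks like under the hood: at its simplest, it’s a handler that receives a JSON request from the assistant and returns a JSON response saying what to speak. A minimal illustrative sketch in Python follows; the intent name and replies are invented for illustration, not from the episode.

```python
import json

def handle_skill_request(event):
    """Minimal Alexa-style skill handler: inspect the request type
    and return a response envelope telling the assistant what to say."""
    request = event.get("request", {})
    req_type = request.get("type")

    if req_type == "LaunchRequest":
        speech = "Welcome! Ask me for a fun fact."
    elif req_type == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        if intent == "FunFactIntent":  # illustrative intent name
            speech = "Honey never spoils."
        else:
            speech = "Sorry, I don't know that one."
    else:
        speech = "Goodbye!"

    # Alexa-style response envelope: speech plus a session flag
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Simulate the kind of JSON the assistant would POST to the skill endpoint
sample = {"request": {"type": "IntentRequest",
                      "intent": {"name": "FunFactIntent"}}}
print(json.dumps(handle_skill_request(sample), indent=2))
```

In practice this handler would run behind an HTTPS endpoint or a cloud function, but the request-in, speech-out shape is the core of the model.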


So the Bixby thing was interesting, right? They launched it, I think, last year; it’s Samsung’s voice assistant. And then there’s Cortana. It is interesting to me that Cortana and Siri haven’t had a more dominant role in voice so far, especially considering the head start that Siri had. Do you think that developers, or should I say brands, are going to need to deploy across all of the major players? Or… let me roll it back a little further. I don’t know how old you are, but I was in Silicon Valley during the whole rise of the dot-com era, and there were probably 12 search engines.


I remember.


…like Infoseek. You had this… exactly! So you had to really pay attention if you wanted to get visibility on the Web, in terms of where the users were. Do you think ultimately it is going to be one ring to rule them all?


So right now, it’s certainly a duopoly: most of the market share is being taken by Amazon and Google. So if you are a brand building an application for voice, most of the time you’re going to want to build on both platforms, and it tends to be pretty easy to port applications from one to the other. Once you’ve built, for example, a Google action or an Alexa skill, translating it over to the other platform, while not trivial, is a whole lot easier than building a new one from scratch. So because of the market share that both of the major smart speaker players have, and because porting is becoming easier and therefore less expensive, most brands that build a voice application are interested in building on both, kind of at the same time.

In terms of the other voice assistants, that is, Cortana, Siri, and Bixby: they’re all making interesting plays, but they are mostly not competing directly with Amazon and Google in the smart speaker market. Cortana is actually positioning itself as much more of an enterprise voice offering; the idea is that Cortana would be your voice assistant in the office, the business side of voice systems. And Bixby has an offering that is very tied to your phone and your Samsung products, so it’s really tied to what people are doing on their smartphones. But yeah, we will see how the future shakes out in terms of who is going to be dominant. I don’t think it will be one single player, but I also don’t think it will be five or six.


It sounds like your framework is really centered around use cases and the context of the interaction. So I have my Samsung TV, of course, and similarly I have got my Alexa sitting there. It’s actually funny: I set up Bixby on my Samsung TV, but I still use Alexa with it. From an interaction perspective, it is kind of funny.

(Oh, sorry, Alexa, stop!)


Ha! I’ve had that happen many times. So yeah, what you’re getting at is really the fundamental goal. The reason these huge tech companies are so interested in and invested in these platforms is not because they want to dominate the smart speaker market; it’s not as if the clock radio market was so important to them that they decided to go in there and crush it. What they really want is this: they view voice assistants as the operating system of the Internet of Things. So you’re not just going to be talking to your smart speaker, or even just to your phone, but also to your car, to your television, as you brought up, to your refrigerator… You’re going to be talking to all of these different electronic appliances; it’s going to be ubiquitous, and the primary means, or at least one of the primary means, of interaction will be voice. That is essentially the big dream. Whether it’s going to be dominated by one particular company, or whether there’s going to be some underlying framework that isn’t owned by anybody but that everybody builds on, so that maybe you’ll be able to access Alexa or Google Assistant or whatever voice assistant you want from any of these touch points, will be interesting to watch as it develops.


I mean, what I’m finding so fascinating is the way that we interact with voice. Alexa, for example, has skills, and I forget what you said Google Home’s term was.


Actions, right. So it’s very much a human interaction; it’s part of the UX experience. So I could see a scenario where, to your point, you could successfully address Cortana in a business context, and then similarly I could use Bixby for maybe my refrigerator or my appliances or what have you. And then maybe at a personal level, I just want to go ahead and interact with the lady whose name I won’t say. That is really interesting.

And I love how you started out talking about science fact, because, being a geek and a Star Wars nerd and a Star Trek fan, of the ways those two universes projected the future, it turns out that Star Trek got it right, with this voice AI always being part of your life.


Yeah, and sort of as a side note there, Jeff Bezos is famously a Trekkie, a big Star Trek fan, and from what I understand, the Star Trek Enterprise computer was actually part of the inspiration for Alexa. So it may not be entirely coincidental that they seem similar.


Oh, that’s so interesting! All right. So you’ve worked with many firms, including today’s top firms, on voice applications. What has been the most exciting aspect of that? And then also, what do you see as one of the larger challenges at this early stage?


Yeah, absolutely. So the most exciting thing is that voice has the potential to be essentially the lowest-friction form of interaction between a person and a computer, and also the most natural and intuitive one. Spoken conversation is something that we learn and understand almost innately; there are parts of the human brain that are specifically wired to communicate this way. So a voice interface, if done right, is going to be the most intuitive and easiest type of interface that anybody can use. And you have actually seen that: I remember six or seven years ago, a friend of mine’s young child walked up to a television and started touching it, and the television did not respond. The child thought the television was broken, because he had become so used to touch interfaces. And now, beyond touch, we’re seeing kids talking to their smart speakers. So it’s going to be expected that any technology you interact with, you can talk with; and if you can’t, it’s going to seem broken. But the big promise, as I said, is that it’s a super-low-friction way of interacting with technology, and it’s a form of interaction that can take place when you are otherwise occupied. A couple of examples: the big ones that have been so successful right now are things like requesting music, just saying a few words, “play Despacito”, or asking for the weather; any of those quick, everyday functions that can be made really easy and low friction. But here’s what I think you’re going to see more of: audio consumption is increasing rapidly. People are listening to music and podcasts and radio shows more and more on digital devices.
And so the ability to interact with what you’re listening to via voice is extremely promising, because usually when you are listening, you are doing something else. So if you are a marketer, and let’s say you have an audio advertisement playing on Spotify or a podcast or something like that, the ability to just say, if the listener wants to know more, “Alexa, tell me more”, and have that instantly send you an email telling you more about what was being advertised, and then take you right back to what you were listening to; I think that has enormous potential and power.

Another scenario, another context, is driving. If you are driving, your hands and eyes should be occupied with driving, so it is an inherently audio scenario. Being able to, for example, order food via voice in your car on your way home, for pickup from a drive-through, has enormous potential to transform a lot of those flows.

What challenges are there today? I would say the biggest is discoverability. It can be hard to know what is currently out there and available, and to remember which skills you want to invoke to do what. So that has been an issue. And then there are other interactions where voice is the best way to input information, but not necessarily the best way to get information back. If I ask for a list of the 10 most popular movies from last year, voice is probably the easiest way to request that information, but having something come back and say, “The most popular movie was X. The second most popular movie was Y”, and so on, might not be the best way to receive it. Something like a list might make more sense as a visual response. And I think the combination of voice and audio with visuals, opening voice up as one medium through which you can communicate, is opening up a lot of new possibilities. I think multimodal is going to be a major part of voice applications and the ways we use voice over the next few years.
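The list example can be made concrete: on these platforms, a single response can carry both the spoken summary and a companion visual (a “card” that renders in the companion app or on a screen). A rough sketch, assuming an Alexa-style response envelope; the movie titles are placeholders, not real data.

```python
def build_multimodal_response(items):
    """Speak a short summary, but push the full list to a companion
    screen/card, following the multimodal pattern described above."""
    spoken = (f"The top result is {items[0]}. "
              "I've sent the full list to your screen.")
    # Render the full list as numbered lines for the visual card
    card_text = "\n".join(f"{i + 1}. {name}" for i, name in enumerate(items))
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": spoken},
            # A simple card shows up in the companion app / on-screen
            "card": {"type": "Simple",
                     "title": "Top movies",
                     "content": card_text},
            "shouldEndSession": True,
        },
    }

movies = ["Movie A", "Movie B", "Movie C"]  # placeholder data
resp = build_multimodal_response(movies)
print(resp["response"]["outputSpeech"]["text"])
```

The design choice is the point: the voice channel carries only what is comfortable to hear, while the visual channel carries the dense list.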


Yes, for sure. And from a researcher’s point of view, think about the opportunity for ethnography: even though you do not have a video component tethered to it, an always-on feedback option is really, really powerful. If you are thinking about CPG-type products, or anything we interact with, whether software or services or physical things, then the consumer can always provide feedback as long as they have that particular device handy.


Exactly, and so…


And, to your point, you could do that while multitasking. So you have the new Alexa Auto, which is really interesting; I think they’re doing a limited release right now. It’s the in-car version of the Echo Dot, or whatever. So the closer you can capture feedback to the actual experience, the better the data and the less the time degradation of the insight.


Right. If you think about being in the shower, and Head & Shoulders wants to do a new product test, and I’ve got my device in the bathroom, I can actually provide feedback on that experience while I am in the shower, where before that was just always impossible. You could never garner that kind of information.


Exactly. So something like the ability to quickly provide a net promoter score or rating, and then some quick feedback about a particular experience, can be done very low friction via voice. You could have something on your Head & Shoulders bottle that said: “To provide feedback or a rating, just say this particular thing to Alexa and then answer two questions”, which is something people would be much more likely to do than if you say: “Go to a website and fill that out.” You could even print it right on the bottle: say you can use Alexa to provide this feedback, and then maybe we’ll even send you an email coupon or something like that.

And then you mentioned CPG. Another big, exciting possibility here is that with CPG, most of what people purchase is replenishment and reordering. Traditionally, packaging has mostly been geared toward standing out and convincing the consumer to make a particular purchase while it sits on the shelf, competing with other similar products. However, with this huge shift we are seeing toward purchasing online, I think packaging might be somewhat rethought as a way of convincing consumers not necessarily to make a purchase but to reorder. Imagine, for example, that you’ve got a roll of paper towels, and when you’re done, the actual cardboard roll itself says: “To reorder this, just say: Alexa, XYZ”. It could be just a quick two-turn interaction, and you would have a replenishment of what you just finished on its way. I think that has enormous potential for tons of consumer packaged goods.
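The “quick two-turn interaction” can be modeled as a tiny dialog: the first turn asks for confirmation and keeps the session open, the second places the order. A simplified sketch; the intent names are illustrative, and the order placement is stubbed out.

```python
def handle_reorder_turn(intent_name, product=None):
    """Tiny two-turn reorder dialog. In a real skill the product would be
    carried in session attributes between turns; here it's a parameter."""
    if intent_name == "ReorderIntent":
        # Turn 1: ask for confirmation and keep the session open
        return {"speech": f"You want to reorder {product}. Is that right?",
                "end_session": False}
    if intent_name == "YesIntent":
        # Turn 2: place the order (stubbed here) and close the session
        return {"speech": f"Done. More {product} is on its way.",
                "end_session": True}
    # Anything else cancels the flow
    return {"speech": "Okay, cancelled.", "end_session": True}

print(handle_reorder_turn("ReorderIntent", "paper towels")["speech"])
print(handle_reorder_turn("YesIntent", "paper towels")["speech"])
```

Keeping the session open between the two turns is what makes the exchange feel like one short conversation rather than two separate commands.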


Oh my gosh, totally. I have never heard that example before. Thank you for sharing it. That literally blew my mind. This is going to be the headline quote of the episode, by the way, because one of my big challenges with moving to a voice consumer journey is that it’s an invisible journey. So the opportunity for a brand like Scott to intercept the point of purchase is quite literally zero. It’s all about brand affinity, and at the end of the day, paper towels are paper towels for me; maybe not for other people, but I don’t particularly care as long as they do what I want them to do. And so, if you can get that brand into the speaker, the paper towel ring thing or whatever, now all of a sudden you do have an opportunity to create that connection with the consumer.


And this is what’s interesting: you could actually spawn the transaction right there, because it’s a voice-based trigger.


You could be there right at the moment when they’re thinking: “OK, I need to reorder.” You could be instantly there, and it’s the simplest transaction there could be. It’s basically: this thing that I have, that I’m out of, I want to reorder a replacement of exactly this thing. “To do that, say exactly this.” And it will happen.


I think we should scrap everything we’re working on. And that is the direction…


Ha! This is what I thought at the end of 2016. I thought: “All right. I got to scrap what I’m working on and pursue voice.” Because exactly, it’s things like this that got me and continue to get me super excited about it.


So, for our listeners: if you visit Pulse Labs’ website, there are really two paths. One is the customer side, that is, somebody who may want to leverage the platform to gather consumer opinions through voice. The other is to sign up as a panelist to provide feedback. So I want to talk a little bit about your platform. What types of insights are being captured in your voice surveys?


So right now, we have been focused primarily on usability testing, mostly for designers and developers of skills and applications. So if you are building an Alexa skill or a Google action, and you want to gauge how usable it is, whether people are understanding it, whether one particular approach makes more sense than another, you can use our platform to quickly and easily test with real-world users. And we are able to do all of our testing directly on devices. So you can test on any Alexa-enabled device or any Google Assistant-enabled device, and we provide a level of data on those interactions that is simply unavailable anywhere else in the market today. So essentially, if you’re building something on voice and want real user feedback, really deep, detailed feedback on exactly how people are using your application, Pulse Labs provides a platform and a panel for gathering that feedback.


So I have not come across a business exactly like yours in our space. Did you do any pivots? Was your start different from where you are right now?


Have we done major pivots? No. Small pivots? Absolutely. Changes in approach or changes in focus? I would say absolutely. But major pivots in what our product offering is and what our vision is? No. Our vision from the very beginning has been to provide real-world, real-people user research to anybody (brands, developers, designers, agencies), basically anybody who wants a presence on voice and wants to understand how real people are using and interacting with voice, and how they can effectively build their presence there.


By 2023, $80 billion is the projected amount that will be spent through voice devices in a voice consumer journey context. What do you think research will look like at that point, as we see such a migration of consumer spend to that environment?


I think research is going to be based around questions like “How do you make this as easy as possible for users?” and “How do you make it as convenient as possible, so they have easy access whenever and wherever they need it?” But also, if you are a brand, “How do you remain top of mind here?” So essentially, how do you position yourself so that if a customer just wants to order paper towels or something like that, it’s your paper towels that they are ordering. And that is part of the big play for the voice platforms: they want to have some control and say over who gets that top position. With Google AdWords, it’s always a fight to be on the first page. With voice, it is going to be a fight to be the top one, the number one, the one that is recommended and provided. There is going to be a lot of research, a lot of understanding, devoted to how to make yourself number one, and then to how much number one is worth.


Yes, that’s really interesting, especially in the context of how many generic brands are now owned by Amazon and Google. This speaks to the overall importance of ensuring that you are “the Kleenex” of your brand category.


Yes, exactly, exactly.


All right, so the NEXT conference is coming up, and you are going to be talking about voice. I know you don’t want to tip your hand here, but what is one practical take-away that our listeners can glean from your upcoming talk?


So the practical take-away would be: if you are a marketer or a brand and you want to build something on voice, what you want to focus on are one or two very key use cases that voice can do better than what is currently available, that are valuable to you, and then execute on those. Too often, we see brands think: “Okay, we’re going to experiment with this. Let’s put together some application”, and it ends up being either a frequently-asked-questions application, or maybe they’ll just say: “Let’s take the API that feeds our whole product line on our website and just connect it to Alexa.” Usually those approaches don’t work so well.

So the important thing is to think of things like what we just talked about, such as the ability to reorder paper towels at the point when you’ve finished your current batch, and make that seamless and easy. Those are the sorts of approaches that are most successful and that will see the best ROI.


Yes, that’s great. I think of the Kmart example in Australia; I heard this through the Voicebot podcast, which I’m sure you listen to, and I’ll try to distill the information a bit. They were talking about how Kmart had tremendous success. I guess there’s some legislation there around not being able to purchase a product through voice yet, but the way Kmart became dominant in a voice framework is that they provided proximity to the actual product. So if a consumer wants to buy something, they can ask: “Is it in stock?” or “Where is it near me?”, and that is how they get directed to the specific store. So it is an interesting story for me in that it’s about the brand empowering the consumer, getting close to them, adding value. Another one, I think it is Chrysler, has a remote start feature on one of their automobiles; it’s actually one of the top 100 Alexa skills. So it could be cold outside, and you can just tell your voice device: “Hey, start my car”, and it will start and warm up the car for you before you get in. The more brands start adopting this technology now, the better positioned they’re going to be when this action stuff actually scales.


Yes, exactly. Exactly.


Well, I can’t wait to hear your talk. My guest today has been Dylan Zwick, co-founder and Chief Product Officer of Pulse Labs. Thanks so much for being on the Happy Market Research podcast, Dylan.


Thank you very much for having me. It’s been a pleasure. Thanks a lot.


For all of you who are listening, if you’re not signed up for the Insights Association’s NEXT conference, I would highly recommend you do that. Again, that is June 13th and 14th in Chicago. You can also find information on our website. I’ll be including links to Dylan’s information and his company’s information in the show notes. I really hope to see you at the NEXT conference. Have a wonderful rest of your day!