The 2019 NEXT pre-conference series is giving listeners an inside look into companies such as IBM, Voice Metrics, Ipsos, and Pulse Labs. Join insight leaders on June 13–14 in Chicago for NEXT, where you can discover how technology and innovation are changing the market research industry. In this episode, Jamin Brazil interviews Ellen Kolstø, Design Principal at IBM.

Find Ellen Online:

LinkedIn

Website: www.ibm.com/us-en


[00:01]

Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research podcast. My guest today is Ellen Kolstø, Design Principal at IBM Q. International Business Machines Corporation, or IBM, is an American multinational information technology company headquartered in New York, with operations in over 170 countries. In 2016, IBM launched the IBM Q Experience, an online platform that gives the general public access to a set of IBM’s cloud-based quantum computing services. Ellen has hosted lectures at the University of Texas on design for artificial intelligence and has served in senior roles on both the agency and services side for companies including JWT, Young & Rubicam, Leo Burnett, and BrainJuicer. Ellen, thanks for being on the Happy Market Research podcast today!

[00:49]

Happy to be here! Thank you.

[00:51]

Tell me a little bit about your background. This is kind of helpful for us because it level-sets and gives us a little bit of context about who you are.

[01:02]

Yeah, always a great question. So I started life in the agency environment as a strategic planner, and I came up through that world of account planning. I like to say it came over on the Mayflower, sometime in the 80s, from the British, and I grew up in that culture where it was very much about understanding customers, working with them, and doing the research yourself so that you could translate it into creative strategy for communications. So I started in that world and did that for quite a while. Then I felt that, over time, the balance of the research being conducted shifted over to clients themselves; they were taking on more of it in their own realms, and agencies were doing a little less of it. So I found it very attractive to move into the realm of market research, where I could spend all my time conducting research, which is my favorite thing. That is when I moved into that world and into BrainJuicer, now known as System1. I liked that environment as well because we did a lot of really innovative types of research using technology, so it combined these two worlds I’d been playing in, especially most recently. We did a lot of online ethnography and also online communities, so you had a lot of tools to use and could have consumers come with you for weeks, and months in some cases, as they worked through different experiences with you, so that you could optimize products. And it was really fun, whether it was a long-term engagement working with them on their relationship to cookies and unboxing experiences, or how they selected their phone service, and all the fun that went along with that. I did that for a few years, and then I had this interesting opportunity where someone said: “Hey, IBM is looking for people with deeper research experience in what we call ‘user research in technology.’” They were looking for that for Watson, specifically in the realm of AI. They were building up that team because Watson was new three years ago; it was just getting started, especially the design team, which is the group that creates the user interface and all of the tooling that our customers use to create AI themselves. I decided to go talk to them, and it was a really great experience. I ended up there in a completely different realm: total technology, business-to-business, enterprise environment, but in a completely new and exciting space. And I was very energized by that. That is how I ended up making my way to IBM through some of these other areas.

[03:45]

Where did you grow up as a kid?

[03:47]

I grew up in Houston, Texas, of all places. I actually spent my career moving around and worked in San Francisco and Chicago and Boston, and all these other places. Then I decided to come back to Texas and work in Austin at an agency, and came back to my roots here. And I really love Texas because it’s an amalgamation of a lot of things in this one giant state. You’ve got big corporations. You’ve got rural areas. You’ve got tech corridors in Austin, agencies in Dallas. There’s just a lot offered here. But yes, I grew up in Texas and decided to come back to the Wild West, if you will.

[04:30]

So I did some digging in preparation for this episode. In 2015, on LinkedIn, you published a long-form blog titled “Customers as Mentors.” And you opened with what is probably one of the best quotes I’ve ever heard, and one I’d never heard before, which is pretty unusual. It was: “The purpose of business is to create a customer who creates customers.” And I thought: “That is exactly right!” So I know you recently spoke in Austin at IIeX, and you’re going to be speaking at the NEXT conference coming up in Chicago on June 13th and 14th. What are some of your favorite examples of how AI is helping us better create customer advocates?

[05:14]

Well, that’s an interesting question. Part of my point in that blog was that it’s really great when good companies start to look at their own customers as potential mentors for new customers. You’ve got all these customers you have a relationship with who’ve been through the journey of adopting your product, especially in categories where there can be a lot of work to adopting it, and technology is very much a space like that. So if you pair them up with brand-new customers and get them started together, wouldn’t that be a great thing to do? I think some companies have looked into that, but I think it’s still ripe for growth. It’s interesting when you bring AI into that, because AI, as a machine, obviously has a different perspective. It’s a human-generated perspective, because we make these machines right now. But the role that I think AI can play is that it’s almost becoming that mentor itself. You’re seeing that in a lot of the spaces where AI comes in, the chatbot space, the conversational system space, where, let’s say, it’s midnight, and for whatever reason you decided to download that new piece of software, and you’re not sure how to do it, and you need help. That’s the time when you may turn to a machine, and AI can help you get through that process, go through that journey of downloading that software correctly. So it ends up creating machine mentors, where what I was talking about were human mentors. But you end up having these machine mentors, and they can be just as useful and helpful because they’re available 24/7. Ideally, if it’s done well, they know the questions you’re going to ask. That doesn’t always happen right now, but it is the vision. The vision is to be able to get the help when you need it, how you need it.

[07:04]

I know you’re going to be a little bit biased here, but who do you see in the space leveraging AI for driving customer experience particularly well?

[07:14]

Well, that’s a great question. I am biased, and it’s some of the folks that we’ve worked with. Since I was using that example of downloading software, I would say Autodesk, which is the company that makes AutoCAD and all of that software that helps architects and a lot of people doing rendering. They have a very advanced system that allows you to do a lot of things and get a lot of answers directly through that system. And they have worked long and hard to get a system that’s very thoughtful, that’s very focused on the key questions that customers need, and that is able to really help them. Now, it’s a different focus in market research. In many cases, we are not looking at AI right now as being a direct interface to us. It’s more that it’s a tool to help us, in predictive analytics or insights engines, to understand a lot of large-scale data if you’re a market researcher. At this point, we are not using bots to field for us. Ha! Maybe somebody is. Maybe somebody is trying, but I think we still want to be the ones asking the questions. Obviously, you could argue that surveys are an automated form of that, but it’s a different type of research data collection. At this point, I think AI is in the realm of being a tool in market research, and I would say that is definitely the best place for it to be right now.

[08:39]

I have spent maybe a third of my career doing qual and the balance quant. Research is really just a conversation at scale. You don’t need to do research when you only have one customer because you’re talking to that customer, hopefully. But as soon as you are IBM, you’ve got a lot of customers, and you can’t actually understand customer sentiment or put the customer at the center of the conversation unless you actually conduct research and facilitate that conversation. What’s interesting about AI to me, and you probably saw this at IIeX, is that a lot more companies entering market research are leveraging AI for qual, which is allowing bigger base sizes than were historically possible. And when you think about my career, way back in the mid-to-late 90s, we would do things like collages. You have probably done these kinds of projects.

[09:41]

Yes!

[09:42]

And then we would basically try to put the respondent collages together into a master collage, which is really funny if you knew my art. I never got a repeat customer on that one. I don’t think I delighted customers there. My point is that now we’re able to actually conduct these kinds of exercises and have the machine put them together. The AI puts them together in a way that is actually meaningful and connects to the audience. Are you seeing that sort of application in market research looking forward? Is that one of the growth areas?

[10:15]

Well, it is. It’s funny that the presentation I made at IIeX was actually around caution with AI.

[10:25]

Oh, interesting!

[10:26]

It was about understanding where the models are at this stage of the game. That is not to say, as I said, that you can’t use them or have them be a part of products and services; they can be very helpful. But I’ve spent the last three years watching our customers build AI into their own systems and seeing the tremendous amount of work it takes to build a really solid, stable model that is reliable and as balanced as possible. I mean, bias is what it is, so it’s going to exist, but you can get as close as you can. It’s a tremendous amount of effort and work. It’s not something you stand up quickly. It also requires, in some cases, hundreds of thousands to millions of data points for something to be really reliable. Think about it: as a child, you don’t really know the difference between a cupcake and your dog. You’re not really familiar as a little kid, but you start to see that thing over and over and over, all these elements, and that’s how you learn. AI is the same way. So you can’t expect that after an image comes up five times, in some cases, AI can correctly identify every time that it’s a Porsche. There are so many elements to a Porsche to get right, from the shape to the texture to the colors to the different elements on the vehicle to the logo. It’s got to pick apart all those things, put them back together, and identify that as a Porsche. That’s kind of the value, or the promise, of neural networks, right? But it takes a lot of work for a model to get that right. So at that conference I was illuminating what’s under the hood, how the sausage is made, which is what I will be doing partly at NEXT too, just to arm market researchers with an understanding. I think the smart move right now is to use AI, but use it with caution, and double-check what you’re getting! Don’t expect that it’s a black box that magically spits out the right answer, or that its first pass at data is going to be better than what you could do. It may not be, and it takes a while for it to learn from other people, to run enough times, to get things right. We are at the point where you just have to make sure that your own human intelligence is part of the mix. It’s not magic. It is very much augmented intelligence, which is what we like to say at IBM. It’s going to add to what you’re doing, but it’s not, at this stage, going to replace you or what you are able to do.
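To make Ellen’s data-volume point concrete, here is a minimal sketch in Python showing how a classifier’s reliability grows with the number of training examples. It uses scikit-learn’s small digits dataset purely as a stand-in; the dataset, model, and sizes are illustrative assumptions, not anything from IBM’s tooling.

```python
# Sketch: a model trained on a handful of examples is unreliable;
# it only becomes dependable as examples accumulate.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (10, 100, 1000):  # growing training-set sizes (illustrative)
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:>5} training examples -> "
          f"{model.score(X_test, y_test):.0%} test accuracy")
```

Running it shows accuracy climbing sharply with more data, which is the same dynamic behind her point that a real image model may need hundreds of thousands of examples, not five.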

[12:57]

Yes, I just had a conversation yesterday with Aggie Kush. He had a lot of titles, but he was the Head of Insights for BSkyB. He finished his PhD on machine learning. One of the things that he identified going through his thesis, and I think it was actually core to it, is that AI in and of itself can reinforce biases that we have, maybe even a gender bias, because it’s recognizing these patterns and then basically playing off that pattern recognition. So, gone unfettered, it actually might not have the outcome, whether social or otherwise, that we might want, meaning that we really have to pay attention to the models and the actual implications of what the machines are telling us.

[13:55]

Yes, you play right into an example I gave in that presentation, which was a study done in 2015 around Google Search. Google Search is a great example of AI in use, with a large trained model. All of us, when we do a search, are training that model, right? And this isn’t a dig on Google because, in fact, the way this worked out made perfect sense with what you’re saying. But within their search, a university looked at what happened whenever someone searched on “CEO”; they focused on this one instance. In 2015, 27% of CEOs globally were female. But when you searched on “CEO” in Google, female CEOs only came up 11% of the time, which would tell you: “Oh, hey, my model is biased.” Now, Google rightfully came back and said: “Hey, this is based on what people are putting out, whether it is ads, whether it is articles, whatever images they are using; that’s where this is pulling from.” And the university, I believe it was the University of Washington, came back and said: “Well, that may be true, but we also believe that whatever people are clicking on is training your model.” So if people are clicking on female images only 11% of the time, then the model thinks that’s the amount of time people want to see female CEO images. And it will continue to under-represent them. So it’s exactly the point you made. And it is unintentional bias, because that’s the other thing I’ve heard a lot of discussion around: this idea that machines will be able to be unbiased because they’re machines, and they will avoid the unconscious bias that humans have. Well, no, actually, humans are part of the training process. And so that unconscious bias was absolutely present in that example. Nobody, I believe, was consciously trying to say: “I’m going to search every time until it changes its model.” No, it just happened to be the way it went. And now you’ve got bias in that model. That is the other reason I say to always double-check what kind of models companies are working with, because how much work are they doing to troubleshoot these kinds of issues? Are they really looking back at their models and saying: “We know the types of people that are using our software, or whatever we are offering that has AI in it, and we’re going to go back and double-check and see how that’s augmenting our model”? Because AI models are never done. You don’t create one and walk away. You are constantly working on it and seeing how it changes, because it’s a constantly changing, amorphous thing. So that is what I get on my soapbox about: how do you use it? I still believe it has tremendous promise, and it will always have tremendous promise. But you want to make sure to use your own intelligence in all of this as well. And don’t underestimate your own intuition at certain points.
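As a rough illustration of the click-feedback loop Ellen describes, here is a toy simulation in Python. The click-bias and learning-rate parameters are invented assumptions for the sketch; this is not a model of how Google’s ranking actually works.

```python
# Toy simulation: the model's exposure rate drifts toward whatever users
# click on, so an initial skew is reinforced rather than corrected.
true_share = 0.27     # share of CEOs who were women (the 2015 figure cited)
shown_share = 0.27    # start by showing female-CEO images at the true rate
click_bias = 0.5      # assumed: users click female-CEO images at half the base rate
learning_rate = 0.3   # assumed: how strongly clicks update what gets shown

for step in range(10):
    clicks_female = shown_share * click_bias
    clicks_male = (1 - shown_share) * 1.0
    observed = clicks_female / (clicks_female + clicks_male)
    # the model nudges its exposure rate toward the observed click share
    shown_share += learning_rate * (observed - shown_share)
    print(f"step {step}: showing female CEOs {shown_share:.0%} of the time")
```

Even starting at the true 27%, the shown share ratchets downward step after step: the biased clicks retrain the model, and the model then shows fewer female CEOs, exactly the under-representation loop described above.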

[16:46]

Do you think there’s some overlap? Because we moved away from the institutional tracker. I mean, not wholesale, but it’s become a smaller and smaller piece of the corporate budget. You know what I’m talking about, right? The quarter-million-dollar or million-dollar…

[17:01]

Okay. Yes, I worked for a lot of them.

[17:04]

Yes. So those are going away. But at the same time, as to what you’re talking about, and I have never heard it cast exactly like that, these machine learning and AI systems are in a lot of ways uncovering the direction of the consumer, which is really one of the big intents of measurement from the trackers. Do you think there is an analogy there?

[17:30]

Potentially. Depending on how people are interacting with AI in the tracker and who is answering the questions, I think there will always be an opportunity to double-check what you are getting back as a result. Different from a survey without AI in it, where there is an answer, you click on it, and it’s done, AI is always training, and because it’s always training, yes, things can change. So you are just going to want to know how that might change. So, sure, it’s certainly something to keep an eye on.

[18:08]

Now that I hear you answer that question, I think it’s a bad idea. Okay, so how can modern insight pros use AI?

[18:13]

With caution. Ha! I say that because, again, I believe there’s a lot of value. Like I said, where I get most excited in market research is with predictive analytics. I think there’s just a tremendous amount of opportunity. We always struggled with media mix modeling. We were always trying to model things to understand what people were going to be doing, and we never had a really great way to at least get an idea of where people were headed. Predictive analytics, especially where AI can aggregate a ton of data, look across many things, and start to make connections, will be invaluable. I think we will get a much more accurate understanding of what could happen if we were to run certain media mixes, what we think the outcomes could be. That’s where it’s got a tremendous amount of promise, and I’d be very excited to see how that moves forward.
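For readers who want to see the shape of what Ellen is pointing at, here is a minimal media mix regression sketch in Python: fit past sales against channel spend, then score hypothetical mixes. The data, channel effects, and spend levels are all synthetic assumptions for illustration.

```python
# Sketch: "what could happen if we ran certain media mixes?"
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# weekly spend on three channels (say TV, digital, radio), 104 weeks
spend = rng.uniform(0, 100, size=(104, 3))
true_lift = np.array([0.8, 1.5, 0.3])              # assumed channel effects
sales = 200 + spend @ true_lift + rng.normal(0, 10, 104)

model = LinearRegression().fit(spend, sales)

# score two hypothetical mixes: TV-heavy vs. digital-heavy
candidate_mixes = np.array([[80, 10, 10],
                            [10, 80, 10]])
print(model.predict(candidate_mixes))
```

The real promise she describes is feeding far more, and messier, data into this kind of model than three tidy spend columns; the regression here is just the smallest recognizable version of the idea.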

[19:12]

Yes, I did a fair amount of modeling in my early career. The way that I was taught to do it, which is to say there are lots of ways to do it, is you ask a question in your survey, something like “probability of purchasing a TV,” and then you level-set that against actual TV purchases over that period of time, and that gives you a baseline. Then you ask a similar question about a new product that your customers are interested in measuring, and then perform a regression. And all of a sudden you’ve got that, or a Van Westendorp or some other kind of methodology that is leveraged in order to come up with the predictive… well, Van Westendorp is a little bit different. My broader point is: do you see marketing research as a discipline starting to use and leverage AI to do these predictive market models versus the traditional, old-school stats point of view?
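A minimal sketch of the calibration-plus-regression approach Jamin describes might look like this in Python; all of the intent and purchase figures are hypothetical.

```python
# Sketch: regress actual purchase rates on stated purchase intent for
# past products, then apply the fit to a new product's stated intent.
import numpy as np
from sklearn.linear_model import LinearRegression

# past products: mean stated intent (0-1) vs. observed purchase rate
stated_intent = np.array([[0.10], [0.25], [0.40], [0.55], [0.70]])
purchase_rate = np.array([0.02, 0.06, 0.11, 0.16, 0.22])

calibration = LinearRegression().fit(stated_intent, purchase_rate)

new_product_intent = np.array([[0.48]])   # survey result for the new product
print(f"predicted purchase rate: "
      f"{calibration.predict(new_product_intent)[0]:.1%}")
```

The baseline step is the calibration fit; the new product is then scored against that baseline rather than read at face value, which is exactly the "level-set, then regress" move in the question.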

[20:17]

I would say it is probably becoming more valuable in that space, for sure. We worked on so many regression models, and I still couldn’t tell you if I really knew whether any of that was going to play out. It was hard. There is a famous quote… Oh, gosh, I’m not going to get this right. Something like: “I know half my advertising works. I just don’t know which half.”

[20:53]

And then we categorize that half we don’t know under branding.

[20:55]

Right! Exactly! And it’s never going to be a completely exact science. I think predicting behaviors is very hard. But statistically, it still was not quite enough of an indicator of what was really happening out there. AI has the ability to look at a lot of things, and because it can also look at unstructured data, you have this unique opportunity where it can look across more than just the statistics. Now it can look across conversations and different things that can be fed into the whole pie, and try to get a better understanding of what could potentially happen. That’s where AI’s promise has always been, and it now has so much more data to draw from to try to find answers to very complicated questions.
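One concrete way to fold unstructured conversation into the same predictive pie, sketched in Python with scikit-learn; the comments, column names, and outcomes are invented for illustration.

```python
# Sketch: combine unstructured text with numeric data in one model.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

df = pd.DataFrame({
    "ad_spend": [10, 40, 25, 60],
    "open_end": ["love the new flavor", "too expensive",
                 "great value", "would not buy again"],
    "sales":    [120, 90, 140, 60],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "open_end"),    # unstructured conversation
    ("num", "passthrough", ["ad_spend"]),       # structured statistics
])
model = make_pipeline(features, Ridge()).fit(df, df["sales"])
print(model.predict(df))
```

The point of the sketch is structural: the text column and the numeric column land in the same feature matrix, so the model can "look across more than just the statistics" in exactly the sense described above.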

[21:46]

AI is part of the tool kit, right? Let’s say that you’re entering into an insights role inside of an organization, in marketing research or some other way. Well, actually, let’s focus on market research: what skills do you think that person should be cultivating in order to successfully drive insights inside of the firm, basically informing executive-level business decisions?

[22:14]

Yes. There are a lot of different things. The first one that comes to mind, because it is the one that I constantly run up against, is flexibility. You have to be willing to roll with what comes along, not only with all of the changing technology and the different things that come up, but it can also be very difficult sometimes to leave your opinions at the door and say: “OK, well, let me look at this a little bit differently.” Insights, when you get to the executive level, need to be pretty battle-tested, right? You want to make sure that you feel pretty good about them, which means you have to, at some point, vet them in various different ways to know that you have something collectively that you feel is going to stand the test of time, especially in the enterprise, where big, big, big decisions get made, right? And so you have to be flexible in the tools you use and in the kind of data you’re looking at. You have to be willing to look across a whole bunch of different types of data, trying different methods. I don’t think you can do “plug and play” anymore. I think this is back to your point about all of those longitudinal studies and all these tracking studies, where there was one way to do it. You did that every time, and you reported that number at the end of the year. Now there’s so much innovation and change. I think staying on top of it is challenging. But being willing to be flexible and to reinvent at various times is going to be a really important skill set.

I am also going to go back to creativity, and this feeds into flexibility a bit, which is also super important. It’s a funny thing, because I think what really helps that is being able to draw from things that aren’t all related to what you’re doing, or even, in some cases, to your domain, right? It’s looking at what completely different companies or competitors are doing, or even people completely outside of your industry, and trying to see how you can maybe utilize some of those elements in what you are doing, to come up with new ways to think about things. Every industry is getting so incredibly competitive, certainly saturated with a lot of known insights. Getting something new and different requires a whole other level of flexibility and creativity and inventiveness that you are just constantly having to hone. It’s not easy to do, because you’ll get myopic in your workflow and then go: “When was the last time I even read anything on a new technique in this area?” But it’s something to keep in mind.

[24:48]

This is such an interesting point to me. When I started my career, it used to be the case that it was adequate to conduct a consumer survey and then analyze, PowerPoint, and then storytell, right? But it was all in the context of that study. Now it feels like that’s wildly inadequate, right? You need to really hone in on providing the context, market, business, social, whatever, of that particular insight, because the context informs so much of the implication of the data. One of the things that I’m seeing more and more in research reports is that maybe 25% is spent on the setup, the context, and the implications at the business level. So it’s almost like we’re moving a little bit broader and then also going deeper with the insights.

[25:45]

Wow! It’s so funny that you mention that, because context is a big, big thing with me. I completely agree. It is telling stories, and it’s telling stories with the details, where you can really start to see what’s happening. I think on the technology side, especially with usability, there has been a tendency toward scores and very quant-like representations of the learning. I have pushed to put a lot more context even around that kind of thing. Just because somebody is navigating through a website does not mean there aren’t a lot of interesting things, especially if you are sitting there watching them, that can tell you about their thought process, or why, on that day, they ended up in certain parts of the experience. And that is where it gets interesting. It’s also true that your insights are better remembered with context. Without context, they are “somebody wants that.” But when you can go back and replay a story to somebody else about the context of why they want it, it gets institutionalized, it gets internalized, it gets retold, and it’s that whole fireside-chat kind of phenomenon. I’m a big believer in context. I would almost say that the context is 90% of it. And I completely agree with your point.

[27:11]

What I described is actually incorporating a lot more data into the narrative that you build out. The master storytellers are doing that, but now the actual content on the slides, and the actual story that they tell, is re-tellable. So it’s actually a lot less content that winds up getting displayed, and the story is profoundly simplified to its core essence. It’s really interesting; it’s a much harder job today than it was before, I think. It is one of the reasons we have to leverage any tools that we can to help us.

[27:43]

Yes, and that’s where, again, unstructured data comes in, right? It is all of that kind of conversation. It’s interesting how AI will be able to help us with that. I think insights engines will get a lot better, and they will start to be able to serve up the context in ways that we can’t possibly get to by going through all that data ourselves. That will be super exciting when it happens, because all of that context is what we want to hang on to.

[28:16]

Yes, insights and context. That would be an interesting business to start, I think.

[28:17]

No kidding! That would be great, right?

[28:23]

I think I have done about 1,000 interviews, and I’ve told this story before on the show, so I apologize to the listeners for the redundancy, but it’s rare and worth mentioning. I did a quant study, relatively short, and at the end I asked: “Please do me a favor, and take a 15-second video, or some period of time, of your environment.” One lady, I’ll never forget it, took a video with a number of kids running around like chickens with their heads cut off, as my mother would say. And I was thinking to myself: “All of a sudden, that draws everything into question about the insights that she was providing in that survey for me.” You know what I mean? That was really important, the context in which she was providing that insight, which in that case was potentially moving a multimillion-dollar ad buy. So it seems like maybe they’d want to know that? I don’t know. Anyway.

[29:21]

Absolutely! I did mobile ethnography, like I mentioned, at BrainJuicer, where we had customers videoing various things, unboxing experiences, as I mentioned, and all sorts of things. And you saw the context of their world there, right? There was one really funny one when I was working on a cookie that was being introduced. The husband was more excited about the cookie than the wife. And the wife was the one in the study, and he kept creeping into the video and taking it, and she eventually had to hide the box from him. But it was an interesting dynamic that you got to see. The cookie was targeted to women, as they can be, because it had a certain dietary benefit. But it was like: “Who cares? See, this guy loves it.” So, yes, there are so many stories that can be told by being in that environment, which is obviously the power of ethnography and the power of storytelling.

[30:13]

Yeah, which links back to where you started, that is, the power of AI, because it’s so hard to do that at scale.

[30:24]

Yes. It is hard to do that, yes. That is the promise of it, and it will get there for sure, and it will change everything. I still firmly believe that even as it starts to be able to go through a lot more of that data and comb through it and give insights, humans are still going to be very, very much in the mix with it in terms of building off of it. You know how you have probably collaborated with another researcher before, and you kind of riff off each other to come up with the ultimate viewpoint on something, or the ultimate insight? I believe that is how the relationship with AI will move forward.

[31:03]

Oh, I completely agree. This whole fear around AI removing jobs: maybe in the next 50 years, but not in 20 years, at least not from my vantage point. It’s all about partnership. I liked your augmented intelligence point of view.

[31:18]

Yeah, I agree. I just don’t see that happening.

[31:24]

So, taking a future look: how are we going to be different as an industry in five years?

[31:29]

Oh man. Well, let me get my AI together, and I will tell you. Ha! Where’s my predictive analytics? I will give you one viewpoint I’ve been thinking a lot about, and this is because I am in technology now and more so in this space. I think UX research and market research are going to merge, because I am already seeing, in the realm of usability and user experience research, a lot of researchers in that space saying: “God, we need to understand more about the market. We need to do more up-front qual.” And then, when I was at IIeX, they had several sessions on usability, which was pretty funny, because some of us from the team went to that conference and said: “Wow, they introduced usability like it was a new technique.” I think it’s pushing into the realm of market research to say: “Hey, nothing is stopping you from wanting to dig deeper into the online experiences of your customers, even though you might be at the brand level, right?” So I think we’re going to see all of this come together as one big realm of customer research, and I think it should, because customers will engage with you all over the place. And why wouldn’t you have one researcher, or a team of researchers, looking across all of it, from the market to the online experiences to everything else, in a meaningful way that doesn’t separate user experience from market research?

[32:59]

We have addressed this next question, but I’m gonna ask it anyway, just to see: If you were going to create a company today servicing the industry, the insights industry specifically, what problem would you address?

[33:13]

Yeah, I like your context one a lot. So this is what I’ve been thinking about for a while, and I don’t know if it’s controversial or not, but it’s this whole idea of: is bias really a bad thing? The reason I say that is that in research we are constantly saying you can’t be biased, we have to be unbiased, and we all know that’s impossible. You want an unbiased sample, and this, that, and the other. Well, the panel you have drawn from probably already has bias from a million different angles, right? We know that, as humans, bias is inherent. Certainly there is bias you absolutely want to be careful of, anything that harms anyone. But in some cases, bias is to be learned from. And if it exists, how might we learn from it and gain insights from the bias itself, rather than treat it as something we should either ignore or pretend we have minimized out of the equation? So a business around understanding how we can work with bias, rather than avoid it or work against it, I think could be really interesting to figure out. Even with that Google example, there’s more going on there with how people are clicking on those CEO images. What is it? Is it purely gender bias? Are there other things at play? What can be learned by unpacking some of those elements that will help us better understand the role of bias? I would also argue that, in some cases, bias is not any different from having a hypothesis. Having a hypothesis means I have a point of view on something without all the data. And I am biased in a certain direction because I think this might be what is going to happen. Then I will go into a study with that hypothesis, and I will obviously look to see whether it plays out. But we all know you are looking more for that particular thing than for others, because that is where your mindset is. It’s not a bad thing. It is something we all do. But how might we think about reframing the use of bias in a way that we can learn from it, that we can improve the outcomes, and treat it as something that is part of the mix, not something that we just should avoid?

[35:19]

Yes, it would be fun from a startup perspective, really fun, and it’s useful to think about… You are familiar with Myers-Briggs, of course, or whatever personality profile thing?

[35:29]

Yes, yes.

[35:30]

So, like, for Jamin Brazil, what biases do I have in my life that I probably honestly just don’t know about, that are just a function of culture and context?

[35:48]

Absolutely. Yeah.

[35:49]

That would be a really interesting… I don’t know how we would do that, but that does seem like something AI could address.

[35:53]

That would be a great Myers-Briggs. You are right. Because then that’s something you would know going into any future work: okay, this is a mindset I’m coming in with, and now what do I do to either mitigate it or, in some ways, celebrate it? Because it’s a funny thing, too: I was reading a Harvard Business Review article recently that talked about how employees get reviews, and so many times reviews are a negative experience because they focus a lot on your weaknesses. “You should be doing this.” “You should be doing more of that.” Instead of: “Okay, let’s celebrate what you are good at and find other things for you to do that celebrate this thing you are good at.” So it’s kind of that same idea. How could you take what might seem like a negative and say: “Well, there may be ways in which this could be extremely helpful with certain studies,” or “Having this viewpoint could really make me the best researcher for this type of research,” as opposed to: “Oh, you are biased in a certain direction, and now you’re not good for certain things”?

[36:54]

Yes, totally. It is such an interesting point of view. I can pick on my grandfather here, my late grandfather, so I will tread lightly. But my point is that he grew up in the World War II generation. And there was just a completely incorrect set of biases that were ingrained there, not in a positive way. I am not saying he was part of some terrible group or anything like that, but it was just different, really different. He didn’t fit into a millennial culture, how is that? And yet, with no malicious intent or anything along those lines, it was just the framework that he understood and, incorrectly, agreed with. So there was the opportunity for him to get informed on that, to hear: “Hey, these are the areas where you have inherent biases,” because you can often see them in other people, but people can’t really see them in themselves. And that’s the point: it’s hard to see the blind spots in ourselves. Something like that could be really interesting.

[37:59]

Absolutely!

[38:00]

Sorry about this. I totally got carried away with the conversation.

[38:04]

No, what is interesting about your grandfather, too, is that, who knows? His perspective might be getting smaller and smaller and smaller as millennials grow. So that may be a perspective that’s also interesting to understand, or potentially to have in a certain study where there is another angle to things, you know what I mean?

[38:24]

Totally. At a micro level and at a macro level, you start seeing how that plays out. That’s so interesting. All right, my last question: What is your personal motto?

[38:32]

Ha! I guess the one that comes closest to encapsulating me is: “Always be prepared.”

I learned that a long time ago from my father, who approached everything with a lot of preparation and thoughtfulness. He had a plan for everything, and it has really served me well. Just having some level of preparation is, I think, sometimes 90% of the job, 90% of the battle, whether you’re reading secondary research ahead of a study, or you’re just getting smart about an industry, or you’re having a conversation with some stakeholders. Before you get started with something, you’ve got a good jumping-off point, which means you’re not just going in shooting from the hip in many cases. I’m someone who likes to have a level of preparation. And it’s funny, because in some ways AI is very much about that. Building models is very much about a tremendous amount of preparation going into any kind of work that you are then going to do with it. But yes, that’s my thing. I like to be prepared.

[39:37]

I love that. I have got to end with two stories on that point. A good friend of mine, Jennifer Crawford, took a bet on me when we were at Decipher in the early days. She is the owner of a New York-based research company called Research Solutions. I remember I co-pitched with her to Meredith about an online diary, something you’re really familiar with, and in that pitch she came in with a folder of preparation about a quarter-inch thick. There was a bunch of stuff in it about the meeting. And we left after 45 minutes; I don’t know if we actually opened it. Maybe we got through two or three pages in the folder. And this is the only time I have ever heard a customer say: “I want to thank you so much for being so well prepared for this meeting.” And we won the business. It was a windfall for both of our firms. It was spectacular. Anyway, sorry about my reminiscing. But preparation, as it turns out, I think is really important. Oh, and the second one I want to mention is Voss Media. Voss Media, which is a big company, is inundated with papers about states of industries, etc. They actually subscribed to an AI-based system that does the processing, so that they can reduce all these vats of information into a query string, pull out the pieces that are relevant, and say that they have 99% coverage on their content. So anyway, yes, I like the preparation point. Thanks so much for sharing that.

My guest today has been Ellen Kolstø. Sorry about that hiccup. Ellen Kolstø, Design Principal at IBM Q. Thank you, Ellen, for joining me on the Happy Market Research podcast today.

[41:20]

Thank you. It was lovely being here.

[41:22]

Everyone else, this episode is in conjunction with the upcoming NEXT conference. You have a couple of weeks still to register. You can find information online, of course, at https://happymr.com/next2019, or just Google NEXT. It is located in Chicago, on June 13th and 14th. It is going to be a wonderful event, and I hope to see you there. As always, I love your screenshots and feedback. Share this episode; it’s appreciated. Have a great rest of your day.