Happy MR Podcast Podcast Series

Ep. 232 – Aji Ghose – How Sky is Using a Mixed Methodological Approach to Get a Complete View of the Customer, and How You Can Too

My guest today is Aji Ghose, who holds a PhD in computational cognitive modeling and is Head of Research and Data Science at Sky. Sky is a media and telecommunications conglomerate owned by Comcast and headquartered in London. It is Europe’s largest media company and pay-TV broadcaster by revenue, with 23 million subscribers and more than 31,000 employees as of 2019.

Find Aji Online:

Twitter: www.twitter.com/ai_modeller 

Website: www.sky.com

Find Jamin Online:

Email: jamin@happymr.com 

LinkedIn: www.linkedin.com/in/jaminbrazil

Twitter: www.twitter.com/jaminbrazil 

Find Us Online: 

Twitter: www.twitter.com/happymrxp 

LinkedIn: www.linkedin.com/company/happymarketresearch 

Facebook: www.facebook.com/happymrxp 

Website: www.happymr.com 

This Episode’s Sponsor:

This episode is brought to you by HubUx. HubUx reduces project management costs by 90%. Think of HubUx as your personal AI project manager, taking care of all your recruitment and interview coordination needs in the background. The platform connects you with the right providers and sample based on your research and project needs. For more information, please visit HubUx.com.


On episode 232, I’m interviewing Aji Ghose, Head of Research and Data Science at Sky, but first a word from our sponsor.


This episode is brought to you by HubUx. HubUx is a productivity tool for qualitative research. It creates a seamless workflow across your tools and team. I originally came up with the idea as I was listening to research professionals in both the quant and qual space complain about, and articulate the pain around, managing qualitative research. The one big problem with qualitative is that it’s synchronous in nature, and it requires 100% of the attention of the respondent. This creates a big barrier and, I believe, a tremendous opportunity inside of the marketplace. So what we do is take the tools that you use and integrate them into a workflow so that, ultimately, you enter your project details (that is, who you want to talk to, when you want to talk to them, and whether it’s a focus group, in-person or virtual, IDIs, or ethnos), and we connect you to the right people at the times you want to have those conversations or connections. Push-Button Qualitative Insights, HubUx. If you have any questions, reach out to me directly. I would appreciate it. Jamin@HubUx.com


Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research Podcast. My guest today is Aji Ghose, who holds a Ph.D. in computational cognitive modeling, grounded semantics, and machine learning, and is Head of Research and Data Science at Sky. Sky is a media and telecommunications company owned by Comcast and headquartered in London. It is Europe’s largest media company and pay-TV broadcaster by revenue, with 23 million subscribers and more than 31,000 employees as of 2019. Aji, thanks very much for joining me on the Happy Market Research Podcast.


Thanks for having me.


I’d like to start out with getting a little bit of context about yourself. Tell us a little bit about your early days, your parents, maybe how they informed your current career.


Sure. So, I grew up in Vienna, Austria, and my mom was a math teacher. So from quite an early age, I was used to seeing a lot of numerical books around the house and just got a flavor for numerical puzzles and those kinds of things. Both of my parents are retired now. My dad was a diplomat for the U.N., so he was in civil service for about 35 years and was more interested in politics and diplomacy. And for me the contrast between the world of diplomacy versus math was quite straightforward: I found one really boring, and I really liked the other. Clearly, I went for the more mathematical route.


Thinking about math: a lot of people would… I say a lot of people; maybe a lot of my kids think of math as kind of the boring subject in their life, which is funny because about 80% of my life is centered around math as a professional. What was your connection to mathematics?


So, in addition to my mom, I think the main connection to math was that I had a really good math teacher growing up. Her name was Ms. Salmachi. She introduced us to math in a really fun, exciting way. Math wasn’t this boring, dull thing you did inside a classroom. Math was something you did in the real world. You explored things: how rivers flowed. You just had fun. So the way I got engaged with math was more that we would go outside. Yes, we’d learn a few things, but I would really have a lot of fun. So in many ways it was actually better than gym or physical education because you had a lot of time to just explore things and ask questions, and the emphasis wasn’t “Oh, look at these dry equations and try to figure them out.” No, no, we would try to explore the real world with numbers.


Oh, that’s such a great point about the importance of engagement with a subject and the role that the teacher plays. Do you keep in contact with her at all? Does she have any sense of her impact on your life and the success that you’ve had? 


Yes. Well, not recently, but after I got my bachelor’s in psychology quite a few years ago, I reached out to her and said that the main reason I opted for all the math and statistics courses (which is quite rare; most students have a tendency to avoid those subjects) was down to her teaching, and that of another teacher, Mr. Selvincy, who was my technology teacher. He got me into computer science and the coding side of things quite early on; I was like eleven. So I reached out to both of them. In fact, now that I’ve got my Ph.D., I was planning to reach out to both of them again.


And congratulations on the Ph.D. I really don’t have a sense of context in terms of when you achieved that. Joaquim Bretcha, the president (I call him El Presidente) of ESOMAR, mentioned on Twitter that I should tell you “Congratulations.”


Thank you very much. To use his words, “very much a fresh Ph.D.” I literally passed my oral exams on Monday. 


Wow. So quite literally. That’s amazing. Congratulations, wholeheartedly. Just accomplishing your Ph.D. is a pretty remarkable challenge. But when you think about some of the biggest challenges that you’ve overcome, what would one be that stands out for you, either personally or professionally, and how did you overcome it?


So one of the things that I had to personally overcome, probably the hardest thing, was having my dad diagnosed with cancer almost six or seven years ago. I’m in London but he’s based in Vienna, Austria, so there was a lot of distance between the two of us. How do you cope with that? My father is still a very proud, very traditional man who doesn’t usually share a lot. So he wouldn’t, for example, share details of him going for therapy or surgery; I would always find out after the fact. And he would go, “Oh no, it was all fine. No need to worry about it.” That was quite tough over the last few years, especially working full-time and doing a Ph.D. at the same time.


Yeah, that’s a heavy lift. How’s he doing now? 


Yeah. He’s doing well.  With the condition he has, it’s a little bit difficult to tell, but it’s stable now.  So that’s the main thing.  


The issue with the parent transition is actually something that I’m terrified about addressing, to be honest. My father is 81 years old and, similarly to yours (I don’t know if it’s generational or what), he was also diagnosed with cancer and didn’t tell my sister and me until after everything was pretty much in hand and sorted out. I think you’re right that part of it is just this pride thing: this is my burden to bear, and I don’t want to leave it on my kids. But as I think about how it has informed me as a parent now, I’m definitely more inclined to share, maybe a little bit more, in the context of those struggles, for preparation, etc. It’s definitely, I think, a generational shift.


Yeah, absolutely. My dad is 77, I think. He grew up in India, so the cultural or generational shift is quite significant. Similar to you: I don’t have any kids yet, but my approach will be, let’s be open and upfront about everything.


There you go. So let’s shift gears a little bit – machine learning. Talk to us a little bit about what machine learning is. That’s probably a really good place for us to set a baseline. 


So machine learning is really the study of using algorithms, which are essentially recipes for performing a range of tasks, without explicitly telling the computer what to do. The machine learns, typically through a lot of examples, what we want it to do and what failure typically looks like as well. A good example would be classification. Instead of assuming a certain type of relationship, it all comes down to teaching a machine to figure out from many examples what right looks like, or what two different types of people, or five different types of people, look like. So that’s machine learning. To contrast it with traditional programming: in traditional programming, you have to tell the computer what to do, and in many ways you need to be the architect of the algorithm itself. Traditionally it’s really human intelligence encoded in the machine, whereas in machine learning, it’s based on a lot of data and the machine gradually learning some representation that’s useful.
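To make the contrast concrete, here is a minimal sketch in Python. The data and the deliberately simple learner (a nearest-centroid classifier) are hypothetical and purely illustrative, not anything Sky uses: a hand-written rule versus a model that infers its own boundary from labeled examples.

```python
# Traditional programming: the human encodes the decision logic directly.
def rule_based(hours_watched):
    return "heavy viewer" if hours_watched > 20 else "light viewer"

# Machine learning: the model learns the boundary from many labeled examples.
def train_nearest_centroid(examples):
    # Average the value seen for each label; the averages become the "model".
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    # Classify by whichever learned centroid is closest.
    return min(centroids, key=lambda label: abs(centroids[label] - value))

examples = [(2, "light viewer"), (5, "light viewer"),
            (30, "heavy viewer"), (40, "heavy viewer")]
centroids = train_nearest_centroid(examples)
print(predict(centroids, 28))  # boundary was learned, never hand-coded
```

The point of the sketch is only the division of labor: in the first function a human authored the threshold; in the second, the threshold emerges from the examples.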


When I think back on my statistics background in the mid to late nineties, I was doing a lot more segmentation analysis and regression and kind of basic stuff like that: modeling out projected impact on product adoption in markets and whatnot. We would use math to basically create, at a basic level, a histogram, right, and then apply some art, I guess, to the segmentation piece of it. It sounds like the application of machine learning here is really more of a standing position, where it’s always in that process of understanding and segmenting.


Yes, so that’s a really interesting point actually. It’s a bit more dynamic than traditional statistics. We’re talking about machine learning, obviously; for a lot of applications, traditional statistics is still the right way of doing things. But the way machine learning sometimes approaches a problem is this: instead of a human analyst having to encode or recode variables in a certain way, or combine variables (there’s a fancy name for that: feature engineering), a lot of machine learning algorithms, not all but a lot of them, are really good at doing that bit themselves. People like me in the sciences had whole stats courses about feature engineering. These days, with some machine learning algorithms, you don’t really do much of that feature engineering the way you have to with more traditional algorithms.
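As a sketch of what that manual step looks like, here is a toy example of hand-crafted feature engineering. The variable names and values are hypothetical, purely illustrative: the analyst recodes and combines raw columns before modeling, which is exactly the work some modern algorithms can now learn implicitly.

```python
# Hypothetical raw survey/CRM rows (fabricated for illustration).
raw_respondents = [
    {"age": 34, "monthly_spend": 60.0, "support_calls": 3},
    {"age": 71, "monthly_spend": 25.0, "support_calls": 0},
]

def engineer_features(row):
    return {
        # Recoding: bucket a continuous variable into a category.
        "age_band": "18-44" if row["age"] < 45 else "45+",
        # Combining: a ratio feature the analyst believes is predictive
        # (+1 in the denominator avoids division by zero).
        "spend_per_call": row["monthly_spend"] / (row["support_calls"] + 1),
    }

features = [engineer_features(r) for r in raw_respondents]
print(features)
```

Every line of `engineer_features` encodes an analyst’s judgment; the contrast Aji draws is that representation-learning models can discover useful recodings and combinations like these from the data itself.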


So from your vantage point, where are you seeing it intersect with modern market research, and how are companies using it to help inform their decisions?


So, one of the use cases of machine learning, in market research in particular, is understanding unstructured data. You can use machine learning for any data set, structured or unstructured, with structured typically being the simple, tabularized data that we’re all used to. But for unstructured data like text and images, that’s where machine learning can really start looking at the patterns and identifying trends that you’d be very hard-pressed to find using standard statistical methods. A simple example: if you have a video with a bunch of different objects in it, standard methods can’t really identify what’s going on, but with machine learning it’s very easy to segment a scene: “That’s a table; that’s a person; two people are talking.” Those kinds of things are quite useful for market research purposes as well because, all of a sudden, you can automatically start tagging videos or looking at intonations in voice using advanced speech analytics. And, more important, you can start integrating multiple types of data. I talked about speech, for example, and I talked about images. We can start integrating the two and start understanding: is there an overall narrative structure that’s developing in a particular video clip? And across different types of video clips, are there different narrative schemas in use? Those kinds of things can be codified, and you can start saying, actually, for these kinds of videos, campaign A might perform really well, whereas for videos with this other schema, campaign B or C might work pretty well. Those kinds of things have historically not been possible with standard methodologies, but machine learning can help decode those hidden patterns in very complex data sets: images, text, but also sensor information, which we at Sky don’t make that much use of, but I know others in FMCG areas do.
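A toy sketch of the scene-tagging step described above. The per-frame detections here are fabricated; a real pipeline would get these labels from an object-detection model, not hand-written lists. The sketch only shows the aggregation idea: turning noisy per-frame labels into a stable scene-level description.

```python
from collections import Counter

# Hypothetical object labels detected in three consecutive video frames.
frame_detections = [
    ["table", "person", "person"],   # frame 1
    ["table", "person", "laptop"],   # frame 2
    ["person", "person", "laptop"],  # frame 3
]

def summarize_scene(frames, min_frames=2):
    # Keep only objects that appear in at least `min_frames` distinct frames,
    # which filters out one-off detection noise.
    seen_in = Counter()
    for frame in frames:
        for obj in set(frame):
            seen_in[obj] += 1
    return sorted(obj for obj, n in seen_in.items() if n >= min_frames)

print(summarize_scene(frame_detections))  # ['laptop', 'person', 'table']
```

Scene tags produced this way are what downstream steps (narrative-structure analysis, campaign matching) would consume, and the same aggregation pattern applies to speech or sensor labels.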


Yeah, it’s an interesting use case. One of my evolving theses in the space is that one of the biggest opportunities in front of us is triangulating truth. And triangulation is really a misnomer here. But what I mean is: we have a survey, or whatever the data collection is, an IDI, etc., and that’s really feedback in isolation, without context. The way that we can provide greater context is by incorporating, alongside that self-reported data, behavioral or transactional data, or just the longitudinal points of view that that person may have had over time. But you’re taking it even a step further by being able to process, literally from video, the environments that people are giving the feedback in, which is important because the way that I’m going to respond on a bus ride is probably different than at home before bed.


Exactly, absolutely. So, my Ph.D. was all about looking at visual scenes and trying to understand the meaning that’s embedded in the visual scene itself. And one of the things I’m really passionate about at Sky is collaborating with the qual team, headed up by Sergio, my equivalent on the qual side. They collect rich data from people watching Sky or connecting to our services from the comfort of their home. A lot of the time that data doesn’t really get utilized: after the research is done, we move on to the next project. But what if we started slowly looking at how behaviors change over time, taking those images and then learning what those associations are? That is potentially quite a rich source of information.


That’s actually something I did once. This goes back a couple of years, literally two years. I did a study where… I can’t remember how many questions, a 10- to 15-question survey, so short. At the end of it I said, please take a picture or a video of the room where you’re giving me the feedback. And I’ll never forget: one lady took a picture, and she had these little kids, and they were running around like Lord of the Flies maniacs. It was hilarious. And, all of a sudden, it really dawned on me that I could be missing a lot of information. If I just made decisions based on this one mom without the context of how her day was going when she was providing that feedback, I may be in trouble. Of course, on average we assume everything works out. Just like on average everybody has 2.3 kids. So…


Exactly, yeah. 


How should we be training the modern analysts and the data scientists that are entering into market research now? 


Yeah. For data scientists and analysts in particular, I think one main focus should initially be on communication. Yes, that might sound odd because you might expect me to say math or coding. But we have a lot of people who are already really interested in data science, and they’re pretty good at coding, they’re pretty good at stats, and usually they’re quite interested in math as well. What they really struggle with is communicating their findings and understanding what the problem is in the first instance. Junior data scientists and analysts especially might have produced exceptional, sophisticated models, but then you’re like: are you really solving the problem that you’re trying to address? They’ll very quickly jump to models but not really engage with clients or internal stakeholders. So communication is definitely something that we in the industry need to help data scientists and analysts improve, but universities do as well. The University of London, for example, for people doing master’s programs in things like computation, has recently been making sure that they can present their work, that they can present really complex ideas in fairly simple terms.

Communication is one. The second one, I’d say, is statistics. A lot of people are starting to ignore statistics, especially people with more of a computer science background. They do one course, or even a degree program, and they just assume that if they can run machine learning algorithms A, B, and C, that’s it, that’s data science; they don’t need to worry about things like probability, standard distributions, and all those traditional types of statistical methods. And that’s really problematic, because in order to interpret the output of machine learning algorithms, statistics is actually very useful. So stats is definitely a focus area, I would say. Thirdly, algorithmic thinking. I’m not saying coding; a lot of times people go on a course and think, “OK, I can run a couple of scripts, and that’s it. I’m great.” I’m talking about algorithmic thinking, which is: What is the business problem? What are the key components? What do we need to focus on? How do I create a model or a set of models that really addresses the question I’m trying to answer? And that can only be done if you move away from that mentality of “problem A equals recipe C.” A lot of times I see data science being run that way, where people automatically say, “Oh yeah, that’s just another neural network,” etc. We have to move away from that. Becoming problem solvers should really be the goal. The final one I would say is learning to learn. I’ve been in the world of data science for about 10 years now. I’ve used all sorts of different languages, from basic VBA and SAS all the way to things like R, Python, and MATLAB. The only constant I’ve come across is that I’ve never been too wedded to a single algorithm or tool; I’ve always had to keep developing. If you don’t have that learn-to-learn mentality, data scientists will find it increasingly harder to do their job in the future.


Going back to the first point that you raised about the importance of storytelling and really building out a narrative that people can relate to, that ultimately provides action to the organization as opposed to just knowledge: how are you training for that? Is there a lynda.com-style set of courses for the masses? How are organizations helping people who are highly technical? I think what you’re describing in data science is in a lot of ways analogous to developers, who culturally have maybe not been seen as the best communicators. They’re not the marketing guy, right? Or gal. So, if I want to develop myself in this way and build out that skill, do you have some recommended resources?


So I don’t have any recommended resources in terms of structured courses or more formal training, but what I’ve found quite useful at Sky and at places like Kantar Media is buddying up very technical people with quite non-technical people, or in many cases, buddying up someone who is, for example, a content research expert but hates analytics with someone who dreads talking about content research but is very familiar with the latest machine learning techniques, and getting them to work on a project together. Initially it’s difficult, and they may not realize that the other person’s skills are useful. At Sky, I head up quant research and data science, and all the time I’m sort of a translator between the two. But over time, when you do buddy people up with very different skillsets, they start seeing the benefits and they start learning. The non-technical person will start to understand when a regression might be more useful than a crosstab. A data scientist might actually realize, “Oh, there’s a reason why the quant researcher decided to simplify the narrative: because otherwise the stakeholder would really have no idea what they’re talking about.”


That’s right. I mean, that’s a two-fold benefit, isn’t it? When I started Decipher, Jamie Plunkett was really the math brain of the business. I learned a lot; in fact, I learned 90% of my stats from him, despite the fact that, of course, I took the classes in college. But the point is that we saw benefit in really cross-training each other and shoring up those skills that may not come naturally to the individual.


Exactly. And for me, it’s an ongoing process. I love engaging with my peers, whether it’s the head of insight or the head of strategy, and starting to understand how they integrate that information into their role, and how the head of global brand strategy, for example, starts making decisions on the back of it. I think that should be applied across all levels and experience bands because it generally helps everyone do their job better.


So, tell me about the research project you’re the most proud of. It may be your thesis.


Actually, yeah, it is, for me, the research project I’m most proud of so far. My Ph.D. thesis is called “Grounding Semantics in the Real World.” It’s really about understanding how much meaning there is in the visual world. People obviously talk about how important analyzing text is, but in my thesis and research I’ve shown that even a simple photograph of, say, an office, where you have books that are related to tables or desks and computers, says something. The fact that all these objects co-occur in the same scene says that they are really related at a specific point in time and space. That is very similar to how text analysis is done as well. Text analysis has traditionally looked at what types of words co-occur in a particular document; because of that, you can start understanding what the key themes are. My research is probably the first to do the same in the visual world. So it’s really decoding the themes and the topics and the meaning that’s in real life. And, as an example of how this can be quite useful, one of the chapters in my thesis looked at gender bias. People typically understand that gender bias is an important thing, and in text it’s clearly quite likely you’ll find a lot of gender bias. However, people normally don’t think about images and how much bias there might be in them. One of the things I looked at was terms for traditional professions, like teacher, doctor, nurse, CEO. I scraped the web and downloaded a bunch of images, and then, using my algorithm, I looked at the scenes and codified that information computationally. Based on that, women were very closely associated with nurse and teacher, and there were very traditional male associations with CEO. There was a real divide in the semantic space derived from the real-world visual environment.
So that finding was interesting, but it was also, obviously, very worrying, because it shows that when you train machine-learning algorithms using these techniques, you’re also encoding the biases of the real world. And that’s something that a lot of people haven’t really paid attention to until very recently.
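A toy sketch of the co-occurrence idea behind this kind of analysis. The counts below are fabricated for illustration, not taken from the actual study: represent each term by what it co-occurs with (in scenes or captions), then compare terms by cosine similarity, and the bias in the underlying data shows up in the learned geometry.

```python
import math

# Hypothetical co-occurrence counts: term -> {context term: count}.
cooc = {
    "nurse": {"hospital": 9, "care": 7, "meeting": 1},
    "ceo":   {"office": 9, "meeting": 8, "care": 1},
    "woman": {"hospital": 8, "care": 6, "meeting": 2, "office": 1},
    "man":   {"office": 8, "meeting": 7, "care": 1, "hospital": 2},
}

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v))

# With these (deliberately skewed) counts, "nurse" lands closer to "woman"
# and "ceo" closer to "man": the model has absorbed the data's bias.
print(cosine(cooc["nurse"], cooc["woman"]), cosine(cooc["nurse"], cooc["man"]))
print(cosine(cooc["ceo"], cooc["man"]), cosine(cooc["ceo"], cooc["woman"]))
```

Nothing in the code mentions gender; the divide emerges purely from which contexts the terms were observed in, which is exactly why trained models inherit real-world biases.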


That’s probably the most important point that anyone’s ever made on the podcast. Right? So we could have this (and I hadn’t actually considered it): you almost have this self-reinforcing bias that’s being generated by the algorithms that are serving us up the content in the way we want it, because that’s sort of the way that we’ve been fed it.


Exactly, absolutely. From a lot of my Ph.D. work (and in fact I’m building on many people’s work here; other people have found similar things in language), I’ve noticed that these algorithms not only encode the biases but, in fact, in some cases amplify them. It’s almost like they’re creating a caricature of the bias in a statistical model, which is even worse.


That’s so interesting. Which gets back to one of the reasons understanding the math is so important: if you’re just accepting the outcomes, then you could really be the puppet, not the puppet master.


Yeah, yeah, exactly. 


So, market research has gone through a lot of changes. What we’re talking about is a pretty material change. How do you think the space is going to be different in the next five years? 


I think the top change in market research in the next five years will be around increased automation of all the repetitive processes across functions: quant research, data engineering, data science, etc., and people having to be comfortable with the idea that things they have always done manually are just going to be done automatically. That’s something they’ll have to get used to, and that’s a real culture shift. Secondly, I would say there’s going to be more of a transition from data processing to data engineering. Instead of having big teams of people that turn out tables for researchers and data scientists (and I’ve definitely experienced that at Sky in my own department), you’re going to have people that focus more on automating systems to enable unfettered access to data, so quant researchers and data scientists can create their own outputs quite easily instead of going through the quite archaic process of “Oh, someone has to create tabs, then that has to be codified, then some wizard in data processing has to go through some code and a long list, etc.” I think that will start disappearing, and people in data processing will start, hopefully, retraining in the field of data engineering. That’s something we’ve noticed across a few different companies as well.


Are you seeing companies that are doing this? For example, I know Microsoft has a team where they’re taking the disparate pieces of data that they have (telemetry data) and codifying that into a schema, and the next step in their process is to combine the self-reported data that’s coming in, whether it’s qual or quant, into that schema so that it can be processed in a streamlined manner as opposed to having to be done by hand every time, which is tough.


Yeah, no, exactly. 


My question is are you seeing like… Is this innovation coming from within the brands or are you seeing it being empowered from outside? 


So I think it’s being empowered from the outside, because a lot of these kinds of things require technical expertise that companies like Sky, at least in their research or data science areas, typically don’t have in-house. I know, for example, companies like IBM are working a lot on creating very helpful, higher-level schemas to essentially help non-technical people automate as much of their tasks as possible and add value in different ways. IBM’s services are a great example of integrating vast amounts of data in nuanced ways, in ways that humans can still shape. Once you’ve gone through a couple of different passes, the algorithm starts learning what sort of automation is required, and then it can actually do that for the humans. So you don’t have to have knowledge engineers, as you had in the 80s, codifying all of this manually. You can do it a few times, and then systems like IBM’s (and, you know, Google has done it as well) can just do it automatically.


Yeah, I mean, it’s so funny to me: even at a basic level, we still ask gender in surveys even though it’s almost always known at the respondent level; especially if you’re using third-party panel, it’s always known. That’s just one example. I think I did a study on it once, and, on average, you’ll see about 80 to 90 questions in a survey, of which about 17 to 20 are data points, like ethnicity, etc., that are actually already known about that respondent. And what’s funny is those are the questions that are often used in screening criteria, which then creates this mass repetition. So if somebody’s trying to take a survey, they’re going to get asked that again and again and again. And this is the simplest example I can think of where there could be a lot of benefit in an integrated data schema.


Yeah, I know, absolutely. 


So, AR didn’t come up, augmented reality; voice didn’t come up, which I thought was interesting for you. Do you see those as playing a big role in market research in the next five years? 


I think voice is going to play potentially more of a dominating role. However, at the moment, the main stumbling block is quality of data. Either there isn’t enough data, or the data is of such poor quality that, even if you use off-the-shelf algorithms, you don’t get that much information out of it. In fact, we’ve had both of those situations at Sky, where you have a lot of data captured but some of the quality isn’t there. I think also the algorithms for analyzing voice have quite a long way to go in order to be insightful for very large organizations like Sky, with very sophisticated insight functions that span some 100 people. For a very small company, I can see voice being very useful and providing quite immediate insights, but I’ve not really seen it being done at scale except for a few particular companies; at least one startup that I know of, called Verbalization, though we haven’t used them at Sky so far. I know that they, for example, codify voice and text in terms of something like 200 to 300 psychological dimensions, and using those they can add much more value at the insight level with voice data. But I think those are the kinds of things we’re still not seeing enough of: the gap between data and “So what are you going to do with all that information?”


Awesome. I really appreciate the lead on that company. I’m going to see if I can get them on the show. I just checked out their website; it’s pretty cool, and I’ll include a link in the show notes as well. So, biggest issue: What is the biggest issue that you’re facing at Sky, or the thing that you wish that, gosh darn it, somebody in this world would solve and make market research much better?


For me, the biggest issue is not getting the balance right between market research and other types of insight generation, which can include Big Data analysis or A/B testing or econometrics. And it’s not just Sky; it’s pretty much most companies I come across through lecturing at the Market Research Society. Companies fall between two extremes. On one extreme, they do only A/B testing and don’t really value market research. On the other, you have people that love market research so much they don’t care about CRM or Big Data, etc. The problem is that people rarely find the right balance of market research and other types of data to fully exploit both. A lot of times that leads to market researchers getting a bit of a bad rep, because market research agencies especially will always pitch proposals without deeply understanding the real business context, and that can be quite frustrating for internal stakeholders. So, if I were at a market research agency, the main thing I would focus on is getting market researchers to really understand how businesses operate right now, how that’s changing, how big companies already have huge infrastructure invested in Tableau or other sorts of dashboards, and that market research has to wedge itself into that ecosystem as opposed to standing alone, because standalone is usually when market research gets ignored.


Yeah, that’s a really interesting point. You’re touching on a couple of themes that have popped up in the interviews we’ve done here with major brands. What I keep hearing is that there’s a constant need for deep partnerships: the agencies and the people servicing the brands need to truly understand the business context of the insight, and then partner perhaps a little more downstream on how that insight is going to play out in the broader organization.


Yeah, exactly. 


It’s interesting. That’s one part of it. And the other part of it, I think, is, gosh, so fascinating to me. I just presented at Facebook last week (maybe it’s two weeks ago now). In the presentation, I asked how many people were UX researchers or market researchers. The majority were UX researchers first, then data scientists, then market researchers. I thought that was really fascinating, right? And there were a lot of companies, Google and others, represented in this audience. So, what I find so interesting is that they are all doing similar types of research, but it feels like these newer disciplines are re-inventing a wheel that market research has been turning for decades. And yet there isn’t that cross-sharing; the Venn diagram hasn’t quite overlapped, you know what I mean? The knowledge transfer doesn’t exist.


No, no. That’s something I’ve noticed at quite a few MRS conferences, depending on the type of session, especially with digital natives like Google and Facebook. A lot of the time they’re re-inventing techniques that market research addressed many years ago, and at a more strategic level. And yet people tend to bucket all of market research as just asking questions, right? They simplify market research into “You ask questions; you get answers,” and ignore the fact that a lot of the A/B testing methodologies actually stem from market research.


That’s a great point, such a great point. 


So, people create these artificial divisions between things. It’s the same with machine learning versus statistics. People love separating things into buckets. I feel there aren’t enough people who look for the overlaps between all the different areas and try to truly get the best combination of solutions for whatever business problem there is.


Do you think there’s a role inside big organizations for something like a product manager for consumer insights? I don’t know what the title would be, but does that make sense?


Yeah, no doubt. I’m not sure what the optimal title would be, but yes, some kind of translation role as well.


Yeah. Lenny Murphy shared with me a list of technology companies centric to market research, and there are over 600, which actually surprised me; I thought it was closer to 200. So it’s a massive amount, and so is the amount that’s spent. Now, of course, you have the likes of Qualtrics in there, so you’ve got 400-some-odd million dollars, but the sum of money they represent is in excess of $12 billion, which is a meaningful amount of spend. So we’ve entered a time when it’s really never been easier to get the voice of the consumer or a person’s point of view. But at the same time, it’s probably never been easier to screw up the implications of that and walk away with a bad or incorrect view of what that data actually means. Just because it’s easy doesn’t mean you should do it, right? A scalpel in the hands of a kid is a big problem. So the broader point is: just because everybody has a license to do a survey, does that mean that’s acceptable, and should brands think about that broader role of…? Again, I don’t know what the title is. “Consumer Insights” keeps coming to mind, but I feel like that might be too narrow. So anyway, that’s what I got. That’s my soapbox. What is your personal motto?


My personal motto is a famous quote from Isaac Newton: “If I have seen further, it is by standing on the shoulders of giants.” That’s been my guiding principle all along: try finding established studies or academics or anything you can get hold of, and then, instead of reinventing the wheel, ask what I can develop on top of the existing literature to address a particular business or academic challenge.


That’s a perfect way to go out. My guest today has been Aji Ghose, Head of Research and Data Science at Sky. Thank you very much, sir, for being on the Happy Market Research Podcast today.


Thanks for having me.


Everyone else, if you found any value in this, I would greatly appreciate it if you took the time to screenshot it, share it with your friends, and as always, your reviews are greatly appreciated. You can always reach me any time through email Jamin@happymr.com as well as the handle Jamin Brazil on any social media platform. Have a wonderful rest of your day. 

