Q] As Chairman, what are your top three strategic priorities for MRUCI over the next 12–18 months?
Our priority number one, two, and three right now is to get the IRS out. Once that is achieved, we will look at other things. The Media Research Users Council of India has the mandate to conduct research across all media, and multiple studies could come out of it. But historically, IRS has been the flagship product. Unfortunately, we haven’t published an IRS in the last 5–6 years. So yes, the immediate focus is firmly on delivering a bigger and better IRS, and only after that will we consider other initiatives.
Q] You’ve said you want to ‘restore the IRS as the gold standard in readership research.’ What are the current gaps or criticisms of the IRS, and how do you plan to address them?
The biggest criticism of IRS is that it hasn’t been conducted recently. But whenever it is done, the entire industry relies on it; it is considered the bible of research. Everyone turns to it to understand India in terms of demographics. It’s a seminal study because it provides a comprehensive picture of India, covering demographics, product penetration, and, most importantly, media consumption of the country as a whole.
Q] One of the consistent concerns has been that IRS results are released too infrequently to be actionable. Will MRUCI consider more frequent, smaller, or rolling releases instead of waiting for one big annual survey?
Our last two surveys were done in 2017 and then again in 2019, with the latter released in March 2020. Those were fairly close to each other. After that, COVID became the major hurdle that led to the gap.
An IRS survey requires us to go home-to-home. While many surveys today are moving online and picking up samples digitally, that doesn’t work for an establishment survey like IRS, which is meant to present the true picture of India. The randomization and sample selection have to be rigorous; it can’t be a self-selecting online sample. Because of that, from 2021 onwards, for nearly three years, we simply couldn’t carry out the survey due to COVID. Following this long gap, some broader concerns were also raised at the board level. That’s why we decided to first relaunch the study with a pilot phase. In the pilot, we’ll address three or four critical issues currently at hand, and then expand it into a full-scale survey.
Q] Why are you coming up with a pilot phase for the IRS?
There are four key points. First, compared to five years ago, the offline–online interplay has become far more important. While the IRS captures all media, it is considered the bible for print readership. That’s why it’s called the Indian Readership Survey. Within IRS, it’s critical to understand how print and online interact. This can’t be done just by running small samples everywhere; you need to go into markets, collect data, analyse it, and then identify the actual relationship between print and online consumption.
Second, we want to extend this same approach to other media. Right now, no one has an accurate picture of a ‘day in the life’ of a media consumer: how much time is spent on TV, newspapers, radio, social media, short-form videos, OTT, and so on. If IRS can map this out, we’ll get a much clearer idea of overall media consumption. For example, in rural India, the average could be three-and-a-half hours a day, while in urban or affluent segments it may be much higher, with several hours spent across different media.
Third, we are keen on employing more robust protocols to capture affluent homes. This has been a perennial challenge, and one we are putting a lot of focus on improving.
And fourth, we need to redefine how frequency is measured. It shouldn’t always be limited to the past 24 hours. For example, in digital or e-commerce, very few people shop every day, but that doesn’t mean they aren’t active shoppers. Likewise, in media, frequency of engagement needs to be recalibrated. So, we recommend looking at broader, more realistic frequency measures that reflect actual consumer behaviour across both digital and traditional platforms. These four areas are essential to address in the current study.
Q] With the IRS not being published since 2019 and suspended during COVID, how has this data vacuum affected ad planning and credibility for publishers and agencies? Which stakeholders are most pressing MRUCI for its revival today — advertisers, agencies, or publishers?
At the end of the day, everyone, whether media owners, agencies, or advertisers, is seeking reliable measurement. Especially in the case of print, IRS is the only readership study available. The other option is ABC, which measures circulation, but even that is selective, since not all publications report all their editions. So, circulation figures are piecemeal and not exhaustive. Only IRS provides readership insights, which you can’t get anywhere else.
Post-COVID, everyone agrees there has been an impact on readership, but the extent remains unclear. Some publishers claim readership is back within 10% of pre-COVID levels, while others suggest the drop could be 30%. In some cases, readership may have dropped, but the publication’s reach within valuable consumer segments may still be high. Relying on 2019 pre-COVID numbers leads to a lot of assumptions and decisions based on perception.
The challenge is that we don’t have precise, up-to-date numbers. That’s why this research is being welcomed by all stakeholders; it will give a clearer picture. Publishers facing criticism that their readership has declined will finally have data to support their case, and where declines exist, agencies and advertisers will be able to quantify them properly. So I would say everyone, publishers, advertisers, and agencies alike, is looking forward to this.
Q] In what ways will emerging technologies, digital tracking, AI, or big data be leveraged to modernize the readership survey, while maintaining the reliability of traditional measurement?
The technology we’re using is designed to ensure that field controls are completely real-time and automated. In a study of this scale, that’s critical. We begin with starting addresses, and interviewers are expected to follow the process properly when conducting interviews. Errors can creep in if someone tries to cut corners, such as skipping starting points, rushing to complete interviews, or not asking questions exactly as prescribed. Questions must be asked as written, with no prompting or leading, and responses must be recorded accurately.
This is where technology plays a big role. With tablets now in use, supervisors can monitor everything in real time, including listening live to what an interviewer is asking a respondent. It’s like conducting a back-check while the interview is happening. Reporting also comes in real time, and both the research agency and MRUCI’s own field and research experts review the data immediately to flag any issues.
We also track interviewer productivity. For example, if one person manages only two interviews in a day while another claims ten, when the average is five, we know something is off: either too low or suspiciously high. These controls are vital, especially in a study of this magnitude where interviews are happening across the country. Finally, confidentiality is also safeguarded, so no one outside knows where interviews are being conducted. Technology underpins all of this, ensuring both accuracy and integrity in the process.
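The productivity check described above can be sketched in a few lines. This is a hypothetical illustration, not MRUCI's actual system: the counts, threshold multipliers, and flag labels are assumptions chosen only to mirror the two-low-versus-ten-high example in the interview.

```python
# Illustrative daily completed-interview counts per interviewer
# (hypothetical data, not from the actual IRS fieldwork).
daily_counts = {"A": 2, "B": 10, "C": 5, "D": 4, "E": 6}

team_mean = sum(daily_counts.values()) / len(daily_counts)

def flag(count: float, mean: float, low: float = 0.5, high: float = 1.6) -> str:
    """Flag counts far below the team average (possible under-delivery)
    or far above it (possibly rushed or fabricated interviews).
    The 0.5x / 1.6x cut-offs are assumed for illustration."""
    if count < low * mean:
        return "too low"
    if count > high * mean:
        return "suspiciously high"
    return "ok"

flags = {name: flag(c, team_mean) for name, c in daily_counts.items()}
# With the sample data: A is flagged "too low", B "suspiciously high",
# and C, D, E pass as "ok".
```

A real field-control dashboard would compare against rolling averages per region and combine this with GPS traces and audio back-checks, but the core logic is this kind of simple deviation-from-mean rule.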
Q] How will MRUCI balance financial prudence with the need for investments in operations, coverage expansion, and technology? Are there new revenue or funding models under consideration?
IRS runs on a subscription model. Traditionally, the main subscribers have been newspaper publishers and agencies. However, we strongly believe advertisers should also be part of it, given the wealth of information the study provides and the valuable analytics that can be built on top of it. For now, the immediate priority is to get the study underway. But over time, the goal is to broaden the subscriber base so that it isn’t dependent solely on publishers or agencies. If that happens, I am confident it will be a success.
Q] What metrics will you use internally to measure whether the changes you’re implementing are working? How will MRUCI know in 1–2 years that it has succeeded or is on the right track?
In any research, validity and reliability are key.
For validity, the questionnaire must be well designed, the sampling frame must be representative, and respondent bias must be eliminated.
For reliability, we look at controlling relative error. For instance, if a publication has 10% readership in a market, the question is: how reliable is that 10%? With 90% confidence, we should be able to say that the number lies between 9% and 11%. That’s how we reported it last time, and that’s the approach this time as well. In fact, we’ve improved the research design further, working at more granular levels, which will enhance reliability even more.
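The 9%–11% claim can be checked with the standard margin-of-error formula for a sample proportion. This is a minimal sketch assuming simple random sampling; a multistage survey design like the IRS would inflate the required sample by a design effect, which is ignored here.

```python
import math

Z_90 = 1.6449  # two-sided 90% normal critical value

def margin_of_error(p: float, n: int, z: float = Z_90) -> float:
    """Half-width of the normal-approximation confidence interval
    for an observed proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(p: float, margin: float, z: float = Z_90) -> int:
    """Smallest n whose CI half-width does not exceed `margin`."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# For 10% readership reported as 9%-11% at 90% confidence
# (a half-width of 1 percentage point):
n = required_sample(0.10, 0.01)
print(n, round(margin_of_error(0.10, n), 4))
```

Under these simplifying assumptions, a market needs on the order of a few thousand respondents to pin a 10% readership estimate down to ±1 point, which is why reliability targets drive sample sizes at the granular, per-market level the answer describes.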
The bottom line: for an establishment survey, the trick lies in understanding all the variables that can impact validity and reliability, and then controlling them. If your input and throughput are good, so will the output be.