Read Time: 11 Minutes

PARTICIPANTS

Mr T.S. Krishnamurthy  |  Former Chief Election Commissioner, Election Commission of India 

Mr Sriram Seshadri  |  Psephologist & Political Analyst

Prime Point Srinivasan  |  Founder & Chairman, Prime Point Foundation & Digital Journalists Association of India

Mr V Kumaraswamy  |  Author, Columnist & Consultant

Mr T.S. Krishnamurthy 
Former Chief Election Commissioner, Election Commission of India 

 The question before us is not merely technical — it is one of democratic integrity. As far as opinion polls and exit polls are concerned, the intention has always been to help the voter and the political parties gauge the public mood. But their value to the actual management of elections is limited, and their limitations are considerable.

The Election Commission did attempt, some decades ago, to impose restrictions — for instance, banning exit poll results from being published between phases of a multi-phase election, so that early results would not influence voters in later constituencies. The Supreme Court, however, held that freedom of speech and expression could not be curtailed by banning opinion polls or exit polls outright. So we settled on a regulatory mechanism: disclosures must be properly made, methodology must be stated, and publication must be timed so as not to distort the polling process. But we do not treat these polls as authoritative — and there are good reasons for that.

The first reason is voter psychology. A large share of voters — and the studies bear this out — make up their minds only on the day of voting, or in the final two or three days before. They keep their eyes and ears open throughout the campaign, hear everyone out, form tentative opinions that shift, and finally decide at the booth. Fence-sitters, by definition, resist prediction. The uncertainty continues almost until the moment of voting.

The second reason is the truthfulness of respondents. Voters who give their opinion in a pre-election poll need not be giving their true opinion. Opinion polls may reflect broad trends, but they do not necessarily reflect the actual voting intention of the electorate on polling day. The sample may even include people who ultimately do not vote at all.

There are also concerns about neutrality. Political parties have expressed to us — quite directly — that some opinion polls appear designed to benefit a particular side. I am not accusing anyone, but the suspicion exists and it is not without basis. What the Election Commission can do — and what I would personally advocate — is to push for greater disclosure rather than deeper auditing. Pollsters should be transparent about their methodology, sample sizes, timing, and funding. So long as intentions are honourable and quality improves, the Commission should not interfere further in their working. Free elections require free expression, and that includes the expression of poll forecasts.

Mr Sriram Seshadri 
Psephologist & Political Analyst

I fully agree with Mr. Krishnamurthy on voter psychology, especially the late-decider phenomenon. In my experience, at least 35 to 40 percent of voters decide in the final three days before an election. That one fact alone should make any forecaster deeply humble.

Let me explain how psephological polling actually works in India — and why it so often falls short. There are two distinct instruments: pre-election opinion polls, conducted weeks or months before voting day, and exit polls, conducted outside booths on election day itself. They differ fundamentally in what they can capture. An opinion poll surveys likely voters and tries to predict both turnout and vote choice — two variables, both uncertain. An exit poll surveys actual voters who have just voted, so it needs only to predict vote share. Even so, exit polls have a stronger information base, which is why their accuracy tends to be higher — though far from perfect.
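The point about one uncertain variable versus two can be illustrated with a toy simulation. This is my own sketch, not the speaker's model: the ground-truth figures and error magnitudes (the `sd` values) are illustrative assumptions, chosen only to show how estimating turnout and vote choice together compounds error relative to estimating vote share alone.

```python
# Toy illustration: an opinion poll estimates two uncertain quantities
# (turnout AND vote share), while an exit poll estimates only one (share),
# so the opinion poll's measurement error compounds. All numbers here are
# hypothetical assumptions for illustration.
import random
from statistics import mean

random.seed(42)

TRUE_TURNOUT, TRUE_SHARE = 0.65, 0.42        # hypothetical ground truth
true_votes = TRUE_TURNOUT * TRUE_SHARE       # party's share of the full electorate

def noisy(x: float, sd: float) -> float:
    """Add Gaussian measurement error to a true quantity."""
    return random.gauss(x, sd)

opinion_err, exit_err = [], []
for _ in range(20_000):
    # Opinion poll: both turnout and share carry estimation error.
    opinion = noisy(TRUE_TURNOUT, 0.05) * noisy(TRUE_SHARE, 0.03)
    # Exit poll: respondents have already voted, so turnout is known;
    # only vote share carries error.
    exit_estimate = TRUE_TURNOUT * noisy(TRUE_SHARE, 0.03)
    opinion_err.append(abs(opinion - true_votes))
    exit_err.append(abs(exit_estimate - true_votes))

print(f"mean opinion-poll error: {mean(opinion_err):.4f}")
print(f"mean exit-poll error:    {mean(exit_err):.4f}")
```

Run over many trials, the opinion poll's average error comes out larger, for the structural reason the speaker gives: it has a stronger information deficit, not worse fieldwork.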

The complexity of Indian elections makes all of this immensely difficult. We have over 970 million voters, 22 scheduled languages, dozens of dialects, and profound caste arithmetic — where sub-caste, not merely caste, determines voting behavior. Coalition politics shifts constantly: what held in one election fractures in the next. The shy voter phenomenon is real — in several constituencies I have worked in, voters from subordinate caste groups refused to share their true preference in front of dominant-caste neighbours who were present at the time of data collection. In Bihar and Uttar Pradesh in 2019, male community members in certain Muslim-dominated areas told us they would vote one way, while the women inside the household — audible but not visible — were saying something entirely different. The women voted as they said they would, not as the men declared.

My most instructive personal failure was Karnataka 2023. On the ground, all signals pointed toward a hung assembly or a BJP-JDS combine. We did not capture the Lingayat community’s resentment against BJP, nor the way Vokkaliga JDS supporters chose to back D K Shivakumar as their community leader through Congress rather than waste their vote on a non-viable JDS. Congress won over 130 seats. I had predicted something very different. That failure forced me to rebuild my model from the ground up.

Now, what can corporate leaders and managers take from all of this? The parallels are direct. First: build multiple scenarios, not a single forecast. Just as opinion polls show a trend rather than a certainty, business forecasts should present a range of outcomes — including a wave scenario and a black-swan scenario. Second: beware of urban and elite bias. Business planning that draws only on upper-class or digitally-engaged consumers misses the rural majority — exactly as a poll that over-represents English-speaking respondents misses the mood of the ground. Third: late momentum matters. The last-mile execution — distribution, word of mouth, point of sale — can shift a market as dramatically as a final-week election event can shift a constituency. Fourth: invest in ground intelligence. Your sales force on the ground often knows more than any survey. Fifth: triangulate. No single data source is enough. The more independent sources that converge on a finding, the more confidence you can have in it. Sixth: embrace humility and adaptation. In 2019, I predicted BJP at 300–305 seats; they won 303. Then I grew overconfident, and Karnataka punished me for it. Models must be continuously revised, and forecasters must be prepared to say — publicly — when they got it wrong.
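Two of the practices above — presenting a range of scenarios rather than a point forecast, and triangulating across independent sources — can be sketched in a few lines. The source names and seat ranges below are hypothetical illustrations, not real poll data.

```python
# Sketch of scenario-building and triangulation as described above.
# Each independent source contributes a (low, high) seat estimate;
# triangulation finds the band all sources agree on, and the scenario
# view reports base, wave, and black-swan cases. Numbers are invented.
from statistics import mean

sources = {
    "phone_survey":  (140, 160),
    "field_reports": (150, 175),
    "booth_history": (145, 165),
}

def triangulate(estimates):
    """Intersection of all ranges: the band every source supports."""
    low = max(lo for lo, _ in estimates.values())
    high = min(hi for _, hi in estimates.values())
    return (low, high) if low <= high else None  # None = sources conflict

def scenarios(estimates):
    """Base case from midpoints; wave/black-swan from the outer envelope."""
    base = mean(mean(r) for r in estimates.values())
    return {
        "black_swan_low": min(lo for lo, _ in estimates.values()),
        "base": round(base),
        "wave_high": max(hi for _, hi in estimates.values()),
    }

print(triangulate(sources))   # the narrower band all three converge on
print(scenarios(sources))     # the wider envelope a planner should carry
```

The design point is the speaker's sixth lesson in miniature: the more independent sources that overlap, the narrower and more trustworthy the triangulated band; when `triangulate` returns `None`, the disagreement itself is the finding.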

Prime Point Srinivasan
Founder & Chairman, Prime Point Foundation & Digital Journalists Association of India

Let me start with a ground-level experience that tells you something fundamental about the limits of exit polling. Some years ago, students from a mass-communication department were engaged by an agency to conduct an exit poll at a polling booth in T Nagar, near Panagal Park. That booth had 1,200 registered voters. In an educated urban area like that, turnout is typically around 30 to 35 percent — meaning roughly 400 people would vote. Yet the students arrived with 600 response sheets to fill. When I met them at the booth, they told me their target was 700. The math simply did not add up. By evening, I learned from their professor that 600 completed sheets had been returned. The students had collected responses from people who never voted. That is not a rogue incident — it is a symptom of a structural problem in how polling data gets collected and reported.
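The booth arithmetic in that anecdote is worth making explicit. A minimal check, using only the figures the speaker gives (1,200 registered voters, 30 to 35 percent typical turnout, 600 sheets returned):

```python
# Sanity-check the exit-poll anecdote above: with 1,200 registered voters
# and a typical 30-35% urban turnout, how many genuine responses were
# even possible? Figures come from the anecdote; the check is generic.

def plausible_responses(registered: int, turnout_low: float, turnout_high: float):
    """Expected range of actual voters at a booth, given a turnout band."""
    return (round(registered * turnout_low), round(registered * turnout_high))

low, high = plausible_responses(1200, 0.30, 0.35)
sheets_returned = 600

print(f"Expected voters:  {low}-{high}")                     # about 360-420
print(f"Sheets returned:  {sheets_returned}")
print(f"Impossible excess: at least {sheets_returned - high}")
```

Even at the generous end of the turnout band, at least 180 of the 600 sheets could not have come from actual voters — which is precisely the structural problem the speaker is pointing at.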

The deeper issue is that people rarely reveal their true political preferences — not to pollsters, not even to family. I discovered this years ago when I used to travel between Delhi and Chennai on the Tamil Nadu Express. Fellow passengers, strangers to me and to each other, would share the most intimate family grievances within hours of meeting. I was struck by how much more candid they were with strangers than they would ever be with people they knew. I studied this carefully and arrived at a finding I have used ever since: people are brutally frank in precisely three conditions. First, when they are anonymous. Second, when they are speaking to a third party with no stake in the outcome. Third, when they are in a group of peers who share a common identity.

From this insight I developed what I call image audit — a structured method of capturing hidden perceptions by recreating those conditions deliberately. When I applied it to organisations and politicians, we were able to surface 95 to 98 percent of perceptions that would never emerge in a conventional survey. The moment a respondent suspects the interviewer belongs to a particular party, or might report their answer back to someone with authority over them, the truth disappears. Pollsters face this every day in the field.

I run the Sansad Ratna Awards, which recognise outstanding parliamentary performance, and Mr T.S. Krishnamurthy is co-chair of the selection committee. Every MP we have interviewed in confidence tells me the same thing: in the final 10 to 15 percent of the electorate — the genuine fence-sitters — the decision is made not by ideology but by the last narrative they hear. We have seen candidates blast Sansad Ratna recognition on their hoardings on the day before the election and gain a meaningful vote margin from it. Uddhav Thackeray’s candidate in one constituency, a consistent Sansad Ratna awardee, won by two lakh votes in 2024 — one lakh more than in the previous election. That last-minute shift in narrative is real, and it is exactly what makes polling so unreliable in its final numbers even when the directional trend is correct.

In Conversation

Can artificial intelligence improve poll predictions?

Mr Sriram Seshadri

AI can definitely perform far better analysis of past historical data than any human being and generate sophisticated forecast mechanisms. But here is the critical constraint: AI is only as good as the data it is trained on. If the underlying data is biased — because people did not tell the truth, because phone-based collection captures only 30 seconds of a respondent’s attention, because ground realities were not mapped — then AI will simply produce biased results faster. It can remove human analytical bias from the modelling stage, but it cannot enrich data that was poorly collected in the first place. Ground truth still matters above all else.

Prime Point Srinivasan

I have spent years studying how perceptions form — and my conclusion is that the deepest problem is not the tool but the moment of asking. People reveal their true opinions in three specific conditions: when they are anonymous, when they are speaking to a third party who has no stake in the outcome, and when they are in a group with shared identity. Outside those three conditions, they play it safe. When I started doing image audits for organisations, using exactly those conditions, we were able to capture 95 to 98 percent of hidden perceptions. The same logic applies to electoral polling. No AI model can solve the fundamental challenge that voters will not tell you the truth unless the conditions are right.

What distinguishes robust electoral forecasting from environments prone to misinformation?

Mr T.S. Krishnamurthy

Uncertainty is universal — even internationally. The Trump versus Clinton and Trump versus Biden outcomes confounded the most sophisticated Western pollsters. Human behaviour simply is not fully predictable. In India, given our electorate’s scale, the variations can be substantially larger. Rather than treating opinion polls the way one treats an astrological forecast — as if they must be right — we should regard them as directional indicators only. And instead of attempting to audit poll agencies, the Election Commission would do better to mandate rigorous disclosure: methodology, sample size, funding source, and timing. Transparency, not regulation, is the right response.

What are the ethical responsibilities of pollsters and the media in communicating uncertainty?

Prime Point Srinivasan

This is where the deepest problem lies. Turn on any evening debate and you will see so-called senior journalists who are more vigorous party advocates than the official party spokespeople themselves. The audience watches — but increasingly does not believe. Media has become entertainment, and people have learned to discount it. What I find most troubling is that some exit polls are not merely inaccurate — they are manufactured. I know of at least one instance where a political party, knowing it was going to lose, approached television channels and offered to pay for exit poll coverage that showed them winning, hoping to generate favourable stock market momentum in the interim days before the actual result. Some channels refused. Some did not. This is the reality.

Mr Sriram Seshadri

I agree entirely, and I will add one structural point. The moment opinion polling becomes a commercial business, the incentive structure is distorted. A pollster who depends on a channel for airtime, or on a party for access, cannot be independent. If independent pollsters reported their findings exactly as the data showed — methodological limitations included — it would build genuine public confidence in the exercise. Greater disclosure is not just the Election Commission’s responsibility; it is every pollster’s and every editor’s ethical obligation. Without it, public trust in forecasting — whether electoral or corporate — will continue to erode, and the entire enterprise becomes worse than useless.
