
What Pollsters Missed on Trump and How It Applies to Government Spending and Taxes

Donald Trump recognizes wife Melania early Wednesday at a victory event in West Palm Beach, Florida. (Joe Raedle/Getty Images)

Donald Trump last week won every battleground state and cruised to an electoral victory. “Top” data experts and pollsters didn’t see it coming.

Trump’s big win should serve as a lesson for those who blindly trust the “experts,” whether pollsters or government bureaucrats who wield control over the budget and the economy.

The worst 2024 high-profile polling miss—by 16 points—came from the pollster the media establishment had put on the highest pedestal, Ann Selzer, whose 2012 polling for The Des Moines Register famously foreshadowed Barack Obama’s upset of Mitt Romney in Iowa.

Politico and others referred to Selzer’s polling as the “gold standard.” Clare Malone of ABC News’ FiveThirtyEight wrote a particularly gushing feature on Selzer in 2016, saying:

When Homer wrote of his hero Odysseus, he was ‘that godlike man.’ The D.C. poets use the same laudatory epithet-style when they sing of Selzer; she is uniformly ‘the great’ or ‘most respected’ or, as I saw on a book jacket recently while walking through the Des Moines airport, ‘Iowa’s polling queen.’

When Selzer’s Nov. 2 poll, released three days before the election, showed Kamala Harris winning the safe Trump state of Iowa by three percentage points, Harris supporters who follow polling were exuberant, ready to pop the champagne. MSNBC host Rachel Maddow said, “If anybody is accurate, it’s likely to be Ann Selzer.”

If the “best pollster in the business” had Harris winning Iowa, surely Harris would win in a rout.

But Iowa voters didn’t get the memo.

Iowans showed up for Trump en masse, giving the former president a 13-point win, the biggest margin for a Republican since Richard Nixon over George McGovern in 1972.

Almost all pollsters underestimated Trump’s support to some extent. According to statistician Nate Silver, who models presidential election outcomes, Harris had a one-point national edge in the final polling average, while the seven key battleground states were all virtual ties.

Based on polling and other inputs, Silver’s much-touted election model showed that Harris and Trump had almost identical probabilities of winning, giving Harris the narrowest of edges at 50.015% to 49.985%.

Instead, Trump carried the night with relative ease.

Much as with pollster Selzer, the media has lauded Silver as a “renowned election guru” and “polling wizard” ever since he accurately predicted the outcome of all 50 states in the 2012 election pitting Obama against Romney.

But 2024 wasn’t the first time Silver’s modeling understated Trump’s chances—the president-elect outperformed Silver’s model by about the same number of electoral votes in 2020 (42) as he did in 2024, and by even more in 2016 (71).

Although Silver’s model projected that Trump had a near 50-50 chance of winning the Electoral College in 2024, it put Trump as a big underdog to win the popular vote.

But Trump not only won the popular vote, he leads by a healthy margin of two percentage points. According to Silver’s probabilistic model, such an outcome was possible, but unlikely. That the model has missed by this much, in the same direction, in three consecutive presidential elections suggests something is wrong with it.

Part of the problem is what’s put into the model: The polls consistently understate Trump’s support, so Silver’s model has too.

As they say in statistical modeling: garbage in, garbage out.
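To see how that works, consider a toy simulation (not Silver’s actual model; the margins, error sizes, and bias below are purely illustrative assumptions) of seven battleground states polled as dead heats:

```python
import random

# Toy Monte Carlo sketch, NOT Silver's model: seven battleground states polled
# as exact ties, with a polling error shared across all states (correlated)
# plus state-level noise. All numbers are illustrative assumptions.
STATES = 7
CORRELATED_SD = 2.5   # shared national polling error, in points
STATE_SD = 2.0        # state-specific noise, in points
TRIALS = 100_000

def win_probability(systematic_bias: float) -> float:
    """Share of simulations in which Candidate A carries most battlegrounds.

    systematic_bias: points by which every poll understates Candidate B.
    """
    wins = 0
    for _ in range(TRIALS):
        shared_error = random.gauss(0, CORRELATED_SD)
        carried = sum(
            1 for _ in range(STATES)
            if -systematic_bias + shared_error + random.gauss(0, STATE_SD) > 0
        )
        wins += carried > STATES // 2
    return wins / TRIALS

# Tied, unbiased polls produce a near coin flip, the kind of 50.015%-style
# precision that sounds scientific...
print(f"No bias:      Candidate A wins ~{win_probability(0.0):.0%} of simulations")
# ...but if every poll understates Candidate B by just 2 points, the "toss-up"
# becomes a lopsided race. Biased inputs, biased output.
print(f"2-point bias: Candidate A wins ~{win_probability(2.0):.0%} of simulations")
```

The specific numbers don’t matter; the point is that an error shared across every poll moves the answer far more than the model’s stated precision suggests.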

But Silver and his election-modeling counterparts at his former company, FiveThirtyEight, act as gatekeepers, deciding which pollsters are deemed credible and which are not. Silver rates Selzer’s polling an A+.

In theory, Silver and FiveThirtyEight are filtering out what they deem as unreliable polls, so “garbage in, garbage out” is no excuse.

Selzer, Silver, and FiveThirtyEight exist within a self-reinforcing media ecosystem. The anointed establishment elites set the media narratives and dictate what information is considered credible and what is not.

There’s nothing inherently wrong with polling and statistical modeling. These tools can bring great value in understanding the world we inhabit and the issues society faces. But they should be viewed with healthy skepticism. And nobody should be seen as so authoritative that he is above questioning, especially when conducting complex modeling.

Unfortunately, overreliance on “top expert forecasts” isn’t just a problem for polling and election modeling. It’s also a problem that’s deeply embedded in our federal budgeting process, and it leads to higher taxes, more government spending, and less economic growth.

The Congressional Budget Office and Congress’ Joint Committee on Taxation forecast the budget and economic impacts of tax and spending legislation. When the media says that a bill would increase or decrease the deficit over 10 years by, say, $1 trillion, chances are the forecast came from CBO or JCT.

All too often, Congress treats these forecasts as gospel. Lawmakers tailor bills—often in gimmicky ways—to get the best possible score from these government gatekeepers. This is especially true with the most consequential tax and spending packages, which go through what’s called the budget reconciliation process.

If the Congressional Budget Office determines that a reconciliation bill would add to the deficit outside of the 10-year budget window, the legislation isn’t permitted under the Congressional Budget Act. This arrangement gives CBO and the Joint Committee on Taxation outsize power over Americans’ lives, even if that power is indirect.

When CBO scored the Biden-Harris administration’s 2022 Inflation Reduction Act as reducing the budget deficit, it provided proponents with all the cover they needed to ram the big spending bill through Congress. Never mind that outside estimates correctly determined that the bill would add hundreds of billions—if not trillions—to the deficit.

Even CBO later acknowledged that the green energy subsidies in the bill would cost about $428 billion more than it originally projected. But by then the damage was done.

On the other hand, forecasts by the Congressional Budget Office and the Joint Committee on Taxation exaggerate the deficit impact of tax cuts—especially pro-growth tax cuts—creating a giant obstacle for lawmakers pursuing positive reforms.
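As a rough back-of-the-envelope illustration of that critique (all figures below are hypothetical assumptions, not actual CBO or JCT estimates or methods), compare a 10-year score that ignores growth feedback with one that counts it:

```python
# Hypothetical illustration, not actual CBO/JCT figures: a tax cut scored
# with and without revenue feedback from faster economic growth.
baseline_revenue = 5_000   # annual federal revenue, $ billions (assumed)
annual_cut_cost = 200      # static revenue loss from the cut, $ billions/year (assumed)
growth_boost = 0.002       # extra annual GDP growth attributed to the cut (assumed)
years = 10

# Static score: simply multiply the annual revenue loss by 10 years.
static_score = annual_cut_cost * years

# Dynamic score: the economy compounds slightly faster each year, and revenue
# rises roughly in proportion, offsetting part of the annual loss.
dynamic_score = 0.0
cumulative_gdp_gain = 0.0
for _ in range(years):
    cumulative_gdp_gain = (1 + cumulative_gdp_gain) * (1 + growth_boost) - 1
    feedback_revenue = baseline_revenue * cumulative_gdp_gain
    dynamic_score += annual_cut_cost - feedback_revenue

print(f"Static 10-year score:  -${static_score:,.0f} billion")
print(f"Dynamic 10-year score: -${dynamic_score:,.0f} billion")
```

Under these made-up numbers, counting the growth feedback shrinks the scored deficit impact by roughly a quarter, and the size of that feedback is exactly the kind of assumption that stays buried inside the official models.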

JCT also puts out distribution tables that purport to show who will benefit from tax reform and by how much. But at best, these tables tell only a part of the story.

The Joint Committee on Taxation doesn’t have a crystal ball that can show what entrepreneurs will do when allowed to keep a little more of the money their businesses earn. Government forecasters can’t predict every job created, every technology developed, or every life-changing opportunity that will be made possible.

They can’t foresee the dynamism that would be unleashed and the innovations that would be possible if businesses and individuals weren’t so bogged down by the IRS and government regulators. 

Modeling the U.S. economy is immensely more complex than estimating what percentage of Americans will vote for one candidate or another. It involves countless assumptions by CBO and JCT, most of which remain hidden from the public.

Lawmakers should remember that government scorekeepers are human and just as prone to error and bias as the statisticians and pollsters left with egg on their faces after Trump’s big electoral win.

The incoming Congress cannot allow technocrats at the Congressional Budget Office and the Joint Committee on Taxation—who work for Congress, not the other way around—to stand in the way of passing conservative legislation that gets the economy back on track.
