Guns vs Butter? What do you think is the most important problem facing this country today?
Public opinion researchers depend on certain questions as essential public opinion barometers, like presidential job approval or Bud Roper’s right-direction/wrong-track measure. Perhaps no other question is used as often to determine what is foremost in the minds of the public as the open-ended “What do you think is the most important problem facing this country today?” Respondents offer their concerns in their own words, unaffected by the potential bias introduced by a limited list of answer choices. Access the most important problem datasets here.
Asked since 1939, the “most important problem” (MIP) question reveals shifting public concerns over time as the country grappled with crises from recessions to war to natural disasters. Now, through the work of Colton Heffington, Brandon Beomseob Park, and Laron K. Williams (University of Missouri), public opinion analysts interested in this question have a set of powerful new tools at their disposal. These researchers have merged datasets from 1939 to 2015 to create the MIPD Individual Dataset, containing the individual-level responses to the MIP question as well as demographics, economic evaluations, presidential approval, and party competency questions; the MIPD Aggregate Dataset, a survey-level dataset containing the aggregate percentages of MIP responses; and the MIPD Annual Dataset, offering the percentage of Americans identifying various categories as the “most important problem” facing the country. They have also created a Stata command file that allows other researchers to build their own datasets from the individual-level dataset by varying the temporal domain (annual, quarterly, or monthly), the subgroup (e.g., Democrats vs. Republicans), the specific question wording, and the coding scheme.
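The core operation behind the aggregate and annual datasets is tallying individual MIP responses into percentages by period and subgroup. A minimal sketch of that idea in Python (not the authors’ actual Stata code; the records, field names, and categories below are purely illustrative, not the MIPD coding scheme):

```python
from collections import Counter, defaultdict

# Hypothetical individual-level records: (year, party_id, mip_response).
# These values are invented for illustration only.
responses = [
    (1980, "Democrat",   "economy"),
    (1980, "Democrat",   "economy"),
    (1980, "Republican", "crime"),
    (1980, "Republican", "economy"),
    (1981, "Democrat",   "war"),
]

def aggregate_mip(records, subgroup=None):
    """Return {year: {category: percent}} of MIP mentions,
    optionally restricted to one subgroup (e.g., a party)."""
    counts = defaultdict(Counter)
    for year, party, category in records:
        if subgroup is not None and party != subgroup:
            continue
        counts[year][category] += 1
    return {
        year: {cat: 100.0 * n / sum(c.values()) for cat, n in c.items()}
        for year, c in counts.items()
    }

# Annual percentages among all respondents, then among one subgroup.
print(aggregate_mip(responses))
print(aggregate_mip(responses, subgroup="Democrat"))
```

The same tally generalizes to quarterly or monthly aggregation by swapping the year key for a finer period, which is the kind of flexibility the Stata command file provides.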
Compared to What? Media-guided Reference Points Dataset
How do voters evaluate the state of the economy when making a choice at the polls? The conventional economic voting literature suggests a simple answer: a good economy helps incumbents in a given election, while a bad economy hurts them. How, then, do voters differentiate a ‘good’ economy from a ‘bad’ one? Are all positive growth rates (e.g., 0.1%, 2%, 5%) understood as ‘good’ while negative ones are ‘bad’? Since the data do not speak for themselves, people tend to base their assessments on a comparison between their country’s absolute performance and reference points.
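One simple way to make this concrete (an illustrative sketch, not the article’s actual measure) is to treat the evaluation as the gap between domestic growth and a media-guided reference point:

```python
def relative_growth(own_growth: float, reference: float) -> float:
    """Economic performance evaluated relative to a reference point:
    positive values read as 'good', negative as 'bad', regardless of
    whether absolute growth itself is positive."""
    return own_growth - reference

# A 2% domestic growth rate reads as 'bad' when the benchmark
# highlighted in media coverage is 4%:
print(relative_growth(2.0, 4.0))  # -2.0
```

Under this framing, identical absolute growth rates can produce opposite voter evaluations depending on the reference point the media supplies.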
By examining domestic media coverage drawn from Lexis-Nexis, I identify spatial reference points across elections in 33 democracies since the 1980s. This dataset will be available with my article, “Compared to What? Media-guided Reference Points and Relative Economic Voting,” forthcoming in Electoral Studies.