Examination of Modern Research Methodologies Used in Political Science
Examination of Methodologies Used in Political Surveys
The Beginning: Gallup, AAPOR, the University of Chicago
The Roman Empire conducted the first censuses to determine its tax policies, and since the days of the Roman Empire, surveys have shaped public policy. Polls and political science are so intertwined—and political surveys are so ubiquitous—that political scientists, campaigners, political psychologists, and political marketers all seem dependent on quantitative methodologies and statistical science. Indeed, George Gallup’s 1936 political survey, later published in Public Opinion Quarterly in 1937, marked the genesis of the science of surveys. The University of Chicago lured some of the greatest minds in sociology and social psychology to its hallowed halls. It was the University of Chicago—not the Ivy League schools of the East—that tenured professors dedicated to a new field called political science. That field—my passion and the passion of the members of this class and the faculty of the burgeoning, nationally respected government department at Suffolk University—is the Chicago School’s contribution. The erudition of Gosnell and Merriam, two of the founders of political science, still influences our methodologies to this day. Sergeant First Class Gosnell came from a working-class background. His love of campaigns and politics began when the man who invented the bully pulpit and the modern presidency, President Theodore Roosevelt, began his 1904 campaign. Unable to afford a graduate degree from an Eastern school that would never subsidize the education of a working-class kid, Gosnell found a home at the University of Chicago, which lured great graduate students away from the East with the promise of financial aid, academic freedom, and a faculty exploring the new field of political science. Gosnell knew the reputation of Chicago as a city: it was run by gangsters, and election fraud was not an atypical campaign strategy. Yet Gosnell saw that the city operated efficiently.
With an interest in political science and big-city campaigning, Chicago became his new home. At the University of Chicago, he and his professor made a significant impact on quantitative methods. However, they always used a mixed-methods approach: Merriam wanted to continue to utilize open-ended questions and at-home interviews, and Gosnell contributed the quantitative aspect.
By 1904, many states had adopted a direct popular vote for Senate seats, and by that point the University of Chicago had developed a political science program. Gosnell used U.S. Census data to gather the demographic information he needed to obtain a representative sample of the City of Chicago: set percentages of various demographics would serve to give them a truly representative sample. Merriam’s qualitative methodology, in the form of open-ended questions, was combined with Gosnell’s quantitative methodology and multiple-choice questions. The data collection method consisted of a personal interview in the respondent’s home (Issenberg 2012). The result was the first use of quantitative methodology for political science purposes. Moreover, from its inception, the quantitative methodology utilized qualitative methods in the process of data collection.
To this day, we use a hybrid of quantitative and qualitative methodologies. In the contemporary process, we utilize focus groups, a form of qualitative methodology. Political surveys still utilize demographic information from the U.S. Census to develop a probability sample in the quantitative political survey approach. Data collection has evolved with technological innovations that make it easier and more comfortable for the facilitator and respondent to communicate. For many years, Random Digit Dialing, or RDD, involved dialing numbers that share a fixed root (the leftmost digits) while varying the remaining digits until 10 surveys were completed; at that point, one started with a new root number. The quantitative methodology in the form of a political survey depends on the qualitative focus group approach for language use and survey design.
For years this approach yielded accurate predictions in elections. However, RDD is becoming less and less relevant with online panels and a hybrid RDD/cell phone voter list. This type of data collection is more accurate than RDD alone. There is a generational and financial divide between folks who have a landline and folks who only utilize cell phones. To ameliorate this problem that skews the outcome of our data analysis, businesses offer cell phone numbers that are known to be owned by people in the jurisdiction one is polling. Moreover, these companies sometimes utilize text messaging surveys, or online survey panels.
Scale design consists of a question with an ordinal five-point Likert scale, with answers that measure strength of opinion. We employ a subsequent statistical analysis, particularly cross-tabulation. The result is that we discover relationships between independent variables, such as demographics, and the responses, which serve as the dependent variable.
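As an illustration of how cross-tabulation relates an independent variable to survey responses, here is a minimal sketch in Python; the respondents, parties, and answers below are invented for illustration only.

```python
from collections import Counter

# Hypothetical responses: (party, answer on a five-point Likert scale).
# These data are invented for illustration, not drawn from any real poll.
responses = [
    ("Democrat", "Strongly agree"), ("Democrat", "Agree"),
    ("Republican", "Disagree"), ("Republican", "Strongly disagree"),
    ("Independent", "Neither"), ("Democrat", "Agree"),
    ("Republican", "Disagree"), ("Independent", "Agree"),
]

# Cross-tabulate party (independent variable) against answer (dependent variable).
crosstab = Counter(responses)

def cell(party, answer):
    """Count of respondents in a given party giving a given answer."""
    return crosstab[(party, answer)]

print(cell("Democrat", "Agree"))  # 2
```

Reading down a column of such a table shows how the distribution of answers shifts across demographic groups, which is exactly the relationship cross-tabulation is meant to surface.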
Gosnell’s survey, a hybrid of qualitative and quantitative methods, was the first poll to use probability sampling, which makes it the first scientifically rigorous poll (with some qualitative methods built in).
This was the first use of social psychology to provide stimuli to develop interest in political outcomes—in political science. Gosnell and Merriam believed that randomized control was the only way to glean which stimuli could affect voter behavior. The 1923 poll cost $5,000.
The study used a control group and a treatment group drawn from the working population; the treatment group received postcards, one version with positive reinforcement and one with negative reinforcement. This was the first scientific field experiment outside the field of psychology and was widely heralded as a vanguard study. It showed that mobilization efforts worked with populations that had yet to receive information on why it is important to vote (Italian women and women of color).
Controlled scientific field experiments were not used again until after World War II, with the ubiquity of phones and computing power capable of complex regression analysis. Poll questions were designed, and transcripts of the questions were used to conduct political surveys to which rigorous scientific analysis could be applied. This was done with varying degrees of success. In 1948, the Chicago Tribune predicted that Dewey would defeat Truman, which was obviously a miscalculation. This happened because no tracking polls were put in the field: pollsters stopped following the voters in the middle of the election, and this led to the embarrassment of “Dewey Defeats Truman.”
A 1952 poll was done by Campbell for $100,000, with 1,900 respondents and 124 questions. The folks responsible for the American National Election Studies, a repository of data from elections conducted every other year along with analyses of the responses, grew into the definitive data resource in political science. The survey questions brought into view things like, “Did your co-workers influence you?” as well as questions about party identification, a person’s best predictor of how they will vote: parties were seen as social clubs, and voters rarely went against partisan politics. However, the researchers found that sometimes a candidate comes along with the personal qualities to get people to cross party barriers. Campbell then wrote The American Voter.
The American Voter is an amalgamation of peer-reviewed social research articles; surveys of congressional and presidential elections since 1952; and the subsequent analyses. The folks responsible for The American Voter became known as the National Election Studies, which is sponsored by the National Science Foundation. It is one of the most authoritative sources for examining methodologies. Interestingly, and appropriately, the National Election Studies uses a methodology that supports the argument that qualitative methodologies play a significant role in political surveys: it conducts In-Depth Interviews prior to launching a survey and after the survey is completed. The goal is to get more depth and nuance in analyzing why respondents chose certain answers on a political survey. Often the results of a quantitative political survey are unexpected, and focus groups and In-Depth Interviews allow facilitators to dive deeper into the question and the response to better understand why voters chose an unexpected response.
Moreover, facilitators can employ qualitative methodologies prior to fielding a political survey in order to understand the issues that respondents are most concerned about. This can assist the facilitator with survey design.
The surge of political surveys in recent years has caused the industry tremendous damage. Campaigns and organizations often utilize political surveys not for the public benefit but for their own selfish reasons, which involve deceit and greed and gravely injure legitimate polling organizations.
Let us first examine the predicament of NGOs, born out of a political system in which power often resides with the side with the most campaign funds. For instance, in the 2020 race between Senator Markey and Congressman Kennedy, Markey alone raised over $14.5 million. Throughout his career, Kennedy’s name recognition and the Kennedy brand helped him raise, according to the last FEC filing, $24 million, with $4.1 million cash on hand (Open Secrets). The hedge fund Crescent Capital, a company that successfully raised $25 billion across seven funds, contributed the most campaign funds to Kennedy over the years (Open Secrets). How can NGOs like environmental groups and the ACLU ever keep up with a hedge fund for political power?
Marketers are familiar with the terms sugging and fugging: selling and fundraising, respectively, under the guise of a survey. I have received one of these “surveys” that are naught but attempts to fundraise, and Dr. Wilson has received similar fugging from similar groups. Unfortunately, this produces a detrimental effect on the marketing survey and political survey industries. According to Marketing Research, “Such practices lead to (a) biased survey results, (b) decrease future survey participation, (c) blur the lines between research and business activities” (Marketing Research).
The push poll is one of the ways campaigns manipulate political surveys. The “facilitators” of push polls are often the brainchildren of campaign advisors and campaign managers. Instead of using political surveys as a science, they use surveys as a way to further their political ambition: they use the guise of political surveys to plant seeds, often fallaciously, that negatively affect the opponent’s campaign. For example, during the South Carolina primary, when George W. Bush was facing his last opportunity to stay competitive with the McCain campaign, Karl Rove fielded a push poll that asked the following question: “Would you be more or less willing to vote for John McCain if you knew he had an illegitimate black daughter?” This push poll was a serious gamble; however, tracking polls showed that it had a major effect on how Republicans in South Carolina would vote, and they began to swing toward supporting George W. Bush. The truth of the matter is that Senator McCain adopted a dark-skinned child from South Asia, and she was often on the campaign trail with her adoptive father. In political science, perception often matters more than truth. As a campaigner, a political scientist, a political philosopher given a scholarship by Suffolk University to study the effects of fascism and communism on Eastern Europe, and a clerk in both the judicial and legislative branches of our Commonwealth’s government, I am loath to speak of the ubiquity of my art’s corruption; but as someone who is ultimately a lifelong student in search of unvarnished truth and not fallaciousness, I must admit that the realm of political science attracts more incorrigible bad actors than any other field I have pursued. Political ideologues criticize folks who study the fields of business and marketing, yet ideologues always worm their way into politics, usually infecting our political processes at the extremes of the various political philosophies.
The modus operandi of these spiritually and intellectually diseased souls lies in their refusal to be persuaded by the superior argument. The ideologue, the bad actor, the extreme partisan leaves a wake of logical fallacies behind them. Since this is not an exercise in political philosophy, social psychology, or spirituality, I must leave the topic of bad actors in political science for another occasion. Although the last bit presents as parenthetical or, worse, tangential, I beg the readers’ indulgence for a moment in order to proffer important conclusions. First, the manipulation of surveys for fundraising increases non-response bias and damages the credibility of the research methods the fields of marketing and political science depend on to understand the consumer and the voter, respectively. Second, push polls produce the same negative outcomes in political science and unduly influence the results of elections. Unethical push polls weaken our elections: when ill-designing men with a lust for power perform push polls, they damage one of our most important democratic rights, and our entire democratic republic and its citizenry suffer. Therefore, as marketers and students of marketing, including political marketing, it is incumbent upon all of us to live up to an ethical standard. We aim not just to protect our field; we aim to protect our fellow citizens, ourselves, and the democratic republic that our Founders expected us to insulate from those nefarious incorrigibles with dangerous proclivities. To maintain our ethical standards, we continuously question and examine the way we collect and analyze data.
This brings AAPOR back into our discussion. AAPOR sets strict ethical guidelines for research practices. In implementing those ethical standards, AAPOR maintains the following goal: “Our goals are to support sound and ethical practice in the conduct of survey and public opinion research and in the use of such research for policy and decision-making in the public and private sectors.” AAPOR is also an institution that sets the standard in both qualitative and quantitative methodology.
Example of non-response bias
Historically, there have been many issues getting Trump supporters to respond to polls, which has created a non-response bias that pollsters have had a difficult time overcoming. However, Trump supporters are a significant and important part of the electorate; therefore, it is a necessity to try to measure their responses on political surveys, especially during the historic times we are facing, politically speaking. Even though he is facing legal trouble, Trump is still the heir apparent of the Republican Party as the party’s frontrunner. The fact that so many Republican candidates and party leaders have come to his defense proves that he is still in control of what has become a significant and important part of the modern Republican Party.
We face challenges, though, when polling Trump supporters, who are often unwilling to share their beliefs with anyone they see as closely associated with institutional politics. They tend to come easily to believe in conspiracy theories and eschew traditional news outlets, often getting their news from purveyors of conspiracy theories on YouTube. Most importantly, they come from a political ideology that has traditionally held that academia is a world of liberal bias.
Therefore, we must take significant and creative steps to understand and measure the way Trump supporters think about various topics. We must also find a way to prevent non-response bias by preventing his supporters from immediately hanging up the phone. This is a difficult obstacle to overcome. Perhaps, in the end, weighting is the only way we can truly get a statistically reliable notion of what Trump Republicans are thinking.
As I previously mentioned, there is a significant amount of non-response bias in measuring the beliefs of Trump Republicans due to hang-ups, so we must immediately do something to prevent people from hanging up the phone. Perhaps the introduction to the poll should at least acknowledge their fears as well-founded. It could go something like this: “There has been significant questioning of the polling process by Trump and his supporters. Perhaps the questioning of polls is well founded. But polls can also help a candidate to understand the electorate well enough to overcome certain issues.”
Here we have acknowledged their doubts about the scientific process of political polling without giving up too much ground. It creates some initial rapport with Trump supporters without being manipulative or deceitful.
Now, as far as question design, we must remain confident that, as pollsters, we are an integral part of political science, while still not scaring away Trump supporters. So perhaps there should be questions at the beginning that measure their beliefs in certain areas that other polls might not even acknowledge: “Do you believe that the deep state is responsible for Trump’s legal issues? a) extremely likely b) likely c) unlikely d) not at all.” “Do you have any faith in the justice system treating Trump with equality? a) Not at all b) no c) unsure d) yes.”
Then perhaps this will create the rapport necessary to get down to brass tacks:
The following is a list of Republican candidates for president. Please let me know if you are willing to vote for any of the following candidates if, for some reason, it is someone other than Trump. (Randomize everything except “other” and try to indicate the importance of this question if there is non-response bias): a) DeSantis b) Hutchinson c) Haley d) Ramaswamy e) Pompeo f) Pence g) Sununu h) Tim Scott.
Quantitative methodologies were once conducted through in-home interviews. Then, from the 1970s, the use of random digit dialing, or RDD, became the primary form of data collection. Lately, the use of various other data collection methods has increased while the use of RDD has decreased, as respondents, particularly in working-class demographics, replace the landline altogether with their cell phones. Now pollsters tend to use voting lists rather than RDD and augment the voter list with cell phone numbers.
The Web and Sampling
The computer has also introduced new techniques for measuring the variables of voting and public opinion. This document will divide them into probability and non-probability samples; in the following section, sampling and the different sampling methods will be introduced. Here is how we can break down web-based methodologies, per a table that appears in Polling and the Public:
| Non-Probability Methods | Probability Methods |
| Polls as Entertainment | Intercept Surveys |
| Unrestricted Self-Selected Surveys | List-Based Surveys |
| Volunteer Opt-In Panels | Web Option in Mixed-Mode Surveys |
| | Prerecruited Panels of Internet Users |
| | Prerecruited Panels of Full Population |
Confidence level and margin of error in probability sampling
These are self-explanatory, so I will immediately write about the sampling designs, which have had mixed results. In non-probability sampling, we cannot generalize the results to the general population as we can in probability sampling, because the collection lacks a scientific basis. Probability sampling takes a sample from the general or working population and collects it at a chosen confidence level, usually no less than 95%. One must subsequently calculate the random sampling error in the survey’s results, which indicates the range of values within which the actual value most likely lies. First, determine the number of respondent completions, the sample size. We have already determined the level of confidence, 95%, which represents how certain one wants the result of the survey to be. A higher confidence level usually costs more, and it is wasteful to spend more money when 95% is already the accepted standard; a 95% confidence level means one can be 95% sure that the true value lies within the margin of error. Next, one determines the standard deviation; for a survey proportion p, this is the square root of p(1 − p). The standard error is the standard deviation divided by the square root of the sample size. To calculate the margin of error, one needs the critical value from the z-table, which is determined by the confidence level; at 95% it is 1.96. The margin of error is the critical value multiplied by the standard error. The margin of error decreases as the sample size, the number of respondents, increases.
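The steps above can be sketched in a few lines of Python; the proportion and sample size below are hypothetical, and the formula is the standard one for a proportion at a 95% confidence level.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a survey proportion at ~95% confidence.

    p: observed proportion (e.g. 0.50), n: sample size,
    z: critical value from the z-table (1.96 for 95% confidence).
    """
    standard_deviation = math.sqrt(p * (1 - p))      # sd of a proportion
    standard_error = standard_deviation / math.sqrt(n)
    return z * standard_error

# A hypothetical 500-respondent poll at p = 0.5 yields roughly a
# +/- 4.4 percentage-point margin of error.
moe = margin_of_error(0.5, 500)
print(round(100 * moe, 1))  # 4.4
```

Doubling the sample size to 1,000 shrinks the margin to about 3.1 points, illustrating why the margin of error decreases as the number of respondents increases.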
As one can see, statistical science plays a significant role in public opinion research and political surveys. As surveys on the web increase, the ways in which we approach sampling vary. Facilitators utilize simple random and systematic sampling; stratified sampling; and cluster and multi-stage sampling; however, the above statistical formula is the most commonly used because of the ubiquity of the RDD and voter list survey methodologies.
Question design is also an important part of the process in both quantitative and qualitative methodologies. In fact, some facilitators use qualitative methods to inform the design of questions in quantitative methods. Some take qualitative methods even further by conducting focus groups and IDIs after a poll is completed in order to understand why there is a shift in people’s opinions and to develop new hypotheses or concepts. Due to the constraints inherent in quantitative methodology, we cannot get a truly deep understanding of shifting attitudes or form new hypotheses the way we can when we probe more deeply with questions that ask why respondents are answering a survey question a particular way. That is why qualitative methodologies must be a part of the data collection process and the quantitative methods process. In Marketing Research, V. Kumar et al. write, “A useful report of a group session is one that captures the range of impressions and observations on each topic and interprets them in light of possible hypotheses for further testing.”
A focus group consists of a moderator or facilitator who encourages a group discussion using certain techniques. The group typically consists of 10-12 respondents, but according to V. Kumar et al. in Marketing Research, smaller groups might actually be more productive. In my experience conducting focus groups, even a group of 6 will often take an hour and a half of one’s time and the respondents’ time. That is, frankly, a lot to ask, especially if the respondents are assisting in an academic exercise out of altruism. Moreover, in my experience this amount of time with 6 respondents provides a significant amount of data. Recruitment may also use incentives like gift cards or pre-paid cards in order to attract a larger pool of potential respondents. The use of projective techniques is recommended to avoid both non-response bias and group-think (a type of bias in which some people feel compelled to answer questions in a way that pleases other members of the group). Marketing Research defines projective techniques as:
the presentation of an ambiguous, unstructured object, activity or person that a respondent is asked to interpret and explain. The more ambiguous the stimuli the more respondents have to project themselves into the task … Projective techniques are used when it is believed that respondents will not or cannot respond meaningfully to direct questions about (1) the reasons for certain behaviors or attitudes…
In attempting to research public opinion, the best technique for the facilitator to adopt is empathic interviewing. In empathic interviewing, when a respondent gives a generalization, the moderator asks for specific examples, avoids self-referencing, imagines oneself in the person’s situation, and asks open-ended, non-leading questions that answer the how, what, and why.
Although an In-Depth Interview allows us to avoid group-think and peer pressure, new concepts and hypotheses do not emerge from group interaction; instead, the method relies heavily on the facilitator’s abilities. One is able to obtain large amounts of information from the respondent, but the facilitator must avoid respondent fatigue. The ability to analyze the amount of information obtained is part of the challenge in In-Depth Interviews.
An In-Depth Interview is a useful instrument in public opinion research and political research, where sensitive topics might be challenging to glean in a group environment.
A netnography is the study of online conversation. Either the facilitator researches what people are saying about the topic on social media, or the facilitator attempts to foster a conversation on social media to glean what people think of the particular topic and its pertinent questions. This is a rather new and evolving research instrument. Some researchers combine it with AI to construct a netnography using “big data”: with text sentiment analysis, they can construct a netnography that is a mixed-methods approach. However, this requires computer programming skills that are not easy to develop. One can use the language R to find out which themes or words are used the most in the Facebook data that is fed into the code. One would need to install the Rfacebook package and the tidyverse package and know how to manipulate file paths and utilize the packages in order to have R return results as the percentages at which various themes appear in the data. Packages associated with the tidyverse give the researcher the ability to create data visualizations based on the results.
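For readers without R, the same theme-frequency idea can be sketched in Python using only the standard library; the comments and keyword lists below are invented stand-ins for scraped social media data, not output from any real API.

```python
from collections import Counter
import re

# Hypothetical scraped comments; in practice these would come from a
# social media API or export, not be hard-coded.
comments = [
    "The MBTA is still unreliable and the fares keep rising",
    "Housing costs are out of control in Boston",
    "Housing is my top issue, the MBTA a close second",
]

# Themes (codes) to count, each defined by a few illustrative keywords.
themes = {
    "transportation": {"mbta", "fares", "transit"},
    "housing": {"housing", "rent"},
}

counts = Counter()
for comment in comments:
    words = set(re.findall(r"[a-z]+", comment.lower()))
    for theme, keywords in themes.items():
        if words & keywords:  # comment mentions at least one theme keyword
            counts[theme] += 1

# Percentage of comments in which each theme appears.
for theme, n in counts.items():
    print(theme, round(100 * n / len(comments)))
```

This is only keyword matching, not true sentiment analysis, but it shows the shape of the computation: code each comment against a theme dictionary, then report theme frequencies as percentages.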
My Use of Mixed-Methods on a Simulation of a Post-100 Day Poll of Maura Healey
Due to the recent trend in public opinion research of qualitative methods being used in conjunction with quantitative methods, my data collection instruments involved simulating interviews with respondents after a 100 Day Poll measuring Maura Healey’s first 100 days in office. I created a transcript to conduct In-Depth Interviews, a small discussion group (similar to a focus group), and an attempted netnography. In short, my data collection method reflected a qualitative and quantitative mixed-methods approach. The questions for the data instruments—In-Depth Interviews, focus groups, netnography—come from previous 100-day polls conducted by Professor Paleologos and the Suffolk University Political Research Center. Many of the questions are about campaign promises, which is reflected in my work.
My analytical approach was made possible by my use of Dedoose. However, instead of using it solely for qualitative methods, I took online classes that gave me the ability to do much more than coding alone would allow. My small discussion group was conducted on Zoom, which allowed me to upload a transcript onto Dedoose; first, I turned the transcript into a PDF file.
The In-Depth Interviews were written out in my research notebook. I used Adobe Scan to turn them into PDFs that I could upload onto Dedoose. Coding consists of highlighting the common themes. However, some of these common themes were uttered in positive, neutral, or negative statements. Therefore, I adopted weighting as an approach to measure just how negative or how positive these statements were, and the frequency at which they were positive or negative, and then took the data visualizations that best reflected the measurements for use in this study; they appear in this document’s appendix section. The measurement process I developed was a scale of 1-3: 1-negative, 2-neutral, 3-positive. I did not adopt a 0-10 or 1-10 scale. Scales with that many points may work when we ask a respondent directly how they would rate a person or a product, but rating a statement in a transcript does not allow for a scale of this type; I am limited to interpreting the statement on a three-point scale (see Appendix III).
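The 1-3 weighting can be summarized per code by a simple mean; the excerpts below are hypothetical examples of coded statements, not my actual data.

```python
# Hypothetical coded excerpts: (code/theme, sentiment weight), where
# 1 = negative, 2 = neutral, 3 = positive on the three-point scale.
excerpts = [
    ("MBTA", 1), ("MBTA", 2), ("MBTA", 3),
    ("health care", 2), ("health care", 3),
]

def mean_weight(code):
    """Mean sentiment weight for one code across all coded excerpts."""
    weights = [w for c, w in excerpts if c == code]
    return sum(weights) / len(weights)

print(mean_weight("MBTA"))         # 2.0  -> neutral on average
print(mean_weight("health care"))  # 2.5  -> leaning positive
```

A mean at or above 2 for a code indicates neutral-to-positive sentiment on that theme, which is the comparison the data visualizations in the appendix are built on.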
Small Discussion Group (Small Focus Group)
I also utilized a small discussion group, consisting of only three Democrats, and I used Zoom and the transcripts produced by Zoom, which I converted to PDF format so I could easily upload the transcript onto Dedoose. Dr. Wilson and I decided that it was inaccurate to consider this a focus group because there were only three respondents. Although I approached it using focus group techniques, in order to be accurate, we decided to consider this a small discussion group.
Because of the number of people who are unaware of the promises made by the Healey campaign, I decided to show the respondents a video I created: an amalgamation of Maura Healey’s debate performances in which she made promises to the voters of Massachusetts. Many of my transcript questions are based on campaign promises, so in order to probe more deeply, I had the respondents watch the video to remind them of the campaign promises Governor Healey made.
I thought about using the R programming language to develop the themes without having to steer the discussion on social media toward answering questions about Maura Healey’s campaign promises. Moreover, I had already begun to use Dedoose for data analytics, and I wanted to keep the same codes or themes.
However, there was a problem with introducing political issues on a social media site such as Reddit. People become very wary and are unwilling to answer the questions. I often received responses that accused me of baiting people, or, even worse, I faced insults and oaths. I failed to develop the data I thought I would be able to develop in my netnographies. Next time, I will utilize text sentiment analysis.
The results portrayed Maura Healey and Charlie Baker in a positive light; interestingly, both Democrats and Republicans saw the two governors positively. Most see Massachusetts as headed in the right direction. The code “transportation system/MBTA” had the lowest mean on the 1-3 scale; however, falling right at 2, it was still neutral and not negative.
I thought it rather compelling that every code or theme fell at or above 2 (neutral) in the measurement of its mean. Most people seemed rather less partisan than I had assumed. I did some research on partisanship in Massachusetts and discovered data visualizations that portray Massachusetts not as a bastion of liberalism but as a Commonwealth that is more independent than one would assume. However, in those visualizations, we see that those who are self-declared liberals sit further to the left than those who are self-declared conservatives sit to the right (see Appendix II).
If I had adopted only a qualitative approach, I would not have been able to glean this information. This supports the argument that the recent trend toward a mixed-methods approach is indeed the correct approach to public opinion research. Through a mixed-methods methodology, I was able to glean information that allowed me to make suppositions about the partisanship—or lack thereof—in the Commonwealth of Massachusetts.
Please peruse the appendices. Appendix I is the transcript I used in my In-Depth Interviews and my focus group. Appendix II shows the partisanship in Massachusetts. Appendix III contains some of my own data visualizations.
Appendix I – Guide for Focus Group and IDI’s about Maura Healey’s first 100 Days
Overarching Research Question: What do you think about Maura Healey’s first 100 days in office?
Welcome: Hello everyone, and thank you very much for being a part of this focus group. This is an important part of the academic work I am doing for my Master's thesis at Suffolk University, and it is being used solely for academic purposes. We greatly appreciate you taking the time out of your day to be a part of this research. Before starting, does anyone have any questions? Has everyone signed the "Informed Consent"?
Overview of the topic: To start with a little background on our research, we are trying to gain an understanding of how folks perceive Maura Healey's first 100 days in office.
● Speak freely. We only ask that you respect the person who is speaking by not interrupting that person. Other than that, please enjoy this 30-minute discussion on the Governor’s first 100 days in office.
● Please remember that there are no wrong answers. We want to hear your personal opinions and how you feel. This is a judgment free zone.
● At certain times we may need to move forward from a question for the sake of time and we will ask people to wrap up their thoughts in 1-2 minutes so we can finish the session on time.
- Please introduce yourselves (first name, occupation, and one interesting thing about yourself).
- Do you think Massachusetts is headed in the right direction? Why or why not?
- Maybe: Do you consider Governor Healey a powerful person on Beacon Hill? Why or why not?
- Do you have a favorable view or unfavorable view of Governor Healey? Why or why not?
- What is the most important issue facing Governor Healey? Why do you think that?
- Do you approve or disapprove of the job Governor Healey is doing? Why or why not?
- Has Maura Healey kept her campaign promise to do something about health care and behavioral health? Why or Why not?
- Possibly be ready for a follow-up prompt: Has she made it easier for immigrants to access health care?
- How important do you think this promise is?
- Has Maura Healey kept her campaign promise to do something about criminal justice reform? Why or why not?
- (Look into her specific promises)
- Has she kept her tax cut promise?
- Has Maura Healey kept her promise to do something about immigration reform? Why or why not?
- Follow up prompt: This became a heavily debated issue when FL Gov. DeSantis sent immigrants out of his state (some to MA)—do you think this is an important issue?
- Overall, do you think Maura Healey is keeping her campaign promises? Why or why not?
- Do you think Gov. Maura Healey is doing a better job than Gov. Baker? Why or why not? (and then go into campaign promises followed by next question)
- What is the most important issue facing Gov. Healey?
- Why does MA tend to break glass ceilings, politically speaking?
● Of all the things we’ve talked about today, what is the most important issue for you?
● Are there any other thoughts on this subject that you would like to share?
Thank you for participating today! I am grateful for your support of my academic research. As previously discussed, any recording of this will be destroyed. The transcript that is used for the analytical process will protect your privacy. I hope you all have a wonderful day!
Appendix II – Massachusetts and its political leanings
Research done by Priorities for Progress found at https://www.prioritiesforprogress.org/poll-liberal-mass-thing-of-past, accessed on 25 April 2023
Appendix III – Research done by Paul Adams, MSM Candidate
Qualitative Visualization: Word Cloud
Quantitative Methods: Weighted Codes (1 = negative, 2 = neutral, 3 = positive) and Frequency Distribution by Code
Mixed Methods (weight: 1 = negative to 3 = positive)
Code by Weight (1 = negative, 3 = positive) and Party Affiliation
MA Headed in Right Direction
Governor Maura Healey
Governor Charlie Baker
Maura Healey Administration is living up to campaign promises
 Sasha Issenberg, The Victory Lab (New York: Broadway Books, 2012).
 Ibid., pp. 16-17.
 Issenberg, The Victory Lab, pp. 1-32.
 Michael S. Lewis-Beck et al., The American Voter Revisited (Ann Arbor: University of Michigan Press, 2008).
 Herbert Asher, Polling and the Public: What Every Citizen Should Know (Thousand Oaks, CA: CQ Press/SAGE, 2017).
 Ibid., p. 154.
 Asher, Polling and the Public.
 V. Kumar et al., Marketing Research (Hoboken: Wiley, 2007), p. 185.
 Kumar et al., Marketing Research, p. 188.
 Ibid., p. 197.
 Robert V. Kozinets and Rossella Gambetti, eds., Netnography Unlimited (New York: Routledge, 2021).