Use of Survey Research, Content analysis, Experimental Design and Case studies

Q.4: Explain why researchers use survey research, content analysis, experimental design, and case studies in media research. How are these techniques different from one another?

Survey Research:

The sample survey is used to answer questions about how a large number of subjects feel, behave, or are, especially with regard to variables that change over time.

Survey research is the study of a portion or sample of a specific “population” (magazine subscribers, newspaper readers, television viewers, the population of a community or state). If done according to statistical principles, generalizations can then be made from the sample to the population with a certain degree of assurance or confidence. A sample is less costly than a census, which is an enumeration of all the members of a population. A census allows statements to be made about actual population parameters. The sample, however, forces the researcher to make generalizations about the population within a range of probability (called the “confidence interval”), which can be calculated statistically for any given sample.
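The width of such a confidence interval depends chiefly on sample size. A minimal sketch in Python of the standard formula for a proportion (the sample sizes here are illustrative, not drawn from the studies discussed):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of about 1,000 gives roughly a +/-3 percentage-point interval.
print(round(margin_of_error(1000) * 100, 1))
```

Note that quadrupling the sample only halves the interval, which is why samples of one to two thousand are common in state and national surveys.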

Sample surveys can also compare relationships between variables by correlation (moving toward answers to questions of cause and effect). Often variables of interest to the researcher cannot be manipulated in an experiment (e.g., age, race, occupation). The survey allows for comparisons between people who differ on a given characteristic and also for differences in their behaviors (e.g., how individuals of various ages, occupations, or educational levels differ in their perceptions of media credibility or in their media use).

A Survey to Check News Accuracy An example of the survey technique is the use of mail questionnaires to check on news accuracy. Tankard and Ryan (1974) clipped articles dealing with science news over a three-month period from a random sample of 20 newspapers taken from the 167 newspapers in the 26 states east of the Mississippi with circulations exceeding 50,000. Cover letters, questionnaires, clippings, and return envelopes were mailed to 242 scientists involved in the news articles. The scientists were asked to indicate, on a checklist of 42 kinds of errors, possible errors in the articles; to express their attitudes toward science news coverage in general; and to provide information regarding their recent activities with representatives of the press.

The survey resulted in 193 usable returns (only 2 scientists refused to cooperate, and 13 mailings were returned as undeliverable).

The investigators were able to specify the types of errors the scientists perceived as made most often, the scientists’ agreement or disagreement with nine short statements regarding science writing in general, and the relationship between nine “predictor” variables and perceived error rate (such things as content category [medicine, biology, social sciences, etc.], origin of the report [staff, wire service, etc.], circulation of the newspaper, and whether or not a story was bylined).

Tankard and Ryan reported that the mean number of kinds of errors was 3.50 when the scientist read the story before publication and 6.69 when the scientist did not read the story before publication. The attitude items indicated strong criticism by scientists of the accuracy of science news reporting in general. Large majorities of the sample indicated that headlines on science stories are misleading and that information crucial to the understanding of research results is often omitted from news stories (Tankard & Ryan, 1974, p. 334).

A Survey of Media Credibility An example of the sample survey using personal interviews in communication research is a study done by Westley and Severin (1964b) concerning perceived media credibility. A sample of 1,200 households was drawn from the population of an entire state, and one randomly selected adult in each household was interviewed by a professional interviewer. The sample resulted in 1,087 usable completed interviews.

In the course of the interviews the interviewees were asked to indicate which medium (television, radio, or newspaper) they would be most likely to place their trust in should they receive conflicting reports concerning several types of news and also to indicate the time they spend on a typical day with each of the media. Questions were also asked concerning the interviewees’ behavior in several other areas (voting, group memberships, offices held, visiting patterns, etc.), and data were gathered on demographic variables (age, sex, educational level, occupation, income, place of residence, etc.).

It was then possible to “cross-tabulate” responses to identify the types of persons who claim to place the greatest trust in one of the media and the types of persons who report the greatest and least amount of time spent with each of the media.
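A cross-tabulation is simply a count of respondents falling into each combination of two categories. A minimal sketch, using invented interview records rather than data from the study:

```python
from collections import Counter

# Hypothetical interview records: (education level, most-trusted medium).
# These values are invented for illustration only.
responses = [
    ("college", "newspaper"), ("college", "newspaper"),
    ("college", "television"), ("no college", "television"),
    ("no college", "television"), ("no college", "newspaper"),
]

# Counting each (row, column) pair gives the cells of the cross-tabulation.
table = Counter(responses)
for (education, medium), count in sorted(table.items()):
    print(f"{education:12s} {medium:12s} {count}")
```

Comparing the counts across rows is what lets a researcher say, for example, that college-educated respondents placed trust in newspapers more often than others did.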

The investigators concluded, among other things, that the “ideal type” of media user who was especially likely to assign relatively high credibility to the newspaper was a man who had at least some college, resided in an urban area, and had a high-status occupation. He regarded himself as middle-class, and his father also had a high social status. He was most likely to be an independent in politics and, if he acknowledged any party, to have had only a weak party identification. He belonged to a moderate number of organized groups and tended to hold office in them but was not especially gregarious (Westley & Severin, 1964b, p. 334).

In another analysis of the data the investigators profiled the daily newspaper nonreader. They found the nonreader to be most often a rural or small-town resident with few memberships in formal organizations, including churches, and infrequent in attending church and in visiting with friends and relatives. The nonreader tended to have no political identification or even political leaning and to be a nonvoter (Westley & Severin, 1964a, p. 51).

Ten years later the survey was replicated in another state by another group of researchers. They concluded that by and large the newspaper nonreader was approximately the same type of person found earlier. A discouraging factor these researchers found was that significantly larger numbers of people had decided not to read the newspaper, especially the poorer and the less educated (Penrose, Weaver, Cole, & Shaw, 1974).

These samples were drawn according to sampling theory, and an error term for these specific samples was calculated. The researchers were able to make generalizations about the media use and assigned credibility patterns for the population of a state as a whole within stated parameters with a high degree of assurance. In the Westley and Severin study, survey research allowed making generalizations to a population of more than 4 million from a sample of little more than 1,000.

Content Analysis:

Content analysis is a systematic method of analyzing message content. It is a tool for analyzing the messages of certain communicators. Instead of interviewing people or asking them to respond to questionnaires, as in survey research, or observing behavior, as in the human experiment, the investigator using content analysis examines the communications that have been produced at times and places of his or her own choosing. It has been described as the “objective, systematic, and quantitative description” of communication content (Bernard Berelson, cited in Budd, Thorp, & Donohew, 1967, p. 3).

A sophisticated use of content analysis couples it with additional information about source, channel, receiver, feedback, or other conditions of the communication situation, such as attitude, personality, or demographic characteristics. This enables predictions to be made about the communication process. In such cases content analysis is a tool used with other methods of inquiry to link message content with other parts of the communication setting. It allows the investigator to deal with larger questions of the process and effects of communications.

After selecting a question to be investigated or a hypothesis to be tested, the content analyst must define the population he or she will work with (publications, newscasts, time span, etc.). If the population is large, a sample is drawn, as in survey research. Categories must then be defined for classifying message content (a crucial step), and the content of the sample is coded according to objective rules. The coded content may be scaled or differentiated in some way to arrive at scores. If the content is to be related to other variables, these scores can then be compared with them.
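The coding-and-scoring step can be sketched in a few lines. The scheme below is a hypothetical one (direction-of-treatment coding of assertions, averaged per outlet); the magazine names and codings are invented, not taken from any study discussed here:

```python
# Hypothetical coding scheme: each assertion about the topic is classified
# as favorable (+1), neutral (0), or unfavorable (-1). Averaging the codes
# yields a direction-of-treatment score for each outlet.
CODES = {"favorable": 1, "neutral": 0, "unfavorable": -1}

# Invented codings for two fictitious outlets.
coded_assertions = {
    "Magazine A": ["favorable", "neutral", "favorable", "unfavorable"],
    "Magazine B": ["unfavorable", "unfavorable", "neutral"],
}

# Score each outlet: mean of its assertion codes, between -1 and +1.
scores = {
    outlet: sum(CODES[code] for code in codes) / len(codes)
    for outlet, codes in coded_assertions.items()
}
```

Scores of this kind can then be correlated with other variables, such as circulation or audience attitudes, as the studies below illustrate.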

As with all quantitative research, these scores must then be analyzed (usually using the data reduction techniques of statistical analysis) and the findings interpreted according to the concepts or theories that have been tested.

Magazine Coverage of Vietnam War Objectors A good example of the use of content analysis to test hypotheses based on theory and prior research is a study done to measure positions taken by 21 magazines toward objectors to the Vietnam War (Showalter, 1976).

The investigator began by citing prior theory and research concerning wartime restrictions on freedom of expression, value-consensus theory, the mass media’s major function of enforcement of social norms, and studies of press performance in treatment of dissenters to previous American wars. He concluded that wartime minorities received little press backing.

As a result of the literature review, the researcher’s primary hypothesis was, “Conscientious objectors, a legally recognized ideological minority in wartime, receive negative treatment in magazine editorial content since objectors counter the dominant norm of duty to country.”

Twenty-one popular magazines indexed in the Reader’s Guide to Periodical Literature were selected for the study. All entries under “Conscientious Objectors”—a total of 110 articles and 28 letters to the editor—were analyzed. The 138 items yielded 999 assertions about conscientious objectors. Five coders classified these assertions in accordance with evaluative assertion analysis procedure.

The primary hypothesis was not confirmed. The researcher found that only 2 of 21 magazines viewed conscientious objectors negatively.

Showalter’s second hypothesis, “Opponents to the Vietnam War who perform services more nearly commensurate with military duty receive less negative editorial treatment than those who express their opposition in less acceptable channels,” was generally supported.

His third hypothesis, “Magazine editorial positions toward objectors vary inversely with size as measured by circulation,” was substantiated.

His fourth hypothesis was “Public opposition to the Vietnam War and magazine editorial treatment of objectors are positively correlated.” The data indicated positive correlations; however, they were generally not statistically significant.

Showalter concludes: “American magazine journalists exercised more freedom in covering an ideological minority during the Vietnam War than the theoretical and historical literature had suggested.”

The larger question raised by this study concerns trends in press positions toward all minority groups. Additional research is needed to determine whether journalists—in all media—are fulfilling their responsibilities in coverage of ethnic, religious, economic, and other minorities.

Such studies have implications for the well-being of the press itself. Tolerance and understanding of diverse points of view in the society can help the press preserve its own claims to freedom.

Reagan’s China Policy in Three Elite Newspapers Another content analysis dealt with Reagan’s China policy as covered by three elite American newspapers (Chang, 1984).

Citing prior theory and research indicating the important role the press plays in the process of policymaking, the investigator raised these research questions:

  1. How did coverage of Reagan’s China policy in some elite newspapers differ before and after he took office?

  2. What was the newspapers’ treatment of Reagan’s China policy?

Chang chose the Los Angeles Times, the New York Times, and the Washington Post because of their prestige status and their importance in national politics. He defined the press agenda as “emphasis on specific problems or issues in the newspaper’s coverage of Reagan’s China policy” and measured the number of paragraphs published on a certain subject.

News, editorials, columns, and features dealing with China, international relations, and Taiwan were analyzed.

The study period was divided into two nine-month periods, one before and one after Reagan’s first inauguration, January 20, 1981.

The data indicated that the three newspapers ran roughly the same number of items about and devoted the same number of paragraphs to Reagan’s China policy. However, coverage of Reagan’s China policy differed significantly before and after his inauguration. In the former period the emphasis was on Reagan’s views concerning U.S.-Taiwan relations, whereas after inauguration it shifted to U.S.-China policy.

The evidence indicates that coverage of Reagan’s China policy was more negative before than after his inauguration. Still to be answered is the question of whether the president sets the agenda for the newspapers or vice versa.

Both the Showalter and Chang investigations are good examples of how content analysis can be applied to the media to measure objectively how they deal with the news. The effect of media content on the society is, of course, a matter for further investigation. Content analysis is also used to measure media positions taken on various other issues, alleged bias in the media in politics, and changes in media positions, often, as we have seen, in correlation with events.

The Portrayal of Education in Prime-time Television Mayerle and Rarick (1989) used content analysis to examine 40 prime-time television series that featured education or educators as central concerns during the period 1948 to 1988. They found that such series were rare and transient. The researchers analyzed

longevity and frequency of scheduling of education series, program formats, locale and educational level portrayed, role portrayals (job title, gender and race of lead and major supporting cast), and dramatic theme of the series. Two coders independently coded each program according to these variables. (pp. 141-142)

They concluded:

The world of television education, while presented positively, is less diverse in settings, activities and role depictions than the world of actual education. Television’s teachers are dedicated, work with interested students in urban high schools or colleges, and often have conflicts with cranky or inept administrators. White males far outnumber females and minorities in most occupational roles. Themes about nontraditional educators, family life of students and teachers, and educational problems are common. (p. 139)
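When two coders independently code the same content, as in the Mayerle and Rarick study, content analysts routinely report an intercoder reliability figure. One standard measure is Cohen’s kappa, which corrects raw agreement for chance; a minimal sketch with invented codings (the categories and values below are hypothetical, not from the study):

```python
def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(coder1)
    # Observed proportion of items on which the coders agree.
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Agreement expected by chance, from each coder's marginal frequencies.
    categories = set(coder1) | set(coder2)
    expected = sum(
        (coder1.count(c) / n) * (coder2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Invented codings of five programs by two coders.
c1 = ["drama", "comedy", "drama", "drama", "comedy"]
c2 = ["drama", "comedy", "comedy", "drama", "comedy"]
print(round(cohens_kappa(c1, c2), 2))
```

A kappa of 1.0 means perfect agreement; values near zero mean the coders agree no more often than chance would predict.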

 

Experimental Design:

Experimental designs are the classic method of dealing with questions of causality. An experiment involves the control or manipulation of a variable by the experimenter and an observation or measurement of the result in an objective and systematic way. When it is possible to use the experimental method, it is the research method most apt to provide answers of cause and effect. The classic experiment will answer questions of whether and to what degree a variable (the experimental or independent variable) affects another variable (the dependent variable).

In the simplest form of the classic experiment, two matched groups are randomly selected from a population (defined by and of interest to the experimenter), and one is given the experimental variable (in communication research it may be a news story, documentary film, piece of propaganda, etc.). After the experimental group has been exposed to the variable in question, both groups are observed or measured and any differences between them are construed as effects of the experimental treatment.
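The logic of that simplest design can be sketched directly: random assignment to two groups, treatment of one, then comparison of group means. The function and data below are a hypothetical illustration, not any particular study:

```python
import random
import statistics

def run_experiment(baseline_scores, treatment):
    """Randomly assign subjects to experimental and control groups,
    expose only the experimental group to the treatment, and return
    the difference in group means (the estimated effect)."""
    subjects = list(baseline_scores)
    random.shuffle(subjects)          # random assignment
    half = len(subjects) // 2
    experimental, control = subjects[:half], subjects[half:]
    treated = [treatment(s) for s in experimental]
    return statistics.mean(treated) - statistics.mean(control)

# With identical baselines, any difference is the treatment effect itself:
# a message that shifts attitude scores up by 2 points yields a mean
# difference of exactly 2.0.
effect = run_experiment([5.0] * 20, lambda score: score + 2.0)
```

With real, varying baselines the observed difference would also reflect sampling noise, which is why experimenters report significance tests alongside the raw difference.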

Many experiments modify the classic design, for reasons such as practical difficulties or costs. Some experiments are made far more complex in order to provide answers to additional questions (e.g., how long the effects of a message will last, the effects of various types or combinations of messages, the effect of a number of different independent variables that may interact).

An Experiment of Communicator Credibility A classic experiment in communication research, conducted by Hovland and Weiss (1951), dealt with the effects of communicator credibility on acceptance of the content of a message. Identical messages were presented to two groups, one from a source with high credibility and the other from a source with low credibility. Opinions were measured before and after the messages were presented and also one month later. Four different topics were used (each in affirmative and negative versions) and presented to some subjects by trusted sources and to other subjects by sources held in much lower esteem.

Each subject received one article on each of the four topics, with the source given at the end of each article. Before reading the articles the subjects indicated their trust in each of a long list of sources, including those used in the experiment. The four high-credibility sources used in the experiment were judged so by 81 to 95 percent of the subjects; with the low-credibility sources the scores were only 1 to 21 percent.

The initial attitudes held toward the sources clearly affected how the subjects evaluated the presentations. Those from low-credibility sources were judged “less fair” and their conclusions “less justified” than those from high-credibility sources, even though the articles were identical. The researchers concluded that “judgments of content characteristics, such as how well the facts in a given communication justify the conclusion, are significantly affected by variations in the source” (Hovland, Janis, & Kelley, 1953, p. 29).

The researchers found greater opinion change in the direction advocated by the message when the source was of high credibility than when it was of low credibility.

However, when opinion data were obtained four weeks later, the differential effectiveness of the sources had disappeared. There was less acceptance of the viewpoints of high-credibility sources and greater acceptance of the positions advocated by low-credibility sources. At that time measures were also obtained of the subjects’ memory of the sources for each communication.

After ruling out other explanations, the researchers concluded that there exists a “sleeper effect” for subjects who showed increased belief in messages attributed to sources of low credibility; in the investigators’ words, “There is decreased tendency over time to reject the material presented by an untrustworthy source” (Hovland, Janis, & Kelley, 1953, p. 256).

The main advantages of the experimental method are the control it allows the investigator and the inherent logical rigor it offers. However, many experiments are “artificial” or oversimplified in their settings, and the findings must be translated to the “real” world. For this and a number of other reasons, seemingly conflicting results are often obtained from experimental designs and survey research. Carl Hovland, a pioneer in communication research, addressed this problem as it applies to studies of attitude change and suggested methods for its resolution. He concluded by noting the virtues of each method and the need for both methods in communication research (Hovland, 1959).

“Natural” Experiments Often “natural” experiments can be set up outside the laboratory. An example of a planned natural experiment in communication is the “split-run” technique, whereby two versions of an advertisement or other message are run and the relative effectiveness of each is assessed. This may be done through follow-up questions asked over the telephone or in personal interviews, through tabulation of responses from coupons coded to identify which version has resulted in the response, or through other means.
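The relative effectiveness of the two versions can be compared with a standard two-proportion z test on the coupon tallies. A minimal sketch with invented counts (no actual split-run data are implied):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two response rates,
    using the pooled proportion for the standard error."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# Invented coupon returns: version A drew 150 responses from 5,000 copies,
# version B drew 210 from 5,000.
z = two_proportion_z(150, 5000, 210, 5000)
```

Here |z| exceeds 1.96, so at the conventional 95 percent level the difference between the two versions would be judged statistically significant rather than chance variation.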

Sometimes the experimenter may be interested in a theoretical question or in the test of a hypothesis and can design a study for an appropriate natural event. The experimenter then “follows up” the event with fieldwork. Such is the case involving the question of the effects of price advertising on the sales of beer and ale. This is a question of considerable controversy in many parts of the United States and an issue of concern for brewers, for the advertising industry, and for consumer groups concerned over alcohol consumption, its negative health aspects, and drunk driving.

In the state of Michigan price advertising of beer and wine was prohibited, allowed, and again prohibited between May 1981 and April 1984. The researcher (Wilcox, 1985) examined total sales of brewed beverages (beer and ale) as reported by a sample of 65 retail outlets in lower Michigan in A. C. Nielsen in-store audits every two months over the three-year period. The researcher also examined data showing the number of surveyed retail outlets engaging in local advertising during the period of no restrictions on price advertising (March 1982 to May 1983).

Examination of the data indicated that a significantly higher percentage of retail stores engaged in local advertising during the nonrestrictive period. However, the presence of price advertising appeared to have no significant effect on sales of brewed beverages (Wilcox, 1985, p. 37).

Case Studies:

While a survey examines one or a few characteristics of many subjects or units, a case study is used to examine many characteristics of a single subject (e.g., a communicator, newsroom, newspaper, news syndicate, television station, ad agency). The case study usually tries to learn “all” about the area the investigator is interested in for the specific case over a period of time.

The Wire Editor as Gatekeeper “Gatekeeper” studies are classic media case studies. With the cooperation of the wire editor of a morning newspaper with a circulation of 30,000 in a midwestern city of 100,000, White (1950), in what is the classic gatekeeper study, was able to compare wire copy used with that actually received during one week from three major wire services.

The wire editor saved all unpublished wire copy, about nine times as much as that which was published, and at the end of each day spent one and one-half to two hours noting his reasons for rejecting each item not used.

With the data available, the investigator was able to compare the amount of wire copy actually received for each of a number of categories, both in column inches and in percentages of the total received, as compared with that used. He was also able to tabulate the number of times stories were rejected for various reasons.

The “gatekeeper” was then asked to consider at length four broad questions about the basis on which he selected news, his own prejudices, his concept of the audience he was making selections for, and any specific tests of subject matter or style that may enter into the selection of stories.

The researcher carefully qualified his conclusions with the parenthetical remark, “If Mr. Gates is a fair representative of his class,” recognizing that a case study deals with one example and is not a sample that can be logically or scientifically generalized. The study has, of course, provided considerable insight and information that has served as the basis for further investigation.

A Case Study of Television News Gatekeeping Using Content Analysis More recently, Berkowitz (1990) did a gatekeeper case study combining observational research with content analysis to examine the selection of local news items for a network-affiliated television station in Indianapolis. He refined the metaphor of the news “gate” and reshaped the notion of gatekeeping to fit the local television situation. Working in the newsroom, he coded a total of 391 potential stories during a four-week period.

Berkowitz found that news selection decisions were based on several considerations besides news values, including information that was easy to explain, that would draw audience, and that could be assembled with efficiency of effort. As a result of 220 hours of newsroom observation and later interviews, Berkowitz concluded that rather than use textbook news values, news workers used their instincts, citing interest, importance, and visual impact, although the latter was rarely mentioned during story conferences.

The structure of the newscast format, which called for an approximate quota of stories from various categories, had almost as much to do with story selection as did the news merits of potential stories. Berkowitz says that “this helps explain why gatekeepers do not always agree on specific stories, but they do tend to agree on the kinds of stories that constitute a balanced news mix” (p. 66).

Berkowitz concludes:

. . . this study found that decision-making didn’t fit the traditional mold of a lone wire editor sitting next to a pile of stories and making decisions based on either newsworthiness or personal preferences. . . . First, decision-making seemed to be a group process; content, therefore, was shaped by group dynamics. . . . Second, the keys to the lock—interest, importance, visual quality—were different than the keys searched for by past studies of newspaper wire editors or those taught in journalism classes. Whether these keys could even be used was partly dictated by organizational demands such as resource constraints and newscast formats. . . . Stories that passed through one gate faced still other gates on their way toward being broadcast. Spot news closed the gate on planned event stories. Resource constraints and logistical problems sometimes closed the gate on spot news stories.

Histories as Case Studies Most histories of media institutions can also be classified as case studies. An exception in communication research is an investigation of three centuries of the British press and its regulation by Siebert (1952). In this nonquantitative study hypotheses are formulated and tested and conclusions drawn, giving it some of the properties of scientific research. A later example is a test of one of Siebert’s propositions in North Carolina by Shaw and Brauer (1969), using the historical method in focusing on one editor and the Newcomb ABX model of symmetry from the field of social psychology to make predictions. Case studies usually cannot be generalized to other similar situations: most often the results are based on a single example, and rarely are hypotheses formulated and tested, making generalization difficult if not impossible. The method does, however, provide a great many observations, ideas, and insights that can be followed up with other types of investigations to yield results that can be generalized.
