SPEAKING OF FAITH: PUBLIC RELATIONS PRACTICE AMONG RELIGION COMMUNICATORS IN THE UNITED STATES. Committee: Dominic L. Lasorsa, Co-Supervisor; Ronald ...
Such moves toward temporal values make mainline Protestant denominations the most churchlike faith groups on the church-sect continuum (Johnson, 1963). These points led to
the following hypotheses:
This study used two surveys to gather data to answer the eight research questions and test the two hypotheses developed in the preceding three chapters. To repeat, the research questions were:
RQ1: How do the roles that religion communicators play compare to those of secular practitioners?
RQ2: How do the roles that religion communicators play compare to what faith group leaders expect?
RQ3: How much do religion communicators and faith group leaders agree on practices in the four models of public relations?
RQ4: How well do religion communicators know what their supervisors think about practices in the four models of public relations?
RQ5: How does the expertise of religion communicators to practice each model of public relations compare to that of their secular counterparts?
RQ6: How does the managerial and technical expertise of religion communicators compare to the expertise of secular public relations practitioners?
RQ7: How do formal education, professional association membership and exposure to trade publications affect the way religion communicators practice public relations?
RQ8: How much do religion communicators and faith group leaders agree about contributions of communication to faith group operations?
The hypotheses being tested were:
H1: Communicators for U.S. mainline Protestant denominations are more likely to agree with survey statements describing two-way symmetrical public relations than communicators for other U.S. faith groups.
H2: Leaders of U.S. mainline Protestant denominations are more likely to agree with survey statements describing two-way symmetrical public relations than leaders of other U.S. faith groups.
The surveys in this study were done in phases. Questionnaires replicated items used by Grunig and his colleagues in their research among 327 secular organizations (Grunig, Grunig & Dozier, 2002) and Broom (1982, 1986) in studies of public relations practitioners. The first survey collected responses in late 2006 and early 2007 from members of the Religion Communicators Council. The second questionnaire went in early 2008 to top executives of faith groups represented by religion communicators who responded to the first survey.
E-mail invitations with personalized salutations were sent to all 479 communicators with e-mail addresses on file in the Religion Communicators Council membership database as of October 1, 2006. No random sampling was involved.
Messages asked RCC members to go to SurveyMonkey.com to complete a Web-based survey (Appendix A). Responses were collected from members online from October 1 through December 31, 2006. Five follow-up e-mail messages were sent—one every two weeks through mid December. Those reminders went only to non-respondents. Neither the original invitation nor the reminders offered any special response incentives (Cook, Heath & Thompson, 2000; Porter & Whitcomb, 2003).
Additional responses were accepted from members attending the April 2007 RCC National Convention in Louisville, Kentucky. Louisville respondents had been part of the original sampling frame but had not taken part in the online survey. They completed a self-administered paper-and-pencil version of the questionnaire. Those answers were manually added to the database compiled by SurveyMonkey.com.
The Web-based questionnaire asked 142 questions from the 313 in the original paper-and-pencil instrument used with top communicators in Excellence surveys (Grunig, Grunig & Dozier, 2002). The online instrument was field-tested in September 2006. Five regional United Methodist communicators who did not belong to RCC completed the questionnaire. They all finished the survey in less than 30 minutes. None of the test subjects indicated any response fatigue with the instrument, which was half the length of the original Excellence questionnaire. That lack of fatigue was consistent with findings from a meta-analysis of online surveys by Cook, Heath and Thompson (2000). They saw no relationship between online survey length and response rate. Instead, issue salience was a key factor driving online survey response rates. Questions in this survey appeared to be relevant to the United Methodist communicators.
The United Methodists commented on question wording, questionnaire instructions and SurveyMonkey usability. They pointed out places where they had questions about instructions or item wording and offered general impressions. Based on field-test results, some terms used in the original Excellence questionnaires were modified to fit the RCC audience. For example, “communication” was often substituted for “public relations” when referring to departments. Black (2002) had shown “communications” was the most common job title used among council members. Field testing confirmed that the United Methodists seemed to prefer “communication” to “public relations.” They understood “communication” to cover the public relations department functions covered in the original Excellence studies. That word choice was consistent with the long-running debate among RCC members about what to call their work.
Since this study appeared to be the first to probe what religion communicators did, the section on communicator roles included all 28 items developed by Broom (1982) as well as four developed later by Dozier (1984), not just the 16 items used in other Excellence surveys. Furthermore, new questions were added about producing Web sites, e-mail newsletters, blogs and podcasts. Those communication delivery methods became common after much of the Excellence research was done. Those additional roles questions enriched the overall data set. But comparisons in this project between RCC members and public relations practitioners in earlier studies were limited to the 16-item Excellence index for role enactment.
A self-administered paper-and-pencil questionnaire (Appendix B) went by U.S.
Mail in January 2008 to 87 faith group executives (bishops, general secretaries, executive directors, presidents, pastors, etc.). The mailing included a cover letter and postage-paid business-reply mail return envelope. Survey recipients headed organizations (judicatories, denominational agencies, religious orders, local congregations, professional associations of church-related agencies, colleges and universities, and health-and-welfare ministries) represented by the 185 Religion Communicators Council members who responded to the 2006-07 survey. All these senior managers were non-communicators. The sample excluded leaders of faith-related communication organizations (faith-group public relations offices, publications, news services, etc.), who were likely to be communicators themselves. Since many of those communication executives were RCC members, they could have responded to the first survey.
As with the first survey, the questionnaire replicated 73 of the 96 items from previous survey instruments used with top executives in Excellence studies of 327 organizations in the United States, Canada and the United Kingdom (Grunig, Grunig & Dozier, 2002). Most of the questions were similar to ones that religion communicators answered in the 2006-07 survey. “Communication” was again used in most places where earlier Excellence questionnaires used “public relations.” All 87 leaders were sent a follow-up post card 10 days after the initial survey mailing. That post card encouraged leaders to complete and return the survey questionnaire. Leaders who did not respond by mid February 2008 received a second questionnaire, cover letter and business-reply mail return envelope.
RQ1 asked how the roles that religion communicators play compare to those of secular practitioners. This question probed how much religion communicators were or were not like their secular counterparts. For an answer, responses from religion communicators to 16 role measures were compared to composite means from Grunig, Grunig and Dozier (2002). Those composite means, reported as numbers between 0 and 200 from an open-ended fractionation scale, represented what practitioners in 316 secular organizations had said in earlier Excellence research. Grunig, Grunig and Dozier (2002) used four groups of four statements to define four roles: manager, senior adviser, media relations specialist and communication technician. Those categories represented roles developed by Dozier (1983, 1984) while reworking data from Broom (1982). Statements
describing the manager role were:
+ I make communication policy decisions.
+ I diagnose communications problems and explain them to others in the organization.
+ I plan and recommend courses of action for solving communication problems.
+ Others in the organization hold me accountable for the success or failure of communication programs.
Statements measuring the senior adviser role were:
+ I create opportunities for management to hear the views of various internal and external publics.
+ Although I don’t make communication policy decisions, I provide decision makers with suggestions, recommendations and plans.
+ I am the senior counsel to top decision makers when communication issues are involved.
+ I represent the organization at events and meetings.
Statements for the media relations specialist role were:
+ I keep others in the organization informed of what the media report about our organization and important issues.
+ I am responsible for placing news releases.
+ I use my journalistic skills to figure out what the media will consider newsworthy about our organization.
+ I maintain media contacts for the organization.
Statements covering the technician role were:
+ I write materials presenting information on issues important to the organization.
+ I edit and/or rewrite for grammar and spelling the materials written by others in the organization.
+ I do photography and graphics for communication materials.
+ I produce brochures, pamphlets and other publications.
Communicators in the secular studies indicated how often they performed each task on an open-ended fractionation scale. For them, 100 represented an “average” task;
50 represented “half the average”; 200 represented “twice the average”; 0 represented “does not describe”; and anything above 200 represented “as high as you want to go.” SurveyMonkey.com could not accommodate a similar fractionation scale for the online survey of RCC members. SurveyMonkey.com could handle a five-point Likert scale. J.E.
Grunig (personal correspondence, June 5, 2006) said that a five-point Likert scale would work fine and still allow approximate comparisons to overall Excellence findings.
Therefore, RCC members responded (1) never do, (2) seldom do, (3) sometimes do, (4) often do and (5) always do.
Means were calculated for responses by RCC members to each of the 16 statements. Composite means for the two groups were also calculated for each of the four roles. To facilitate comparisons with Excellence baseline data, RCC mean responses were converted from the Likert scale to numbers approximating those on the fractionation scale in Grunig, Grunig and Dozier (2002). Excellence baseline data could have been converted from the fractionation to the Likert scale as well. Either change would introduce slight data distortions. But moving from the wider fractionation scale to the more limited Likert scale appeared to increase those distortions. Since the goal was to compare approximate locations on the two ordinal scales (Sission & Stocker, 1989), the decision was made to preserve the integrity of the baseline data and risk minor distortions in the RCC results. In the conversion, 1 on the RCC scale equaled 0 (“does not describe”) on the fractionation scale. Each 0.1 above 1 in a mean on the five-point Likert scale equaled five points on the fractionation scale. For example: 1.5=25, 2=50 (“half the average”), 2.5=75, 3=100 (“average”), 3.5=125, 4=150, 4.5=175 and 5=200 (“twice the average”).
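The conversion rule described above reduces to a single linear mapping. A minimal Python sketch (the function name is illustrative, not from the study):

```python
def likert_to_fractionation(likert_mean):
    """Map a mean on the 1-5 Likert scale onto the 0-200
    fractionation scale: 1 maps to 0, and each 0.1 above 1
    adds 5 points, i.e. f = (m - 1) * 50."""
    return (likert_mean - 1) * 50

# Anchor points listed in the text:
assert likert_to_fractionation(1.0) == 0     # "does not describe"
assert likert_to_fractionation(2.0) == 50    # "half the average"
assert likert_to_fractionation(3.0) == 100   # "average"
assert likert_to_fractionation(5.0) == 200   # "twice the average"
```

Because the mapping is linear, the relative ordering of RCC means is preserved; only the scale endpoints change.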
Independent samples t tests of means were run (Sission & Stocker, 1989). They checked for the statistically significant differences in mean responses between the two groups. Effect size was considered as well. Differences in means of 0.2 pooled standard deviations were considered small, 0.5 SD medium and 0.8 SD large (Cohen, 1992). In addition, Cronbach’s alphas were computed for each role scale. Alphas from the RCC study were compared to those reported by Grunig, Grunig and Dozier (2002).
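The comparison statistics just described can be computed from summary data alone. The sketch below (plain Python; function names and the example numbers are illustrative, not results from the study) shows the pooled-SD form of the independent-samples t statistic and Cohen's d for effect size:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation for two independent samples."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: mean difference in pooled-SD units
    (0.2 small, 0.5 medium, 0.8 large, per Cohen, 1992)."""
    return (m1 - m2) / pooled_sd(sd1, n1, sd2, n2)

def independent_t(m1, sd1, n1, m2, sd2, n2):
    """Independent-samples t statistic, equal variances assumed."""
    return (m1 - m2) / (pooled_sd(sd1, n1, sd2, n2) * math.sqrt(1/n1 + 1/n2))

# Hypothetical summary statistics, for illustration only:
d = cohens_d(150, 40, 185, 125, 50, 316)   # ≈ 0.54, a medium effect
```

Working from means and standard deviations this way is what allows comparison against the published Excellence baselines, where raw responses were not available.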
RQ2 asked how the roles that religion communicators play compare to what their supervisors expected. To find an answer, responses were compared from RCC members and faith group leaders on 24 role measures. These measures came from Broom and Dozier (1986). The authors used six statements to define four roles conceptualized in Broom (1982): expert prescriber, communication process facilitator, problem-solving facilitator and technician. In addition, Broom and Dozier (1986) asked communicators to pick their primary role from the four categories. Broom's categories were discussed but not measured in the later Excellence studies. Although those studies used some of the same measure statements, they examined four roles developed by Dozier (1983, 1984) and used in RQ1. Dozier (1983) concluded that manager and technician were the most parsimonious way to look at public relations roles. Nevertheless, he argued that Broom's original four-part typology provided useful tools for dissecting the manager function.
Broom (personal correspondence, June 12, 2006) made the same point. He said reducing his measures might mask many potential differences among the roles. Furthermore, Dozier and Broom (1995) reported Cronbach's alpha reliability coefficients of 0.94 for the 18-item manager scale and 0.74 for the six-item technician scale. That is why Broom's original model was used to answer RQ2.
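Cronbach's alpha, the reliability coefficient cited for these scales, can be sketched in a few lines of plain Python (illustrative only; this is the standard formula, not the study's actual computation):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns measured on the same
    respondents: k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three perfectly correlated items yield the maximum alpha of 1.0:
print(round(cronbach_alpha([[1, 2, 3, 4, 5]] * 3), 2))  # 1.0
```

Higher alphas indicate that the items in a role scale vary together, which is why the 0.94 manager figure supports treating those 18 items as one scale.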
To define the expert prescriber, these six statements were used:
+ I make communication policy decisions.
+ Because of my experience and training, others consider me to be the organization’s expert in solving communication problems.