More Call To Action Information


Our good friend John Meunier offered some great questions in the comments of a recent post on this blog about how the data in the Call to Action report led to the conclusions of the Steering Committee. I forwarded the questions to Neil Alexander of UMPH, one of the leaders of the Call to Action steering committee, and his responses follow each question below:

You shared the following questions from a correspondent on your site and I want to offer a few responses that I hope will be clarifying:

I’d be interested in learning from anyone on the committee why the four “drivers” have been given a name that implies cause-and-effect, when I’m pretty sure the research demonstrates an association rather than a causal relationship. I may be wrong. It’s been a number of years since I took stats and research methods courses.

The statistical technique used to identify the drivers – linear regression – was employed to identify which factors can be used to model, or predict, a change in vitality. You are correct that regression analysis does not identify a cause-and-effect relationship, which would actually require data over a period of time. But the factors were given the name “driver” because regression analysis does show that, all else being equal, increasing one of the factors would “drive” an increase in vitality. Meanwhile, both the TW research and our own experience suggest that (1) there are surely additional drivers that it will be important to identify, and (2) the drivers appear to work best in combination and not as isolated “quick fixes.” The Steering Team believes this is useful information that provides starting points for concerted work; we also urge continuing research, and these starting points should not be viewed as an “end of discussion” solution as we walk the path to greater vitality in more and more churches.
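
For readers who have not worked with regression, here is a minimal sketch of the kind of analysis being described – entirely made-up data and invented variable names, not the actual survey – showing how a fitted coefficient comes to be read as a “driver” even though it only establishes association:

```python
# Purely illustrative: simulated data standing in for the survey variables,
# which were not published. Variable names are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500  # hypothetical number of churches

# Three hypothetical "driver" scores per church (standardized):
# small groups, youth programming, lay leadership involvement.
X = rng.normal(size=(n, 3))
# Simulated vitality index built from those factors plus noise.
vitality = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(size=n)

model = sm.OLS(vitality, sm.add_constant(X)).fit()
print(model.summary())  # coefficients, standard errors, p-values, R-squared

# A positive, significant coefficient means: holding the other variables
# constant, churches higher on that factor tend to score higher on vitality.
# With cross-sectional data it does NOT show that raising the factor would
# cause vitality to rise -- that is the association/causation distinction.
```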

I don’t understand how the report places such importance on a mix of traditional and contemporary worship when the chart provided in the Towers report shows that mixed worship styles are just as likely to indicate low vitality as high vitality in two out of three church-size categories and overwhelmingly indicate low vitality in the smallest category.

The regression analysis found that having a mix of traditional and contemporary services was a significant driver of vitality (it had a significant, positive coefficient). The data shown in the research reinforces the finding that, looking across churches of all sizes, highly vital churches tend to provide a mix of both traditional and contemporary services. The breakdown of that finding by church size certainly shows that the relationship is clearly stronger in larger churches. The Steering Team believes these findings provide insight but not easy-fix solutions, and that church size (the number attending) is obviously a factor in having sufficient critical mass to make a variety of worship styles possible in a given locale.
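
The pattern described here – an effect that strengthens with church size – is what statisticians model as an interaction term. A toy sketch, again with invented data and variable names, of how that size-dependence would be tested:

```python
# Hypothetical sketch: testing whether the mixed-styles effect depends on
# church size via an interaction term. Data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "mixed_styles": rng.integers(0, 2, size=n),     # 1 = offers both styles
    "attendance": rng.lognormal(4.5, 0.8, size=n),  # average worship attendance
})
# Simulate a vitality score whose mixed-styles effect grows with attendance.
df["vitality"] = (
    0.1 * df["mixed_styles"]
    + 0.002 * df["attendance"]
    + 0.004 * df["mixed_styles"] * df["attendance"]
    + rng.normal(size=n)
)

fit = smf.ols("vitality ~ mixed_styles * attendance", data=df).fit()
print(fit.params)
# A positive "mixed_styles:attendance" term means the payoff from offering
# both styles increases with church size -- exactly the pattern described.
```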

I suspect that the issue is comparison to baseline vitality, but at best what the data appears to show is a more likely association with high vitality – how much more likely is not discussed. And this is hardly a 1:1 correlation. Putting all the eggs in the mixed-worship basket based on a small-to-moderate effect seems unwarranted. (Statistical significance does not equate to real-world significance; it just means the result is probably not due to chance.)

The Steering Team was mindful that the research found that there is no “silver bullet” or “all eggs in one basket” recommendation that will assure vitality.  It is not a claim we are making. Again, what the research did show is that the identified factors (drivers) work in concert to generate greater vitality in congregations exhibiting positive trends in the indicators of vitality.
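
A side note on the correspondent’s parenthetical about statistical versus real-world significance, since it is worth making concrete: a quick simulation – hypothetical numbers, nothing to do with the actual survey – shows how a practically negligible association can still clear the “significant” bar once the sample is large.

```python
# Hypothetical simulation: a tiny association passes a significance test
# once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)  # true correlation is roughly 0.01

r, p = stats.pearsonr(x, y)
print(f"r = {r:.4f}, p = {p:.3g}, variance explained = {r**2:.6f}")
# p will typically land below 0.05 even though the factor explains about
# 0.01% of the variance: statistically significant, practically negligible.
```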

I’m curious why, if 50% of high vitality churches use lectionary preaching, the report suggests topical preaching is the best way to high vitality.

While the research found and reports that the type of preaching in traditional worship services has a relationship with vitality, the research did not suggest that topical preaching was the best way to achieve high vitality. The research found that preaching in traditional worship services at highly vital churches that offer more than one style of worship tends (slightly) to be more topical and less based on the lectionary than in congregations with relatively low vitality. The TW report simply describes what was found when churches were surveyed. It does not offer up recommendations for other churches.

I wonder if the presence of lots of small groups and youth programs is a consequence rather than a cause of high vitality. Same with the percentage of spiritually engaged laity. I wonder if we are even curious about whether that question matters.

The research shows that the churches with measurable outcomes (using indicators that are proxies for vitality) repeatedly and predominantly were shown to have more small groups for all ages, more programs for children and youth, and more laity who describe themselves as current or past leaders in the congregation. The Steering Team believes there is a certain “I could have had a V8” factor in stating the obvious when the report notes that degrees of involvement and intentionality in reaching and serving more people (particularly younger people) add positively to the “vitality mix.” We also believe that enhancing our shared views about which factors tend to contribute to vitality matters a great deal, especially as we strive to work collaboratively and consistently across the Connection.

I wonder why the report never speaks in any depth about the caveats, limitations, and reasonable questions that the methodology and research decisions might give rise to. In any social science journal, the authors would be expected to present a candid assessment of the limits and gray areas. We are given neither these nor any actual statistical reporting.

The Steering Team sought to fulfill a specific charge within a designated timeframe and budget. We were asked to gather data that would result in findings and recommendations to the Council of Bishops and Connectional Table leading to the reordering of the life of the Church for greater effectiveness. The report had a very precise purpose: to provide starting points for action. We did not seek to produce a report suitable for the publication standards of a social science journal.

One additional note: the Towers Watson and Apex research reports include quite a bit of statistical information about the number of churches surveyed, etc., and supporting information can also be found at umc.org/calltoaction. Those two reports were written by research firms and provide information about their findings; they are not one and the same with the CTA Steering Team’s conclusions, which drew insights from these findings and other sources in order to put forward specific proposals for action that will lead to meaningful and substantial change.

Are any of the steering team members qualified to critically assess this type of research and ask tough questions about the methodology and its assumptions? I know we have many laity with Ph.D.s who do this kind of research. I ask because the report reads much more like a sales job to me than a presentation of research findings and conclusions. It uses lots of words meant to impress us with the soundness of the research and the iron-clad nature of the findings. My limited experience is that social science research is always open to interpretation and beset with many caveats and conditions. (What I’ve seen in the appendices does not provide the kind of detail necessary to make that analysis.)

Again, the Steering Team Report is from one group of designated UMC leaders and is offered to other leaders for consideration, debate, perfection, and hopefully engaged response. The TW research is important but serves as only one element of the overall effort; it provides a reference point of solid research that can be used as a starting place both for envisioning the future and for positively impacting the life of the UMC. It was reviewed by the team and also by the team’s outside consultant, who has a long and proven history of receiving and assessing such information – evaluating both methods and findings. And yes, the TW team was made up of several PhDs.

So, based on these responses, what are your concerns or continuing questions about the Call to Action report and how it is being spun by the Council of Bishops and others in the church?

4 thoughts on “More Call To Action Information”

  1. Jay, thank you for sending this off and getting the answers.

    Mr. Alexander directs us to the TW report, but I’ve looked through that and I have not seen the actual reporting of the regression coefficients, effect sizes, or p-values that are absolutely essential in judging whether a particular statistical correlation is a big deal or a tiny blip. When the report says factors have a significant effect on vitality (as defined in the report), how big an effect are we talking about? What are the numbers?

    In addition, the TW report (appendix 1) used a statistical technique called Factor Analysis to create the categories that then sorted churches into high, average, and low vitality, but it does not report the factor loadings used to construct those categories. The use of ANOVA for hypothesis testing raises the same sorts of questions, but in some ways more so. To make concrete the kind of numbers I am asking for, here is a toy example with made-up data: an ANOVA plus a basic effect-size measure, which is the sort of detail a full statistical report would include.
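
```python
# Toy one-way ANOVA across three simulated "vitality" groups, to make
# concrete what numbers a full report would include. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
low = rng.normal(3.0, 1.0, size=200)   # e.g., small-group counts per church
avg = rng.normal(3.3, 1.0, size=200)
high = rng.normal(3.6, 1.0, size=200)

F, p = stats.f_oneway(low, avg, high)

# Eta-squared: the share of total variance explained by group membership,
# a basic effect-size measure that a bare "p < .05" conceals.
grand = np.concatenate([low, avg, high])
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in (low, avg, high))
ss_total = ((grand - grand.mean()) ** 2).sum()
print(f"F = {F:.2f}, p = {p:.3g}, eta^2 = {ss_between / ss_total:.3f}")
# The F statistic and p say the group means differ; eta^2 says by how much.
```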

    All statistical tests rest on a whole host of assumptions and requirements in order to be valid, and the results of most real-world statistical analyses are subject to much interpretation. What we seem to have in the TW report – and then amplified in the Steering Committee report – is one interpretation, presented without consideration of possible alternatives.

    My experience with social data like this tends to be that the case that can actually be defended by the data analysis is highly nuanced and often quite limited in its scope. You have to hedge and qualify your claims heavily because the statistics indicate a lot but also leave open lots of questions. Of course, all the hedging and qualifying does not lend itself to “firing up the troops” very well.

    The TW report has almost nothing but bare percentages. I’m sure the consultants did a lot of number crunching that they did not put in that report, but to treat their conclusions as holy writ without the underlying statistical report does not make sense to me.
