Neil Alexander is the President and Publisher of the United Methodist Publishing House and a member of the Steering Committee that produced the Council of Bishops' Call to Action report, which I recently wrote about. Neil e-mailed me his thoughts on that post earlier today and has agreed to let me share them in the interest of furthering the conversation.
Hi Jay:
I read your November 8 blog "What is Congregational Vitality" with interest. What you point to and worry about are matters congruent with topics and assessments the Steering Team grappled with and began to address. Perhaps your sense of the deficits of the research and findings would be moderated if we did a better job of recounting the steps in our thinking about the design of the research and the subsequent findings that led to our recommendations.
As we report, the Towers Watson researchers found that people said over and over again just what you have cited: there is an "it" factor in seeing or defining congregational vitality, because "I know it when I see it." In the FAQ section of the report, which you scanned, there is a fairly complete discussion of why three things make it at least difficult and perhaps impossible to have anything like a "perfect" or sufficient definition of vitality: the truth of folks' experience, the reality of diversity (settings, substantive content, tone, etc.), and the fact that some very important types of quantifiable data about aspects of congregational life have not been routinely and reliably gathered and reported.
Thus the "indicators" we employed are indeed reports of selected activities in local churches that have been collected for years. We saw this as a plus, not a minus. These data are available for almost all of our churches, they have been collected with reasonable consistency, and they have been similarly reported over time. Those are requirements for the kind of data mining we pursued. Data mining is not the only valid way to seek information and insight, but it is an underutilized additional tool that we believe can add to the pool of information we draw from to shape our work.
What we asked is this: Is it reasonable to believe that vital congregations (a state that we acknowledge we can’t fully define) would tend over time (five years) to exhibit positive trends in these reported statistics? We are easily persuaded that there are surely many additional attributes and fruits, but would a good collection of factors tend to ALSO include this available information? We concluded that the answer is yes and that these statistics represent a reasonable and useful set of indicators that we can begin with. They are not complete, they do not include all we want to know, but they provide a useful, available and practical starting place. In that way, the indicators listed in the Towers Watson research report became "proxies" for signs of vitality. They are not synonymous with vitality, they are not at all the sum total of the fruits of vitality, but they are clues and point to vitality.
One of the recommendations is that a good deal more research needs to be done, continually, and the implications pursued over time to determine which indicators would best help us spot and understand the holy mystery of the nature of vitality, and that we should be vigorously and persistently collecting, reporting, and discussing the implications of that information. But that's work for later today and tomorrow. What we set out to do "in the moment" we had was to use already available information to get at some important findings we could start with.
What we learned through mining thousands of points of data for 33,000 congregations is that there are behaviors found prominently and repeatedly in churches that trend up over time in the key indicators (professions of faith, worship attendance, benevolent giving, etc.). Some find such mining inherently offensive because it presumably credits objective over subjective information, so I note for the record that we see this as a tool and not "the answer"; a tool that must be used side by side with other tools, some of which look a lot more like "art" than "science."
The purpose is not to design the one grand program that fits all — it is instead to foster consideration about the implications and prompt us to work together to think about creative and contextually relevant ways to leverage what is learned. The purpose is NOT to say to pastors and laity leadership that if you have multiple small groups you will necessarily have a vital church. That’s obviously inadequate and also not supported by the research. The research shows that there are multiple behaviors that are seen over and over again in churches that show growth over time in some of the important indicators of vitality and that these work together to cultivate desired outcomes.
We assume that there are additional critically important "drivers" of vitality that we could not collect in a consistent way for this particular research project, and we call for help to be provided to church leaders across the Connection in analytical as well as subjective ways of discerning and learning, so as to build shared understandings, foster Wesleyan values, and shape behaviors in light of what's learned.
There is more to be said and done, and that's why we see the actions taken by the Council of Bishops and Connectional Table as starting places, not culminations of this endeavor. But we want to lean forward from starting places that actually launch us on pathways to a NEW place (more vital congregations over time), and we need many to join in the journey and to engage in ways that help us walk, learn, and change together!
I appreciate Neil's comments and, as I told him, I understand the methodology and the need that drove the task force to use certain metrics. The danger and fear lie in the potential for misusing the "drivers of vitality" as universal programmatic foci for all UM congregations, a one-size-fits-all solution that fails to recognize the unique context of each congregation.
How would you respond to Neil’s comments?
I'd be interested in learning from anyone on the committee why the four "drivers" have been given a name that implies cause and effect, when I'm pretty sure the research demonstrates an association rather than a causal relationship. I may be wrong. It's been a number of years since I took stats and research methods courses.
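For anyone who, like me, is rusty on the statistics, here is a toy simulation in Python that shows why the distinction matters. The numbers and variable names are entirely hypothetical, not drawn from the Towers Watson data: two measures can correlate strongly simply because a hidden third factor drives both, with no causal link between them.

```python
# Toy illustration only: made-up data, not the Towers Watson study.
# A hidden confounder ("resources") drives both observed measures,
# so they correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical confounder, e.g., a congregation's overall resources.
resources = rng.normal(size=n)

# Both measures depend on the confounder, not on each other.
small_groups = 2.0 * resources + rng.normal(size=n)
attendance_growth = 1.5 * resources + rng.normal(size=n)

r = np.corrcoef(small_groups, attendance_growth)[0, 1]
print(f"correlation: {r:.2f}")  # about 0.74 here, with no causation at all
```

A data-mining pass over observational church statistics can surface exactly this kind of pattern, which is why "drivers" strikes me as a name that overclaims.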
I don't understand how the report places such importance on a mix of traditional and contemporary worship when the chart provided in the Towers report shows that mixed worship styles are just as likely to indicate low vitality as high vitality in two out of three church-size categories and overwhelmingly indicate low vitality in the smallest category. I suspect the issue is comparison to baseline vitality, but at best what the data appear to show is a more likely association with high vitality; how much more likely is not discussed. This is hardly a 1:1 correlation. Putting all the eggs in the mixed-worship basket based on a small-to-moderate effect seems unwarranted. (Statistical significance does not equate to real-world significance; it just means the result is probably not due to chance.)
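To make that parenthetical concrete, here is a second hedged sketch, again with simulated numbers rather than the report's data: with a sample on the order of the 33,000 congregations that were mined, even a trivially small difference between groups comes out "statistically significant."

```python
# Toy illustration only: simulated numbers, not the report's data.
# With very large samples, a tiny effect still yields a tiny p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 33_000  # roughly the number of congregations mined

# Two hypothetical groups differing by a trivially small amount
# (about 0.05 standard deviations on some vitality score).
mixed_worship = rng.normal(loc=0.05, scale=1.0, size=n // 2)
single_style = rng.normal(loc=0.00, scale=1.0, size=n // 2)

t_stat, p_value = stats.ttest_ind(mixed_worship, single_style)
effect = mixed_worship.mean() - single_style.mean()  # in SD units
print(f"p-value: {p_value:.2g}, effect size: {effect:.3f}")
# The p-value lands far below 0.05 even though the practical
# difference between the groups is negligible.
```

That is why a report built on a sample this large needs to discuss effect sizes, not just whether results cleared a significance threshold.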
I'm curious why, if 50% of high-vitality churches use lectionary preaching, the report suggests topical preaching is the best route to high vitality.
I wonder if the presence of lots of small groups and youth programs is a consequence rather than a cause of high vitality. The same goes for the percentage of spiritually engaged laity. And I wonder whether we are even curious about whether that question matters.
I wonder why the report never defines what we mean by “making disciples” but assumes that “high vitality” churches do that.
I wonder why the report never speaks in any depth about the caveats, limitations, and reasonable questions that the methodology and research decisions might give rise to. In any social science journal, the authors would be expected to present a candid assessment of the limits and gray areas. We are given neither these nor any actual statistical reporting.
Are any of the steering team members qualified to critically assess this type of research and ask tough questions about the methodology and its assumptions? I know we have many laity with Ph.D.s who do this kind of research. I ask because the report reads much more like a sales job to me than a presentation of research findings and conclusions. It uses lots of words meant to impress us with the soundness of the research and the iron-clad nature of the findings. My limited experience is that social science research is always open to interpretation and beset with many caveats and conditions. (What I've seen in the appendices does not provide the kind of detail necessary to make that analysis.)
These questions are the first ones that come to mind.
Very well put. I’ll forward Neil the questions.