The Handbook of Conflict Resolution (3rd ed)

Page 153

by Peter T. Coleman


  There are two main ways in which successful collaboration with practitioners increases the likelihood that research findings are used. First, participation usually raises the practitioners’ interest in the research and its possible usefulness. Second, collaboration with practitioners helps to ensure that the research is relevant to problems as they appear in the actual work of the practitioners and the functioning of the organization in which their practice is embedded.

  However, there are many potential sources of difficulty in this collaboration. It is time-consuming and hence often burdensome and expensive to both the practitioners and researchers. Also, friction may occur because of the disparate goals and standards of the two partners: one is concerned with improving existing services, the other with advancing knowledge of a given phenomenon. The practitioner may well become impatient with the researcher’s attempt to have well-controlled independent variables and the intrusiveness involved in extensive measuring of dependent variables. The researcher may become exasperated with the practitioner’s improvisation and reluctance to sacrifice time from other activities to achieve the research objectives. In addition, there is often much evaluation apprehension on both sides: the practitioners are concerned that, wittingly or unwittingly, they will be evaluated by the research findings; the researchers fear that their peers will view their research as not being sufficiently well controlled to have any merit.

  AUDIENCES FOR RESEARCH

  There are several audiences for research: foundations and government agencies, executives and administrators who decide whether a CRI will take place in their organization, CR practitioners, and researchers who do one or more of the types of research described above. The audiences rarely have identical interests.

  Funding Agencies

  Our sense is that most private foundations are less interested in supporting research than they are in supporting pilot programs, particularly if such programs focus on preventing violence. Their interest in research is mainly oriented to evaluation and answering the question: Does it work? Many government agencies have interests that are similar to those of private foundations. However, some domestic agencies, such as the National Science Foundation and the National Institute of Mental Health, are willing to support basic and developmental research if the research is clearly relevant to their mission.

  Internationally, as humanitarian organizations integrate CRIs into their work, the need to evaluate CRIs for the purposes of reporting the results to funders of humanitarian organizations has become a significant and challenging aspect of CR work (Church and Shouldice, 2002; Culbertson, 2010; Hunt and Hughes, 2010). For example, while funders may be accustomed to evaluations of humanitarian programs that use immediate, concrete measures such as the number of people who participated in an initiative, a more accurate indicator of success for CRIs may be the long-term impact on the larger community. Working with funding agencies to reconcile the methods used to evaluate the short-term outcomes and long-term impacts of humanitarian-related CRIs can prove a challenging but worthy task.

  With respect to the type of evaluation research needed, we suggest that there is enough credible evidence to indicate that CRIs can have positive effects. The appropriate question now is under what conditions such effects are most likely to occur—for example, who benefits, and how, as a result of participating in what type of initiative, with what type of practitioner, under what kind of circumstance? That is, the field of conflict resolution has advanced beyond the need to answer the oversimplified question, “Does it work?” It must address the more complicated questions discussed in the section on types of research—particularly the questions related to developmental research.

  Executives and Administrators

  The executive and administrative audience is also concerned with the question “Does it work?” Depending on their organizational setting, they may have different criteria in mind in assessing the value of CRIs. A school administrator may be interested in such criteria as the incidence of violence, disciplinary problems, academic achievement, social and psychological functioning of students, teacher burnout, and cooperative relations between teachers and administrators. A corporate executive may be concerned with manager effectiveness, ease and effectiveness of introducing technological change, employee turnover and absenteeism, organizational climate, productivity, and the like.

  It is fair to say that with rare exceptions, CRI researchers and practitioners have not developed the detailed causal models that would enable them to specify and measure the mediating organizational and psychological processes linking CRIs to specific organizational or individual changes. Most executives and administrators are not much interested in causal models. However, it is important for practitioners and researchers to be aware that the criteria of CRI effectiveness often used by administrators—incidence of violence, academic achievement, employee productivity—are affected by many factors other than CRIs. CRIs may, for example, be successful in increasing the social skills of students, but a sharp increase in unemployment, a significant decrease in the standard of living, or greater use of drugs in the students’ neighborhood may lead to deterioration of the students’ social environment rather than the improvement one can expect from increased social skills. The negative impact of such deterioration may counteract the positive impact of CRIs.

  One would expect executives and administrators to be interested in knowing not only whether CRIs produce the outcomes they seek but also whether they are more cost-effective in doing so than alternative interventions. Some research has evaluated the effectiveness of alternative dispute resolution procedures, such as mediation (see chapter 34) compared to adjudication, but otherwise little research has examined the cost-effectiveness of CRIs.

  Practitioners

  Conflict resolution practitioners often have questions about the degree to which their work successfully affects both individual and institutional change. With regard to each focus, practitioners have articulated a need for measuring instruments that they can use to assess the effectiveness of their work. Such instruments could be of particular value to them in relation to funding agencies and policymakers. Practitioners often feel that the methods they use during their training and consulting to check on the effects of their work are more detailed and sensitive than the typical questionnaires used in evaluations. Their own methods may be more useful to them, even if these are less persuasive to funding agencies. Much general value could be gained from a study of the implicit theoretical models underlying the work of practitioners, as well as a study of how practitioners go about assessing the impact of what they are doing.

  Practitioners’ focus on individual change tends to be concerned with such issues as these:

  How much transfer of knowledge and skill is there from the conflict resolution training, workshop, or encounter to the participants’ other social contexts? How long do the effects of CRIs endure? What factors affect transfer and long-term outcomes?

  How can CRIs be responsive to individual differences among participants in personality, intelligence, age level, social class, ethnic group, gender, and religion?

  How important is similarity in sociocultural background between practitioner and participant in promoting effective CRIs? Are well-trained junior or student practitioners particularly effective in training other participants or students?

  What models of training are being employed among trainers?

  Can levels of expertise be characterized? How long and how extensive must training be to reach these levels?

  What selection and training procedures should be employed with regard to participants? With regard to trainers of trainers?

  At what age are the effects of CRIs most likely to take hold?

  The focus on institutional change is concerned with other questions:

  In schools and communities, what set of adults and other community members—for example, administrators, teachers, parents, staff, and guards—should participate in CRIs if students’ learning is to take hold? Must other community institutions be involved, such as the church, police, health providers, and other community agencies?

  What are the most effective models for institutionalizing CRIs in schools, universities, communities, and at the political level?

  What changes in a CRI’s structure, pedagogical approach, and culture are typically associated with a significant institutional change?

  What critical mass of community or political involvement is necessary for systemic change?

  It is evident that the issues raised by the practitioners are important but complex and not readily answerable by a traditional research approach. In addition, the complexity suggests that each question contains a nest of others that have to be specified in greater detail before they are accessible to research.

  Researchers

  Psychologically, other researchers are usually the most important audience for one’s research. If your research does not meet the standards established for your field of research, you can expect it to be rejected as unfit for publication in a respected research journal. This may harm your reputation as a researcher—and may make tenure less likely if you are a young professor seeking it. This may be true even if funding agencies, administrators, and practitioners find the research to be very useful to them.

  The research standard for psychology and many other social sciences is derived from the model of the experiment. If one designs and conducts an experiment ideally, one has created the best conditions for obtaining valid and reliable results. In research, as in life, the ideal is rarely attainable. Researchers have developed various procedures to compensate for deviation from the ideal in their attempt to approximate it. However, there is a bias in the field toward assuming that research which merely looks like an experiment (for example, because it has control groups and before- and after-intervention measurements) but is not one, because it lacks randomization or has too few cases (more on this later), is inherently superior to other modes of approximation. We disagree. In our view, each mode has its merits and limitations and may be useful in investigating a certain type of research question but less so in another.

  We suggest three key standards for research: (1) the mode of research should be appropriate to the problem being investigated, (2) it should be conducted as well as humanly possible given the available resources and circumstances, and (3) it should be knowledgeable and explicit about its limitations.

  RESEARCH STRATEGIES

  Many factors make it very difficult to do research on the questions outlined in the previous sections, particularly the kind of idealized research that most researchers prefer to do (see chapter 42). For example, it is rarely possible to randomly assign students (or teachers, or administrators) to be trained (or not trained) by randomly assigned expert trainers employing randomly assigned training procedures. Even if this were possible in a particular school district, one would face the possibility that the uniqueness of the district has a significant impact on the effectiveness of training; no single district can be considered an adequate sample of some or all other school districts. To employ an adequate sample (which is necessary for appropriate statistical analysis) is very costly and probably neither financially nor administratively feasible.

  Given this reality, what kind of research can be done that is worth doing? Here we outline several mutually supportive research strategies of potential value.

  Experimental and Quasi-Experimental Research

  Experimental research involves small-scale studies that can be conducted in research laboratories, experimental classrooms, or experimental workshops. It is most suitable for basic or developmental research questions that are specific about what is to be investigated. Thus, such approaches would be appropriate if one sought to test the hypothesis that role reversal does not facilitate constructive conflict resolution when the conflict is about values (such as euthanasia) but does when it centers on interests. Similarly, they would be appropriate if one wished to examine the relative effectiveness of two different methods of training in improving such conflict resolution skills as perspective taking and reframing.

  This kind of research is most productive if the hypothesis or question being investigated is well grounded in theory or in a systematic set of ideas rather than when it is ad hoc. If well grounded, such research has implications for the set of ideas within which it is grounded and thus has more general implications than testing an ad hoc hypothesis does. One must, however, be aware that in all types of hypothesis-driven research, the results from the study may not support the hypothesis—even when the hypothesis is valid—because implementation of the causal variables (such as the training methods), measurement of their effects, or the research design may be faulty. Generally it is more common to obtain nonsignificant results than to find support for a hypothesis. Thus, practitioners have good reason to be concerned about the possibility that such research may make their efforts appear insignificant even though their work is having important positive effects.
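
  A small simulation illustrates why valid hypotheses so often yield nonsignificant results. The effect size and sample sizes below are invented for illustration, not drawn from the chapter; the point is only that a real but moderate effect, studied with the small samples typical of a single workshop or classroom, usually fails to reach statistical significance:

```python
import numpy as np

rng = np.random.default_rng(42)

n_per_group = 20      # small sample typical of a single workshop study
true_effect = 0.4     # a real, moderate effect (in standard-deviation units)
n_sims = 2000
t_crit = 2.024        # two-sided 5% critical value of t with df = 38

significant = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    trained = rng.normal(true_effect, 1.0, n_per_group)
    # two-sample t statistic with pooled variance
    sp2 = (control.var(ddof=1) + trained.var(ddof=1)) / 2
    t = (trained.mean() - control.mean()) / np.sqrt(sp2 * 2 / n_per_group)
    if abs(t) > t_crit:
        significant += 1

power = significant / n_sims
print(f"Power with n={n_per_group} per group: {power:.2f}")
```

  With these illustrative numbers, only roughly a quarter of such studies would detect the effect, so a nonsignificant result by itself says little about whether the intervention works.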

  In good conscience, one other point must be made: it is very difficult and perhaps impossible to create a true or pure experiment involving human beings. The logic involved in true experiments assumes that complete randomization has occurred for all other variables except the causal variables being studied. However, human beings have life histories, personalities, values, and attitudes prior to their participation in a conflict workshop or experiment. What they bring to the experiment from their prior experience may not only influence the effectiveness of the causal variables being studied but also be reflected directly in the measurement of the effects of these variables. Thus, an authoritarian, antidemocratic, alienated member of the Aryan Nation Militia Group may not only be unresponsive to participation in a CRI but also, independent of this, score poorly on such measures of the effectiveness of the CRI as ethnocentrism, alienation, authoritarianism, and control of violence, because of his or her initial attitudes. Such people are also less likely to participate in CRIs than democratic, nonviolent, and nonalienated people. The latter are likely to be responsive to CRIs and, independent of this, to have good scores on egalitarianism, nonviolence, lack of ethnocentrism, and the like, which also reflect their initial attitudes.

  With appropriate “before” measures and correlational statistics, it is possible to control for much (but far from all) of the influence of initial differences in attitudes on the “after” measures. In other words, a quasi-experiment that has some resemblance to a true experiment can be created despite the prior histories of the people who are being studied.
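
  A minimal sketch of this adjustment, using simulated data, may make the logic concrete. The selection mechanism, effect size, and all numbers below are invented for illustration; the point is only that including the “before” measure as a covariate removes much of what a naive group comparison confounds:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# "Before" measure: initial authoritarianism, standardized
pre = rng.normal(0.0, 1.0, n)
# Self-selection: less authoritarian people are more likely to join the CRI
joins = rng.random(n) < 1.0 / (1.0 + np.exp(pre))
true_effect = -0.5  # suppose the CRI really lowers authoritarianism by half an SD
post = pre + true_effect * joins + rng.normal(0.0, 0.5, n)

# Naive comparison confounds the CRI's effect with who chose to join
naive = post[joins].mean() - post[~joins].mean()

# Regression adjustment: include the "before" measure as a covariate
X = np.column_stack([np.ones(n), joins.astype(float), pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted = beta[1]

print(f"naive estimate:    {naive:+.2f}")
print(f"adjusted estimate: {adjusted:+.2f}   (true effect {true_effect:+.2f})")
```

  In this sketch the naive difference greatly overstates the CRI’s effect, because the less authoritarian were more likely to join, while the adjusted estimate comes close to the true effect. As noted above, such adjustment controls for much, but far from all, of the influence of initial differences.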

  Causal Modeling

  Correlations by themselves do not readily permit causal inference. If you find a negative correlation between amount of exposure to CRIs and authoritarianism, as we have suggested, it may be that those who are authoritarian are less likely to expose themselves to CRIs or that those who have been exposed to CRIs become less authoritarian or that the causal arrow may point in both directions. It is impossible to tell from a simple correlation. However, methods of statistical analysis developed during the past several decades (and still being refined) enable one to appraise with considerable precision how well a pattern of correlations within a set of data fits an a priori causal model. Although causal modeling and experimental research are a mutually supportive combination, causal modeling can be employed even if an approximation to an experimental design cannot be achieved. This is likely to be the case in most field studies.

  Consider, for example, a study we conducted on the effects of training in cooperative learning and conflict resolution on students in an alternative high school (Deutsch, 1993; Zhang, 1994). Prior theoretical analysis (Deutsch, 1949, 1973; Johnson and Johnson, 1989), as well as much experimental and quasi-experimental research (see Johnson and Johnson, 1989, for a comprehensive review), suggested what effects such training could have and also suggested the causal process that might lead to these effects. Limitation of resources made it impossible to do the sort of extensive study of many schools required for an experimental or quasi-experimental study or to employ the statistical analysis appropriate to an experiment. Therefore, we constructed a causal model that in essence assumed training in cooperative learning or conflict resolution would improve the social skills of a student. This in turn would produce an improved social environment for the student (as reflected in greater social support and less victimization from others), which would lead to higher self-esteem and greater sense of personal control over one’s fate. The increased sense of control would enhance academic achievement. It was also assumed that improvement in the student’s social environment and self-esteem would lead to an increased positive sense of well-being, as well as decreased anxiety and depression. The causal model indicated what we had to measure. Prudence suggested that we also measure many other things that potentially might affect the variables on which the causal model focused.
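
  The logic of checking an a priori causal chain against a pattern of correlations can be sketched with simulated data. The variable names below follow the model just described, but the chain is simplified, and the coefficients and data are invented for illustration; a real analysis would fit the full model with structural equation modeling software:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Illustrative causal chain (coefficients are made up, not the study's data):
# training -> social skills -> social environment -> self-esteem
training = rng.normal(0.0, 1.0, n)
skills = 0.6 * training + rng.normal(0.0, 0.8, n)
environment = 0.5 * skills + rng.normal(0.0, 0.8, n)
esteem = 0.7 * environment + rng.normal(0.0, 0.7, n)

def slope(x, y):
    """OLS slope of y on x (single predictor with intercept)."""
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

# Estimate each path in the model separately
a = slope(training, skills)
b = slope(skills, environment)
c = slope(environment, esteem)

# If the chain model holds, the product of the estimated paths should
# reproduce the total effect of training on self-esteem
indirect = a * b * c
total = slope(training, esteem)
print(f"product of paths: {indirect:.3f}, total effect: {total:.3f}")
```

  When the a priori chain is correctly specified, the product of the paths closely matches the total effect; a large discrepancy between the two would signal that the pattern of correlations does not fit the proposed causal model.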

  The results of the study were consistent with our causal model. Although the study was quite limited in scope—having been conducted in only one alternative high school—the results have some general significance. They are consistent with existing theory and also with prior research conducted in very different and much more favorable social contexts. The set of ideas underlying the research appears to be applicable to students in the difficult, harsh environment of an inner-city school as well as to students in well-supported, upper-middle-class elementary and high schools.

 
