The Handbook of Conflict Resolution (3rd ed)


by Peter T. Coleman


  Nonexperimental field research may be exploratory research, testing of a causal model, or some combination of both. Exploratory research is directed at describing the relations among variables and developing the set of ideas that underlie a causal model. Typically it is inappropriate to test a causal model with the same data collected to stimulate its development. Researchers are notoriously ingenious at developing ex post facto explanations of data they have obtained, no matter how their studies have turned out. A priori explanations are much more credible. This is why nonexploratory research must be well grounded in prior theory and research if its design is to bear clearly on the general ideas embedded in the causal model. However, even if a study is mainly nonexploratory, exploratory data may be collected so as to refine one’s model for future studies.

  Survey Research

  This form of research is widely used in market research; preelection polling; opinion research; research on the occurrence of crime; and collection of economic data on unemployment, inflation, sales of houses, and so on. A well-developed methodology exists concerning sampling, questionnaire construction, interviewing, and statistical analysis. Unfortunately, little survey research has taken place in the field of conflict resolution. Some of the questions that could be answered by survey research have been discussed earlier, under the heading of consumer research. It is, of course, important to know about the potential (as well as existing) consumers of CRIs. Similarly, it is important to know about current CR practitioners: their demographics, their qualifications to practice, the models and frameworks they employ, how long they have practiced, the nature of their clientele, the goals of their work, and their estimation of the degree of success.

  Experience Surveys

  Experience surveys are a special kind of survey, involving intensive in-depth interviews with a sample of people, individually or in small focus groups, who are considered to be experts in their field. The purpose of such surveys may be to obtain insight into the important questions needing research through the experts’ identification of gaps in knowledge or through the opposing views among the experts on a particular topic. In addition, interviewing experts, prior to embarking on a research study, generally improves the researcher’s practical knowledge of the context within which her research is conducted and applied and thus helps her avoid the minefields and blunders into which naiveté may lead her.

  More important, experts have a fund of knowledge, based on their deep immersion in the field, that may suggest useful, practical answers to questions that would be difficult or infeasible to answer through other forms of research. Many of the questions mentioned earlier under the heading of field research are of this nature. Of course, one’s confidence in the answers of the experts is eventually affected by how much they agree or disagree.

  There are several steps in an experience survey. The first is to identify the type of expert to survey. For example, with respect to CRIs in schools, one might want to survey practitioners (the trainers of trainees), teachers who have been trained, students, or administrators of schools in which CRIs have occurred. The second step is to contact several experts of the type you wish to interview and have them nominate other experts, who in turn nominate other experts. After several rounds of such nominations, a group of nominees usually emerges as being widely viewed as experts. The third step is to develop an interview schedule. This typically entails formulating a preliminary one that is tried out and modified as a result of interviews with a half-dozen or so of the experts individually and also as a group. The revised schedule is formulated so as to ask all of the questions one wants to have answered by the experts, while leaving the expert the opportunity to raise issues and answer the questions in a way that was not anticipated by the researcher.
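The nomination rounds in the second step amount to a simple snowball-sampling loop: interview a few seed experts, collect their nominations, interview the newly named people, and stop when no new names emerge. A minimal sketch (the nomination network, round limit, and nomination threshold are hypothetical, chosen only to illustrate the convergence idea):

```python
from collections import Counter

def snowball_experts(seed_experts, nominate, rounds=3, min_nominations=2):
    """Run rounds of expert nomination and return the names that emerge
    as widely viewed experts (i.e., nominated repeatedly)."""
    counts = Counter()
    current = set(seed_experts)
    seen = set(seed_experts)
    for _ in range(rounds):
        nominees = set()
        for expert in current:
            for name in nominate(expert):
                counts[name] += 1
                nominees.add(name)
        current = nominees - seen  # interview only newly named people next round
        seen |= nominees
        if not current:            # no new names: the nominee pool has converged
            break
    return [name for name, n in counts.items() if n >= min_nominations]

# Hypothetical nomination network for illustration.
network = {
    "A": ["C", "D"], "B": ["C", "E"],
    "C": ["D", "E"], "D": ["E"], "E": ["D"],
}
widely_viewed = snowball_experts(["A", "B"], lambda e: network.get(e, []))
```

Here the seed experts A and B are never counted as nominees themselves; only C, D, and E accumulate nominations, so they form the emergent expert group.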

  Many years ago, Deutsch and Collins (1951) conducted an experience survey of public housing officials prior to conducting a study of interracial housing. The objective was to identify the important issues that could be the focus of a future study. It led to a study of the effects of the occupancy patterns: whether the white and black tenants were housed in racially integrated or racially segregated buildings in a given housing project. In addition, the survey created a valuable handbook of the various other factors that, in the officials’ experiences, affected race relations in public housing. It was a useful guide to anyone seeking to improve race relations in public housing projects.

  Although it is possible for the experts to be wrong—to have commonly held, mistaken, implicit assumptions—their articulated views are an important starting point as either constructive criticism or a guide to informed practice.

  Learning by Analogy

  Not only can the conflict resolution field learn from its experienced practitioners, it can also learn from the work done in other closely related areas. Many of the issues involved in CRIs have been addressed in other areas: transfer of knowledge and skills is of considerable concern to learning theorists and the field of education generally; communication skills have been the focus of much research in the fields of language and communication, as well as social psychology; anger, aggression, and violence have been studied extensively by various specialties in psychology and psychiatry; and there is an extensive literature related to cooperation and competition. Similarly, creative problem solving and decision making have been the focus of much theoretical and applied activity. Terms such as attitude change, social change, culture change, psychodynamics, group dynamics, ethnocentrism, resistance, perspective taking, and the like are common to CRIs and older areas. Although the field of conflict resolution is relatively young, it has roots in many well-established areas and can learn much from the prior work in these areas. The purpose of this Handbook is, of course, to provide knowledge of many of these relevant areas to those interested in conflict resolution.

  As an educational and social innovation, CRIs in the form of training, workshops, and intergroup encounters are also relatively young. There is, however, a vast literature on innovation in education and the factors affecting the success or failure of institutionalizing an innovation in schools. In particular, cooperative learning, which is conceptually closely related to CR training, has accumulated a considerable body of experience that might help CR practitioners understand what leads to success or failure in institutionalizing a school program of CR training.

  RESEARCH EVALUATING CONFLICT RESOLUTION INITIATIVES

  In 1995, Deutsch wrote, “There is an appalling lack of research on the various aspects of training in the field of conflict resolution” (p. 128). The situation has been improving since then. For example, there is now much evidence from school systems of the positive effects of conflict resolution training on the students who were trained. Most of the evidence is based on evaluations by the students, teachers, parents, and administrators. In Lim and Deutsch’s international study (1997), almost all institutions surveyed reported positive evaluations by each of the populations filling out questionnaires. Similar results are reported in evaluations made for school programs in Minnesota, Ohio, Nevada, Chicago, New York City, New Mexico, Florida, Arizona, Texas, and California (see Bodine and Crawford, 1998; Johnson and Johnson, 1995, 1996; Lam, 1989; Flannery et al., 2003; Stevahn, Johnson, Johnson, and Schultz, 2002).

  While research evaluating CRIs may have begun primarily with research conducted on conflict resolution training, in the last fifteen years conflict resolution evaluation research has expanded to include the development of tools, methodologies, and research conducted on a range of initiatives, including interactive conflict resolution workshops involving politically influential parties from both sides in international conflicts (see d’Estree et al., 2001; Fisher, 1997; Kelman, 1995, 1998), interethnic encounter groups (see Abu-Nimer, 1999, 2004; Maoz, 2004, 2005; Bekerman and McGlynn, 2007), and peace-building activities (see Lederach, 1997; Zartman, 2007), to name a few. In this section, we offer a brief overview of some of the methodologies and instruments developed and research conducted over the past few years. We begin with an example of an instrument created by d’Estree et al. (2001) to assess the short-, medium-, and long-term impacts of interactive conflict resolution and other similar initiatives.

  A Framework for Comparative Case Analysis of Interactive Conflict Resolution

  D’Estree et al. (2001) created a framework, grounded in theory and practice, designed to be used as a tool for evaluating CRIs. While the framework was developed to address interactive problem-solving workshops (see Kelman, 1995, 1998; Fisher, 1997), it can be modified to address the particular goals of other types of CRIs as well.

  The framework has four categories, and each category contains a set of criteria for assessing CRIs. The first category, changes in thinking, includes criteria regarding various types of new knowledge that participants may gain from involvement in CRIs, such as the degree to which participants are able to attain deeper understanding of conflicts, expand their perspective of others, frame problems and issues productively, problem-solve, and communicate effectively. The second category, changes in relations, includes various indicators that the relationship between the parties in conflict has changed, such as the extent to which parties are better able to engage in empathetic behavior, validate and reconceptualize their identities, and build and maintain trust with the other side. The third category, foundations for transfer, includes criteria for assessing how well a CRI establishes a platform for transferring the learning to participants’ home communities once the CRI has ended. The criteria in this category include the extent to which participants have created artifacts (e.g., documents describing agreements, plans for future negotiations, joint statements) and put in place structures for implementing new ideas, and the extent to which the CRI has helped create new leadership. The fourth category, foundations for outcome or implementation, includes criteria that assess the extent to which the CRI contributed to medium- and long-term achievements between the parties. Such criteria include the degree to which relationship networks have been created, reforms in political structures have occurred, new political input and processes have been created, and increased capacity for jointly facing future challenges can be demonstrated. It is important to note that the categories and accompanying criteria are interrelated, not mutually exclusive, and are not meant to be used in a linear fashion.

  The framework also includes a matrix that differentiates between temporal phases of impact and societal levels of intervention. The temporal phases of impact are the promotion phase, in which a CRI attempts to promote or catalyze certain effects (assessed during the CRI); the application phase, in which attempts are made to apply or implement the effects of the CRI in the parties’ home environments (assessed in the short term after the CRI takes place); and the sustainability phase, in which the medium- and long-term effects of the CRI are assessed. The societal levels of intervention enable evaluators to distinguish between effects that occur at the individual (micro) level, the societal (macro) level, and the community (meso) level, at which the transfer of effects from the individual to the societal level often takes place. D’Estree et al. (2001) suggest using a variety of unobtrusive methods to collect data along the dimensions of their proposed framework, including interviews, surveys, observations, content analysis, and discourse analysis.
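The category list and the phase-by-level matrix lend themselves to a simple coding structure for tallying observed evidence. The category, phase, and level names below follow the text; the abbreviated criteria and the tallying scheme itself are a hypothetical sketch, not d’Estree et al.’s actual instrument:

```python
# The four categories of the framework, with abbreviated criteria.
CATEGORIES = {
    "changes in thinking": ["deeper understanding", "expanded perspective",
                            "productive framing", "problem solving",
                            "effective communication"],
    "changes in relations": ["empathetic behavior",
                             "identity validation and reconceptualization",
                             "trust building"],
    "foundations for transfer": ["artifacts", "implementation structures",
                                 "new leadership"],
    "foundations for outcome/implementation": ["relationship networks",
                                               "political reforms",
                                               "new political processes",
                                               "joint capacity"],
}

PHASES = ["promotion", "application", "sustainability"]  # temporal phases of impact
LEVELS = ["micro", "meso", "macro"]                      # societal levels of intervention

# Evidence tallies indexed by (phase, level) — one cell per matrix entry.
matrix = {(p, l): 0 for p in PHASES for l in LEVELS}

def record_observation(phase, level):
    """Tally one coded observation in the phase-by-level matrix."""
    matrix[(phase, level)] += 1

record_observation("promotion", "micro")   # e.g., an insight voiced during the workshop
record_observation("application", "meso")  # e.g., a community-level follow-up activity
```

Keeping phases and levels as separate axes makes it easy to see, for instance, whether all the evidence clusters at the individual level during the workshop itself, with little transfer to the meso or macro levels afterward.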

  The Action Evaluation Research Initiative

  Another methodology that has been developed to evaluate a wide range of CRIs is called action evaluation research (Ross, 2001; Rothman, 1997, 2005; Rothman and Friedman, 2005; Rothman and Land, 2004; Rothman and Dosik, 2011). Action evaluation research refers to a process of creating alignment and clarification about the goals of a CRI with a variety of stakeholders as a way of monitoring and assessing the successful implementation of a CRI. The action evaluation process centers on three main sets of questions: (1) What long- and short-term outcome goals do various stakeholders have for this initiative? (2) Why do the stakeholders care about the goals? What motivations drive them? For trainers or developers of the initiative, what are the theories and assumptions that guide their practice? (3) How will the goals be most effectively met? In other words, what processes should be used to meet the stated goals?

  These questions form the baseline, formative, and summative stages of the research. At the baseline stage, the action evaluator engages project members in a cooperative goal-setting process. He or she collects data from all members using online surveys and interviews and then feeds back the data to the group with the purpose of creating a baseline list of goals that all stakeholders can use to monitor and evaluate the success of the CRI over time.

  As the CRI is implemented, the action evaluation process enters the formative stage in which participants reflect on the action that has been taken so far, refine their goals as needed, and identify obstacles that need to be overcome in order to achieve the goals. The formative stage is an ongoing process of refinement and learning rather than a discrete, one-time process. The methods used at the formative stage include an online project log in which members can communicate with one another about important events, problems, and ideas; a shared journal in which participants communicate directly with the action evaluator about ideas and concerns; critical incident stories in which participants enter particularly positive or challenging events into a project database; and interviews conducted with participants. Once again, the action evaluator feeds back the collected data to the group members and works with them to continue clarifying the goals of the initiative, monitoring progress toward the goals, and directing future work. A progress report is then generated to compare the results thus far with the baseline-stage goals. The report addresses questions such as, Toward what goals has observable progress been made? What new goals have emerged over time? Where have problems and obstacles occurred? The action evaluator helps participants assess the obstacles and make changes to address them as needed.

  The summative stage occurs as a CRI reaches its conclusion or another natural point at which it makes sense to more formally evaluate the results of the CRI. At this stage, participants use the goals created at the baseline and formative stages to establish criteria for retrospective assessment of the CRI. As participants review their goals and examine whether they have reached them, they identify what worked well and what they would do differently to improve other similar CRIs in the future.

  We now look at several research studies conducted to evaluate a variety of CRIs in different types of environments.

  Comprehensive Peer Mediation Evaluation Project

  The Comprehensive Peer Mediation Evaluation Project (CPMEP), conducted by Jones and her colleagues, involved twenty-seven schools with a student population of about twenty-six thousand, a teacher population of approximately fifteen hundred, and a staff population of about seventeen hundred (Jones, 1997). They employed a three-by-three design: three levels of schools (elementary, middle, and high school), with each level split into three possible conditions (peer mediation only, called a “cadre program”; peer mediation integrated with a schoolwide intervention, called a “whole school program”; or no training at all, designated as the control group). The training and research occurred over a two-year period.
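The three-by-three design can be laid out explicitly as a crossing of the two factors; the short condition labels below are informal abbreviations of the report’s terms:

```python
from itertools import product

school_levels = ["elementary", "middle", "high"]
conditions = ["cadre", "whole school", "control"]  # per the CPMEP design

# Each of the nine cells pairs a school level with a program condition.
design = list(product(school_levels, conditions))
```

Crossing the factors this way is what lets the study separate effects of program type from effects of educational level, and compare each trained condition against its own control at the same level.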

  The following draws on the report’s summary of general conclusions:

  Peer mediation programs yield significant benefit in developing constructive social and conflict behavior in children at all educational levels. It is clear that exposure to peer mediation programs, whether cadre or whole school, has a significant and lasting impact on students’ conflict attitude and behavior. Students who are direct recipients of program training benefit the most; however, students without direct training also benefit. Exposure to peer mediation reduces personal conflict and increases the tendency to help others with conflict, increases prosocial values, decreases aggressiveness, and increases perspective taking and conflict competence. These effects are significant, cumulative, and sustained for long periods,
especially for peer mediators. Students trained in mediation, at all educational levels, are able to enact and use the behavioral skills taught in training.

  Peer mediation programs significantly improve school climate. The programs had a significant and sustained favorable impact on teacher and staff perceptions of school climate for both cadre and whole school programs at all educational levels. The programs had a limited to moderately favorable effect on student perceptions of climate. There is no evidence that peer mediation programs affected overall violence or suspension rates.

  Peer mediation effectively handles peer disputes. When used, it resolves disputes with a high rate of agreement at all educational levels, and both mediators and disputants report satisfaction with the process.

  The results do not support the assumption that whole school programs are clearly superior to cadre programs. The latter have a strong effect on students’ conflict attitudes and behaviors, and whole school programs have a strong impact in terms of school climate. Based on this evidence, schools that cannot afford a whole school approach may secure similar, or even superior, benefits with a cadre program that is well implemented.

  Peer mediation programs are effective at all educational levels.

  It is important to recognize that not only was this study well designed from a research point of view, but also the conflict resolution training was well designed and systematic. The trainings for the peer mediation only and peer mediation plus whole school conditions are outlined here.

 
