Understanding Second Language Acquisition (2nd ed)

by Rod Ellis


  Research on how task design variables affect interaction has continued apace since my 2003 review. There is further support for the importance of required information exchange and closed outcomes as a means of promoting the negotiation of meaning. Pica (2002) pointed out the limitations of the open-ended discussions she observed taking place in content-based classrooms with high-intermediate level learners:

  The discussions were interesting and meaningful with respect to subject-matter content. However, as open-ended communication activities, they drew attention away from students’ need for input and feedback containing negative evidence on crucial form-meaning relationships in their L2 development

  (Pica 2002: 16).

  We might ask what evidence there is that tasks which trigger attention to form through the negotiation of meaning actually result in acquisition. In other words, do task design features influence learning via the types of interaction they give rise to? In terms of Figure 11.1, this means examining the relationship between (1) task design and (3) acquisition.

  A number of studies that have attempted this were meta-analysed by Keck et al. (2006). Overall, this meta-analysis showed that tasks that give rise to interaction do result in acquisition gains for both lexis and grammar, and also that these gains are maintained over time. This constitutes clear support for task-based learning. But did different types of tasks have different learning effects? Keck et al. compared the effects of two types of task—jigsaw (which involves optional information exchange) and information-gap (which involves required information exchange). Both proved effective but—as predicted by the Interaction Hypothesis—information-gap tasks, which trigger greater negotiation of meaning, were more effective than jigsaw tasks. Keck et al. also compared the effects of two types of focused tasks. Drawing on Loschky and Bley-Vroman (1993), they distinguished tasks that made the use of the target feature(s) essential and tasks where the use of the target feature(s) was just useful. They reported little difference in the immediate effect of these two types of task, but a clear advantage for task essentialness when learning was measured in delayed tests. This meta-analysis, then, indicates that task design does affect acquisition, and in ways that can be predicted on the basis of the theories that inform the design of tasks.

  Effects of task implementation variables on L2 production

  A task-based lesson can consist of a pre-task phase, the main task phase when learners perform the task, and a post-task phase. See Table 11.2 for a list of implementation variables that can be linked to the first two of these phases. For example, pre-task planning and task rehearsal involve the pre-task phase. Time pressure, focus-on-form strategies, and setting a post-task requirement are relevant to the main task phase. The post-task phase can involve a variety of activities including explicit language instruction (see R. Ellis 2003). My concern here is only with those implementation variables that impact on how the task is performed—that is, those associated with the pre-task and main task phases.

  Pre-task implementation variables

  The variable that has attracted the greatest attention is pre-task planning. See, for example, chapters in Ellis (2005a) and Skehan (2014a). At stake here is whether giving learners an opportunity to plan before they perform the task affects the complexity, accuracy, and fluency of their production when they perform the task. The theoretical models we considered earlier make different predictions. According to Skehan’s Limited Resources Model, planning facilitates deeper conceptualization and thus—when learners perform the task—they are able to devote more attention to formulation with corresponding effects on complexity. In contrast, pre-task planning constitutes a resource-dispersing variable in Robinson’s Cognition Hypothesis and so is not predicted to result in joint increases in accuracy and complexity.

  In Ellis (2009) I reviewed 19 studies (mainly involving monologic tasks) that have investigated pre-task planning. I concluded that it has a clear effect on the way a task is performed. The main finding was that planning aids fluency irrespective of whether the task was performed in a laboratory or classroom setting, but that it had no effect on fluency in a testing context. Thirteen of the studies reported that pre-task planning also has an effect on complexity (i.e. it resulted in more complex language, especially grammatical complexity). Planning also resulted in more accurate language in 13 of the studies. Interestingly, if the planning affected complexity it did not affect accuracy, and vice versa. In other words, planning led to greater fluency and either greater complexity or accuracy.

  Pre-task planning is, of course, not a homogeneous activity. It can vary in a number of ways: for example, in the amount of time allocated to it and in whether it is unguided—learners are left to their own devices about what and how to plan—or guided—learners’ attention is directed to meaning, form, or both. Individual learner factors are also likely to play a role. However, I was unable to come to any clear conclusions about how these various factors impacted on the effect of planning, as they have not been systematically investigated. I also noted that the studies were all cross-sectional and only investigated the effects of planning on performance, not acquisition. There is a clear need for longitudinal studies that can address what effect planning before performing a task has on acquisition. There is also a need for studies that examine what learners actually do when they plan and whether this affects task performance (Note 6). Finally, as we will shortly see, pre-task planning interacts with online planning conditions.

  Another pre-task variable that has received attention is task rehearsal. This involves asking learners to repeat a task one or more times. In this way, the performance of a task at one time prepares learners for a subsequent performance. In Ellis (2009) I reviewed three studies that have investigated rehearsal. I concluded that there was clear evidence that rehearsal has beneficial effects on the performance of the same task, especially for fluency and complexity, and sometimes also for accuracy. However, there was no transfer of these effects to a new task, even when the new task was of the same type as the original task. This suggests that repeating a task may not contribute to acquisition. However, as Bygate (2001) pointed out, ‘massed’ repetition practice may be needed to bring about transfer of training. Shintani’s (2012) study lends support to this claim. It found that asking young beginner learners to repeat a task nine times did have a measurable effect on their acquisition of new words and one grammatical structure (plural -s).

  To sum up, the results of the planning and rehearsal studies give support to both Skehan’s and Robinson’s models. Pre-task planning and rehearsal do result in greater fluency and complexity, as Skehan predicts, but in some studies they also lead to greater accuracy, which his model does not predict. In the majority of the studies, pre-task planning affects either complexity or accuracy but not both. There is scant evidence of either pre-task planning or rehearsal facilitating acquisition through the performance of a task.

  Main task implementation variables

  If what learners do before they perform a task is important, the conditions under which they actually perform it are likely to be even more important. We can distinguish two kinds of main task implementation variables—those that impose external constraints on how the task is to be performed, and those that involve online intervention during the performance of the task.

  One kind of external constraint is time pressure. That is, learners can be asked to perform the task in their own time or they can be pressured to complete it within a fixed amount of time. I have suggested that this affects the nature of the online planning that learners engage in (Ellis 2005b). When learners can engage in unpressured online planning, they have time to access their linguistic resources during formulation and also to monitor their utterance plans both prior to and after articulating them. I predicted that in this condition linguistic accuracy would benefit. In contrast, when learners are pressured to speak rapidly, accuracy would suffer. This prediction was supported by Yuan and Ellis (2003), who reported greater accuracy in the unpressured performance of a monologic narrative task by Chinese university students than in their pressured performance. Complexity also benefited, though to a lesser extent, but, interestingly, there was no difference in fluency. Yuan and Ellis also investigated pre-task planning, reporting that it had an effect on complexity but not accuracy. One possible explanation for this is that the two types of planning have differential effects, with pre-task planning primarily assisting fluency and complexity and online planning primarily assisting accuracy (Note 7).

  Another way of imposing an external constraint on the performance of a task is by setting some post-task requirement that learners know about before they start to perform the task. An example of a post-task requirement is informing learners that when they finish the task they will have to transcribe their performance of it. Foster and Skehan (2013) hypothesized that knowing this would induce learners to avoid error and thus enhance accuracy when they performed the task. In fact, the results of their study showed that this requirement had a general effect on form—i.e. on both complexity and accuracy.

  There is a rich literature dealing with how various kinds of online interventions affect both learners’ production and acquisition. These studies have drawn on cognitive-interactionist theories of L2 acquisition and concern in particular how techniques for focusing learners’ attention on form during meaning-focused interaction can assist acquisition. A number of the studies I considered in Chapter 7, in particular those investigating corrective feedback and modified output, indicate that focus-on-form during the performance of a task has clear effects on acquisition when this is measured in terms of increased accuracy in post-tests or in progression along an acquisition sequence.

  I will consider in some detail one further study (Samuda 2001) that investigated within-task intervention. Samuda was concerned with the role played by the teacher in a task-based lesson, noting that this must involve ‘ways of working with tasks to guide learners towards the types of language processing believed to support L2 development’ (p. 120). The lesson was based on a focused task designed to provide learners with communicative opportunities for using and learning epistemic modals (e.g. might and must). It began with an activity in which learners were told the contents of a mystery person’s pocket and were asked to work together in groups, speculating about the person’s possible identity. However, the students failed to use the target modal forms in this stage of the lesson. In the following class discussion the teacher attempted to shift the students’ focus from meaning to form by interweaving the target forms into the interaction, mainly by means of recasts. However, the students still failed to use the target structures. The teacher then resorted to direct explanation of the target feature. At this point, the students started to try to use the target forms and the teacher corrected them when they failed to use them or used them erroneously. This was not an experimental study, but Samuda did provide some evidence to suggest that the task resulted in acquisition of the target feature. In this lesson, the teacher used a skilful amalgam of implicit and explicit focus-on-form techniques to draw attention to the target structure. Throughout, however, there was a primary focus on meaning (Note 8).

  Some general comments on research involving tasks

  There is now a very rich research literature on task-based learning and I have only scratched the surface. Researchers have adopted two different approaches. In one approach, they have investigated how task design features and implementation conditions affect the way tasks are performed and then proposed how this will affect acquisition, but without demonstrating that it does. Theoretically-based taxonomies of design and implementation variables have been developed as a basis for making predictions about how specific variables—such as task complexity—affect complexity, accuracy, and fluency (CAF). There is now ample evidence to show that these variables do have differential effects on these different aspects of production, contrary to the claims of sociocultural theorists, who consider that the activity that results from a task is unpredictable. However, this approach is not without its problems. As Skehan (2014b) pointed out, ‘any task is likely to subsume a bundle of features’ (p. 6). Is it possible, then, to isolate the effect of specific variables as much of the research has attempted to do? The justification for attempting this is that it allows for the testing and development of theories of task-based learning and also that it can contribute to empirically-grounded teaching. By and large, however, the research to date has not produced results that enable us to choose between competing theories (e.g. Skehan’s Trade-off Hypothesis and Robinson’s Cognition Hypothesis) and, given the fundamental nature of tasks as bundles of features, it is doubtful whether the research ever will.

  There is a further problem with the first approach. As Révész (2014) pointed out, researchers have generally failed to provide independent evidence of the key independent variable—task complexity. She commented that it was important to (1) not just assume that a complex task is more cognitively demanding, but to demonstrate that it is, and (2) provide independent evidence of the causal processes claimed to enhance learning when the task is performed, rather than just relying on the performance itself. For example, subjective rating scales could be used to measure learners’ perceptions of task complexity after they have performed a task. Cognitive effort can be measured by using dual tasks and measuring reaction time on the secondary task to provide an indication of the cognitive load imposed by the primary task.

  The second approach has involved investigating how tasks can promote acquisition. It entails examining how learners’ attention can be attracted to form in the interactions that tasks give rise to. Its theoretical premise is that focus on form—viewed as both an instructional strategy and a mental process—is needed for acquisition to take place. This approach typically involves designing focused tasks so that the effects of performing the task on acquisition can be investigated experimentally by means of pre-tests and post-tests. The studies investigating input-based tasks I considered earlier are good examples of this approach. It is, however, more problematic to design focused, output-based tasks, as learners are adept at avoiding the use of linguistic features they find difficult, although there are some successful examples of such tasks (e.g. Samuda’s study and the studies considered in Chapter 7). This approach to investigating tasks has provided convincing evidence that incidental acquisition does take place when learners are primarily focused on meaning and when there is focus on form.

  Explicit vs. implicit instruction

  Task-based instruction has been subjected to a number of critiques, in particular from teacher educators who espouse the need for explicit instruction. Swan (2005), for example, accused advocates of task-based teaching, such as Skehan, Robinson, and R. Ellis, of ‘legislating by hypothesis’. He claimed there is no evidence that task-based instruction is more effective than explicit instruction (PPP) and that researchers were extrapolating unconvincingly from theory. We have seen, however, that there is in fact substantial evidence that task-based instruction does promote the kinds of performance likely to facilitate acquisition, and also a growing body of evidence to show that it can result in acquisition. However, the question remains as to the relative effectiveness of explicit and implicit instruction. Is Swan right in claiming that, in an instructed context, explicit instruction and the intentional learning it fosters are more effective?

  One way to address this question is by a meta-analytic comparison of the two broad types of instruction. Norris and Ortega (2000) reported a clear advantage for explicit instruction in their meta-analysis of 29 studies involving implicit treatments and 69 involving explicit treatments. In fact, they considered this the single trustworthy finding regarding the effect of form-focused instruction. Spada and Tomita’s (2010) meta-analysis also compared the effectiveness of the two types of instruction. They reported that both types were effective for both simple and complex grammatical features, and that this was evident irrespective of whether learning was measured in controlled or free language production. As in Norris and Ortega, explicit instruction was found to be more effective than implicit instruction. In a narrative review of instructed second language vocabulary learning, Schmitt (2008) concluded:

  Although research has demonstrated that valuable learning can accrue from incidental exposure, intentional vocabulary learning—i.e. when the specific goal is to learn vocabulary, usually with an explicit focus—almost always leads to greater and faster gains, with a better chance of retention and of reaching productive levels of mastery.

  (Schmitt 2008: 341)

  Thus, it would seem that for both grammar and vocabulary explicit instruction is superior.

  However, it is not quite as simple as that. For a start, both explicit and implicit instruction can take many different forms. As we saw in Chapter 10, not all forms of explicit instruction are successful, especially when learning is measured in free production. Also—as we have seen in this chapter—task-based instruction (as the principal type of implicit instruction) varies in many ways. In addition, little is currently known about the role that individual difference factors such as language aptitude and age play in the efficacy of the two types of instruction. It seems very likely that learners vary in their ability to benefit from implicit and explicit instruction. Analytically-minded learners may do better with explicit instruction, but functionally-minded learners may gain more from implicit instruction. Thus, blanket comparisons of explicit and implicit instruction are of doubtful validity.

 
