
EFFECTS OF MODELING ROTE VERSUS VARIED RESPONSES ON RESPONSE VARIABILITY AND SKILL ACQUISITION DURING DISCRETE-TRIAL INSTRUCTION

SEAN P. PETERSON

PIER CENTER FOR AUTISM

NICOLE M. RODRIGUEZ

UNIVERSITY OF NEBRASKA MEDICAL CENTER’S MUNROE-MEYER INSTITUTE

TAMARA L. PAWICH

FLORIDA INSTITUTE OF TECHNOLOGY AND THE SCOTT CENTER FOR AUTISM TREATMENT

Despite its advantages, discrete-trial instruction (DTI) has been criticized for producing rote responding. Although there is little research supporting this claim, if true, this may be problematic given the propensity of children with autism to engage in restricted and repetitive behavior. One feature that is common in DTI that may contribute to rote responding is the prompting and reinforcement of one correct response per discriminative stimulus. To evaluate the potential negative effects of rote prompts on varied responding, we compared the effects of modeling rote versus varied target responses during the teaching of intraverbal categorization. We also evaluated the effects of these procedures on the efficiency of acquisition of any one correct response. For all four children, any increase in varied responding was fleeting, and for two participants, acquisition was slower in the variable-modeling condition.

Key words: autism, discrete-trial instruction, rote responding, variability

Despite the success of using discrete-trial instruction (DTI) to teach a variety of skills to children diagnosed with autism spectrum disorder (ASD), this teaching method has sometimes been criticized for producing rote responding (Cihon, 2007). Although there does not appear to be any research supporting this claim, if true, this may be problematic given the propensity of children with ASD to engage in restricted and repetitive behavior (American Psychiatric Association, 2013). One potential reason for this criticism may stem from the fact that DTI trials are often arranged such that one correct response is repeatedly prompted and reinforced for a given discriminative stimulus. For example, a therapist may prompt and reinforce “I’m fine” following the discriminative stimulus “how are you?” despite the fact that other responses such as “good, thanks” would also be socially acceptable. Thus, it is possible that the prompting and reinforcement of one correct response narrows the range of responses that (a) are emitted and (b) contact reinforcement, potentially contributing to the development of rote responding.

One approach to counter the potential negative effects of DTI procedures may be to prompt and reinforce multiple correct responses during teaching, which is consistent with the response generalization strategies of teaching sufficient exemplars and training loosely described by Stokes and Baer (1977). However, simply broadening the contingency class by increasing the range of responses eligible for reinforcement may not be sufficient to promote varied responding. In fact, basic research has shown that contingencies that allow but do not require varied responding produce invariant (rote) responding over repeated exposure to the contingency (e.g., Antonitis, 1951; Neuringer, Kornell, & Olufs, 2001). For example, in Experiment 2, Neuringer et al. (2001) showed the importance of reinforcement contingencies on varied responding with rats that were required to complete a three-response sequence. For rats in the variability group (Var), low-probability response sequences were reinforced based on an equation that took into account the recency of the last instance of the response sequence. For rats in the yoked group (Yoke), the frequency and distribution of reinforcement was yoked to the Var group, thereby allowing but not requiring varied responding. Varied responding was significantly higher in the Var group relative to the Yoke group, suggesting that the reinforcement contingency and not extinction-induced variability was responsible for increases in varied responding. In Experiment 3 of the same study, there were two groups: Var (described above) or Repetition (Rep; in which reinforcement was contingent on a single response sequence). A comparison of varied responding across the Yoke group in Experiment 2 and the Rep group in Experiment 3 (Figures 4 and 7; Neuringer et al.) revealed comparable levels of varied responding. These results suggest that contingencies that simply allow but do not require varied responding are likely to result in invariant responding at comparable levels to when reinforcement is contingent on one correct response.

This study was conducted in partial fulfillment of the first author’s requirements for the doctoral degree at University of Nebraska Medical Center’s Munroe-Meyer Institute, under the supervision of the second author. All data collection was completed at the University of Nebraska Medical Center. This research was supported by internal funding from the University of Nebraska Medical Center’s Chancellor’s Office. We thank committee members Kevin Luczynski, Brian Greer, and Wayne Fisher for their helpful comments on the dissertation. We also thank Andrea Clements, Shaela Bruce, Victoria Cohrs, Michael Aragon, and Cynthia Nava for their assistance with data collection or analyses. Correspondence concerning this article should be addressed to Nicole M. Rodriguez, Munroe-Meyer Institute, University of Nebraska Medical Center, 985450 Nebraska Medical Center, Omaha, NE 68198 ([email protected]).

doi: 10.1002/jaba.528

JOURNAL OF APPLIED BEHAVIOR ANALYSIS 2018, 9999, 1–16, NUMBER 9999

© 2018 Society for the Experimental Analysis of Behavior

Although some basic research suggests that varied responding will only increase when varied responding is differentially reinforced, other basic studies suggest that there are at least some conditions under which varied responding might be observed when contingencies allow but do not require varied responding, such as when such contingencies are preceded by a history of differential reinforcement of varied responding (e.g., Saldana & Neuringer, 1998). The ability to produce varied responding under contingencies that allow but do not require variability has practical implications if we presume that such contingencies are more commonly encountered in the natural environment than contingencies that require varied responding. For example, Rodriguez and Thompson (2015) suggested that natural consequences for variant or invariant responding (e.g., rolling one’s eyes after hearing a repeated joke) may be too subtle or intermittent to function as reinforcers or punishers for children diagnosed with ASD. Thus, research aimed at exploring the conditions under which varied responding will be observed or maintained under contingencies that allow but do not require variability is warranted.

Like a history of reinforcement for varied responding (Saldana & Neuringer, 1998), it is possible that other supplemental strategies, such as modeling varied responding, may be sufficient to bring multiple responses in contact with reinforcement when reinforcement is provided for any correct response. To this point, several researchers have suggested the importance of including varied models to increase varied responding during social interactions (Weiss, LaRue, & Newcomer, 2009) and play (Jahr & Eldevik, 2002). However, to our knowledge only one study has evaluated the effects of modeling variability when prompting within DTI. Carroll and Kodak (2015) compared two procedures for modeling variability during the teaching of intraverbal categorization to two participants diagnosed with ASD. In the first condition, the experimenter prompted one of 20 different combinations of three of the four primary target exemplars (e.g., “train, bike, bus” in response to “tell me some vehicles”). The second condition was identical to the first condition except that the experimenter presented three of four secondary target exemplars following the delivery of reinforcement for a correct response (e.g., “That’s right! Car, boat, and airplane are vehicles too”). The delivery of secondary targets to which the participant is not required to respond during the consequent event of a trial is referred to as instructive feedback. Differential consequences were not provided for varied responding in either condition. The cumulative frequency of novel combinations of category exemplars (e.g., “bike, bus, train” or “bus, train, bike”) and novel exemplars (e.g., “camper, tank, bus” would be counted as two novel exemplars) across sessions increased across both variable-modeling and variable-modeling-plus-instructive-feedback conditions during an acquisition task for both participants. However, the variable-modeling-plus-instructive-feedback condition resulted in higher levels of varied responding across sessions, with a greater number of cumulative novel combinations for one participant and a greater number of cumulative novel exemplars for both participants. Although the main purpose of the study was to evaluate the effects of instructive feedback on varied and novel responses, the results also suggest that varied models alone can produce varied responding.

The current study extended Carroll and Kodak (2015) by comparing the effects of varied- and rote-model prompts on varied responding during the teaching of intraverbal categorization. As previously noted, DTI typically involves the repeated prompting and reinforcement of one response. A comparison of the effects of varied and rote modeling would allow for an evaluation of any potential undesirable effects of rote modeling on varied responding. Such a comparison could also reveal any desirable effects of variable modeling on varied responding. In other words, differential levels of varied responding with rote models versus variable models would suggest that the prompting of one response during DTI may contribute to rote responding, and providing variable models may be one way to address that limitation. However, because prompting multiple correct responses may interfere with acquisition, we also evaluated the efficiency of each modeling procedure.

To answer our experimental questions, we adopted a translational approach in which the skill we taught was relevant to our participants’ early intervention goals, but the response targeted for variability was also aligned with basic research. Specifically, we sought to teach our participants intraverbal categorization (e.g., responding “dog, cat, horse” when instructed, “tell me some animals”), while also targeting variability in the sequence in which the target exemplars were provided (e.g., “cat, horse, dog” vs. “horse, cat, dog”), with only one category targeted per condition. We targeted intraverbal categorization given the importance of varied vocal-verbal behavior for the development of strong vocal-verbal repertoires, and given the tendency for individuals with ASD to engage in rote responding (see Rodriguez & Thompson, 2015, for a review of research comparing levels of response variability across individuals with and without ASD). Targeting variability in the sequence of exemplars also aligned with the type of varied responding typically evaluated in basic research and allowed us to equate the number of exemplars (three) that participants were exposed to across trials and conditions, such that we could more easily isolate the effects of different types of prompts on varied responding and skill acquisition.

METHOD

Participants, Setting, and Materials

Participants included four children diagnosed with ASD between the ages of 3 and 8 years, who were receiving early intervention services at a university-based center. Three participants received 2 to 3 years of early intervention services prior to the start of the study; Ronda received early intervention services for 4 months prior to her participation. Victor was a 5-year-old Caucasian boy, Ivan was an 8-year-old African American boy, Adam was a 5-year-old Arab American boy, and Ronda was a 3-year-old Caucasian girl. On the Verbal Behavior Milestones Assessment and Placement Program (VB-MAPP; Sundberg, 2008), Victor, Ivan, and Ronda were primarily Level 2 learners (i.e., learners with a developmental skill level comparable to a typically developing child aged 18 to 30 months), and Adam was a Level 3 learner (i.e., a learner with a developmental skill level comparable to a typically developing child aged 30 to 48 months). All participants used short sentences (i.e., at least three to four words) to mand and tact, and had a previous history of intraverbal programming (e.g., feature, function, class) with multiple exemplars, although none had received DTI that specifically targeted varied responding.

Sessions were conducted in either a 2.5-m by 2.5-m private session room in which the participant received early intervention services (Victor, Adam, and Ronda) or a living area of the participant’s home (Ivan). The participant sat next to or across from the experimenter at a child-sized table. Each session consisted of 10 trials and was approximately 5 to 10 min. Two to four sessions were conducted per day, up to 5 days per week. Materials included data sheets and reinforcers.

Response Measurement and Interobserver Agreement

During each trial, paper-and-pen data were collected on the participant’s vocal response sequence verbatim (i.e., each word emitted by the participant was recorded in the order in which it was stated), and whether the vocal response occurred prior to or following the controlling prompt (i.e., independent or prompted responding, respectively). To evaluate differences in varied responding as well as the rate of acquisition, data were summarized to gather information on the following primary dependent variables for each session: (a) the percentage of trials with correct responding, and (b) the number of unique response sequences in a session. A correct response (independent or prompted) was defined as the participant providing three appropriate exemplars for the category, regardless of a particular response sequence (e.g., “dog, cat, horse” or “cat, dog, mouse” as exemplars of animals). The participant had 5 s to initiate responding, and a response was considered complete if the participant ceased responding for 5 s. For example, if the participant gave two exemplars, s/he had 5 s after providing the second exemplar to include a third and final exemplar, or the response would be counted as an error. A response was scored as independent or prompted depending on whether it occurred prior to or following the delivery of the prompt, respectively. For prompted responding, the sequence emitted by the participant did not need to match the sequence prompted by the experimenter to be scored as correct.

The number of unique response sequences was calculated by determining the number of trials in which there was a correct response and the order of exemplars differed from the order of exemplars in response sequences in all preceding trials in that session. Thus, if a participant alternated between two correct sequences (e.g., “dog, horse, cat”; “cat, horse, dog”; “dog, horse, cat”; “cat, horse, dog” and so on) in a session, the number of unique response sequences would be two. A response sequence containing a novel exemplar (i.e., an exemplar that was never prompted in the study, such as “mouse” in the previous example) was also scored as unique; this happened only once with Adam. We also calculated the number of cumulative unique sequences across sessions per target category.

Finally, to further characterize the level of varied responding and assist with interpretation of the results, we conducted additional post-hoc data analyses of both independent and prompted responses. For independent responses, we determined the relative frequency of the three most common response sequences and the occurrence of generative responding for each category. The dominant response was defined as the response sequence that occurred most frequently across sessions. A generative response was defined as the occurrence of either of the two response sequences that were never modeled throughout the study (i.e., recombinative generalization; Axe & Sainato, 2010).

A second, independent observer collected data for 64% to 100% of sessions across participants. Interobserver agreement (IOA) was calculated for correct responding and response sequence on a trial-by-trial basis. An agreement for correct responding was defined as both observers scoring either correct or incorrect on a trial. An agreement for response sequence was defined as both observers scoring the same three exemplars in an identical order for both independent and prompted responses. Both data collectors had to score the same response(s) for correct responding, independent response sequence, and prompted response sequence for a trial to be scored as an agreement. We calculated IOA by dividing the number of trials with an agreement by the total number of trials (10), and then converting the quotient to a percentage. Mean IOA was 97% (range, 40% to 100%) for Victor, 99% (range, 90% to 100%) for Ivan, 99% (range, 70% to 100%) for Adam, and 99% (range, 90% to 100%) for Ronda.
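The two session-level dependent variables and the trial-by-trial IOA calculation can be sketched as follows. This is our illustration, not the authors' analysis code; the trial encoding (an ordered tuple of exemplars for a correct response, None for an error) is an assumption made for the example.

```python
# Illustrative sketch (not the authors' code). Each trial is encoded as an
# ordered tuple of exemplars for a correct response, or None for an error.

def unique_sequences_in_session(trials):
    """Number of trials whose exemplar order differs from the order on
    all preceding correct trials in the same session."""
    seen = set()
    for seq in trials:
        if seq is not None:
            seen.add(seq)
    return len(seen)

def percent_correct(trials):
    """Percentage of trials with a correct response (any sequence)."""
    return 100 * sum(1 for seq in trials if seq is not None) / len(trials)

def trial_by_trial_ioa(observer1, observer2):
    """Percentage of trials on which both observers recorded the same
    outcome (same three exemplars in the same order, or both an error)."""
    agreements = sum(1 for a, b in zip(observer1, observer2) if a == b)
    return 100 * agreements / len(observer1)

# A participant alternating between two correct sequences yields two
# unique sequences for the session, per the definition above.
session = [("dog", "horse", "cat"), ("cat", "horse", "dog"),
           ("dog", "horse", "cat"), ("cat", "horse", "dog"),
           None, ("dog", "horse", "cat"), ("cat", "horse", "dog"),
           ("dog", "horse", "cat"), ("cat", "horse", "dog"), None]

print(unique_sequences_in_session(session))  # 2
print(percent_correct(session))              # 80.0
```

Cumulative unique sequences across sessions follow by carrying the `seen` set across sessions for a given category rather than resetting it each session.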

Pre-assessment

We conducted a pre-assessment to identify unknown target responses to include in the comparison. Three categories (e.g., fruits, animals, and tools) were each presented four times in a session (resulting in 12 trials per session), during which the experimenter provided the discriminative stimulus, “tell me some [category].” If the participant responded incorrectly or did not respond within 5 s of the discriminative stimulus, we did not provide feedback and began the next trial. Any category for which a participant provided two or more exemplars (e.g., “peach” and “pear” for the category fruit) was excluded, and an alternate category was assessed. This process continued until we identified four categories for which no more than one exemplar was provided for two consecutive sessions. Three of the four participants (Victor, Adam, and Ronda) did not emit any correct exemplars for the categories selected for teaching. Ivan emitted one correct exemplar (i.e., football) for one category (i.e., outside sports). Exemplars that were emitted during the pre-assessment (or baseline) were not included in the three exemplars that were modeled during the experiment. All correct responses were reinforced with praise and a highly preferred edible or tangible item (identified via a one-trial multiple-stimulus-without-replacement preference assessment similar to DeLeon et al., 2001).

Once we identified the target categories, we conducted a brief echoic pre-assessment to confirm that participants could echo each of the exemplars that would be modeled in the experiment. Twelve target exemplars (i.e., three exemplars across four categories) were interspersed with nontarget words. The order of presentation was randomized, and praise was delivered for correct responding. The echoic pre-assessment for Victor’s Set 3 was conducted after teaching Sets 1 and 2 and consisted of six target exemplars interspersed with nontarget words. Participants correctly echoed 100% of the target exemplars.

Finally, we assessed whether the exemplars, when presented by themselves, evoked speaker or listener responses (i.e., served as discriminated operants) prior to the start of our study. This was done to ensure that responses prompted during the experiment proper were not being taught as a response chain of arbitrary stimuli, particularly in the rote-modeling condition. The majority of targets were evaluated as tacts. Specifically, a picture of each exemplar was presented at least two times within a six-trial session and was interspersed with at least two other nontarget stimuli or stimuli from different categories. Differential reinforcement was provided for correct responses, and no consequence was provided for incorrect responses. For exemplars for which we anticipated a picture might not exert precise control over the target (e.g., headache, suntan, meow), we evaluated listener repertoires (i.e., selecting the target from an array of three stimuli contingent on the spoken word) or intraverbals (e.g., “What does a cat say?”). All other procedures (e.g., number of trials per session, differential reinforcement) were the same as the procedures for testing exemplars as tacts. If participants did not respond correctly on the first two presentations, we introduced a 0-s prompt delay for two sessions. We included prompting to ensure that participants were exposed to each exemplar as an independent operant. Targets that required teaching were equally distributed across categories. Victor could tact all exemplars in Sets 1 and 2 but required exposure to prompting for all exemplars in Set 3; Ivan was exposed to prompting for 5 of the 12 exemplars (two exemplars for each category in Set 1 and one exemplar for one category in Set 2); Adam was exposed to prompting for 7 of the 12 exemplars (one to two exemplars per set); and Ronda was exposed to prompting for 4 of the 12 exemplars.

Experimental Design

We used an adapted alternating treatments design (Sindelar, Rosenberg, & Wilson, 1985) within a multiple probe design across sets of categories. There were two categories per set with three exemplars per category. Categories in each set were matched based on the number of syllables for each exemplar and then randomly assigned to either the rote- or variable-modeling condition. There were, however, two occasions in which the assignment of categories to conditions was not random but rather counterbalanced with the assignment of conditions for another participant. Table 1 lists the categories and exemplars for each condition and set per participant.

Procedure

Prior to each session, we conducted a one-trial multiple-stimulus-without-replacement preference assessment (based on procedures described by DeLeon et al., 2001). The item selected was used as a reinforcer throughout the session unless the participant requested an alternative item, in which case the requested item was delivered for the remainder of the session or until a different item was requested.

Differential reinforcement baseline (DSR BL). Each session consisted of 10 trials of one category. During each trial, the experimenter delivered the discriminative stimulus “tell me some [category]” and provided the participant the opportunity to respond. Contingent on a correct response (regardless of response sequence), praise and the reinforcer were delivered. Contingent on an incorrect response or no response within 5 s, the experimenter provided no feedback and proceeded to the next trial. The criterion for moving from baseline to the modeling phase was stable responding across both conditions using visual inspection.

Progressive-prompt delay (PPD). This phase was identical to baseline, except that (a) a PPD was used in both conditions and (b) each category was assigned to either the rote- or variable-modeling condition. The PPD phase started with at least two sessions at a 0-s prompt delay, during which the experimenter immediately modeled a correct response following the discriminative stimulus during each trial. Following two consecutive sessions with correct prompted responding at or above 90%, the experimenter provided a 2-s delay following the discriminative stimulus before modeling the correct response. Thereafter, the criterion to increase the prompt delay was one session in which 50% or more of the errors consisted of no response (omission errors; Kodak, Fuchtman, & Paden, 2012); the progression of the prompt delays was 2 s, 5 s, 10 s, and 20 s. The criteria for increasing the prompt delay were held constant across conditions such that each condition progressed through the changes in prompt delay independently. This was done to emulate how the PPD would be applied in practice when the conditions are not being compared in a formal evaluation.
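The delay-progression rules above can be summarized in a small sketch. This is our reconstruction, not the authors' procedure code; the two-consecutive-session bookkeeping at the 0-s delay is left to the caller, and all names are ours.

```python
# Sketch of the prompt-delay progression described in the text:
# 0 s -> 2 s -> 5 s -> 10 s -> 20 s.
DELAYS = [0, 2, 5, 10, 20]

def next_delay(current, pct_prompted_correct=0, omission_errors=0, total_errors=0):
    """Return the prompt delay (in seconds) for the next session, given a
    summary of the session just completed."""
    i = DELAYS.index(current)
    if i == len(DELAYS) - 1:
        return current  # 20 s is the terminal delay in the progression
    if current == 0:
        # Advance after correct prompted responding at or above 90%
        # (the article requires two consecutive such sessions; that
        # bookkeeping is omitted from this sketch).
        return DELAYS[i + 1] if pct_prompted_correct >= 90 else current
    # Thereafter: advance after a session in which 50% or more of the
    # errors were omission errors (no response).
    if total_errors > 0 and omission_errors / total_errors >= 0.5:
        return DELAYS[i + 1]
    return current
```

Because each condition applied these criteria independently, the rote- and variable-modeling conditions could sit at different delays during the same comparison.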

Following a correct independent response, the experimenter provided praise and a reinforcer. If the participant provided an incorrect exemplar, did not respond, or provided an incomplete response, the experimenter provided a vocal model of a correct response. Following a prompt, the participant had 5 s to initiate responding. The trial ended when there was no response, a 5-s pause in responding, or three exemplars were provided and reinforced.

An additional manipulation was made for Victor. After meeting criteria to increase the prompt delay to 5 s, the majority of Victor’s errors in the variable-modeling condition shifted to errors of commission (e.g., exemplars from the rote-modeling condition). For this reason, we changed the contingencies for Set 1 of the variable-modeling condition starting with Session 24 such that only praise was provided for correct prompted responding in this condition

Table 1
List of Categories and Exemplars for Each Condition and Set per Participant

Victor
  Set 1: Little Animals (Frog, Dog, Mouse); Big Animals (Bear, Horse, Whale)
  Set 2: Things in the Fridge (Milk, Apple, Cheese); Things in the Bathroom (Sink, Kleenex, Soap)
  Set 3: Inside Sports (Bowling, Wrestling, Hockey); Outside Sports (Tennis, Fishing, Sailing)

Ivan
  Set 1: Outside Sports (Tennis, Baseball, Fishing); Inside Sports (Bowling, Hockey, Wrestling)
  Set 2: Tools in the Garden (Rake, Lawnmower, Shovel); Tools in the Kitchen (Microwave, Blender, Spoon)

Adam
  Set 1: Rhymes with Snowflake (Earthquake, Headache, Cupcake); Rhymes with Batman (Toucan, Suntan, Dishpan)
  Set 2: Rhymes with Haystack (Backpack, Kayak, Racetrack); Rhymes with Allow (Snowplow, Eyebrow, Meow)

Ronda
  Set 1: Big Animals (Bear, Horse, Whale); Little Animals (Frog, Dog, Mouse)
  Set 2: Foods (Eggs, Sandwich, Cheese); Drinks (Milk, Juice, Water)

Note. The first category listed for each participant and set was assigned to the rote-modeling condition and the second category was assigned to the variable-modeling condition.


(i.e., differential reinforcement for independentresponding; Karsten & Carr, 2009).Mastery criterion was two consecutive ses-

sions with correct responding at or above 90%. However, we continued to conduct sessions across both conditions until the mastery criterion was met for both conditions, for two reasons. First, we wanted to maintain the alternating fashion in which conditions were conducted, to minimize the likelihood that switching to conducting only one condition would increase the speed of acquisition for that condition relative to when it was alternated with another condition. The only exception to this rule was at the end of Set 1 for Victor. Second, we began to notice a pattern of varied responding in which the number of unique sequences increased but only temporarily. For this reason, we sometimes conducted additional sessions after meeting the mastery criterion to allow for an evaluation of the persistence of varied responding across targets.

Rote-modeling condition. For the category in the rote-modeling condition, the experimenter modeled the same response during each trial (e.g., "apple, orange, banana" on every trial).

Variable-modeling condition. For the category in the variable-modeling condition, the experimenter modeled the same three correct exemplars but in a different sequence during each trial (e.g., "dog, cat, horse" for the first trial, "dog, horse, cat" for the second trial, and "cat, horse, dog" for the third trial). To ensure that participants were exposed to a range of possible sequences during the variable-modeling condition, while also allowing for a test of recombinative generalization (i.e., the occurrence of a response sequence to which the participant had not been exposed; Axe & Sainato, 2010), four of the six possible sequences were modeled throughout this phase. The specific response sequence modeled was based on a predetermined list in which the four response sequences modeled during the experiment were pseudorandomized in sets of four to increase the probability of equal exposure to each of the four prompted sequences. Specifically, if a model was necessary, the experimenter modeled the next sequence on the list that differed from the participant's response in the previous trial. For instance, if the child responded "dog, cat, horse" on the first trial and erred or did not respond on the second trial, the experimenter modeled the next sequence on the list that differed from the response on the preceding trial (e.g., "horse, dog, cat"; note that if "dog, cat, horse" was next on the list, it would have been skipped to ensure that the model differed from the previous response). Each programmed sequence was crossed off after it was modeled, and the experimenter continued with the remainder of the list.

To isolate the effects of modeling varied responding, differential consequences were not provided for varied responding across the rote- and variable-modeling conditions (i.e., reinforcement was provided for any correct sequence, independent of variability).
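The predetermined-list rule described above is, in effect, a small algorithm. It can be sketched as follows; this is only an illustration in Python (the study used paper datasheets, not code), all names are ours, the choice of which four orderings to model is an assumption, and we have assumed that a skipped sequence remains available for later trials, which the procedure does not specify.

```python
import itertools
import random

# Illustrative sketch of the variable-modeling prompt-selection rule.
# All names are ours; none come from the study's materials.
EXEMPLARS = ["dog", "cat", "horse"]

# Four of the six possible orderings were modeled; which four is an
# assumption here (the first four in itertools order).
MODELED = list(itertools.permutations(EXEMPLARS))[:4]

def build_prompt_list(n_blocks, rng=random):
    """Pseudorandomize the four modeled sequences in blocks of four so
    that each sequence appears once per block (roughly equal exposure)."""
    out = []
    for _ in range(n_blocks):
        block = list(MODELED)
        rng.shuffle(block)
        out.extend(block)
    return out

def next_model(prompt_list, previous_response):
    """Return (and cross off) the next listed sequence that differs from
    the participant's previous response. Entries matching the previous
    response are passed over; we assume a skipped entry stays on the list."""
    for i, candidate in enumerate(prompt_list):
        if candidate != previous_response:
            return prompt_list.pop(i)
    return None  # list exhausted; would not occur with a long enough list
```

For example, if the previous response was "dog, cat, horse" and that sequence happens to be next on the list, `next_model` passes over it and models the following sequence instead, mirroring the skip rule described above.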

RESULTS

The first purpose of our study was to examine whether incorporating multiple models would lead to increased varied responding. The left panels of Figures 1-4 show the number of unique response sequences for Victor, Ivan, Adam, and Ronda, respectively. During the DSR BL, none of the participants provided a correct response to any of the intraverbal categories. Following the implementation of the PPD, an initial but temporary increase in the number of independent unique response sequences was observed for at least the variable-modeling condition of the first set for each participant. For Victor (Figure 1), the number of unique response sequences temporarily increased in the variable-modeling condition but not the rote-modeling condition for Set 1. Because varied responding was not replicated

Figure 1. Data for Victor. Left panels depict the number of independent unique response sequences (closed symbols) and unique response sequences emitted following prompts during prompted trials (open symbols). Right panels depict the percentage of correct independent responses (closed symbols) and correct prompted responses (open symbols; calculated out of the number of prompted responses). DSR BL = differential reinforcement baseline. PD = prompt delay. PPD = progressive prompt delay. Asterisks denote when the mastery criterion was met for each condition.


with Set 2 for Victor, we evaluated a third set; however, as in Set 2, Victor independently emitted only one response sequence across both conditions. Thus, the temporary increase in varied responding in the variable-modeling condition was not replicated across sets for Victor, potentially due to sequence effects in which the lack of differential contingencies for varied responding during Set 1 affected responding during subsequent sets. For Ivan (Figure 2), the number of independent unique response sequences temporarily increased, albeit only slightly, in only the variable-modeling condition for Set 1; however, the number of independent unique response sequences increased across both conditions in Set 2. For Adam (Figure 3), the number of independent unique response sequences increased across both conditions for both sets, with a larger and more persistent increase in varied responding in the variable-modeling condition for Set 2. For Ronda (Figure 4), an initial but temporary

Figure 2. Data for Ivan. Left panels depict the number of independent unique response sequences (closed symbols) and unique response sequences emitted following prompts during prompted trials (open symbols). Right panels depict the percentage of correct independent responses (closed symbols) and correct prompted responses (open symbols; calculated out of the number of prompted responses). DSR BL = differential reinforcement baseline. PD = prompt delay. Asterisks denote when the mastery criterion was met for each condition.


increase in the number of independent unique response sequences differentially occurred in the variable-modeling condition, and only one independent unique response sequence was emitted across both sets in the rote-modeling condition. For all participants, any initial varied responding that occurred decreased to only one independent unique response sequence by the time the mastery criterion was met (asterisks in the figures denote when the mastery criterion was met for each condition).

The second purpose of our study was to examine whether incorporating multiple models within prompts would compromise efficiency during acquisition. The right panels of Figures 1-4 show the percentage of correct responses for Victor, Ivan, Adam, and Ronda, respectively. Because we selected

Figure 3. Data for Adam. Left panels depict the number of independent unique response sequences (closed symbols) and unique response sequences emitted following prompts during prompted trials (open symbols). Right panels depict the percentage of correct independent responses (closed symbols) and correct prompted responses (open symbols; calculated out of the number of prompted responses). DSR BL = differential reinforcement baseline. PD = prompt delay. Asterisks denote when the mastery criterion was met for each condition.


intraverbal categories for which participants provided either no correct exemplars (Victor, Adam, Ronda) or one correct exemplar (Ivan, for one category), there were no correct response sequences during DSR BL (also see Supporting Information 1). Following the introduction of the PPD, one of two general patterns of responding was observed across participants: (a) equal rates of acquisition across both conditions, or (b) slower acquisition in the variable-modeling condition.

For Victor (Figure 1), whereas the rate of acquisition was comparable across both conditions for Set 2, it took twice as many sessions to reach the mastery criterion in the variable-modeling condition in Set 1, and three additional sessions to reach mastery in the variable-modeling condition for Set 3. Following low levels of correct responding in the variable-modeling condition for Set 1 after completing teaching for Set 2, we reintroduced

Figure 4. Data for Ronda. Left panels depict the number of independent unique response sequences (closed symbols) and unique response sequences emitted following prompts during prompted trials (open symbols). Right panels depict the percentage of correct independent responses (closed symbols) and correct prompted responses (open symbols; calculated out of the number of prompted responses). DSR BL = differential reinforcement baseline. PD = prompt delay. Asterisks denote when the mastery criterion was met for each condition.


the 0-s prompt delay followed by the PPD to regain mastery-level performance in the variable-modeling condition. For Ivan (Figure 2) and Adam (Figure 3, Set 2), acquisition rates were similar across conditions, suggesting that modeling varied responses in the variable-modeling condition did not interfere with acquisition. However, unlike Adam, Ivan's continued correct responding was differentially lower following mastery in the rote-modeling condition for one set (Set 2, Figure 2). For Ronda (Figure 4), acquisition was notably slower in the variable-modeling condition, taking nearly twice (Set 1) or more than twice (Set 2) as many sessions to reach the mastery criterion.
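The two dependent measures reported above can be sketched in a few lines. This is a hypothetical illustration in Python: the helper names, the example data, and the exact mastery rule (the number of consecutive sessions at or above 90% correct) are our assumptions, not the study's materials.

```python
# Hypothetical sketch of the two dependent measures: per-session number
# of unique (correct, independent) response sequences and percentage of
# correct responding. Names and data are illustrative only.
EXEMPLARS = {"dog", "cat", "horse"}

def is_correct(response):
    """Any ordering of the three exemplars counts as correct (no
    differential consequence was arranged for varied responding)."""
    return len(response) == 3 and set(response) == EXEMPLARS

def session_measures(trials):
    """trials: list of (response_tuple, was_independent). Returns the
    number of unique correct independent sequences and the percentage
    of trials with a correct independent response."""
    correct = [r for r, independent in trials if independent and is_correct(r)]
    unique_sequences = len(set(correct))
    pct_correct = 100.0 * len(correct) / len(trials) if trials else 0.0
    return unique_sequences, pct_correct

def met_mastery(session_pcts, consecutive=3, criterion=90.0):
    """Assumed mastery rule: `consecutive` sessions at/above `criterion`;
    the exact number of consecutive sessions is parameterized here."""
    run = 0
    for pct in session_pcts:
        run = run + 1 if pct >= criterion else 0
        if run >= consecutive:
            return True
    return False
```

For a 10-trial session in which a participant independently emits "dog, cat, horse" on five trials, "cat, horse, dog" on four trials, and an incorrect sequence on one trial, `session_measures` returns two unique sequences and 90% correct.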

DISCUSSION

Our results suggest that although there may be some initial varied responding when providing variable modeling during teaching, such effects may be temporary. Thus, it seems unlikely that prompting and reinforcing one target response per discriminative stimulus, which is typical of DTI, uniquely contributes to rote responding in individuals with ASD. In other words, invariant responding may occur whether or not therapists provide variable models. Further, providing variable modeling led to slightly slower acquisition for some participants, suggesting that there can be a disadvantage to incorporating variable modeling compared to rote modeling, particularly when considered in light of the temporary effects of variable modeling on varied responding. Future research could explore the most efficient procedures for teaching multiple correct responses while also promoting the occurrence and maintenance of varied responding. For example, it is possible that additional intervention components (e.g., instructive feedback; Carroll & Kodak, 2015) could be included with variable modeling.

The temporary effects of the variable-modeling condition on varied responding may not be surprising in light of what we know about the effects of reinforcement. That is, reinforcement increases the future probability of the response that precedes it. A within-session analysis of the response sequences provided on each trial (see Supporting Information 2) revealed that one dominant response sequence emerged both within and across sessions for all participants, which supports the interpretation that the direct effects of reinforcement may have contributed to the development of invariant responding in the variable-modeling condition. Repeatedly engaging in one dominant response may also represent the least effortful pattern of responding. Thus, it seems unlikely that there would be a sustained increase in varied responding in the absence of reinforcement contingencies that require, rather than simply allow, varied responding, at least when prompting is no longer provided. However, additional research exploring the conditions under which varied responding will occur when reinforcement is provided for any correct response is needed, given the discrepancy between our results and those of Carroll and Kodak (2015) during the variable-modeling condition.

Whereas a temporary increase in the number

of unique response sequences was observed only in the variable-modeling condition for at least one set for Victor, Ivan, and Ronda, there was an initial increase across both the rote- and variable-modeling conditions for Set 2 for Ivan and for both Sets 1 and 2 for Adam. Despite efforts to reduce potential carryover effects by using an adapted alternating treatments design, the fact that (a) the conditions were similar in all regards except for the categories and exemplars targeted and (b) the conditions were conducted in rapid alternation may have led to indiscriminate responding. There may have also been carryover from the rote-modeling to the variable-modeling condition such that varied


responding may have persisted longer had variable-modeling and rote-modeling sessions not been intermixed.

Carryover effects may also explain the differences in the level of varied responding observed within the variable-modeling condition in Carroll and Kodak (2015) and in the current study (5 to 10 vs. 3 to 6 cumulative response sequences, respectively; see Supporting Information 1 for a summary of our participants' cumulative unique sequences). Exposure to the variable-modeling-plus-instructive-feedback condition may have increased the level of varied responding in the variable-modeling condition in Carroll and Kodak. Future research should assess the effects of variable modeling in isolation. However, it is worth noting that there were other differences between Carroll and Kodak and the current study that may have contributed to the differences in the level of varied responding during the variable-modeling condition across the studies. Examples include the size of the pool of exemplars, the number of varied sequences that were modeled, and the number of categories included in a session.

It is also worth noting that, unlike the other participants, who met criteria for increasing the prompt delay from 0 s to 2 s within two to four sessions, Ronda required 12 sessions in the variable-modeling condition for Set 1. Initially, during the 0-s prompt delay of the variable-modeling condition, Ronda consistently echoed only a portion of the prompt correctly (e.g., "mouse, frog, mouse"). Because Ronda was able to echo three-part responses when the prompt remained constant in the rote-modeling condition, an inability to echo a three-part response cannot account for these errors. Instead, variations in the sequence of exemplars from trial to trial may have interfered with the stimulus control exerted by the prompt on a given trial (the loss of stimulus control due to competing responses to the same stimuli is consistent with a behavioral interpretation of

forgetting; see Palmer, 1989). This alternative explanation is consistent with the types of errors she emitted. Specifically, repeating an exemplar (e.g., "dog, mouse, dog") accounted for 51% of her errors on prompted responses.

As previously noted, we adopted a translational approach to answer our experimental questions. Evaluating the level of varied responding in the sequence of exemplars allowed us to control the number of exemplars presented in each condition. Using the response sequence as a measure of varied responding also allowed for measurement of recombinative generalization (Axe & Sainato, 2010). That is, after participants learned three exemplars for an intraverbal category, they could presumably emit the exemplars in any order without direct teaching. Toward this potential outcome, we modeled only four of the six possible response sequences. At least one generative response sequence occurred in at least one set for all participants.

We have depicted multiple measures of varied responding to highlight the different information each provides. We used the number of unique response sequences as our primary dependent variable because it gave a summative depiction of the varied responding that occurred initially while also showing that participants engaged in only a single response sequence over time. As a supplementary measure, we also depicted cumulative response sequences for a subset of sessions, starting with the introduction of the PPD and continuing until there were three consecutive sessions with only one dominant response sequence (Supporting Information 2). This was done to show the limited trial-by-trial varied responding and the emergence of one dominant response sequence. In the absence of data on socially acceptable levels of varied responding across types of responding (e.g., initiating a conversation), depicting varied responding in multiple ways may prove useful as researchers identify socially acceptable levels of varied responding, continue to refine


methods of teaching varied responding, and identify treatments that support the persistence of varied responding in the absence of direct reinforcement.

Several limitations of our study are worth noting. First, we did not start by teaching repertoires that have been shown to facilitate the emergence of intraverbal categorization, such as tacting exemplars, tacting categories, and listener-discrimination training by item and category (e.g., Grannan & Rehfeldt, 2012; Miguel, Petursdottir, & Carr, 2005). Several researchers have suggested that emergent responding is likely to be less rote. We chose not to start by teaching these related repertoires because we were primarily interested in using the teaching of intraverbal categorization as a means of evaluating rote versus variable prompts. Second, each session consisted of 10 trials of the same category. We chose to include only one target question per condition to maintain consistency with previous studies that have evaluated the effects of differential reinforcement of variability on response variability (e.g., Contreras & Betz, 2016; Dracobly, Dozier, Briggs, & Juanico, 2017; Susa & Schlinger, 2012). In addition, we anticipated that supplementary interventions such as a lag schedule might be required to increase varied responding in subsequent evaluations if variable modeling was not sufficient. However, it is important to note that massed trials of one target do not allow one to ensure that responding is under appropriate stimulus control of the target question. This concern is somewhat mitigated in our study because we continued to conduct both conditions until mastery was achieved across both sets, and we observed correct first-trial performance across both sets.

Finally, we occasionally continued to conduct sessions beyond the mastery criterion, which may negatively impact the efficacy of contingencies such as lag schedules when they are subsequently applied. Given the importance of intraverbal categorization and variability in vocal

verbal behavior, future researchers should evaluate whether varied responding could be achieved despite a relatively lengthy history of reinforcement of one rote response.

Given the propensity of individuals with ASD to engage in restricted and repetitive behavior, and given the results of the current study showing that invariant responding is likely to occur in the absence of reinforcement for varied responding, it would be useful to evaluate whether the temporary effects of variable modeling would also be observed in typically developing children. That is, it is unknown whether the fleeting effects of variable modeling under contingencies that allow but do not require variability are unique to individuals with ASD. A group comparison across individuals with ASD and peers matched for age and skill level would allow for an evaluation of the generality of our findings across populations. Future research should also be aimed at identifying the conditions under which varied responding will occur (and persist) in the absence of differential reinforcement of varied responding.

REFERENCES

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

Antonitis, J. J. (1951). Response variability in the white rat during conditioning, extinction, and reconditioning. Journal of Experimental Psychology, 42, 273-281. https://doi.org/10.1037/h0060407

Axe, J. B., & Sainato, D. M. (2010). Matrix training of preliteracy skills with preschoolers with autism. Journal of Applied Behavior Analysis, 43, 635-652. https://doi.org/10.1901/jaba.2010.43-635

Carroll, R. A., & Kodak, T. (2015). Using instructive feedback to increase response variability during intraverbal training for children with autism spectrum disorder. The Analysis of Verbal Behavior, 31, 183-199. https://doi.org/10.1007/s40616-015-0039-x

Cihon, T. (2007). A review of training intraverbal repertoires: Can precision teaching help? The Analysis of Verbal Behavior, 23, 123-133.

Contreras, B. P., & Betz, A. M. (2016). Using lag schedules to strengthen the intraverbal repertoires of children with autism. Journal of Applied Behavior Analysis, 49, 3-16. https://doi.org/10.1002/jaba.271

DeLeon, I. G., Fisher, W. W., Rodriguez-Catter, V., Maglieri, K., Herman, K., & Markhefka, J. M. (2001). Examination of relative reinforcement effects of stimuli identified through pretreatment and daily brief preference assessments. Journal of Applied Behavior Analysis, 34, 463-473. https://doi.org/10.1901/jaba.2001.34-463

Dracobly, J. D., Dozier, C. L., Briggs, A. M., & Juanico, J. F. (2017). An analysis of procedures that affect response variability. Journal of Applied Behavior Analysis, 50, 600-621. https://doi.org/10.1002/jaba.392

Grannan, L., & Rehfeldt, R. A. (2012). Emergent intraverbal responses via tact and match-to-sample instruction. Journal of Applied Behavior Analysis, 45, 601-605. https://doi.org/10.1901/jaba.2012.45-601

Jahr, E., & Eldevik, S. (2002). Teaching cooperative play to typical children utilizing a behavior modeling approach: A systematic replication. Behavioral Interventions, 17, 145-157. https://doi.org/10.1002/bin.117

Karsten, A. M., & Carr, J. E. (2009). The effects of differential reinforcement of unprompted responding on the skill acquisition of children with autism. Journal of Applied Behavior Analysis, 42, 327-334. https://doi.org/10.1901/jaba.2009.42-327

Kodak, T., Fuchtman, R., & Paden, A. (2012). A comparison of intraverbal training procedures for children with autism. Journal of Applied Behavior Analysis, 45, 155-160. https://doi.org/10.1901/jaba.2012.45-155

Miguel, C. F., Petursdottir, A. I., & Carr, J. E. (2005). The effects of multiple-tact and receptive discrimination training on the acquisition of intraverbal behavior. The Analysis of Verbal Behavior, 21, 27-41. https://doi.org/10.1007/BF03393008

Neuringer, A., Kornell, N., & Olufs, M. (2001). Stability and variability in extinction. Journal of Experimental Psychology: Animal Behavior Processes, 27, 79-94. https://doi.org/10.1037/0097-7403.27.1.79

Palmer, D. C. (1989). A behavioral interpretation of memory. In L. J. Hayes & P. N. Chase (Eds.), Dialogues on verbal behavior (pp. 261-279). Reno, NV: Context Press.

Rodriguez, N. M., & Thompson, R. H. (2015). Behavioral variability and autism spectrum disorder. Journal of Applied Behavior Analysis, 48, 167-187. https://doi.org/10.1002/jaba.164

Saldana, R. L., & Neuringer, A. (1998). Is instrumental variability abnormally high in children exhibiting ADHD and aggressive behavior? Behavioural Brain Research, 94, 51-59. https://doi.org/10.1016/S0166-4328(97)00169-1

Sindelar, P. T., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8, 67-76.

Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10, 349-367. https://doi.org/10.1901/jaba.1977.10-349

Sundberg, M. L. (2008). Verbal behavior milestones assessment and placement program. Concord, CA: AVB Press.

Susa, C., & Schlinger, H. D. (2012). Using a lag schedule to increase variability of verbal responding in an individual with autism. The Analysis of Verbal Behavior, 28, 125-130.

Weiss, M. J., LaRue, R. H., & Newcomer, A. (2009). Social skills and autism: Understanding and addressing the deficits. In J. Matson (Ed.), Applied behavior analysis for children with autism spectrum disorders (pp. 129-144). New York: Springer.

Received April 25, 2016
Final acceptance March 3, 2018
Action Editor: Tiffany Kodak

SUPPORTING INFORMATION

Additional Supporting Information may be found in the online version of this article at the publisher's website.
