
Language non-selective activation of orthography during spoken word processing in Hindi–English sequential bilinguals: an eye tracking visual world study

Ramesh Kumar Mishra • Niharika Singh

© Springer Science+Business Media Dordrecht 2013

Abstract Previous psycholinguistic studies have shown that bilinguals activate

lexical items of both the languages during auditory and visual word processing. In

this study we examined if Hindi–English bilinguals activate the orthographic forms

of phonological neighbors of translation equivalents of the non-target language

while listening to words either in L1 or L2. We tracked participants’ eye movements

as they looked at an array of written words that contained a phonological neighbor

of the translation equivalent of a simultaneously presented spoken word.

Participants quickly oriented their visual attention towards the phonological neighbor of

the translation equivalent compared to the distractors, suggesting an activation of

the spelling of the non-target lexicon via translation leading to further spreading

activation of related words. Further, this parallel activation of the non-target lexicon

was seen in both the L1–L2 and L2–L1 directions. These results suggest that different-script

bilinguals can automatically activate the orthographic forms of the non-target

lexicon via translation equivalents even when the languages in question do not share

cognates and use different scripts.

Keywords Bilingualism · Translation equivalent activation · Non-cognate · Eye tracking paradigm · Hindi

Introduction

Much psycholinguistic research on the organization of the bilingual lexical memory

system has shown that it is largely language non-selective (Ameel, Malt, Storms, &

R. K. Mishra (✉) · N. Singh

Centre of Behavioral and Cognitive Sciences (CBCS), Allahabad University,

Allahabad 211002, UP, India

e-mail: [email protected]

URL: http://cbcs.ac.in/people/fac/30-r-mishra; http://facweb.cbcs.ac.in/rkmishra


Read Writ

DOI 10.1007/s11145-013-9436-5

Van Assche, 2009; Dijkstra & Van Heuven, 2002; Dimitropoulou, Dunabeitia, &

Carreiras, 2011; Duyck, 2005; Finkbeiner, Forster, Nicol, & Nakumura, 2004; Gollan,

Forster, & Frost, 1997; Grainger, 1993; Grainger & Frenck-Mestre, 1998;

Lagrou, Hartsuiker, & Duyck, 2011; Schoonbaert, Hartsuiker, & Pickering, 2007;

Schoonbaert, Duyck, Brysbaert, & Hartsuiker, 2009; Schulpen, Dijkstra, Schriefers, &

Hasper, 2003; Sunderman & Kroll, 2006). Bilinguals unconsciously and unintentionally

activate both conceptual and phonological structures of the non-target

language during speaking words (Colome, 2001; Costa, Albareda, & Santesteban,

2008; Costa, La Heij, & Navarrete, 2006), listening (Blumenfeld & Marian, 2007),

and during visual word recognition (de Groot & Nas, 1991; Grainger & Frenck-

Mestre, 1998) in any one language. When words in the bilingual’s two languages are

related in form, such as the English word ‘marker’ and the Russian word ‘marka’,

there is activation of one when processing the other (Marian and Spivey, 2003a, b).

However, so far the literature offers no evidence of activation of the spelling of the non-target

language during listening to words in one of the languages, at least in a

cross-modal situation. We wondered whether, when a Hindi–English bilingual hears the Hindi

word bandar (monkey), she also activates the spelling of the English translation

monkey, or words that are related to it phonologically, e.g. money. In this study, we

examined if bilinguals automatically activate the orthographic structures of words in

the non-target lexicon while processing spoken words in one language. We used the

visual world eye-tracking paradigm with Hindi–English bilinguals where the

languages in question do not share orthography. We explored the time-course of

such activation in both Hindi–English and English-Hindi directions to see if there is

any discrepancy.

This issue is important since most studies so far have investigated language non-

selective activation in bilinguals during visual word recognition. Thus, these studies

do not provide any information about what happens when a bilingual simulta-

neously processes speech and written material that belong to different languages.

Every day, as bilinguals, we may watch TV programs in one language while reading a book

in another language. This situation is particularly interesting when both

the languages of the bilingual do not share scripts or have any commonality in

phonology, as is the case with Hindi–English bilinguals or say Chinese-English

bilinguals. An understanding of how bilinguals manage this scenario is important

for any interactive model of bilingual lexical organization, since most bilinguals in

the world use languages that do not share orthography and often they learn to read

and write in different scripts from early on, as is the case with Hindi–English

bilinguals. Eye tracking studies with auditory words and pictures have shown cross-

language activation of phonology (Marian, Blumenfeld, & Boukrina, 2008;

Weber & Cutler, 2004). The cross modal nature of bilingual lexical activations

has been previously observed with the priming paradigm (Schulpen et al., 2003).

Studies with monolinguals have shown that orthographic forms are accessed

during spoken language comprehension (Ziegler & Ferrand, 1998). Learning to read

and write can establish strong connections between spoken phonological forms and

orthographic forms of words (Grainger & Ferrand, 1996; Ziegler & Ferrand, 1998).

Thus, when one hears a word there is automatic activation of its spelling. However,

it is not clear how these dynamics work for bilinguals who use both the languages


constantly in both spoken and written forms. It seems reasonable to expect that

bilinguals who are proficient readers and writers in both their languages would have

developed strong connections between phonological and orthographic structures

across the lexicons. Therefore, one would expect parallel activation of the

orthography in both the languages during listening.

Bilingual processing models such as the Bilingual Interactive Activation model

(BIA+; Dijkstra, Timmermans, & Schriefers, 2000; Dijkstra & Van Heuven,

2002) predict the activation of words in both the languages in a spreading activation

manner during bilingual language processing. The BIA+ model predicts that words

that are orthographically close in both languages will be activated automatically.

However, it is not clear what the scenario would be if the languages in question do

not have any orthographic competitors and use qualitatively different scripts. Moon

and Jiang (2012), in a phoneme monitoring study, observed activation of cross-language

phonological and orthographic information in different-script bilinguals.

Here we use eye tracking methodology to examine the same issue with Hindi–

English bilinguals. Eye tracking has the advantage over other behavioural methods

of being able to capture the automatic nature of cognitive processing. It also gives us

very detailed information regarding the time-course of activation of different kinds of

information.

English is an alphabetic language whereas Hindi is an alphasyllabic one. In

fact, evidence from masked priming studies with different-script bilinguals suggests

that even such bilinguals seem to have an integrated phonological lexicon (Hoshino,

Midgley, Holcomb, & Grainger, 2010; Nakayama, Sears, Hino, & Lupker, 2012).

Thus, it is not clear if bilinguals activate orthographic information during the

auditory processing of words in both their languages.

Thierry and Wu (2010) examined if Chinese-English bilinguals activate the

spellings of the L1 translation equivalents using implicit priming, and recorded

ERPs. Participants judged the relatedness of pairs of English words that sometimes

contained a spelling or sound repetition of their Chinese translation. The authors

found evidence of activation of cross-language phonology and not spelling.

Chinese-English bilinguals did not show any sensitivity towards the manipulation of

spellings in either the auditory or the visual modality. The study was different in

design compared to earlier studies since it exclusively tested participants in English

and used experimental manipulations that did not give the participants any clue

that they were being tested in a bilingual experiment. Previously, Thierry and Wu

(2007) had found evidence for unconscious translation in Chinese-English

bilinguals during foreign language comprehension. However, they did not find

any activation of the spelling of the non-phonological competitor language.

Nevertheless, Thierry and Wu (2010) noted, “Since auditory perception of words

can be influenced by spelling in monolinguals, it remains theoretically plausible that

listening to words in L2 may be associated with implicit activation of the spelling of

L1 translation” (p. 7650). These Chinese-English subjects may not have activated

spellings because they did not learn to read and write in English from childhood,

attaining their proficiency in English only later, as adult students staying in the

UK. Thus, it is necessary to examine this issue with a pair of languages that do

not share scripts and do not contain cognates or cross-language homophones as in


the case with Hindi–English bilinguals, who have acquired reading and

writing abilities early on in both these languages.

In contrast to these ERP results, most eye tracking studies so far have shown

activation of cross-language phonology in pairs of languages that have shared

orthography and thus have a significant amount of phonological similarities

(Blumenfeld & Marian, 2007; Ju & Luce, 2004; Marian et al., 2008; Marian &

Spivey, 2003a; Marian, Spivey, & Hirsh, 2003; Weber & Cutler, 2004). For

example, highly proficient Russian-English bilinguals have been shown to activate

word-form-level competitors, e.g. ‘marka’ in Russian and ‘marker’ in English, while

listening to either one of them (Marian et al., 2003). Blumenfeld and Marian (2007)

found that German-English bilinguals could activate both cognate and non-cognate

cross-language translation pairs. These studies have shown that bilinguals

access the phonology of the non-target language during spoken word processing, albeit

to different degrees depending on their relative fluency in the languages concerned.

Hindi–English bi-scriptal bilinguals learn English as a second language through

formal education and many of them maintain high fluency in the second language.

Hindi and English use different orthographies and have different phonological systems,

and there are no cognates between the languages. Thus this pair provides an optimal

situation to examine if there is an activation of cross-language orthography for such

a bilingual population during spoken word processing in either of the languages.

Considering previous findings with bi-scriptal and relatively unbalanced bilinguals’

performance with visual word recognition, in this study, we explicitly tested the

hypothesis that Hindi–English bilinguals (with a clear L1 dominance but with fair

L2 proficiency) can automatically activate the orthographic forms of cohorts of

translation equivalents in the non-phonological competitor language during spoken

word processing.

In an early study on Hindi–English bilinguals, Kirsner, Brown, Abrol, Chadha,

and Sharma (1980) did not find any evidence of parallel activation of lexicons. In a

lexical decision task, Kirsner et al. (1980) observed facilitation when words were

repeated in the same language but did not find any facilitation when languages were

different. In contrast to this, Sunderman and Priya (2012) examined if fluent Hindi–

English bilinguals need to access words in the other language through activation of

translation equivalents, as same-script bilinguals do. It was observed that during a

translation recognition task, participants faced interference when the critical word

was a phonological cohort of the translation equivalent. The results suggested that

different script bilinguals, particularly highly proficient bilinguals, show evidence of

automatic translation. It is important to note here that Sunderman and Priya (2012)

emphasized a role of the orthography and the script in triggering automatic

translation. The similarities and differences of orthography and their sound to

spelling consistency and other properties could independently affect bilingual word

processing, apart from language proficiency. In another contrast to the predictions of

the RHM model, Kroll and Stewart (1994) found that even highly proficient bilinguals

showed a reliance on translation equivalents in word processing. Similar to

Sunderman and Priya (2012), we tested highly proficient Hindi–English bilinguals

and expected automatic translation from the spoken word leading to further

activation of spelling in the non-phonological competitor language.


Most bilinguals studied to date in different linguistic and cultural contexts

happen to be unbalanced bilinguals. The issue of L2 proficiency is important in the

discussion of cross-language activation of orthographic information. The Revised

Hierarchical Model of bilingual language processing (Kroll, Van Hell, Tokowicz, &

Green, 2010) predicts that it is only bilinguals with low L2 proficiency who translate

L2 words into their L1 words to access meaning. Thus, one would expect

asymmetry in cross-language activation in such bilinguals. However, recent studies

suggest that even unbalanced bilinguals engage in bi-directional

translation of equal magnitude (Dimitropoulou et al., 2011). In fact, the

Hindi–English participants of Sunderman and Priya (2012), who had very high

English proficiency, showed greater translation in the L1–L2

direction, something that the RHM does not predict. This is particularly

important to our study, since we use Hindi–English bilinguals who are living in

India and have high L2 proficiency.

Since we are interested in examining the mapping from spoken word to

orthography in the non-target language, presumably mediated through the activation

of translation equivalents, we adapted the visual world eye-tracking paradigm while

using written words in place of pictures. Eye tracking methodology is particularly

suitable for examining the time-course of cognitive processing. In our study, we were

interested in whether the time-course of activation of the translation differed as a function of

the language direction. Thus, eye movement tracking during listening and looking

can provide data related to the earliest time points in activation of linguistic

information (see Mishra, Olivers, & Huettig, 2013 for more discussion). Eye

movements to the visually presented objects during simultaneous processing of

spoken language have been shown to be highly time-locked to the moment-by-moment

nature of speech perception (Allopenna, Magnuson, & Tanenhaus, 1998; Mishra,

2009; Huettig, Singh, & Mishra, 2011). Further, it is a more natural method to study

several aspects of cognitive processing (Rayner, 2009). It has recently been shown

that it is possible to map online activation of phonology with written words in place

of pictures using the visual world eye-tracking paradigm (Huettig & McQueen,

2007, 2011). These studies have shown that participants can quickly orient their

visual attention towards a written word among a set of distractors if this written

word represents a phonological cohort or matches at the level of semantics or shape.

Written words do not pose the ambiguity that sometimes pictures can pose, since

different subjects may perceive the names of pictures differently. We adapted a

similar design, presented written words in place of pictures, and tracked the

participants’ eye movements as they heard spoken words. In the display, one of the

written words was a phonological cohort of the translation equivalent of the spoken

word in the other language along with three unrelated distractors. Given previous

evidence of automatic activation of translation equivalents in highly proficient

Hindi–English bilinguals (Sunderman & Priya, 2012), we expected that Hindi–

English bilinguals would immediately shift their attention towards the written word

that is the cohort competitor of the translation equivalent of the spoken word

compared to distractors. This prediction is in line with the BIA+ model, which

predicts spreading activation of both phonologically and orthographically

related words in the non-phonological competitor language during bilingual word


processing. We assumed that a significant orientation of attention towards these

phonologically related words of the translation equivalents in the non-phonological

competitor script should be possible only if there is an automatic translation of

spoken words. If there is no automatic translation of the auditory words in the non-

phonological competitor language, then we should not observe any difference in

looks between the phonological competitors and distractors. However, considering

the discrepancy in previous results regarding the issue of directionality and

asymmetry of magnitude in automatic translation, we expected some difference

between the language directions since our bilinguals were not balanced.

Methods

Participants

Forty Hindi–English bilinguals (30 males and 10 females, mean age = 19.9 years,

SD = 2.0 years) participated in the main eye tracking experiment. All the

participants had acquired English as a second language at school through formal

medium of instruction. The mean age of acquisition of English was 4.7 years

(SD = 1.6 years). All the participants were from the Allahabad University student

community. All participants provided informed consent for their participation and

the ethics committee of Allahabad University approved the study.

Participants’ proficiency in their two languages was assessed using a language

background questionnaire that had questions on the native language, languages

known, age of acquisition of L1 and L2, percentage of time exposed currently to L1

and L2, and daily usage of L1 and L2 in both work and non-work related activities.

We also administered listening comprehension tests in both L1 and L2 (Table 1).

The tests were administered by one of the authors, who is herself a fluent bilingual.

Participants filled in a self-rating proforma that had questions on proficiency in

both the languages (L1 and L2) for writing, reading, speaking fluency, and listening

ability on a five-point scale ranging from poor (1) to excellent (5). The t-tests

revealed that the participants differed significantly in their rated proficiencies in

reading, writing, speaking, and listening for Hindi and English (Table 2).

Table 1 Demographic data and daily use of L1 and L2 (in hours), along with scores in the comprehension tests

Mean SD Range

Age in years 19.9 2.0 17–25

Age of acquisition of L2 4.7 1.6 2–8

Years of education 15.2 3.0 10–21

No. of hours for work related activity in L1 3.3 2.3 0–8

No. of hours for work related activity in L2 4.0 2.8 0–8

Passage score in L1 (out of 6) 5.1 0.91 3–6

Passage score in L2 (out of 6) 3.2 1.3 1–5


Material and stimuli

Forty common Hindi nouns were selected that had clear and unambiguous English

translations. Fifteen different Hindi–English bilinguals, who did not participate in

the main eye tracking study, performed a translation agreement task to make sure

that the pairs were correct and unique translations of each other. Similarly, forty

common English words were selected and participants were asked to judge the

accuracy and acceptability of their Hindi translations. Participants were given the

translation of Hindi and English phonological competitors alongside the actual

words in different scripts and were asked to report whether they agreed with the

given translations or not. The average translation agreement for Hindi phonological

competitor words was 100 %, while for English words it was 99.5 %.

Further, we created phonological competitors of these translation equivalents by

changing only the first syllable of each word. For example, if the translation

equivalent of the English word “gun” was ‘बन्दूक’ (bandook), then ‘बन्दर’

(bandar, monkey) was considered a phonological cohort. We rated the translation

equivalents and their corresponding phonological competitors for the degree of

phonological overlap. Ten different Hindi–English bilinguals, who did not partic-

ipate in the main eye tracking study, were asked to judge the phonological similarity

between the translation equivalent of the critical word and the phonological

competitor on a seven-point scale, with “7” representing ‘highly similar sounding’

and “0” representing ‘highly dissimilar sounding’. The phonological similarity

between the translation equivalents of Hindi words and their phonological

competitors was 5.93 (0.03). Similarly, the phonological similarity between trans-

lation equivalents of English words and phonological competitors was 5.8 (0.37).

Since each display had three unrelated distractors along with the phonological

competitor, it was necessary to make sure that the distractors were sufficiently

different in sound from the critical words. We asked the participants to rate the

similarity between the phonological competitors and each of the distractors on a

7-point scale. The ratings revealed that the phonological competitors of the

translation equivalents for the English words were sufficiently different sounding

compared to the distractors (Mean = 0.16, SD = 0.06). This was also the case for

Hindi words compared to the distractors (Mean = 0.17, SD = 0.19).

Each spoken word was combined with a display that had four written words on it

appearing at the centre of four equal sized quadrants (Fig. 1). One of the words was

Table 2 Mean (SD) self rating of proficiency in L1 and L2

Measure Hindi (L1) (means and SDs) English (L2) (means and SDs)

Speaking ability 4.7 (0.5) 2.8 (1.1)**

Auditory comprehension 4.7 (0.6) 3.4 (1.6)**

Writing ability 4.3 (0.91) 3.6 (0.88)**

Reading ability 4.6 (0.5) 3.9 (0.94)**

5-point Likert scale (1 = poor and 5 = excellent)

** Significant differences, i.e. p < .01


a phonological competitor of the translation equivalent of the spoken word and the

other three were unrelated distractors. The phonological competitors appeared in

each quadrant with equal probability in a pseudo random manner. English written

words were presented in Arial font whereas Hindi words were presented in

Krutidev font. The size of each quadrant was 512 × 384 pixels and each word

occupied approximately 120 × 50 pixels in the centre of each quadrant. Words

were written in the Devanagari script in the English–Hindi direction and in

Roman script in Hindi–English direction. For each language direction, the critical

spoken word was embedded in a neutral carrier sentence. Sentences were recorded

on Goldwave by a female native speaker of Hindi.
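
For concreteness, the display geometry just described can be sketched in a few lines of code. The following Python sketch is ours, for illustration only (variable names and the centring arithmetic are assumptions, not the stimulus software used in the study); it computes the four quadrant AOIs on the 1,024 × 768 screen and the approximate position of each written word:

    # Illustrative sketch of the display geometry described above.
    SCREEN_W, SCREEN_H = 1024, 768
    QUAD_W, QUAD_H = SCREEN_W // 2, SCREEN_H // 2   # 512 x 384 quadrants
    WORD_W, WORD_H = 120, 50                        # approximate word extent

    def quadrant_rects():
        """The four quadrant AOIs as (left, top, right, bottom) tuples."""
        return [(c * QUAD_W, r * QUAD_H, (c + 1) * QUAD_W, (r + 1) * QUAD_H)
                for r in range(2) for c in range(2)]

    def word_top_left(rect):
        """Top-left corner that centres a word inside a quadrant AOI."""
        left, top, _, _ = rect
        return (left + (QUAD_W - WORD_W) // 2, top + (QUAD_H - WORD_H) // 2)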

Examples of trials

1. L1 (Hindi) to L2 (English) direction: “Woh mazboot khambha hai”. (That is a strong pillar.) [The critical word in the auditory sentence is “khambha” (pillar),

which was paired with a display containing the written word pillow as the

phonological competitor of pillar.]

2. L2 (English) to L1 (Hindi) direction: “The gun is very old”. [The critical word

in the auditory sentence is “gun” (bandook),

which was paired with a display containing the written word bandar as the

phonological competitor of bandook.]

Fig. 1 Sample trial sequence in the L2–L1 direction with the auditory target word ‘parrot’ paired with a display containing ‘तोप’ (top, tank) as a cohort competitor of the translation equivalent ‘तोता’ (tota), along with three other distractors


These recordings were saved as wave files, sampled at 44.1 kHz,

mono channel. The mean onset time of the critical words in the sentence

for the L1–L2 block was 621.2 ms (SD = 202.1) and for the L2–L1 block was

741.3 ms (SD = 401.6). The statistical difference between the onset of critical word

for the two language directions was not significant, t(78) = 1.6, p > .05.

Procedure

Participants were seated at a comfortable distance from a 17″ LCD colour monitor

with 1,024 × 768 pixel resolution and a screen refresh rate of 75 Hz. Eye

movements were recorded with an SMI high-speed eye-tracking system (SensoMotoric

Instruments, Berlin) running at a sampling rate of 1,250 Hz. The

experiment began after a successful calibration at 13 different locations on the

screen. For each point to be successfully calibrated participants had to fixate at least

for 400 ms. At first, a fixation cross appeared at the centre of the screen for 750 ms

followed by the visual display that had four written words on it. Simultaneously

with the onset of the display, an auditory sentence containing the critical word was

presented through speakers placed on both the sides of the monitor. The display

continued until 2,000 ms after the sentence offset. Participants were given both

written as well as verbal instructions to listen to the sentence carefully and look at

the display. Participants’ eye movements were recorded as they watched the display.

We used a simple look and listen task as used in other similar eye tracking visual

world studies (Huettig et al., 2011). Participants were especially instructed not to

take their eyes off the computer screen at any time. They were told, ‘Please listen

to the sentences carefully; you can look at any word you like. However, please

do not take your eyes off the computer monitor.’

Each experimental session consisted of two blocks of trials. One block of trials

had spoken words in Hindi and a display containing English written words, and the

other block had spoken words in English and a display containing written words in

Hindi. For each participant, the presentation of the order of trials in the block was

random and we varied the order of blocks for participants. Each participant was

given ten sample trials from each block as practice. The experimental paradigm was

explained to them, including that they had to pay attention to the spoken sentence

and look at the words. They were not specifically asked to read the words on the

screen as such.

Results

Fixations and saccades were extracted from the recorded eye tracking data using the

BeGaze analysis software (SensoMotoric Instruments, Berlin). Following a velocity

criterion, movement of the eyes faster than 30 degrees/s from the current location in any

direction was considered a saccade. Viewing was binocular but data from the right

eye was considered for analysis. The data from each participant was analyzed and

coded in terms of fixations, saccades, and blinks. Blinks were not considered part

of fixations and were excluded from the analysis. For calculation of fixations, each


display was divided into four equal quadrants. Each quadrant containing a written

word was considered an AOI (area of interest) for the calculation of fixations. The

blank area around each written word was approximately 512 × 384 pixels in size.

Fixations on the AOIs were counted from the onset of the critical word in the spoken

sentence until 1,000 ms. We divided this time range into five windows of 200 ms

each (Huettig et al., 2011), with the rationale that the first time window serves as the

baseline. Further, each window was divided into 20 ms bins and a fixation to each

quadrant for each bin was calculated.
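
The event-detection and binning steps described above can be illustrated with a minimal sketch; this is our illustration under assumed data structures (arrays of gaze samples), not the BeGaze pipeline actually used, and the pixel-to-degree conversion factor is an assumed placeholder:

    import numpy as np

    def saccade_samples(x_px, y_px, t_ms, thresh_deg_s=30.0, deg_per_px=0.03):
        """Flag inter-sample movements exceeding the 30 degrees/s criterion."""
        dx = np.diff(x_px) * deg_per_px          # horizontal displacement (deg)
        dy = np.diff(y_px) * deg_per_px          # vertical displacement (deg)
        dt = np.diff(t_ms) / 1000.0              # elapsed time (s)
        return np.hypot(dx, dy) / dt > thresh_deg_s

    def binned_aoi_proportions(aoi, t_ms, onset_ms, n_bins=50, bin_ms=20):
        """Proportion of samples in each of the 4 quadrant AOIs for every
        20-ms bin from critical-word onset to 1,000 ms (50 bins)."""
        props = np.zeros((n_bins, 4))
        for b in range(n_bins):
            in_bin = (t_ms >= onset_ms + b * bin_ms) & (t_ms < onset_ms + (b + 1) * bin_ms)
            if in_bin.any():
                props[b] = [np.mean(aoi[in_bin] == q) for q in range(4)]
        return props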

In order to rule out that the appearance of the phonological competitor in any

particular quadrant biased the eye movements in any manner we first did a

preliminary analysis of fixations to it as a function of the quadrant in which it

appeared. The quadrant bias was tested for four different time windows (0–200,

201–400, 401–600 and 601–800 ms). It was found that the difference in the proportion of

fixations to the phonological competitors of the translation equivalents was not

significantly different for any of the time windows [0–200 ms: F(3, 237) = 1.3,

p > .05; 201–400 ms: F(3, 237) = 1.8, p > .05; 401–600 ms: F(3, 237) = 1.5,

p > .05; 601–800 ms: F(3, 237) = .16, p > .05] as a function of the quadrants. It

shows that participants did not preferentially look at the word representing the

phonological competitor appearing in any particular quadrant.

Proportion of fixations

For the statistical analysis, we computed the fixation proportions to the phonological

competitor of the translation equivalent of the spoken word and averaged distractors

for five consecutive time windows, each spanning 200 ms. Comparing proportion of

fixations to the phonological competitors with respect to the averaged distractors in

later time windows against a baseline allows one to see any change in the attentional

bias over time when information from the critical word starts arriving. We followed

this timeline keeping it consistent with other previous visual world studies (Huettig

et al., 2011). It is an implicit assumption in this paradigm that the eye movements

within the first 200 ms would reflect initial baseline activity. Since it takes about

100 ms for the programming of a saccade (Altmann, 2011) during language-mediated

shifts in attention, we assumed that eye movements in this time window would not

reflect processing of the information from the critical spoken word itself. However,

we believed that the 200–400 ms time window would contain eye movements

generated from the information available from the critical word. Later time

windows were added to see any activation in the end as the word unfolded.

For analyzing the bias in visual orientation toward the phonological competitors

compared to the distractors, we calculated the ratio between the proportion of

fixations to the phonological competitors and the sum of fixations to all words

(Huettig et al., 2011). A ratio greater than 0.5 indicates that the phonological

competitors attracted more than half of the total

fixations that occurred in a particular trial, indicating a significant bias. We

compared the phonological competitor/distractor ratio for the L1–L2 and L2–L1

directions by conducting a two-way repeated measures ANOVA, both by subject (F1)

and by item (F2), with time windows (0–200, 200–400, 400–600, 600–800 and


800–1,000 ms) and language direction (L1–L2 and L2–L1) as within-subject

factors. The main effect of time-window on the phonological competitor/distractor

ratio was found to be significant, F1 (4, 156) = 4.9, p = .001; F2(4, 144) = 2.6,

p = .03, suggesting a gradual increase in the phonological competitor/distractor

ratio from baseline until 1,000 ms. However, the main effect of language direction

did not have a significant effect on the phonological competitor/distractor ratio,

F1(1, 39) = 0.67, p = .41; F2(1, 36) = 0.03, p = .95. Similarly, the interaction

between the language direction and time-windows was also not significant,

F1(4, 156) = 0.172, p = .96; F2(4,144) = 0.501, p = .73.
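
As an illustration of the dependent measure used in this analysis, the sketch below (ours; array names and shapes are assumptions made for exposition) computes the competitor/distractor ratio per trial and averages the 20-ms bins into the five 200-ms windows:

    import numpy as np

    def competitor_ratio(comp_fix, total_fix):
        """Fixations to the competitor divided by fixations to all four words;
        values above 0.5 indicate a bias toward the competitor."""
        return comp_fix / total_fix

    def window_means(bin_ratios, bins_per_window=10):
        """Average per-bin ratios (..., 50 bins of 20 ms) into 200-ms windows:
        0-200 (baseline), 200-400, ..., 800-1,000 ms."""
        n_win = bin_ratios.shape[-1] // bins_per_window
        shape = bin_ratios.shape[:-1] + (n_win, bins_per_window)
        return np.nanmean(bin_ratios[..., :n_win * bins_per_window].reshape(shape), axis=-1)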

Further, we analyzed the ratio data by subject (t1) and by item (t2) comparing

them to the baselines for each condition separately. Three trials were excluded from

the analysis in the L1–L2 direction and two in the L2–L1 condition because of

faulty recording. The mean phonological competitor/distractor ratios for time

window starting from 0–200, 200–400, 400–600, 600–800, and 800–1,000 ms, were

calculated. The time window from 0 to 200 ms was taken as a baseline, as it is

assumed that the proportion of fixations to any object during the baseline is not biased,

as this time period is used for programming a saccade. The mean ratio of each

window was compared to the baseline window (See Table 3, Fig. 2).

For the L1–L2 direction, paired t tests showed that the phonological competitor/

distractor ratio during the baseline (0.49) did not differ significantly from the

phonological competitor/distractor ratio for the 200–400 ms time window (0.50),

mean difference = 0.01, 95 % CI: −0.009 to 0.034, t1(39) = 1.13, p = .26; t2(36) =

0.79, p = .43. However, the phonological competitor/distractor ratio during

400–600 ms time window (0.52) differed significantly from the baseline, mean

difference = 0.036, 95 % CI: 0.005 to 0.066, t1(39) = 2.4, p = .02;

t2(36) = 1.4, p = .14. This difference remained statistically significant during the

600–800 ms time window (0.53), mean difference = 0.037, 95 % CI: 0.005 to 0.069,

t1(39) = 2.3, p = .02; t2(36) = 1.76, p = .08, and during the 800–1,000 ms time

window (0.54), mean difference = 0.047, 95 % CI: 0.018 to 0.08, t1(39) = 2.6,

p = .01; t2(36) = 2.08, p = .045.
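
The window-by-window comparisons reported here amount to paired t-tests of each later window against the 0–200 ms baseline. A minimal sketch (ours, assuming a subjects × windows matrix of ratios) is:

    import numpy as np
    from scipy import stats

    def baseline_contrasts(ratios):
        """Paired t-tests of each 200-ms window against the 0-200 ms baseline.
        `ratios` is an (n_subjects, 5) array; column 0 is the baseline."""
        labels = ('200-400', '400-600', '600-800', '800-1000')
        return {lab: stats.ttest_rel(ratios[:, w], ratios[:, 0])
                for w, lab in enumerate(labels, start=1)}

The same function applied to an items × windows matrix yields the by-item (t2) tests.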

However, for the L2–L1 direction, the competitor/distractor ratio during the

baseline (0.49) differed significantly from the phonological competitor/distractor

ratio during the 200–400 ms window (0.52), mean difference = 0.027, 95 % CI:

0.003 to 0.051, t1(39) = 2.3, p = .02; t2(38) = 1.8, p = .06, suggesting an early

Table 3 Comparison of mean fixation ratio for the two language conditions

Bins (ms) | Ratio (L1–L2 direction) | Ratio (L2–L1 direction) | Statistics (t1 df = 78, t2 df = 74)

Baseline | 0.49 | 0.49 | t1 = 0.114, p = .91; t2 = 0.07, p = .95

200–400 | 0.50 | 0.52 | t1 = 0.76, p = .44; t2 = 0.71, p = .477

400–600 | 0.52 | 0.54 | t1 = 0.89, p = .37; t2 = 0.62, p = .51

600–800 | 0.53 | 0.54 | t1 = 0.54, p = .58; t2 = 0.19, p = .91

800–1,000 | 0.54 | 0.55 | t1 = 0.35, p = .72; t2 = 0.51, p = .60


attentional bias towards the phonological competitor. The phonological competitor/

distractor ratio continued to be significant during the 400–600 ms window (0.54),

mean difference = 0.044, 95 % CI: 0.008 to 0.079, t1(39) = 2.5, p = .01;

t2(38) = 2.2, p = .03, and during the 600–800 ms window (0.54), mean differ-

ence = 0.047, 95 % CI: 0.0006 to 0.09, t1(39) = 0.047, p = .04; t2(38) = 1.4,

p = .15. Similarly, the phonological competitor/distractor ratio during the

800–1,000 ms (0.55) also differed significantly from the baseline, mean differ-

ence = 0.054, 95 % CI: 0.002 to 0.10, t1(39) = 2.1, p = .04; t2(38) = 0.81, p = .42.

Saccade latency

It was important to know if there was any asymmetry in the activation of the

orthographic information related to the phonological competitor of the translation

equivalent between the language directions. For this purpose, we calculated the

saccade latency to the phonological competitor for each language direction, relative to the

auditory onset of the critical spoken word. Saccade latency is the temporal gap

between the onset of the critical spoken word and the first saccade

made to the written competitor word. Saccade latencies less than 80 ms (anticipatory) were

excluded from further analysis. An independent t test revealed that there was no

significant difference in the saccadic latency for L1–L2 direction (682.9 ms,

SD = 100.9) and L2–L1 direction (700.0 ms, SD = 111.0), t(78) = 0.72, p = .47.
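
The latency measure can be sketched as follows (our illustration, assuming a list of onset times of saccades landing on the competitor AOI, in ms); launches under 80 ms are discarded as anticipatory, as described above:

    import numpy as np

    def first_valid_latency(saccade_onsets_ms, word_onset_ms, min_latency_ms=80):
        """Latency of the first saccade to the competitor word relative to the
        auditory onset of the critical word, excluding anticipatory launches."""
        latencies = np.asarray(saccade_onsets_ms, dtype=float) - word_onset_ms
        valid = latencies[latencies >= min_latency_ms]
        return valid.min() if valid.size else np.nan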

Fig. 2 Proportion of fixations to the cohort competitor of translation equivalents and the distractors for both the language conditions (L1–L2 and L2–L1)


Discussion

The study examined if Hindi–English bilinguals automatically access the ortho-

graphic information of words in the non-target language while listening to words in

any one language. Participants saw four written words on a display, one of which

was a phonological competitor of the translation equivalent of the spoken word in

the other language along with unrelated distractors. Eye tracking data revealed that

participants quickly oriented their attention towards the written word, which was a

phonological competitor of the translation equivalent of the spoken word in the

other language with the arrival of acoustic information from the critical word.

Furthermore, this activation was equally robust in both the language directions.

Moreover, there was no difference in the saccadic latency toward the phonological

competitor for the two language directions, suggesting that this spontaneous

activation of orthographic information via the activation of translation was equally

strong in both the language directions. Eye movements towards the competitor

emerged because of activation of translation equivalents. The results extend

previous findings that suggest language non-selective access of orthographic

information in bilinguals where the languages are different in terms of their

phonological and orthographic structures (Lagrou et al., 2011; Thierry & Wu,

2010).

Our results extend to bilinguals earlier findings with monolinguals that have shown the

activation of written words during auditory word processing. Our

task was similar to earlier monolingual eye tracking visual world studies like

Salverda and Tanenhaus (2010) and Huettig and McQueen (2011), and used spoken

words with written words on the display, and thus was conducive to examining online

orthographic activation. Our results also replicate previous eye tracking studies in

bilinguals (Blumenfeld & Marian, 2007; Ju & Luce, 2004; Marian & Spivey, 2003a,

b; Weber & Cutler, 2004) showing that one can see parallel activation of semantics

in bilinguals using written words in place of pictures (Perre & Ziegler, 2008; see

also Ziegler & Ferrand, 1998). Using ERPs, it was observed that listeners can access

spelling inconsistencies of spoken words within the first 200 ms of the auditory

word onset. Earlier, Seidenberg & Tanenhaus (1979) had shown that rhyme

judgment for word pairs that were orthographically dissimilar was delayed.

Similarly, Pattamadilok, Morais, De Vylder, Ventura, and Kolinsky (2009) have

shown that orthographic information from spoken words is extracted immediately

on the presentation of the word. More recently, using the eye-tracking visual world

paradigm, Salverda and Tanenhaus (2010) as well as Huettig and McQueen (2011)

have shown that listeners quickly activate orthographic information during spoken

word processing and use this information later to map spoken words onto written

words. Salverda and Tanenhaus (2010) specifically examined whether the mapping of

spoken words onto written words uses phonological or orthographic information by

manipulating the orthographic and phonological overlaps between target and

competitor words differently. It was observed that looks to the targets and

competitors were influenced by the degree of orthographic overlap and not

phonological overlap. Most importantly, these findings suggested that there is

virtually no processing delay between listening to spoken words and activation of


their orthographic information. Specifically, Salverda and Tanenhaus (2010) claimed

that listeners activate the visual forms of words, i.e. spellings, while listening to spoken

words and use this information to identify a written word. Such activation of

orthographic structure in these studies appears to be rather automatic.

Orthographic activation during spoken language processing has been linked to

the acquisition of reading and writing. In the case of bilinguals who have learnt a

second language through a formal instructional medium and who have extensive

practice of reading and writing in L2 for many years, it is expected that they activate

cross-language orthographic units during processing of spoken words in any one of

the languages. For example Hindi–English bilinguals have learnt to read and write

in English from an early age and this should influence their spoken word processing

significantly. In such bilingual populations orthography plays a central role in their

language acquisition and also in their later overall language proficiency. However, it

is important to note that bilinguals’ L2 proficiency can modulate the extent to which

their spoken language processing is affected by activation of orthographic

information cross-linguistically. The results of the current study demonstrate that

bilinguals indeed access orthographic information during spoken word processing

and such access is language non-selective.

Interactive activation models of bilinguals’ visual word processing suggest

that bilinguals activate both phonological and orthographic information in both of

their languages during processing of words in any one language (Dijkstra & Van

Heuven, 2002; Dijkstra, Van Heuven, & Grainger, 1998). However, so far there is

no experimental evidence that shows bilinguals activate orthographic information in

the non-target language during spoken word processing. Our study is the first to

demonstrate that even relatively proficient bilinguals do activate translation

information in the non-target language automatically and in a cross-modal situation.

Our results go beyond recent studies on this issue in showing that language

proficiency does not affect activation of translation equivalents to a significant

extent. Further, with these results we also extend the findings of Salverda and

Tanenhaus (2010) to the bilingual population, showing that during spoken word

processing listeners do activate orthography; we also argue that such activations

are orthographic and not phonologically driven. Especially in our task, this

orthographic activation had to happen through the activation of translation

equivalents where scripts differ and words are non-cognate. This kind of automatic

activation in Hindi–English bilinguals shows the tight links between cross-language

semantics and orthographic units. These activations cannot be phonological since

Hindi and English words and their translations do not have common phonology.

Our results also are in consonance with the findings of Veivo and Jarvikivi (2012)

who have shown that L2 learners’ spoken word recognition is modulated by their

orthographic knowledge in both L1 and L2. Veivo and Jarvikivi (2012) studied

spoken word recognition in L2 in Finnish-French bilinguals using the cross-modal

masked priming method. Relevant to our study, in their Experiment 2, it was found

that written word primes in L1 that were orthographically related to L2 words

significantly facilitated L2 spoken word recognition. However, in that study this

facilitation was modulated by L2 proficiency.


The results add to the growing body of recent evidence that shows parallel and

language non-selective access of lexical information in bilinguals who use different

scripts and languages (Gollan et al., 1997; Jiang & Forster, 2001, Voga & Grainger,

2007). Importantly, these results suggest that bilinguals can activate phonological

and orthographic information in the irrelevant language during auditory processing

of words. These results further extend earlier eye tracking findings with bilinguals

that have also found non-selective activation of cross-language phonology

(Blumenfeld & Marian, 2007; Ju & Luce, 2004; Marian & Spivey, 2003a, b; Weber

& Cutler, 2004). We can assume that the greater number of looks towards the phonological

competitors of the translation equivalents compared to the distractors was due to

participants translating the spoken words instantly. Had this not been the

case, fixations to these phonological competitors of translation equivalents

would not have differed significantly from other unrelated distractors.

Our results have some bearing on the issue of language proficiency affecting

lexical access in bilinguals. Our participants were unbalanced Hindi–English

bilinguals with reasonable L2 proficiency, though their dominant language was

Hindi. Thus, this evidence of automatic translation in such different script bilinguals

is similar to the findings of Sunderman and Priya (2012) in many ways. However,

since our bilinguals were unbalanced and had lower proficiency in L2 compared to

L1, the access to translation during auditory word processing is in consonance with

the assumptions of the RHM model, which claims that only bilinguals with low L2

proficiency indulge in translation for accessing meaning in their L2 words.

Sunderman and Kroll (2006) suggested that low-proficiency bilinguals always keep

the L1 translation equivalents activated in order to comprehend words in L2.

However, we have found activation in both the forward and the backward

directions, which is not predicted by the RHM model for unbalanced bilinguals.

The saccadic latencies for the two language directions did not differ significantly.

Interestingly, when we compared the proportion ratios between language directions,

it was found that for the L2–L1 direction, this ratio during the 200–400 ms time

window was significantly different from the baseline. This was however not the case

with the forward direction. This measure suggests that our bilinguals activated the

translation equivalents somewhat faster in the backward direction compared to the

forward direction. Taken together, these findings are in line with studies that have

found bi-directional translation effects in bilinguals with different proficiency levels

and different scripts (Dimitropoulou et al., 2011; Duyck, 2005; Duyck & Warlop,

2009; Gollan et al., 1997; Grainger & Frenck-Mestre, 1998; Schoonbaert et al. 2007,

Schoonbaert, Holcomb, Grainger, & Hartsuiker, 2010).

The immediate activation of orthographic forms of translation equivalents could

be because of the way these Hindi–English bilinguals have acquired their two

languages. English in India is always acquired through formal instruction. These

bilinguals thus spend a significant amount of time reading and writing in English

apart from speaking. It is a possibility that the activation of orthographic

information in a language non-selective manner has something to do with the high

level of acquaintance of the subjects with the scripts from an early age (Sunderman

& Priya, 2012). In contrast to this situation, we conjecture that, where bilinguals

have acquired their L2 more naturally through the spoken medium only, without


explicit instruction, one may not find immediate activation of orthography. This

however remains a hypothesis and needs further research with different sets of

bilinguals in different scripts and different written language acquisition histories.

Therefore, one way to explain our results is to assume that this long practice of reading

and writing in both the languages has strengthened the orthographic-phonological

systems across the two languages. Thus, the manner in which one acquires

L2 seems to be a significant factor in bilingual lexico-semantic representation.

It is important to note that our subjects were sequential bi-literates since they

acquired reading and writing skills in English later than Hindi. Previous brain

imaging studies with simultaneous and sequential Hindi–English bi-literate

bilingual subjects (Das, Padakannaya, Pugh, & Singh, 2011) have shown that

simultaneous bilinguals have distinct orthography specific cortical networks for

processing Hindi and English orthographies. In contrast to this, the late sequential

bilinguals (subjects similar to ours and more or less from similar educational and

cultural backgrounds from Northern India) had a single network for reading both

Hindi and English. However, it is not clear how this neural fact could be useful in

explaining the language non-selective activation of cross-language phonology or

orthographic information as we saw in our study. Since our subjects were sequential

bilinguals who acquired reading and writing of English at a later age than Hindi, we

assume that they were using the same cortical network for processing both Hindi

and English phonology. Thus, the need to translate from one language to the other

for the purpose of comprehension is probably linked to this single network. On the

other hand, if subjects are simultaneous bi-literates, who develop native-like

competence in L2, and who have distinct cortical networks for each language, it

may be the case that they will not indulge in automatic translation, since the

processing routes are different. The current discussions in the psycholinguistic

models of bilingual lexical memory organization do not give much attention to the

literacy background of the bilinguals. Thus it is important to investigate in future

research how simultaneous and sequential bi-literate bilinguals differ in their

language non-selective activation of cross-language phonology and orthography.

Which bilingual language-processing model can explain the pattern of results

obtained in this study? Currently there is no model dealing with bilingual

memory organization that can explain the cross-language activations seen during

simultaneous processing of auditory and visual information. However, we have

already explained how the pattern of data is compatible with the prediction

of the RHM model, particularly the finding that low-proficiency bilinguals activate

the translation equivalents for processing L2 words. We also pointed out, though, that

the appreciable amount of activation seen in the L2–L1 direction is not predicted

by this model. On the other hand, the BIA+ model predicts a significant amount of

language non-selective activation in both language directions. This model, being

interactive in nature, predicts the simultaneous activation of both phonological and

orthographic information in both the languages from input in any one language in a

spreading activation manner (Dijkstra et al., 1998). Importantly, we have shown that

with auditory presentation of words, bilinguals activate lexical information in the

irrelevant language where the scripts do not match in their basic patterns or in

phonology. Since these two dominant models were developed primarily to explain


language production and visual word recognition data respectively, and with an increasing

amount of eye tracking research on bilingual lexical activation, it is important to

extend the models to accommodate this cross-modal activation.

Finally, we would like to consider if our design in any manner influenced the eye

movements or induced any strategy in the participants. One might argue that the

participants found out that one of the written words is somehow related to the

translation of the spoken word among the four words and thus looked at it

preferentially. However, it has been the practice in all eye tracking visual world

studies on this and other issues to present a set of objects, i.e. pictures or words, and

track eye movements. This has been referred to as the closed-set problem in the visual

world paradigm, where it is a possibility that the small number of objects presented

may constrain the type of the ocular response seen (Trueswell & Tanenhaus, 2005).

It is also possible that participants developed a strategy in shifting the eyes towards

the phonological competitors since they could already read the four words before

the critical word arrived. One way to rule this out would be to present single words

in place of sentences and see how the eye movement patterns change. A later eye

tracking study (Singh & Mishra, submitted) used single auditory words and

presented pictures to bilingual participants. Even in this case, without a preview, the

data show quick activation of translation within the first 200 ms. Nevertheless, this

is an important technical issue that future eye tracking studies may look at.

The other limitation could be that we did not give any explicit task to the

participants. Since our primary dependent measure had to do with an oculomotor

response, a following manual response would not have indicated much. Further, our

design is similar to many other previous visual world eye tracking studies on

monolinguals where no tasks were given and convincing results have been obtained

(Huettig & Altmann, 2005; Huettig & McQueen, 2007). Thus, it is unlikely that the

present set of results is affected by these possibilities, since the display did

not contain any direct referent of the spoken word in any manner and only had a

word that was related in phonology to the translation equivalent. Future research on

bilinguals with this technique may address these issues.

Conclusion

The results of this study with Hindi–English subjects were novel in many ways and

extend previous findings. First, the study showed that even relatively proficient bilinguals

activate translation equivalents during cross-modal word processing and do so in

both language directions. Secondly, it shows that during spoken word processing

such bilinguals automatically access orthographic information in the other language

and this is mediated through activation of translation equivalents. Finally and most

importantly, this study is the first one that explicitly tested cross-modal activation of

orthography using eye tracking, whereas most previous studies looked at visual

word recognition, and the few relevant eye tracking studies looked at phonological

activation during picture processing only. Future studies should examine the nature

of this phenomenon in different bilinguals and in developmental populations, and also

with bilinguals who use different types of scripts. We conclude that the observed


activation of orthography is a result of a high level of training in reading and writing,

as Hindi–English bilinguals have learnt their L2 formally. Therefore future studies

on bilingualism should contrast bilinguals who have acquired their L2 formally as

opposed to those who have grown up in an L2 environment on tasks exploring cross-

language activations.

Acknowledgments Niharika Singh was supported with a junior research fellowship on a Cognitive

Science initiative grant awarded to Ramesh Kumar Mishra by the Department of Science and

Technology.

Appendix 1


Appendix 2


References

Allopenna, P. D., Magnuson, J. S., & Tanenhaus, M. K. (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38(4), 419–439.
Altmann, G. T. M. (2011). Language can mediate eye movement control within 100 milliseconds, regardless of whether there is anything to move the eyes to. Acta Psychologica, 137, 190–200.
Ameel, E., Malt, B. C., Storms, G., & Van Assche, F. (2009). Semantic convergence in the bilingual lexicon. Journal of Memory and Language, 60, 270–290.
Blumenfeld, H., & Marian, V. (2007). Constraints on parallel language activation in bilingual spoken language processing: Examining proficiency and lexical status using eye-tracking. Language and Cognitive Processes, 22, 633–660.
Colomé, A. (2001). Lexical activation in bilinguals' speech production: Language-specific or language-independent? Journal of Memory and Language, 45, 721–736.


Costa, A., Albareda, B., & Santesteban, M. (2008). Assessing the presence of lexical competition across languages: Evidence from the Stroop task. Bilingualism: Language and Cognition, 11, 121–131.
Costa, A., La Heij, W., & Navarrete, E. (2006). The dynamics of bilingual lexical access. Bilingualism: Language and Cognition, 9, 137–151.
Das, T., Padakannaya, P., Pugh, K. R., & Singh, N. C. (2011). Neuroimaging reveals dual routes to reading in simultaneous proficient readers of two orthographies. NeuroImage, 54, 1476–1487.
De Groot, A. M. B., & Nas, G. L. (1991). Lexical representation of cognates and noncognates in compound bilinguals. Journal of Memory and Language, 30, 90–123.
Dijkstra, T., Timmermans, M., & Schriefers, H. (2000). On being blinded by your other language: Effects of task demands on interlingual homograph recognition. Journal of Memory and Language, 42, 445–464.
Dijkstra, T., & Van Heuven, W. J. B. (2002). The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition, 5, 175–197.
Dijkstra, T., Van Heuven, W. J. B., & Grainger, J. (1998). Simulating cross-language competition with the bilingual interactive activation model. Psychologica Belgica, 38, 177–196.
Dimitropoulou, M., Duñabeitia, J. A., & Carreiras, M. (2011). Two words, one meaning: Evidence of automatic co-activation of translation equivalents. Frontiers in Psychology, 2, 188.
Duyck, W. (2005). Translation and associative priming with cross-lingual pseudohomophones: Evidence for nonselective phonological activation in bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1340–1359.
Duyck, W., & Warlop, N. (2009). Translation priming between the native language and a second language: New evidence from Dutch–French bilinguals. Experimental Psychology, 56, 173–179.
Finkbeiner, M., Forster, K. I., Nicol, J., & Nakamura, K. (2004). The role of polysemy in masked semantic and translation priming. Journal of Memory and Language, 51, 1–22.
Gollan, T. H., Forster, K. I., & Frost, R. (1997). Translation priming with different scripts: Masked priming with cognates and noncognates in Hebrew–English bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 1122–1139.
Grainger, J. (1993). Visual word recognition in bilinguals. In R. Schreuder & B. Weltens (Eds.), The bilingual lexicon (pp. 11–25). Amsterdam: John Benjamins.
Grainger, J., & Ferrand, L. (1996). Masked orthographic and phonological priming in visual word recognition and naming: Cross-task comparisons. Journal of Memory and Language, 35, 623–647.
Grainger, J., & Frenck-Mestre, C. (1998). Masked translation priming in bilinguals. Language and Cognitive Processes, 13, 601–623.
Hoshino, N., Midgley, K. J., Holcomb, P. J., & Grainger, J. (2010). An ERP investigation of masked cross-script translation priming. Brain Research, 1344, 159–172.
Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96, 23–32.
Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic, and shape information in language-mediated visual search. Journal of Memory and Language, 54, 460–482.
Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068–1084.
Huettig, F., Singh, N., & Mishra, R. K. (2011). Language-mediated visual orienting behavior in low and high literates. Frontiers in Language Sciences, 2, 285.
Jiang, N., & Forster, K. I. (2001). Cross-language priming asymmetries in lexical decision and episodic recognition. Journal of Memory and Language, 44, 32–51.
Ju, M., & Luce, P. A. (2004). Falling on sensitive ears. Psychological Science, 15, 314–318.
Kirsner, K., Brown, H., Abrol, S., Chadha, N., & Sharma, N. (1980). Bilingualism and lexical representation. Quarterly Journal of Experimental Psychology, 32, 585–594.
Kroll, J. F., & Stewart, E. (1994). Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. Journal of Memory and Language, 33, 149–174.
Kroll, J. F., Van Hell, J. G., Tokowicz, N., & Green, D. W. (2010). The Revised Hierarchical Model: A critical review and assessment. Bilingualism: Language and Cognition, 13, 373–381.
Lagrou, E., Hartsuiker, R. J., & Duyck, W. (2011). Knowledge of a second language influences auditory word recognition in the native language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 952–965.
Marian, V., Blumenfeld, H. K., & Boukrina, O. V. (2008). Sensitivity to phonological similarity within and across languages. Journal of Psycholinguistic Research, 37, 141–170.


Marian, V., & Spivey, M. (2003a). Bilingual and monolingual processing of competing lexical items. Applied Psycholinguistics, 24, 173–193.
Marian, V., & Spivey, M. (2003b). Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism: Language and Cognition, 6, 97–115.
Marian, V., Spivey, M., & Hirsch, J. (2003). Shared and separate systems in bilingual language processing: Converging evidence from eye tracking and brain imaging. Brain and Language, 86, 70–82.
Mishra, R. K. (2009). Interface of language and visual attention: Evidence from production and comprehension. Progress in Brain Research, 176, 277–292.
Mishra, R. K., Huettig, F., & Olivers, C. N. (2013). Automaticity and conscious decisions during language-mediated eye gaze in the visual world. Progress in Brain Research, 202, 135–149.
Moon, J., & Jiang, N. (2012). Non-selective lexical access in different-script bilinguals. Bilingualism: Language and Cognition, 15, 173–180.
Nakayama, M., Sears, C. R., Hino, Y., & Lupker, S. J. (2012). Cross-script phonological priming for Japanese–English bilinguals: Evidence for integrated phonological representations. Language and Cognitive Processes, 27, 1563–1583.
Pattamadilok, C., Morais, J., De Vylder, O., Ventura, P., & Kolinsky, R. (2009). The orthographic consistency effect in the recognition of French spoken words: An early developmental shift from sublexical to lexical orthographic activation. Applied Psycholinguistics, 30, 441–462.
Perre, L., & Ziegler, J. C. (2008). On-line activation of orthography in spoken word recognition. Brain Research, 1188, 132–138.
Rayner, K. (2009). The thirty-fifth Sir Frederick Bartlett lecture: Eye movements and attention during reading, scene perception, and visual search. Quarterly Journal of Experimental Psychology, 62, 1457–1506.
Salverda, A. P., & Tanenhaus, M. K. (2010). Tracking the time course of orthographic information in spoken-word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(5), 1108.
Schoonbaert, S., Duyck, W., Brysbaert, M., & Hartsuiker, R. J. (2009). Semantic and translation priming from a first language to a second and back: Making sense of the findings. Memory & Cognition, 37(5), 569–586.
Schoonbaert, S., Hartsuiker, R. J., & Pickering, M. J. (2007). The representation of lexical and syntactic information in bilinguals: Evidence from syntactic priming. Journal of Memory and Language, 56, 153–171.
Schoonbaert, S., Holcomb, P. J., Grainger, J., & Hartsuiker, R. J. (2010). Testing asymmetries in noncognate translation priming: Evidence from RTs and ERPs. Psychophysiology, 48, 74–81.
Schulpen, B. J. H., Dijkstra, A. F. J., Schriefers, H. J., & Hasper, M. (2003). Recognition of interlingual homophones in bilingual auditory word recognition. Journal of Experimental Psychology: Human Perception and Performance, 29, 1155–1178.
Seidenberg, M. S., & Tanenhaus, M. K. (1979). Orthographic effects on rhyme monitoring. Journal of Experimental Psychology: Human Learning & Memory, 5, 546–554.
Singh, N., & Mishra, R. K. (submitted). Automatic activation of translation equivalents in bilinguals leads to interference in a visual task: A visual world eye tracking study with Hindi–English bilinguals.
Sunderman, G., & Kroll, J. F. (2006). First language activation during second language lexical processing: An investigation of lexical form, meaning, and grammatical class. Studies in Second Language Acquisition, 28, 387–422.
Sunderman, G. L., & Priya, K. (2012). Translation recognition in highly proficient Hindi–English bilinguals: The influence of different scripts but connectable phonologies. Language and Cognitive Processes, 27(9), 1265–1285.
Thierry, G., & Wu, Y. J. (2007). Brain potentials reveal unconscious translation during foreign language comprehension. Proceedings of the National Academy of Sciences of the United States of America, 104, 12530–12535.
Thierry, G., & Wu, Y. J. (2010). Chinese–English bilinguals reading English hear Chinese. The Journal of Neuroscience, 30, 7646–7651.
Trueswell, J. C., & Tanenhaus, M. K. (Eds.). (2005). Processing world-situated language: Bridging the language-as-action and language-as-product traditions. Cambridge, MA: MIT Press.
Veivo, O., & Järvikivi, J. (2012). Proficiency modulates early orthographic and phonological processing in L2 spoken word recognition. Bilingualism: Language and Cognition, 1(1), 1–20.


Voga, M., & Grainger, J. (2007). Cognate status and cross-script translation priming. Memory & Cognition, 35, 938–952.
Weber, A., & Cutler, A. (2004). Lexical competition in non-native spoken-word recognition. Journal of Memory and Language, 50, 1–25.
Ziegler, J. C., & Ferrand, L. (1998). Orthography shapes the perception of speech: The consistency effect in auditory word recognition. Psychonomic Bulletin & Review, 5, 683–689.
