Sunday, November 27, 2011

Ultimate Question In Management

Harvard Business School - Harvard University
Published: November 3, 2011

Is the Ultimate Question, Why?

What might be called the "ultimate question exercise" this month yielded a number of interesting responses. First, nearly everyone was willing to play the game in suggesting their own favorite ultimate questions for managers. Nominees took on several forms. But many proposed questions were clustered around inquiries about willingness to recommend, trust, and authenticity.
Among suggested questions that managers might ask of themselves were: "What positive, relevant impact can I make?" (from Clinton Coker), "How do you continuously inspire the passion and confidence for every employee to perform their role at the expected level or better?" (from Barry Cohen), "Will the world be a better place if I do my job well?" (from Mo Bjornestad), "Do you want this position so you can serve—or so you can be served?" (from David Witt), "What are you doing to enhance (your) … company's, (your) … team's, (your) … organization's development"? (constructed from comments by Marlis Krichewsky), and "What are you doing for others?" (from Dennis Hopwood). One of the more interesting questions of this type was proposed by Pete DeLisi: "Do you make your people feel like they want to be a better man or woman?"
Favorite "ultimate" questions that managers could ask of their reports included several adherents to "Do you trust your boss?" However, Peter Bowie observed that "fundamental to trust is integrity so I would build the foundation on integrity." Other favorites included "the extent to which I feel my manager or my organization is being real with me (… doing what it says on the tin)" (from Jackie Le Fevre), "Does our value system determine the behavior of management or does the behavior of management determine our value system?" (from Athan Sunderland), "Is (our) … organization a place where balanced risks can be taken without causing too much career retardation?" (from Yadeed Lobo), and "Is my manager characterized by authenticity and passion?" (implied by a response from Harry Abrikian). Steve Sheinkopf would alter the process to ensure that he receives feedback only from his "A players, not by the Bs (or Cs)."
The coauthors of The Ultimate Question 2.0 weighed in as well with some background on their choice of "How likely is it you would recommend us to a friend?" as the ultimate question. Fred Reichheld put it this way in describing the question behind the question: "our candidate for the ultimate question in business (and in life) is: Have we treated others … with dignity and honor—in a manner consistent with the Golden Rule?" And Rob Markey pointed out that "Trust does lie at the heart of someone's likelihood to recommend …" He went on to say that "we have always advocated pairing the likelihood to recommend question with what is perhaps even more important: 'Why?'" So is the ultimate question really "Why?"? What do you think?

Original Article

The publication this month of The Ultimate Question 2.0 (revised from an earlier edition) provides us with an opportunity to ask ourselves just what is the ultimate question in management.
In their book, Fred Reichheld and Rob Markey remind us of the simplicity of the Net Promoter Score (NPS). It is derived from answers to one question, "How likely is it you would recommend us to a friend?" The NPS has become so popular that, as a customer, you quite likely have been asked that question in the past couple of months. Those replying with a 9 or 10 (the most positive) on an 11-point scale (0 to 10) are "promoters"; those answering 7 or 8 are "passives"; and anything from 0 to 6 makes them "detractors." Subtract the proportion of detractors from the proportion of promoters and you get a "net promoter score" that can range anywhere from +100 to -100.
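To make the arithmetic concrete, here is a minimal sketch in Python of how an NPS might be computed from a set of 0-to-10 responses; the function name and sample data are illustrative, not taken from the book.

```python
def net_promoter_score(ratings):
    """Compute an NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, passives 7-8, detractors 0-6; the NPS is the
    percentage of promoters minus the percentage of detractors, so it can
    range from +100 to -100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, and 2 detractors out of 10 responses -> +30
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 3, 6]))
```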
And that's it. Tracking the net promoter score, according to the authors, can lead to improvements in both management and performance.
As managers and students of management, we have a tendency to want to simplify things. Evidence of this is the plethora of management books with single word titles such as Accountability, Transparency, and Teamwork. We search for the one key to management success. Based on recent research, I have my own candidate for that "one key thing": trust. (There's precious little trust in government, Wall Street, and business in general these days.) I found a strong correlation between trust, loyalty, engagement, and "ownership" among employees in a sample of organizations I examined. Respondents in the study made a convincing case that trust was absolutely essential to the successful adoption of policies and practices necessary to implement any strategy. For example, several managers testified to the importance of the relationship between trust and the ability to achieve speed in getting things done. It's a topic that Stephen M. R. Covey wrote persuasively about several years ago in his book, The Speed of Trust. So for me one candidate "ultimate question" would be "Do you trust your manager?" or "Do you trust your organization?"
My study led to an exploration of the underpinnings of trust, as suggested by related survey data. One major determinant is whether a manager or the organization does what it says it will do, whether it lives up to "the deal" on things important to an employee, whether it meets that employee's expectations. So another "ultimate question" might well be "Does your manager do what she says she will do?" or "Does your organization do what it says it will do?"
What is the ultimate question in management? Or do you object to playing this game? The Net Promoter Score certainly has its detractors. All such questions are efforts to provide simple guideposts in a very complex process. Performance measurement can be confusing, leading to inaction or, worse yet, inappropriate action. Can an "ultimate question" have a useful management function? If so, what's yours? What do you think?

To read more:

Stephen M. R. Covey with Rebecca R. Merrill, The Speed of Trust: The One Thing That Changes Everything (New York: Free Press, 2006).
Fred Reichheld and Rob Markey, The Ultimate Question 2.0 (Revised and Expanded Edition): How Net Promoter Companies Thrive in a Customer-Driven World (Boston: Harvard Business Press, 2011).
Jim Heskett's latest book, The Culture Cycle, was published in September.

Friday, November 25, 2011

Walking Through Doorways Causes Forgetting

The Quarterly Journal of Experimental Psychology
Published: May 10, 2011



Previous research using virtual environments has revealed a location-updating effect in which there is a decline in memory when people move from one location to another. Here we assess whether this effect reflects the influence of the experienced context, in terms of the degree of immersion of a person in an environment, as suggested by some work in spatial cognition, or by a shift in context. In Experiment 1, the degree of immersion was reduced by using smaller displays. In comparison, in Experiment 2 an actual, rather than a virtual, environment was used, to maximize immersion. Location-updating effects were observed under both of these conditions. In Experiment 3, the original encoding context was reinstated by having a person return to the original room in which objects were first encoded. However, inconsistent with an encoding specificity account, memory did not improve by reinstating this context. Finally, we did a further analysis of the results of this and previous experiments to assess the differential influence of foregrounding and retrieval interference. Overall, these data are interpreted in terms of the event horizon model of event cognition and memory.

Work on event cognition has revealed a location-updating effect, which is the finding that when people pass through a doorway to move from one location to another, they forget more information than if they do not make such a shift (Radvansky & Copeland, 2006; Radvansky, Tamplin, & Krawietz, 2010). In this work, the environments people moved through were virtual ones. This effect is in line with other research in text comprehension that shows that memory declines when there has been a shift in location (e.g., Curiel & Radvansky, 2002; Morrow, Greenspan, & Bower, 1987; Radvansky & Copeland, 2010; Radvansky, Copeland, & Zwaan, 2003; Rinck & Bower, 1995). Essentially, a shift at an event boundary introduces a need to update one's understanding of the ongoing events, and this updating process is effortful. Also, this finding is consistent with work on event segmentation theory (Kurby & Zacks, 2008; Swallow, Zacks, & Abrams, 2009; Zacks, Speer, & Reynolds, 2009). Specifically, as people parse events, information that was present prior to an event boundary, such as a shift in location, becomes less available after the shift. The aim of the current study is to explore whether this finding is dependent on how the environments are experienced.
In Radvansky and Copeland's (2006) original work, people progressed through a multiroom virtual environment using a large 66″ diagonal display screen, with people sitting about 1 metre away, to provide a high degree of immersion. Each room had one or two tables. A person walked towards the table, set down one object (coloured solids, such as a red cube or a blue wedge), picked up another, walked to the next table, which was either across the large room or in another room, and so on. The critical comparison was whether there was a shift to a new room or not. Note that the distance travelled was held constant, regardless of room change.
At critical points, people were probed with the name of an object (e.g., red cube) either halfway across a large room (no-shift condition) or just after having entered a new room (shift condition). Following Glenberg, Meyer, and Lindem (1987), positive responses were to be made if the probe was either the object that was being carried or the one that was just set down. Negative responses were to be made to probes that were recombined object and colour names from the two positive objects. So, if the object set down was a yellow cone, and the carried object was a blue wedge, a negative probe could be either “yellow wedge” or “blue cone”. Note that the objects could not be seen when probed. The carried object disappeared when it was picked up, and people turned “their backs” from the object that had been set down. Moreover, the probes did not occur at every possible location, which decreased the degree to which people could anticipate them.
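As an illustration of how negative probes of this kind could be generated by recombining the colour and shape names of the two positive objects, here is a brief sketch in Python; the paper does not provide its generation code, so the function and variable names below are purely illustrative.

```python
def negative_probes(set_down, carried):
    """Recombine the colours and shapes of the two positive objects.

    For example, set_down = ('yellow', 'cone') and carried = ('blue', 'wedge')
    yield the negative probes 'yellow wedge' and 'blue cone'.
    """
    (colour1, shape1), (colour2, shape2) = set_down, carried
    return [f"{colour1} {shape2}", f"{colour2} {shape1}"]

print(negative_probes(("yellow", "cone"), ("blue", "wedge")))
# ['yellow wedge', 'blue cone']
```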
The results showed a location-updating effect in which people made more errors if they had moved to a new room. This accuracy difference was also supported by a response time difference, with people responding slower to probes in the shift than in the no-shift condition. Thus, event model updating compromised memory.
Recently we have been developing the event horizon model of event cognition and memory. This model has five components: (a) Events can be segmented, and different event models are created with people processing one at a time; (b) information in the current event that is being actively processed is foregrounded; (c) there is retrieval facilitation for noncompetitive retrieval; (d) there is retrieval interference for competitive retrieval; and (e) there is the storage of causal relations among events. Aspects of this model can be drawn upon to explain the location-updating effect. Of particular concern here are the first, second, and fourth components.
First, event segmentation occurs when an event boundary is encountered, such as a person moving from one room to another, and a new event model may be created and stored in memory (e.g., Kurby & Zacks, 2008; Swallow et al., 2009; Zacks et al., 2009). The event model for the prior event then declines in availability. Note that, for simplicity, we assume that there can only be one event model active at one time. Once an event model becomes deactivated, it will decline in availability until it reaches some background level of activation.
Second, the event model that is currently active in working memory is foregrounded, and it is easier to retrieve information from that event (e.g., Glenberg et al., 1987; Zwaan, 1996). This is to say that the current event model occupies working memory, and available processing capacity is directed to it.
Finally, and particularly important in these experiments, memory for the objects was assessed using a recognition probe. Thus, a person must choose one memory trace to verify that information. When people move an object from one location to another, it is now associated with two locations—the one that it was picked up in, and the one where it was carried. Thus, there may be two event models that contain the target information, which compete with one another at retrieval, producing interference, and making retrieval slower and more error prone (e.g., Radvansky, 1999). In essence, this is a kind of a fan effect (Anderson, 1974), and the more locations an object is in, the harder it is to retrieve one particular event model (Radvansky, 1998, 1999, 2005, 2009; Radvansky & Copeland, 2006; Radvansky, Spieler, & Zacks, 1993; Radvansky & Zacks, 1991), even though in this case they would all support the same response.
An alternative account of the location-updating effect is that it is an artefact of the degree of immersion a person is experiencing while navigating the environment. First, it may be that this effect requires a higher degree of immersion. This has been referred to as the degree of presence that is derived from a mediated experience (Lombard & Ditton, 1997). For example, a radio broadcast or a book would provide lower degrees of presence than a television show or a movie. Moreover, virtual-reality technologies would provide an even higher degree of presence. This would be consistent with some research on mental maps showing differences between memory after learning a layout of an environment while being immersed in it or learning it from a map (e.g., McNamara, Altarriba, Bendele, Johnson, & Clayton, 1989), although it should be noted that this work involved different perspectives as well as different degrees of immersion. From this view, a greater sense of immersion provides a richer representation of the environment, thereby allowing cognition to be more influenced by structural characteristics. Experiment 1 addressed whether decreasing the degree of immersion, by using standard computer monitors, would impact the presence of a location-updating effect.
Alternatively, it may be that virtual environments, while more immersive than those experienced with standard computer monitors, are not complete substitutes for reality. As such, the virtual environment may still be distinct from processing event information in a real situation (e.g., McNamara et al., 1989). As such, in Experiment 2, we increased the degree of immersion by making the situation maximally immersive by testing for a location-updating effect in actual reality.
Finally, Experiment 3 addressed an alternative interpretation of the location-updating effect. Previous research has shown that environmental factors affect memory. The encoding specificity phenomenon posits that information learned in one environment is retrieved better when retrieval occurs in the same context (Thomson & Tulving, 1970). We tested whether the location-updating effect was due to the updating of event models (i.e., the number of rooms entered) or the encoding specificity phenomenon (i.e., whether encoding and retrieval occurred in the same context).
The aim of Experiment 1 was to assess what sort of impact reducing the degree of immersion would have on the location-updating effect. If the effect requires a higher degree of immersion, perhaps because event updating requires that a person experience the event more directly as a part of the event, with the concomitant influences of the structural characteristics of the environment, then reducing screen size would reduce the feeling of immersion and the location-updating effect. Alternatively, if the location-updating effect is due to the tracking of information across events, regardless of whether one is embedded in that environment, then the location-updating effect would still be evident. In Experiment 1, we reduced the degree of immersion by using standard 17″ diagonal monitors rather than a 66″ diagonal screen. With the smaller monitors, the virtual environment does not fill the visual field as much as with the larger display. Larger displays allow for greater immersion, giving the impression of being contained and present within the virtual world (e.g., Bystrom, Barfield, & Hendrix, 1999).

Method

Participants

Fifty-five people (31 female) were recruited from the University of Notre Dame participant pool and were given partial course credit for their participation.

Materials and apparatus

The virtual spaces were created using the Valve Hammer program (Valve Software, 2003), which is used to create environments for the Half-Life video game. For this experiment the displays were standard 17″ diagonal monitors. The virtual space was a 55-room environment with rooms of two possible sizes, with larger rooms being twice as long as small rooms. This difference in room size allowed for the distance travelled to be equated in the shift and no-shift conditions. Included in each room were one or two rectangular tables. Each table was placed along a wall. For the small rooms there was only a single table, whereas for the larger rooms there was a table in each half of the room. At one end of the table was the object to be picked up. The other half of the table was empty for the object carried from the previous room to be set down. Additionally, the two doorways in a room were never on the same wall so participants did not repeat any part of the path.
The objects were combinations of colours and shapes. The colours used were: red, orange, yellow, green, blue, purple, white, grey, brown, and black. The shapes were: cube, wedge, pole, disc, cross (X), and cone. All combinations of colours and shapes were possible, and they were all used, although some were not probed for. A given shape–colour combination did not repeat as an object in the experiment. People moved through the virtual environment using the left hand to press the arrow keys on the computer keyboard.

Procedure

After signing an informed consent form, people were seated about 0.67 metres from the display. They were told that the task was to pick up an object from the table, move to the next one by either walking across a large room (no shift) or passing through a doorway to the next room (shift), place the object on the empty part of the table, pick up the next object, and so forth. Picking up and putting down objects was done by touching the appropriate end of the table.
To ensure that people progressed through the rooms in the required order, after a person had entered a room, the door behind the person closed. The door to the next room did not open until the person put the object they were carrying on the table and picked up the new object.
There were 48 probe trials. Thus, not every possible location was accompanied by a probe. For the probe trials, immediately upon either travelling halfway across a large room or entering a new room, people were presented with a probe that appeared in the middle of the screen. People were to respond “yes” if the probe was either the object that was currently being carried or the one that had just been set down. They were to respond “no” to all others. Negative probes were generated by recombining the object and colour name for the two positive objects. For example, if the carried object was a white cube, and the set-down object was a red wedge, a negative probe might be “red cube”. Responses were made by pushing one of two buttons on a computer mouse held in the right hand. Half of the probes in each condition occurred after a spatial shift, and half did not. There were 24 positive probes and 24 negative probes. The experimental procedure typically lasted between 15 and 20 minutes.

Results and discussion

The error rate and response time data are summarized in Table 1. These data were submitted to one-way (shift vs. no-shift) repeated measures analyses of variance (ANOVAs). Because they involve different types of responses, the positive and negative data were analysed separately. For the error rate data, there were significant effects of shift, F(1, 54) = 25.18, MSE = .006, p < .001, and F(1, 54) = 33.86, MSE = .018, p < .001, for positive and negative responses, respectively, with people remembering less after having moved to a new location. Thus, updating an event model following a spatial shift, even for less immersive events, resulted in a memory disruption.
Data table

Table 1. Error rate and response time results with standard errors for Experiment 1, which used the small computer displays

Treating responses to the positive items as hits and responses to negative items as false alarms, we calculated A′ signal detection measures. This analysis revealed that memory was better in the no-shift (M = .97, SE = .01) than the shift condition (M = .93, SE = .01), F(1, 54) = 32.13, MSE = .001, p < .001.
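The paper does not spell out which A′ formula was used; a common nonparametric choice is Grier's (1971) estimate, sketched below under that assumption, with the hit rate taken from the positive probes and the false-alarm rate from the negative probes.

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity estimate A' (Grier, 1971).

    hit_rate: proportion of "yes" responses to positive probes.
    fa_rate:  proportion of "yes" responses to negative probes.
    Chance performance gives 0.5; perfect discrimination approaches 1.0.
    """
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

print(round(a_prime(0.95, 0.10), 3))  # roughly .96
```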
For the response time analysis, any errors were excluded. We also trimmed the data by first removing response times faster than 200 ms and slower than 10,000 ms as being impossibly fast and slow. Then, the data were submitted to the van Selst and Jolicoeur (1994) trimming procedure, which is based on the number of observations per cell. This resulted in 6% of the response time data being dropped. There was no effect of event updating for the response time data for the positive trials, F < 1, but a marginally significant effect for negative trials, F(1, 54) = 3.86, MSE = 84, p = .06.
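For clarity, here is a minimal sketch of the fixed-cutoff stage of the trimming described above; the subsequent van Selst and Jolicoeur (1994) step, whose outlier criterion adapts to the number of observations per cell, is only noted in a comment rather than implemented.

```python
def trim_fixed(rts_ms, low=200, high=10_000):
    """Drop response times outside the 200-10,000 ms window.

    The adaptive van Selst & Jolicoeur (1994) procedure (a moving
    standard-deviation criterion that depends on cell size) would then be
    applied to the remaining observations; it is not implemented here.
    """
    return [rt for rt in rts_ms if low <= rt <= high]

print(trim_fixed([150, 640, 820, 1200, 12000]))  # [640, 820, 1200]
```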
Thus, overall, even with a smaller degree of immersion, there was a location-updating effect. Memory was worse following a location shift. This is consistent with the idea that updating an event model can disrupt memory. In light of the error rate data, the absence of an event updating effect in the response time data is ambiguous. This result cannot be interpreted as reduced forgetting because the pattern of error rates is unchanged. A likely possibility is that the smaller display size reduced the amount of the visual angle (24° vs. 80°) that needs to be actively monitored, thereby speeding response time overall (358 ms faster on average in the current study) and reducing our ability to observe a response time difference.
The aim of Experiment 2 was to assess whether it would be possible to observe the location-updating effect if one were in a real environment that was maximally immersive. Another way of stating this is to say that a real experience is nonmediated compared to the mediated experiences derived from viewing something on computer screen (Lombard & Ditton, 1997). There is some evidence that the impoverished nature of virtual environments, relative to real environments, can produce deficits in cognition that are tied to that environment (e.g., Richardson, Montello, & Hegarty, 1999; Waller, Loomis, & Haun, 2004). Specifically, a real environment provides a broader range of cues that allow more accurate performance. It may be that it is this paucity of spatial cues that makes the location shift more disruptive in our virtual environments. In contrast, according to an event cognition view, the need to monitor and update an event model should operate in real situations as well.
For the virtual environments of our prior work, people moved through a series of rooms, one after the other, and never returned to a previous room. However, this is not practical in the real world, as it would be difficult to find an environment with over 50 rooms that would allow a person to go from location to location. To adapt the principles of this paradigm to the real world, we did the following. First, we used three larger rooms from our laboratory. There were three location shifts in which a person moved from one room to another. There was a no-shift condition within each room in which a person first did a task at one table and then crossed the room to do the next task. For practical reasons, half of the participants ended their last trial by returning to the original room. To allow for an adequate number of observations, rather than moving a single object each time, there were six objects moved on each trial.

Method

Participants

Sixty people (28 female) were recruited from the University of Notre Dame and were given partial course credit for their participation, with 15 people in each movement condition (see below).

Materials and procedure

A three-room environment was used. The room sizes were: Room 1, 4.4 m × 1.5 m; Room 2, 2.1 m × 3.6 m; and Room 3, 5.2 m × 3.5 m. There were four movement conditions. Half of these began with a no-shift condition and half with a shift condition. Within each of those, one path moved in one direction through the three rooms, and the other moved in the other direction. A plan view of the lab, and the relative positions of the six testing locations, is provided in Figure 1. A set of coloured blocks was used. The block shapes were: cube, sphere, disc, wedge, cross, and cylinder. The colours were: black, white, red, yellow, green, and blue. On each trial, each set of blocks included one block of each shape, and each colour was represented only once.



Figure 1 A plan view of the lab and the relative positions of the six testing locations.


At the beginning of each trial, people approached a table that had an inverted black box, covering the objects underneath it. They lifted the box revealing the six coloured blocks and picked up each object and put them into the box. They covered the box with a lid and were directed to the next table on the path where they set the box down. There, the experimenter gave them a laptop computer with the recognition test on it. Prior to the recognition test itself, people were given three-digit maths problems (e.g., 254 + 742 = ?) for two minutes to serve as a distractor and encourage some forgetting. Unlike Experiments 1 and 3 where there was continuous movement through the environment, the blocked presentation of objects may have allowed chunking that would be more difficult in those experiments, hence the motivation to include the distractor task. Note that when a person was seated at a table, their back was turned away from any other tables.
For the recognition test, people saw a series of object names (e.g., “red cube”) and indicated whether the object was in the box that they had just carried. Responses were made by pressing one of two buttons on a computer mouse. The left button was marked with a “Y” for “Yes, this object is in the box”, and the right button was marked with an “N” for “No, this object is not in the box”. On each of the six trials, there were 12 recognition probes. Six were positive (the objects were actually in the box), and 6 were negative. The negative probes were colour and object combinations such that each object shape and colour appeared only once on a given test, and any given colour object combination could appear only once as a negative probe across all of the trials for a given participant.
After the recognition test, the person lifted the next box and began the next trial. This continued until all six trials were complete. The experimental procedure typically lasted between 15 and 20 minutes.

Results and discussion

The error rate and response time data are presented in Table 2. The response time data were trimmed using the same procedure as that in Experiment 1. The data for the positive and negative responses were submitted to one-way (no-shift/shift) repeated measures ANOVAs. For the error rate data, people made more errors following a spatial shift, F(1, 59) = 5.91, MSE = .011, p = .02, and F(1, 59) = 3.69, MSE = .009, p = .06, for the positive and negative responses, respectively. This finding parallels the location-updating effect observed with virtual environments. Note that the larger error rates in this experiment are probably due to the increased levels of proactive interference that people experienced with the increased number of objects on each trial.
Data table

Table 2. Error rate and response time results with standard errors for Experiment 2, which used a real environment

Again, treating responses to the positive items as hits and responses to negative items as false alarms, we calculated A′ values. This analysis revealed that memory was better in the no-shift (M = .86, SE = .01) than in the shift condition (M = .82, SE = .02), F(1, 59) = 5.28, MSE = .007, p = .03.
For the response time data, although people were slower following a location shift, this effect did not reach significance, F(1, 59) = 1.34, MSE = 160,662, p = .25, and F < 1, for the positive and negative responses, respectively. Again, this is not much of a concern as the primary measure here is accuracy, and the response times are not inconsistent with the error rate data.
Overall, Experiment 2 demonstrated a location-updating effect in a real-world environment similar to what has been observed in virtual environments. People forgot more following a spatial shift than when they simply moved across a room. This result supports an event cognition view because the need to update one's event model following a change in location brought about a cost in memory performance. Information associated with a prior location became less available even though it continued to be task relevant.
Another interpretation of the location-updating effect is that this forgetting is due to a difference in environmental context at retrieval compared to encoding. The different rooms are different contexts, and memory may be poorer when the environmental context differs from the original context because there are fewer retrieval cues available (e.g., Smith, Glenberg, & Bjork, 1978; Smith & Vela, 2001). Thus, the location-updating effect would be little more than another demonstration of the encoding specificity phenomenon (Thomson & Tulving, 1970). At the outset this seems unlikely because (a) encoding specificity effects are more reliably observed with recall, rather than recognition as was done here (Smith et al., 1978); (b) encoding specificity generally requires more forgetting to occur for the environmental context to have an effect as a memory cue; and (c) when a carried object was moved from one room to the next, it was then associated with both the original and the new location, rather than just a single context as in typical encoding specificity work. Still, an encoding specificity account does have some plausibility, so we put it to the test.
The aim of Experiment 3 was to assess this alternative explanation. According to accounts of environmental context-specific memory, memory declines when the context changes, making it more difficult to access information in memory. Moreover, and more importantly, when a person returns to the original context, memory should improve as there are now more retrieval cues available to access the information. Thus, in Experiment 3, we added a third condition, the return condition, in which a person, after making a spatial shift, returned to the original location. According to an encoding specificity account, memory should be better in this condition.
One notable aspect of the return condition is that there are two spatial shifts: one when the person moves from the original room to the new room, and then again when a person moves from the new room back to the original one. To parallel this double movement, but not have a return to the original room, we had a double shift condition. In this condition, people moved from the original room to a new room. Then, half-way through the new room, they were told to continue on to yet another room. Thus, people made two spatial shifts, as in the return condition; however, rather than returning to the original room, they were in a new room when they were probed. Thus, these two conditions parallel each other in terms of the distance travelled and the number of spatial shifts, and they differ only in whether the final location reinstates the learning context.
In comparison to an encoding specificity account, for the event horizon model, performance will primarily reflect the number of rooms a person has been in. In the no-shift condition, there is only one room; in the shift and return conditions, there are two rooms involved (thereby producing interference at retrieval); and in the double shift condition there are three rooms involved (increasing the amount of interference experienced). Note that on this account it is not the number of shifts that disrupts memory (which would predict similar performance in the return and double shift conditions), nor is it that memory disruption occurs after a single shift but then does not increase (which would predict similar performance in the shift, return, and double shift conditions); rather, it is the number of event models (based on rooms in this case) that are involved in retrieval that matters (e.g., Bower & Rinck, 2001; Radvansky, 1999; Radvansky & Zacks, 1991).
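To make the contrast between the two accounts concrete, the following illustrative mapping (not from the paper) records how many rooms, and hence event models, are associated with the carried object in each condition; on the event horizon account, it is this count that drives retrieval interference.

```python
# Number of event models (rooms) associated with the carried object in each
# condition of Experiment 3; more models means more competition at retrieval.
event_models = {
    "no-shift": 1,      # single room, no competition
    "shift": 2,         # original room plus the new room
    "return": 2,        # original room plus the intermediate room
    "double-shift": 3,  # original room plus two new rooms
}
# Predicted ordering of disruption: no-shift < shift = return < double-shift
```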

Method

Participants

Forty-eight people (28 female) were recruited from the University of Notre Dame and were given partial course credit for their participation.

Materials and procedure

Unlike Experiments 1 and 2, but like our previous work (Radvansky & Copeland, 2006; Radvansky et al., 2010), the virtual environments were presented on a 66″ diagonal Smartboard, with people sitting about 1 metre from the display, to give a fairly high degree of immersion in the virtual environment. For Experiment 3, the virtual space was an 88-room environment. As in Experiment 1, the rooms were one of two sizes, with larger rooms being twice as long as small rooms. For the return and double shift conditions, the intermediate room (where they got the message to go back or to continue on) was always a small room.
Included in each room (except the intermediate rooms, which had no tables) were one or two rectangular tables. Each table was placed along a wall in the room. For the small rooms there was only a single table, whereas for the large rooms there was a table on each side of the room. At one end of the table was the object the participant was to pick up. The other half of the table was empty. This is where the object being carried from the previous room was to be put down. Each room had a different pattern on the walls to emphasize that there was a change in location. Finally, the doorways in the room were never on the same wall.
If the room was one that a person was to return to on a return trial, then there was a third door that would be passed through the second time the room was entered. To reduce the predictability of motion to some degree (i.e., knowing that a room with three doors would be one that would be returned to), there were a number of additional doors throughout the area that never opened.
The procedure was like that of Experiment 1, with a few modifications to accommodate the return and double shift conditions. For the return and double shift conditions, a message was presented to instruct people where to go next. For the return condition, this message told people to turn around and go back, whereas for the double shift condition this message told them to continue on. This message was always presented half-way through the room so that the distance travelled would be similar in the two cases. There were 64 memory probe trials. Thus, not every spatial shift was accompanied by a memory probe. The experimental procedure typically lasted between 15 and 20 min.

Results and discussion

The error rate and response time data are presented in Table 3. The response time data were trimmed using the same procedure as that in Experiments 1 and 2, resulting in 2% of the data being dropped. The data were first submitted to one-way repeated measures ANOVAs with four levels (no-shift/shift/return/double-shift), followed by tests of simple effects. Again, the positive and negative responses were analysed separately. For the error rate data, there was a main effect of condition, F(3, 141) = 6.84, MSE = .011, p < .001, and F(3, 141) = 3.94, MSE = .015, p = .01, for the positive and negative responses, respectively. Simple effects tests revealed that people made marginally significantly fewer errors in the no-shift than the shift condition, F(1, 47) = 2.28, MSE = .007, p = .08, although this was not significant for the negatives, F(1, 47) = 1.51, MSE = .013, p = .23, and significantly fewer than in the return, F(1, 47) = 5.94, MSE = .010, p = .02, and F(1, 47) = 5.10, MSE = .010, p = .03, for positives and negatives, respectively, and double-shift conditions, F(1, 47) = 16.48, MSE = .013, p < .001, and F(1, 47) = 11.19, MSE = .015, p = .002, respectively. Moreover, response errors in the shift condition were similar to those in the return condition, both Fs < 1, but were significantly fewer than in the double-shift condition, F(1, 47) = 6.27, MSE = .015, p = .02, and F(1, 47) = 3.40, MSE = .020, p = .07, for positives and negatives, respectively. Finally, people made fewer errors in the return than the double-shift condition, F(1, 47) = 5.34, MSE = .009, p = .03, although this was not significant in the data for the negatives, F(1, 47) = 1.73, MSE = .011, p = .20. Overall, we replicated the location-updating effect. These effects were weaker in responses to the negative probes; however, this is of only secondary concern, as the negative probes named objects that did not exist, and negative responses are known to involve additional cognitive processes that complicate their interpretation. Of primary interest, there was no evidence that returning to the original room improved performance, as an encoding specificity account would suggest. Finally, it should be noted that the large number of errors in the double-shift condition suggests that it is not the number of spatial shifts that disrupts memory, but the number of new areas entered, consistent with the competitive retrieval aspects of the event horizon model.
Data table

Table 3. Error rate and response time results with standard errors for Experiment 3, which used a virtual environment

Again, treating responses to the positive items as hits and responses to negative items as false alarms, we calculated A′ values. There was a main effect of condition, F(3, 141) = 10.18, MSE = .005, p < .001. Simple effects revealed that memory was better in the no-shift (M = .94, SE = .01) than in the shift (M = .92, SE = .01), return (M = .90, SE = .01), and double-shift conditions (M = .86, SE = .02), F(1, 59) = 5.22, MSE = .003, p = .03, F(1, 59) = 11.86, MSE = .003, p = .001, and F(1, 59) = 18.24, MSE = .008, p < .001, respectively. Moreover, memory in the shift condition was similar to that in the return condition, F(1, 59) = 1.68, MSE = .003, p = .20, but better than that in the double-shift condition, F(1, 59) = 9.66, MSE = .008, p = .003. Finally, memory was better in the return condition than in the double-shift condition, F(1, 59) = 5.93, MSE = .007, p = .02. Again, the location-updating effect was replicated, and there was no evidence of encoding specificity. Performance was guided more by the number of new rooms entered than the number of spatial shifts.
For the response time data, the main effect of condition was significant, F(3, 141) = 6.49, MSE = 147,724, p < .001, and F(3, 141) = 9.43, MSE = 105,284, p < .001, for positives and negatives, respectively. Simple effects showed that, although the difference between the no-shift and shift conditions was not significant for the positives, F(1, 47) = 1.92, MSE = 121,662, p = .18, it was for the negatives, F(1, 47) = 13.61, MSE = 97,376, p = .001. Moreover, people responded faster in the no-shift condition than in the return, F(1, 47) = 5.47, MSE = 167,254, p = .02, and F(1, 47) = 18.15, MSE = 156,677, p < .001, for positives and negatives, and double-shift conditions, F(1, 47) = 17.86, MSE = 147,662, p < .001, and F(1, 47) = 9.87, MSE = 76,170, p = .003, respectively. Moreover, responses in the shift condition were not significantly different from those in the return condition, F(1, 47) = 2.00, MSE = 111,747, p = .16, and F(1, 47) = 2.37, MSE = 120,937, p = .13, for positives and negatives, respectively, but were faster than those in the double-shift condition, F(1, 47) = 7.09, MSE = 183,518, p = .01, although not for the negatives, F < 1. Finally, responses to the return condition were only marginally significantly faster than those in the double-shift condition, F(1, 47) = 2.89, MSE = 154,498, p = .10, and showed the reverse pattern for the negatives, F(1, 47) = 7.56, MSE = 88,856, p = .008, although it is unclear at this point why this occurred. Overall, the response time data largely paralleled the analyses of the error rates. Of most importance here was that there was no improvement by returning to the original context. Thus, we can confidently reject a context-based account of the location-updating effect.

GENERAL DISCUSSION

The three experiments reported here further assessed the location-updating effect (Radvansky & Copeland, 2006; Radvansky et al., 2010), in which people show poorer memory for objects after a shift from one room to another. Experiments 1 and 2 assessed whether this effect is influenced by the degree of immersion. In general, immersion does not play a major role. In prior work, a large display was used that took up most of the visual field. When this was reduced in Experiment 1, a location-updating effect was still observed in the accuracy measure. Moreover, virtual environments are not as immersive as real environments. Experiment 2 showed that people in a real environment also showed a location-updating effect. So, a location-updating effect occurred both after increasing and after decreasing the degree of immersion. Thus, the need to update one's event understanding can disrupt memory. Moreover, Experiment 3 explored whether the location-updating effect might be just another manifestation of the encoding specificity principle. However, having people return to an earlier room did not improve memory. Furthermore, this experiment revealed that performance was affected by the number of new rooms entered, not the number of spatial shifts made, consistent with predictions of the event horizon model.
One possible interpretation of the updating effect is that the forgetting of object information during room shifts is due to the disruption of visual–spatial processing in working memory. That is, when one is moving from one room to another, visual–spatial processing that occurs in earlier rooms impedes the visual–spatial processing of the upcoming room(s), thus overloading working memory with more and more information. While this explanation is plausible given the data reported here, the results of another study by Radvansky et al. (2010) that used a similar paradigm provide evidence against this processing disruption account. In this study, people were given sets of word pairs to remember (which were unrelated to the object probes) as they navigated the virtual environment and performed the same basic task. A disruption of memory was found following an event shift from one room to another even for these unrelated verbal materials. Thus, the updating effect found in this experiment is unlikely to be due to an exclusively visual–spatial effect.
According to the event horizon model, there are three aspects of performance that are driving the observed pattern of results. First, people are parsing the stream of action into events based on the event boundaries (e.g., Kurby & Zacks, 2008; Swallow et al., 2009; Zacks et al., 2009), which are the shifts from one room to another in the context of the current experiments. This is one reason why information is less available following a spatial shift. Second, information that is being actively processed in the current event is foregrounded and is more available (e.g., Glenberg et al., 1987). Finally, the third aspect of the event horizon model that accounts for the pattern of performance is the idea that there is interference during competitive retrieval (e.g., Bower & Rinck, 2001; Radvansky, 1999; Radvansky & Zacks, 1991). In this task, when an object is moved from one room to another, it is now represented in two event models, one for the room it was picked up in, and one for the current room. As such, when a person gets a recognition memory probe, both event models are activated. Because recognition is trying to select out a single memory trace, these two event models interfere with one another, thereby increasing the error rates and slowing retrieval time, even though the two models are consistent with the same result. In essence, this is a kind of fan effect (Anderson, 1974; Radvansky, 1998, 1999, 2005, 2009; Radvansky et al., 1993; Radvansky & Zacks, 1991).
So, event parsing, foregrounding, and competitive retrieval, as outlined by the event horizon model, influence performance. These can be separated by assessing performance on the positive probes as a function of whether (a) there was an event shift, (b) the object was either the associated object (the one currently being carried) or the dissociated object (the one just set down), and (c) there were one or two event models involved in retrieval.
The influence of the event shift is to make knowledge about the current event more available and that about the prior event less available as attention moves from one event model to the next (e.g., Morrow, Bower, & Greenspan, 1989; Morrow et al., 1987; Radvansky & Copeland, 2010; Rinck & Bower, 1995). When there is no location shift, there is only one event model involved, and it is for the current location. In comparison, when there is a location shift, then there are two event models involved for the associative information—namely, the room where the object was picked up and the room where the object is being carried.
The influence of foregrounding is primarily on whether the object is currently being carried. Consistent with Glenberg et al. (1987), associated objects are foregrounded in the event model, whereas dissociated objects have been removed from the foreground. This foregrounding results in a higher activation level for those objects. However, when the information moves out of the foreground, its activation level diminishes, making it hard to retrieve such information.
Finally, the influence of competitive retrieval reflects whether there has not been a spatial shift (one event model and, hence, no competition) or there has been a spatial shift (two event models and, hence, retrieval competition). When there is competition, it is expected that there will be an increase in error rates and response time as retrieval becomes more difficult (Bower & Rinck, 2001; Radvansky, 1999; Radvansky & Zacks, 1991).
The predicted influence of the combination of various event model processing components and the associated/dissociated object and spatial no-shift/shift manipulations is shown at the top of Table 4. In the prediction row of the table, + indicates a retrieval benefit, and – indicates either no influence or a retrieval cost. The first symbol is for the influence of event parsing and the movement from one event to another, the second is for the influence of foregrounding, and the third is for the influence of retrieval competition. In this way, we consider how each of these components plays out by looking at the four individual conditions that appear across multiple studies.
Data table

Table 4. The influence of foreground and competitive retrieval on performance, along with error rate data for five experiments

According to the event horizon model, for the no-shift/associated condition, there has been no shift in location, so the event model where the objects were interacted with is still the one being actively processed. Second, because these are associated objects, they are currently being carried by the person and so are in the foreground of the event model and are more activated. Finally, because there has not been an event shift, there is only a single model involved during retrieval when the memory probe is presented, and so there is no retrieval interference. So, for the no-shift/associated probes, the designation is + /+ / +.
For the shift/associated condition, there is a shift in location, so the event model where the objects were picked up is less available, but because this is an associated condition, the objects have been transported to the new location, which is being actively processed. Second, again, because these are associated objects, they are currently being carried and are in the foreground of the event model. Finally, because there has been an event shift, there are two models involved during retrieval when the memory probe is presented, and so there is retrieval interference. So, for the shift/associated probes, the designation is + /+ / –.
For the no-shift/dissociated condition, there was no shift in location, so the event model where the objects were interacted with is still the active one. Second, because these are dissociated objects, they have been moved out of the event model foreground and so are less available. Finally, because this is a no-shift condition, there is only a single model involved during retrieval. So, for the no-shift/dissociated probes, the designation is + / – / +.
Finally, for the shift/dissociated condition, the event model where the objects were interacted with is less available, and because this is a dissociated condition, the objects were not moved to the new location, and so the event model they are associated with is not being actively processed. Second, these are dissociated objects that are not in the event model foreground and so are less available. Finally, although there has been an event shift, there is still only one model involved during retrieval because these objects were not moved to the new location, and so there would be no retrieval interference. Thus, for the shift/dissociated probes, the designation is – / – / +.
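The four designations just described can be restated compactly as data; the sketch below simply mirrors the prediction row at the top of Table 4, with the symbols ordered as event parsing, foregrounding, and retrieval competition, where "+" marks a predicted retrieval benefit and "-" either no influence or a cost.

```python
# (event parsing, foregrounding, retrieval competition) predictions by condition
predictions = {
    ("no-shift", "associated"):  ("+", "+", "+"),
    ("shift", "associated"):     ("+", "+", "-"),
    ("no-shift", "dissociated"): ("+", "-", "+"),
    ("shift", "dissociated"):    ("-", "-", "+"),
}
```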
In Table 4, below the event horizon model predictions are the data from five experiments in which associative/dissociative and no-shift/shift were factorially combined. This includes Experiment 2 from the Radvansky and Copeland (2006) study (Experiment 1 did not have a no-shift condition), both experiments from the Radvansky et al. (2010) study, and Experiments 1 and 3 from the current study (Experiment 2 did not have an associative/dissociative manipulation).
As can be seen, the error rate data, particularly when one averages across the experiments, are consistent with the event horizon model. Specifically, performance is best when the objects are associated, and there has not been a spatial shift. However, when the objects have been dissociated or there has been a spatial shift, then performance is compromised. Thus, this suggests that both aspects of the event updating process are influencing memory performance as one dynamically moves through space. Finally, when there has been an event shift, and the objects were dissociated, there are multiple influences worsening performance. This is also true of all of the individual experiments, except for Experiment 3 of the current study. There is no clear reason why there would be a deviation in the shift/dissociated condition here, and so we tentatively attribute this to random variation.
So, overall, the event horizon model adequately accounts for the availability of objects in an environment that a person is navigating. This availability is influenced by (a) whether there has been an event shift, (b) the foregrounding of currently relevant information, and (c) the presence or absence of retrieval interference.
In sum, walking through doorways serves as an event boundary, thereby initiating the updating of one's event model. This updating process can reduce the availability of information in memory for objects associated with the prior event. Here, we were able to show that this effect extends to different degrees of immersion and is not a result of encoding specificity. Finally, an analysis across multiple studies shows that the parsing of the flow of experience into events, foregrounding, and competitive retrieval all combine to influence processing as a function of whether an object is associated or dissociated and whether there has been an event shift. Thus, overall, it is quite clear that memory for recently experienced information is affected by the structure of the surrounding environment.

Acknowledgments

We would like to thank Dan Blakely, Mark Bohay, Abbi Daugherty, Erica Nason, Patrick O'Keefe, Jenny Walls, Megan Cefferillo, and Brittany Gragg for their assistance in collecting the data. We would also like to thank Jeff Smith and Mike Villano for their programming expertise. This research was supported in part by a grant from the Army Research Institute, ARMY-DASW01-02-K-0003 and funding from J. Chris Forsythe of Sandia National Laboratories.

REFERENCES