
Knowledge-Intensive Work in the Oil and Gas Industry: A Case Study

Thesis for the degree of Philosophiae Doctor

Trondheim, November 2012

Norwegian University of Science and Technology
Faculty of Information Technology, Mathematics and Electrical Engineering
Department of Computer and Information Science

Torstein Elias Løland Hjelle

NTNU
Norwegian University of Science and Technology

Thesis for the degree of Philosophiae Doctor

Faculty of Information Technology, Mathematics and Electrical Engineering
Department of Computer and Information Science

© Torstein Elias Løland Hjelle

ISBN 978-82-471-4017-8 (printed ver.)
ISBN 978-82-471-4018-5 (electronic ver.)
ISSN 1503-8181

Doctoral theses at NTNU, 2012:345

Printed by NTNU-trykk

You are like an open book, Torstein – an open audiobook!

G.A.H.


Abstract

This thesis examines collaborative work practices within a large international oil and gas company (OGC). The work is founded on the introduction of a new standardised and integrated collaboration infrastructure based on Microsoft SharePoint technology, which was intended to improve knowledge sharing across both disciplinary and geographical boundaries within the company and with external partners. Using a longitudinal case study, the thesis investigates how the introduction of this new solution has been received in different organisational contexts.

The work is inspired by social studies of information systems (IS) and seeks to explain the role of the new collaboration infrastructure in an enterprise context with actors and stakeholders that have different interests, experiences and expectations. The focus of the thesis is to investigate how the collaboration infrastructure has been received and to what extent it has become an integral part of the users’ daily work.

Previous research has shown that the introduction of different information systems does not adequately account for local, established practices and thus results in systems that are not used optimally. In many cases, users have to establish informal practices to work around the limitations of the system. In this research, we investigate this potential divergence between the intended and actual usage of these large information systems. Based on our empirical findings, we argue that in knowledge-intensive, interdisciplinary work such as oil and gas production, an integrated collaboration solution does not ensure knowledge sharing and a collaborative work environment. In fact, such a system plays a surprisingly small role in supporting the daily work. Other factors, such as a strong community, an open and inclusive atmosphere, work locations and expert tools, play significantly stronger roles.

The thesis does not exclusively focus on the new collaboration solution that is based on Microsoft SharePoint. Rather, it seeks to understand how workers use available tools and systems to do their jobs. In such a setting, the solution based on Microsoft SharePoint is only one of a number of different tools and systems available to workers. In contrast to other studies that have suggested that work is mostly a local endeavour, we show that knowledge-intensive work requires workers to shift between local and global contexts and that they require systems and tools that can do the same. We identify tactics and strategies that are used by workers to navigate different contexts and systems to collaborate and do their jobs.

In summary, this thesis examines socio-technical work practices within a large, heterogeneous organisation and contributes to the research on how information systems serve as social constructs and should be understood and interpreted. It also provides practical implications for IT professionals and managers who are interested in developing and implementing information systems in complex settings.


Preface

This thesis is submitted to the Norwegian University of Science and Technology (NTNU) in partial fulfilment of the requirements for the degree of Philosophiae Doctor. The doctoral work was performed at the Department of Computer and Information Science, NTNU, Trondheim, Norway.


Acknowledgements

Many people have helped, encouraged and supported me and my work on this thesis in various ways over the past few years.

First, I would like to thank my supervisors, Eric Monteiro and Vidar Hepsø, for including me in their research community as well as providing invaluable help and feedback throughout the process. I would also like to thank the Faculty of Information Technology, Mathematics and Electrical Engineering at NTNU for my Ph.D. grant.

I would like to thank Gasparas Jarulaitis for being my co-pilot as we have navigated the complicated and unknown worlds of both OGC and NTNU. Having someone to work this closely with has been extremely important. Trondheim is less fun now that you are not here, but I know that you have moved on to a better place. Drammen, of course! See you at Marienlyst stadium soon!

Next, there is no way to deny the impacts that Glenn “Pumpel” Munkvold, Hans Augustinus “Pilt” Hysing Olsen and Håvard Gustad have had during these years. Without you, I would definitely not have learned as much about OGC as I have, nor would I have won as many ping-pong matches!

Birgit R. Krogstie should also take credit for never giving up on trying to get me to grow up; at times, she has nearly sacrificed both her house and family. I would also like to thank Thomas “Naturlig kul” Østerlie, Kirst E. Berntsen, Gro Alice “Lille søtnos” Hamre, Mikhail Fominykh, Geir Kjetil “Tanna” Hanssen, Vigdis Heimly and Anca Deak, who in their individual ways have made it a pleasure to go to work (almost) every day.

I thank the remaining members of Forskerfabrikken for insightful comments and discussions, and I thank the faculty and administration at the Department of Computer and Information Science for providing such a friendly work environment. At the same time, I would like to thank OGC and its employees for being the subject of my research. I would especially like to thank “my” little gang of engineers, who allowed me to tag along, ask stupid questions and learn. It has been extremely interesting! I also thank Nord-Trøndelag University College for giving me the opportunity to finish this thesis.

Lastly, I would like to thank my friends and family who, even though they don’t really understand exactly what I do, still recognise that I do something I enjoy and feel passionate about, and who support me. Thank you!


Table of Contents

Abstract
Preface
Acknowledgements
Abbreviations and Glossary
1 Introduction
1.1 Motivation
1.2 Research Questions
1.3 Theoretical Approach
1.4 Research Setting and Approach
1.5 The Structure of the Thesis
1.6 Contributions
2 Collaborative Work: Theoretical Background
2.1 Communities of Practice
2.2 Networks of Practice
2.3 Common Information Spaces
2.4 Information Infrastructure
2.5 Framework for Analysing Collaboration
3 Case Study
3.1 Oil and Gas Company
3.2 Collaboration Infrastructure within OGC
3.3 Research Setting
3.3.1 Technology and Research
3.3.2 Oil and Gas Production
4 Research Method
4.1 Research Approach
4.2 Negotiating Access
4.3 Data Collection
4.4 Data Analysis
5 Results
5.1 P1: Changing Large-Scale Collaborative Spaces: Strategies and Challenges
5.2 P2: Information Spaces in Large-Scale Organisations
5.3 P3: The Introduction of a Large Scale Collaboration Solution: A Sense-Making Perspective
5.4 P4: Tactics for Producing Actionable Information
5.5 P5: Joining a Community: Strategies for Practice-Based Learning
6 Implications
6.1 Implications for Information Systems Research
6.2 Implications for the Method
6.3 Implications for ICT Management
6.4 Implications for Users
7 Concluding Remarks
7.1 Limitations
7.2 Future Work
References
Appendix: The Papers and statements from co-authors


Abbreviations and Glossary

CIS: Common information spaces
CoP: Community of Practice
CRM: Customer relationship management
CSCW: Computer-supported cooperative work
ERP: Enterprise resource planning
FAST ESP: Fast Search & Transfer Enterprise Search Platform
HR: Human resources
IO: Integrated operations
IS: Information systems
MSSP: Microsoft SharePoint
NOKOBIT: Norwegian Conference for Organisation’s Use of Information Technology (Norsk konferanse for organisasjoners bruk av informasjonsteknologi)
NoP: Network of Practice
NPV: Net present value
NYSE: New York Stock Exchange
OGC: Oil and gas company, a pseudonym for the company at which the research was conducted
SOX: Sarbanes–Oxley Act
T&R: Technology and Research, a pseudonym for a business unit within OGC


1 Introduction

1.1 Motivation

The oil and gas industry is an extremely knowledge-intensive industry. As in all industries, it is important to maximise profits, which is usually achieved by cutting costs and increasing earnings. Because the development of new oil and gas fields and the drilling of new wells are expensive and uncertain, it is important for the oil and gas industry to extract as much profit from each well as possible. That is, it is important to reduce the likelihood of drilling dry wells (i.e., wells that do not produce oil or gas) and to increase production from producing wells.

To do this, the oil and gas industry developed the concept of Integrated Operations (IO). The idea behind IO is to use collaborative work practices to make better decisions. Through better collaboration between onshore and offshore personnel, as well as among the actors across the entire oil and gas value chain, it is believed that better decisions will be made, resulting in increased profit.

In 2006, the Norwegian Oil Industry Association published a report entitled “Potential Value of Integrated Operations on the Norwegian Shelf” (OLF 2006), which estimated that the realisation of Integrated Operations had a potential value of NOK 250 billion (NPV) over the 10-year period from 2005 to 2015. The report suggested that the main reasons for this increased value would be accelerated production and cost reductions.
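To make the NPV figure concrete, the sketch below shows how a net present value is computed from a stream of future cash flows. The cash-flow numbers and the discount rate are purely hypothetical and are not taken from the OLF report; the sketch only illustrates the concept of discounting.

```python
# Minimal NPV sketch with hypothetical figures (not the OLF 2006 estimates).
# NPV discounts each future cash flow back to today before summing.

def npv(discount_rate, cash_flows):
    """Net present value of cash_flows[t] received t years from now."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Example: an assumed extra gain of NOK 30 billion per year for 10 years,
# discounted at an assumed 8% rate.
extra_value_per_year = [0] + [30.0] * 10   # year 0 has no gain in this toy example
print(round(npv(0.08, extra_value_per_year), 1))  # roughly NOK 201 billion in present value
```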

Various ICT solutions are central to such strategies. For instance, sensors and gauges that monitor different aspects of the production lines are considered relevant because workers can make better decisions if they have more accurate information available to them. Integrated systems are also considered attractive because they seek to eliminate the fragmentation of information by establishing a single data repository (Davenport 1998). Such systems have been used since the 1960s in the manufacturing industry, where they were developed for inventory control and the management of complex logistics. Integrated systems have since evolved, and today large-scale organisations rely heavily on integrated enterprise-wide systems, such as Enterprise Resource Planning (ERP) for accounting, HR management and inventory control, Customer Relationship Management (CRM), collaborative systems, records management systems and numerous other systems.

Prompted by several major corporate and accounting scandals, a new law, the Sarbanes-Oxley Act (SOX), came into effect in 2002. This act requires companies that are listed on the New York Stock Exchange (NYSE) to systematically provide control of information, openness and accountability. Integrated systems are considered to be important in achieving such compliance. To gain competitive advantages, technology vendors quickly began to promote their enterprise systems as a means of achieving SOX compliance. One of the central requirements of SOX compliance is that all departments within a company contribute effectively through a common record system, which requires increased information integration. To maximise the value of information, companies need to integrate various types of content. As such, integrated systems have become the de facto standard for medium and large organisations (Pollock, Williams et al. 2003).

Empirical evidence from research on the socio-technical aspects of the design, implementation and use of integrated systems indicates that challenges and problems are often more likely to arise than quickly obtainable benefits. As such, research on enterprise systems has been strongly influenced by studies that have empirically illustrated that various technologies do not have deterministic powers but rather are socially constructed (Mackenzie and Wajcman 1999). Based on such studies, information systems researchers are critical of standardisation efforts and argue that systems have to be adapted to their different contexts of use (Orlikowski 2000; Suchman 2007; Vaast and Walsham 2009). Because every context is unique, use of the same technology results in different interpretations and different work practices.

The summarising, standardisation, centralisation and control that enterprise systems seek to promote do not sufficiently consider the importance of context. Consequently, integrated enterprise systems will be less successful than anticipated and will produce problems that will require workarounds (Soh, Kien et al. 2000; Boudreau and Robey 2005).

1.2 Research Questions

Despite research suggesting that integrated systems are suboptimal because they do not consider organisational diversity, there is a widespread consensus throughout industry on exploiting integrated systems. Thus, the overall topic of this thesis is:

to investigate how the introduction of a generic, enterprise-wide integrated collaboration infrastructure unfolds in a large organisation and to characterise its implementation and use.

Specifically, the following research questions are asked:

RQ1: How does a global collaborative information infrastructure support work practices within a local context?


RQ2: How do users navigate and adapt to the possibilities and limitations of information systems to facilitate their daily work?

The first theme of this thesis seeks to investigate how the new collaborative information infrastructure was intended to be implemented within the organisation. We examine the problems and challenges encountered and how they were resolved. Here, we focus on the generic usage of the system by examining how it is used in a setting where people work loosely together. In the second part of the thesis, we examine the usage of the system within a group of people that work together closely in their everyday work. Overall, the thesis focuses on explaining collaborative work practices in a large-scale heterogeneous organisation.

1.3 Theoretical Approach

The work in this thesis is motivated by social studies of IS and focuses on determining how different groups of users have constructed a common understanding of the same system, drawing on various theories of group dynamics and collaboration.

Community of practice (CoP) emphasises the importance of belonging to a tight-knit group in order to understand how the members of the group work together. For instance, people with similar educational backgrounds and work experiences are thought to be able to work better together. Network of practice (NoP) broadens this idea by suggesting that people do not have to interact physically in their daily work to work efficiently together: it is possible to work closely with someone you have never met if you share similar areas of focus and goals. Common information space (CIS) recognises that different users have different experiences and that, although people cannot have a completely common understanding of the same system, they are still able to work together; the important factor is that they have enough in common, i.e., common goals. Information Infrastructure (II), in contrast, focuses on factors that are common throughout the entire organisation: what all members of the organisation already know and understand, such as the previous experiences they share.

Using elements from these theories, we have tried to create a framework to support understanding of collaborative work at different levels within a large-scale organisation.

1.4 Research Setting and Approach

The research presented in this thesis draws from a longitudinal interpretive case study that is grounded in recent efforts to establish better collaborative solutions within an international oil and gas company (dubbed OGC). The new system replaces a collaborative system from the 1990s and promises improved collaboration and knowledge sharing through better integrated tools, a corporation-wide search engine and document archiving. The primary aim of this thesis is to investigate the implementation, reception and usage of this new solution, which is based on Microsoft SharePoint technologies. Using interviews, observations and document analysis, we have collected qualitative data that cover the following themes: i) technology development and management within OGC, ii) technology use within a broad business unit, and iii) work practices and the use of technology in oil and gas production at the field level.

1.5 The Structure of the Thesis

The thesis consists of five papers and an additional introductory paper. The introductory paper presents the motivation for the research and outlines a theoretical framework that was developed through the doctoral work. A case study is then presented along with the methodological approach. Subsequently, the findings of the research are presented, and the thesis ends with a discussion and conclusions. The following five published or submitted papers are included as appendices:

1. Hjelle, T. & Jarulaitis, G. (2008). Changing Large-Scale Collaborative Spaces: Strategies and Challenges. Paper presented at the 41st Hawaii International Conference on System Sciences, Hawaii, USA.

2. Hjelle, T. (2008). Information Spaces in Large-Scale Organizations. Paper presented at the 8th International Conference on the Design of Cooperative Systems, Carry-le-Rouet, Provence, France.

3. Hjelle, T. (2010). The Introduction of a Large Scale Collaboration Solution: A Sensemaking Perspective. Presented at the NOKOBIT conference, Gjøvik, Norway.

4. Hjelle, T. & Monteiro, E. (2011). Tactics for Producing Actionable Information. Presented at the 2nd Scandinavian Conference on Information Systems, Turku, Finland.

5. Hjelle, T. & Østerlie, T. (2013). Joining a Community: Strategies for Practice-Based Learning. Submitted to the 46th Hawaii International Conference on System Sciences, Hawaii, USA.

The remainder of the thesis is organised as follows. Chapter 2 outlines the theories and perspectives used during this research. Chapter 3 describes the oil and gas company, the new collaborative infrastructure and the research settings. We outline our research methods and data collection activities in Chapter 4, whereas Chapter 5 presents the results based on the five papers. Chapter 6 presents the implications of the research, whereas Chapter 7 finishes the thesis with the conclusions.

1 NOKOBIT: Norwegian Conference for Organization’s Use of Information Technology


1.6 Contributions

Using empirical evidence, this thesis provides rich insights into how groups of people construct a working understanding of an integrated collaboration infrastructure to be able to use it in different contexts. In addition, the thesis proposes a working framework for how to analyse technology-supported collaborative work on different levels. In an enterprise organisation such as OGC, work is neither local nor global; rather, workers must continuously navigate between the two. As such, the workers use the same collaboration system differently depending on the situation at hand. Lastly, the thesis provides practical suggestions on how to conduct research in enterprise organisations.

The relationships of the papers to the research questions are shown in Table 1.

Research question   P1    P2    P3    P4    P5
RQ1                 X     X     X     (x)   (x)
RQ2                             X     X     X

Table 1 - Correlation between research questions and published papers


2 Collaborative Work: Theoretical Background

Collaborative work in a professional setting can take many different forms. To explain how and why people act in such settings, a thorough understanding of their context, group relationships and work processes is needed.

This chapter briefly outlines the theory that guides the research in this thesis. Concepts from various theoretical constructs have been brought together to create a coherent conceptual framework that is used to illuminate the issues of interest in this work.

The theoretical framework thus consists of elements selected from (1) CoP, which focuses on collaboration and knowledge sharing within small and homogeneous groups or communities, (2) NoP, which examines groups that have a broader distribution but are still homogeneous, (3) Common Information Spaces (CIS), which emphasises the heterogeneity of groups as well as the role of artefacts in collaborative work and (4) Information Infrastructure (II), which focuses on the possibilities and constraints that guide the collaboration and on how people have to shift between different levels of focus.

2.1 Communities of Practice

To understand the activities and processes that take place when people work, the concept of Communities of Practice (Wenger 1998) is commonly used to look beyond differences between actual everyday work and the way that the work is described in training, formal descriptions, organisational charts and job descriptions (Brown and Duguid 1991). The concept of CoP was based on the fundamental belief that the separation of theory from practice is not appropriate (Lave and Wenger 1991). Rather, it is argued “that learning should be contextualised by acknowledging its presence and allowing it to continue to be an integrated part of work” (Berntsen, Munkvold et al. 2004).

According to Wenger (2006), “Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.” An example of a CoP is a group of engineers that work together within a business organisation to produce the optimal quantity of gas and oil from a reservoir below the seabed to a platform. Three crucial characteristics describe a CoP:

The Domain: A CoP is not just any group of people. The group must have an identity through a common domain of interest or goal, i.e., the members of the community must be committed to this goal.

The Community: To achieve their goals, the members of the community must interact with each other; they must engage in joint activities and discussions to help each other and share information. Their interpersonal relationships enable them to learn from each other. However, it is important to note that the community members do not have to work together on a daily basis, but their interactions are vital in making them a community of practice.

The Practice: To form a community of practice, the members must be practitioners. The members of the community develop a shared practice over time through their interactions, experiences, tools and views. This is an on-going and continuous process. If the group stops interacting, their practices will deteriorate over time. This shared practice is developed through a series of activities, such as solving problems, sharing information, utilising expertise, reusing resources, coordinating, discussing and documenting.

A community of practice relies heavily upon each individual member’s understanding of who the members of the community are, what types of behaviour are acceptable within the community, what roles the various members have and what conventions are applicable. Each member’s understanding of the community evolves as the community evolves through collaboration and experience. Through mediation, the community settles on a shared understanding based on accumulated knowledge and experience.

Within a CoP, the resources developed can be understood as the accumulated knowledge, which fits with the description of communities of practice as ‘shared histories of learning’ (Wenger 1998). Individuals are not considered to be members of the community until they share the histories that have developed over time with the rest of the community. However, not all community members share all their stories. Thus, there are differences within the community.

However, using CoP as an approach to understand everyday work is not without problems. Similar to most theories and approaches, CoP has weaknesses and limitations, which Wenger, McDermott et al. (2002) refer to as the ‘downside’ of communities of practice. Roberts (2006) reviews the academic literature regarding communities of practice and identifies several concerns that remain unresolved within the CoP approach. First, she considers power and power dynamics to be understated within a CoP. While negotiating meaning within a community, it is important to recognise the role of power in the process. Because members of a community of practice have different experiences, expertise, ages, personalities and levels of authority, their positions within a CoP will differ as well.

Second, Roberts argues that trust needs to be given more attention when discussing communities of practice because trust is important in knowledge sharing. The presence of trust among members of a community implies that the members share a high degree of mutual understanding. This understanding is built upon a shared cultural and social context.

Third is the notion of predispositions. Within a CoP, meaning is said to be negotiated between the members of the community. Roberts argues that this negotiation does not take the members’ preferences and biases into consideration. When joining a community, new members bring their own predispositions, and meaning is thus mediated through these predispositions. Over time, communities develop inclinations and predispositions that influence the way that they create and absorb new knowledge.

Roberts (ibid.) also identifies several other challenges in communities of practice. One of these concerns the size and spatial reach of the communities. In the literature, the CoP approach has been applied both to small groups that work in close proximity and to large, globally distributed communities of 1,500 people, and Roberts questions whether it is possible to use the same principle in such different settings. Furthermore, she examines the use of the term community within the community of practice approach because the term has social and cultural connotations. The term community has different meanings in different settings, and the CoP approach can be biased towards societies that value communities over individuals. Finally, she finds that the CoP approach does not sufficiently consider the increasing complexity and rapid changes in today’s global world, which make it difficult to form new communities of practice within a business organisation.

The CoP approach is one of several approaches used to increase the understanding of knowledge and knowledge management within organisations; as such, it has both strengths and weaknesses. However, in today’s changing work environments, forming close-knit communities is not always possible or necessarily desirable because people tend to change jobs more often now than in the past. Staying in the same position for 50 years and receiving a gold watch from the employer upon retirement is no longer the goal for most people. Similarly, an employee cannot expect to have the same tasks and responsibilities for years to come. Things change much faster today than they used to, and businesses have to adapt to new markets, new regulations and new ways of doing business to stay in business.

2.2 Networks of Practice

Community of practice is often considered to be useful in explaining collaboration and learning processes. However, CoP cannot account for several challenges related to how people work in real-life settings. Most of these challenges are related to the size of the community as well as the distributed aspects of modern-day collaboration. One example is when people share similar practices but are not collocated. Because the inclusion of such settings would broaden the notion of a CoP beyond what is useful, Brown and Duguid (2001) coined the term Networks of Practice (NoP) to describe such groupings. Within a NoP, members do not necessarily have to work together and may never have met. However, they share a common foundation of work practices and possibly an interest in similar topics (Brown and Duguid 2001).

Members of NoPs have less in common with each other in their practices than members of CoPs. Members of a NoP do not even have to be aware of each other (Vaast and Walsham 2009). For instance, nurses working at one hospital would typically form a CoP. Despite working in different parts of this hospital, they would most likely develop some practices that are similar and some that are different depending on what type of patients they work with. However, nurses working at one hospital will also have much in common with nurses working at another hospital. Nurses with the same specialisation will have very similar practices across different hospitals; thus, they belong to the same NoP. It is important to note that no two such groups develop identical practices. However, they do share enough similarities to be able to benefit from communication and knowledge sharing.

It is also worth noting that NoPs differ from project teams and other formal groups in that they promote knowledge sharing through informal social networks as opposed to contractual obligations, organisational hierarchies or monetary incentives. In addition, the way that membership is assigned differs. In formal groupings, the members are typically assigned to the group, whereas there are no formal membership expectations within NoPs (Brown and Duguid 2001).

2.3 Common Information Spaces

NoPs introduce the concept of an increasing number of people working together as well as the idea of workers not being collocated. However, the people covered by both CoPs and NoPs are still quite homogeneous; that is, they typically belong to a single discipline. Because work tends to involve people from different disciplines, it is necessary to further expand the framework.

Such expansion leads to Common Information Spaces (CIS). The concept of CIS was initially proposed by Schmidt and Bannon (1992) to help focus research within Computer-Supported Cooperative Work (CSCW). The primary idea behind CIS is that it should cover both the representations of information and the meanings attributed to the representations by the actors: “a common information space encompasses the artifacts that are accessible to a cooperative ensemble as well as the meaning attributed to these artifacts by the actors” (ibid, p. 28, emphasis in original).

By combining information carriers (i.e., artefacts) and meaning, CIS seeks to acknowledge the articulation work (Strauss 1985) necessary to coordinate tasks and activities that are required for distributed cooperative work. Within a CIS, all actors have the same set of information available to them. This information may be accessible through a shared database, a shared disk drive or collaboration infrastructure. The available information can then be accessed, read and manipulated by all members of the CIS. The CIS recognises that simply having access to information is not sufficient to facilitate efficient collaboration. Additional work is required by the actors to reach a common understanding of the information and objects that the information describes. Objects must be understood and given contextual meaning.

CIS provides an alternative to traditional workflow perspectives and explains how such perspectives fail to consider how work is performed in contexts where continuous negotiation and problem solving are required (Suchman 1987). Schmidt and Bannon (1992) argue for an alternative approach that would “allow the members of a cooperating ensemble to interact freely” by focusing on the importance of interpreting and understanding information as opposed to only having access to it.

Bannon and Bødker (1997) continue the work on CIS and explore the duality of the concept. They argue that a common information space has a dual nature; it is both open and malleable on the one hand and closed and rigorous on the other hand. Within this duality, the openness is necessary to meet the needs of the particular community, whereas the closed nature is important for sharing information across different communities. They also argue that there are many different types of common information spaces. For instance, people can work together from different physical locations, as in the maritime classification company presented by Rolland and Monteiro (2002), where a ship audit that is begun at one location can be completed at another location. Similarly, they can collaborate over time, as in the treatment of a patient where different shifts of health care workers care for the same patient (Munkvold and Ellingsen 2007). However, Bannon and Bødker argue that the different types of CIS all have some characteristics in common.

Bossen (2002) provided a third contribution to common information spaces by introducing seven parameters that provide a more detailed framework and thus can be applied to characterise the details of a given CIS. The seven parameters are:

1) The degree of distribution, which focuses on the physical distribution of the collaborating parties. A broader physical distribution of the members of a given community is believed to make the collaboration more difficult.

2) The multiplicity of webs of significance, which relates to the background (e.g., culture, language, education) of the community members. Again, more diverse backgrounds of the community members will make the collaboration more difficult.

3) The level of required articulation work, which examines how close the collaboration must be for a given CIS. When people have to work together more closely, more articulation work is required.

4) The multiplicity and intensity of means of communication, which involves the different channels that people use to communicate. Face-to-face communication is generally considered to be the most effective method of communication, but this is not always possible because of the distributed nature of much collaborative work. Accordingly, methods such as video conferences, telephones, e-mail, instant messaging and text messaging may be necessary. A more intense channel, such as video conferencing, requires less work to achieve a common interpretation or understanding than a less intense channel such as e-mail. For instance, when communicating by e-mail, the recipient will often need to respond with more or less the same content presented in a slightly different way to confirm their understanding; in face-to-face communication, non-verbal signals such as looks and gestures often provide this confirmation.

5) The web of artefacts, which consists of coordinating mechanisms, such as plans, strategies and schedules, that are necessary for the collaboration to be possible.

6) Immaterial mechanisms of interaction, which are more informal than the web of artefacts and focus on the work practices within the organisation and how the work is really done (as opposed to how the work is described in various workflow models).

7) The need for precision and promptness of interpretation, which relates to how closely people work together and how important this closeness is.

A limitation of Bossen’s work is that several of his parameters are derived from his empirical case. In his study, the coordination of work tasks is a primary issue of the articulation work that Bossen describes. Thus, the framework might need to be adjusted to fully support other types of articulation work and collaborative settings.

2.4 Information Infrastructure

Heterogeneity and the role of artefacts were introduced to the framework using CIS. The next step is to examine the aspects that surround and constrain the collaborative work. What guides and limits the development of work practices?

In the literature, the notion of Information Infrastructure (II) is used to describe the implementation of (large) integrated applications within an organisation while focusing on management issues, side effects and interconnectivity (Boudreau and Robey 2005).

21

However, II also involves sharing global resources that can shape and be shaped by practices and connections (Star and Ruhleder 1996; Rolland and Monteiro 2002).

Historically, infrastructures have been closely linked to material objects such as roads and railways. Such infrastructures have influenced work and economic development by allowing people to travel and move around more freely (Vaast and Walsham 2009). Today, such material infrastructures have been joined by information infrastructures with the help of technology that connects people across boundaries that are not only geographical in nature. Hanseth (2010) defines II as “a shared, evolving, open, standardised and heterogeneous installed base” and refers to the need to balance standards and standardisations on the one hand with openness and flexibility on the other hand. According to Pironti (2006), an II includes all of the people, processes, procedures, tools, facilities and technology that support the creation, use and transfer of information.

II has been a topic of interest among information systems researchers since the 1990s because of its contribution in shifting focus from organisations to networks and from systems to infrastructures. It has proven to be a valuable tool in a number of extensive case studies (Star and Ruhleder 1996; Ciborra 2000; Rolland and Monteiro 2002; Hanseth and Ciborra 2007).

II has also been useful for the development of an alternative approach to IS design: “Infrastructures should rather be built by establishing working local solutions supporting local practices which subsequently are linked together rather than by defining universal standards and subsequently implementing them” (Hanseth and Ciborra 2007).


2.5 Framework for Analysing Collaboration

A framework for evaluating collaboration on different levels has been established by combining elements from the theories of CoP, NoP, CIS and II. Table 2 provides a brief overview of how employees collaborate at different levels within OGC.

CoP
Key characteristics: Collocated groups of people within a discipline working together.
Empirical illustration: A group of production engineers in the same office working on optimising production from an oil and gas field.

NoP
Key characteristics: Distributed groups of people within a discipline working on similar topics.
Empirical illustration: Groups of production engineers in different locations within one organisation working on optimising production from various oil and gas fields.

CIS
Key characteristics: A group of people from various disciplines working together with a common goal.
Empirical illustration: A group of production engineers working with a group of reservoir engineers to maximise reservoir output and optimise production.

II
Key characteristics: The available systems, tools, practices and installed base that constitute the boundaries of the collaboration.
Empirical illustration: The MS SharePoint-based collaboration solution used by OGC that dictates how collaboration is to be achieved and how knowledge is to be shared within the organisation.

Table 2 - Theoretical framework


3 Case Study

3.1 Oil and Gas Company

In the early 1970s, an Oil and Gas Company (OGC, a pseudonym) was established by the Norwegian government. The company has since grown, both organically and through mergers and acquisitions, to become a global energy company that currently employs approximately 21,000 people in 36 countries across four continents. The company is listed on both the New York and Oslo stock exchanges.

OGC has traditionally been organised according to hierarchical models with a strict division of labour. Currently, OGC can be described as a matrix organisation. The company is divided into business units that are responsible for various functions, and people work in projects across these units. As a result, the operation of a single oil and gas field involves several different groups and functions across the various units.

In addition, OGC has established standards for how core activities are to be executed. This standardisation attempts to ensure that decisions are of the highest quality possible and to support the transfer of employees. The goal is that core activities, such as the drilling of wells, are performed according to the same process across different geographical locations. These best practices are broken down into more detailed descriptions of the work to be conducted. However, the granularity and level of detail vary for the different tasks; core activities, such as the drilling of wells and production optimisation, tend to be described in more detail than other processes. These best practice descriptions are continuously being modified. The process descriptions address the sequence of activities, the actors involved, the required deliveries and references to other relevant governing documentation.

OGC has also chosen to organise employees in co-located teams. If possible, the people responsible for one particular activity are located at the same location. Engineers from various disciplines work at the same location if they work on the same activity. For instance, the group that works on production optimisation in one particular oil and gas field typically consists of production engineers, reservoir engineers, geologists and geophysicists. These employees are co-located. Fifteen years ago, this was not the case; all production engineers used to work in one location, and all reservoir engineers used to work in another location. The various disciplines would only have meetings to make decisions. Today, the different disciplines are scattered throughout the organisation but meet during various network meetings and workshops to build competence and share knowledge.

24

OGC also has strong relationships with various vendors and service companies. Because oil and gas production is a complex business, OGC has chosen to outsource many of the support functions and focus on the core business. For example, because OGC depends on such a wide variety of complicated equipment, it is impossible to employ experts on all of the equipment within OGC. Rather, OGC relies on expertise from vendors and other external partners.

OGC also partners with other oil and gas companies. Because the exploration and establishment of new oil and gas wells and fields are expensive projects, the various companies form partnerships to reduce the risk. For instance, if company A and company B join in a 50/50 partnership on exploration project X and exploration project Y, both companies would still be able to make a profit if only project X was successful. If company A was solely responsible for project Y, it could encounter severe financial problems if project Y failed. These types of partnerships introduce even higher demands on documentation and reports.
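A small numeric sketch of the risk-sharing argument above may help; all figures are hypothetical and chosen only to illustrate the 50/50 example in the text, not taken from OGC.

```python
# Hypothetical 50/50 partnership sketch: two exploration projects, each costing 100,
# where only project X succeeds and pays back 300.
cost_per_project = 100
payoff_x, payoff_y = 300, 0          # assumed outcomes: X succeeds, Y is a dry well

# Company A alone on Y: it loses the full cost of the failed project.
a_alone_on_y = payoff_y - cost_per_project                                      # -100

# A and B split both projects 50/50: each carries half the costs and half the payoffs.
a_in_partnership = 0.5 * (payoff_x + payoff_y) - 0.5 * (2 * cost_per_project)   # +50

print(a_alone_on_y, a_in_partnership)  # -100 vs +50: the partnership still turns a profit
```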

However, although several oil and gas companies are often involved with an oil and gas field, only one of the companies is the operator of the field. In fact, some oil and gas companies do not operate any oil and gas fields at all.

3.2 Collaboration Infrastructure within OGC

Approximately ten years ago, a new corporate initiative to improve collaboration was initiated within OGC. The aim of this initiative was to introduce better ICT tools for collaboration.

In the early 1990s, OGC introduced a corporate collaboration infrastructure based on Lotus Notes in an attempt to reduce cost and counter the effects of falling oil and gas prices and low dollar exchange rates. Centralised, standardised and market-oriented IT services were the direct outcomes of several projects.

The Lotus Notes infrastructure was quite successful and was widely used in many different settings. Within this infrastructure, information was stored in a centralised Lotus Notes Arena database. Different projects had different databases that were customised to their needs. However, the Arena databases had no central indexing functionality; a user had to know exactly which database to search to find the needed information. For people working on one project at a time, this was not a problem. However, it was difficult for people working across different projects and for new workers to find the correct information. By the turn of the century, there were an estimated 5,000 different Arena databases within OGC, which caused much work to be repeated. For instance, if people working on one field had problems with a specific type of pump, they had no way of knowing which fields used the same pump, whether those fields had had similar problems or how they had solved them. Rather, they had to solve the problem themselves. There was very little knowledge sharing or knowledge transfer across projects.
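The retrieval problem described above can be made concrete with a small sketch: with thousands of separate, unindexed project databases, a search only ever covers the one database a user already knows about, whereas a corporate-wide index searches across all of them. The data structures and function names below are illustrative only and do not reflect OGC's actual systems.

```python
# Illustrative sketch of the Arena-era retrieval problem (not OGC's real data model).
# Each project database is a separate, isolated collection of documents.
project_databases = {
    "field_alpha": ["pump P-200 failure report", "weekly drilling summary"],
    "field_bravo": ["pump P-200 maintenance log", "reservoir pressure notes"],
    # ... thousands more databases in practice
}

def search_single_db(db_name, term):
    """Pre-2005 situation: you must already know which database to search."""
    return [doc for doc in project_databases.get(db_name, []) if term in doc]

def search_all(term):
    """What a corporate-wide index (e.g. the later FAST ESP search) enables conceptually."""
    return [(db, doc) for db, docs in project_databases.items()
            for doc in docs if term in doc]

print(search_single_db("field_alpha", "pump P-200"))  # misses field_bravo's experience
print(search_all("pump P-200"))                       # finds both fields' documents
```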

In addition, because not all types of files and documents were suited for the Arena databases, employees had access to additional file servers where they could store documents and information. The employees had access to both personal (“F disk”) and departmental (“G disk”) storage areas on these file servers, which further complicated the situation.

In 2001, OGC formulated a new strategy to overcome the shortcomings of Lotus Notes. This was considered to be important because the use of ICT continued to increase. Calculations showed that at that time, OGC produced more than 300,000 new documents every month. New laws and regulations also placed stricter requirements on OGC to be able to document decisions and ensure the integrity of these decisions.

The selection process for the new infrastructure was quite lengthy. The decision to implement a Microsoft SharePoint (MSSP)-based infrastructure was made in late 2003, and the first pilot began in early 2004. After the system was configured and several custom components developed, the finished solution was rolled out in early 2005. The roll-out process was largely problem-free and was finished by October 2005.

In addition to MSSP, OGC also introduced MS Exchange and MS Outlook for e-mail and calendar capabilities and FAST ESP (Enterprise Search Platform) for corporate-wide searching. A records management solution, Meridio, was also introduced for archiving purposes.

OGC was an early adopter of Microsoft SharePoint and was among the first large enterprise organisations to choose MSSP. The early version of MSSP was limited in functionality and can be considered mainly a content and document management system for internal use within an organisation; the system provided a web interface to store and retrieve files. One of the key advantages of MSSP compared with similar systems was its Microsoft Office-like interface and close integration with the Microsoft Office suite, on which OGC had already standardised. Because of the interface similarities with Microsoft Office, it was anticipated that non-technical users would be able to use the solution efficiently rather quickly.

Because the feature list of MSSP was initially quite limited, OGC had to develop their own functionality before the system was released to the users. According to one ICT manager, during this period of time “the country’s largest group of .NET developers was located at OGC’s headquarter”. OGC developed many components before the launch. Some of the features developed by OGC have been included in later versions of MSSP, including integration with various systems and the ability to upload both e-mails and attachments to MSSP directly from the Outlook e-mail client.

The central concept within MSSP is the team site, which can be considered a project room or space. Each project had one team site that was the centre of collaboration amongst the team members. Here, the team uploaded and stored all documents related to the project. Each team site had a flat structure: all documents were stored in one place but were tagged with keywords that allowed filtering for easier retrieval. However, it was possible to create document workspaces for the temporary storage of incomplete documents. Some teams discovered that by nesting these document workspaces, it was possible to create a folder structure similar to what they were used to, even though this was against corporate policy.
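As a rough illustration of the flat, tag-based team-site model described above (and of why nesting document workspaces effectively recreates folders), the following sketch is hypothetical and is not based on OGC's or SharePoint's actual data model.

```python
# Hypothetical sketch of a flat team site: every document lives in one list,
# and retrieval relies on keyword tags rather than a folder hierarchy.
team_site = [
    {"name": "well_report_q3.docx",   "tags": {"well A-12", "report", "2005"}},
    {"name": "pump_maintenance.xlsx", "tags": {"pump P-200", "maintenance"}},
    {"name": "meeting_minutes.docx",  "tags": {"meeting", "2005"}},
]

def filter_by_tags(documents, required_tags):
    """Return documents carrying all of the requested tags (flat model)."""
    return [d for d in documents if required_tags <= d["tags"]]

print([d["name"] for d in filter_by_tags(team_site, {"2005"})])

# Nesting document workspaces, by contrast, reintroduces a path-like hierarchy,
# e.g. "workspace A / workspace B / draft.docx" -- the folder structure some teams
# recreated even though it was against corporate policy.
```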

All projects had at least one team site, and many had two: one where only OGC employees had access rights and one where external partners also had access. In addition, all departments, sections and other groups had their own team sites. Within just a few years, several thousand team sites had been created within MSSP.

The plan was to phase out Lotus Notes quickly. According to internal documents, the Arena databases were to be replaced or removed by the end of 2008. However, this became problematic. One problem was that long-running projects had started before the introduction of MSSP; because of the difficulty of shifting platforms mid-project, these projects were allowed to continue using Lotus Notes. A second problem was that early versions of MSSP did not support documents with macros very well. Such documents, especially MS Excel spreadsheets, are important to some engineers, so these documents had to be stored on file servers. Third, MSSP limited the file size of documents uploaded to the infrastructure to 100 MB. Because specialised systems can produce files more than ten times larger than this, file servers remained necessary. Because of these limitations of MSSP, both the Arena databases and the file servers are still used in parallel with MSSP, although MSSP is the preferred solution.
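A minimal sketch of the practical consequence of these limitations: documents end up routed to whichever store can actually hold them. The 100 MB limit and the macro restriction come from the description above; the function and constant names are illustrative, and the routing rule is a simplification, not OGC's actual policy.

```python
# Illustrative routing rule for where a document ends up, given the constraints
# described in the text (100 MB upload limit, poor support for macro documents).
MAX_MSSP_FILE_MB = 100  # upload limit of the early MSSP version, per the text

def choose_storage(size_mb, has_macros, legacy_lotus_project=False):
    """Pick a storage location; a simplification, not OGC's actual policy engine."""
    if legacy_lotus_project:
        return "Lotus Notes Arena database"   # long-running projects kept their old platform
    if size_mb > MAX_MSSP_FILE_MB or has_macros:
        return "file server (F/G disk)"       # too large or macro-heavy for early MSSP
    return "MSSP team site"                   # the preferred solution

print(choose_storage(size_mb=1500, has_macros=False))  # large specialist output -> file server
print(choose_storage(size_mb=4, has_macros=True))      # Excel with macros -> file server
print(choose_storage(size_mb=4, has_macros=False))     # ordinary document -> MSSP team site
```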

The new MS SharePoint-based infrastructure, with its supporting sub-systems, has become a robust integrated collaboration infrastructure that is believed to improve the retrieval and reuse of information and documents within OGC and with external partners. Figure 1 gives an overview of how the collaboration solutions have evolved within OGC.

Figure 1 - Timeline of OGC's collaboration solution:
1992: Lotus Notes-based collaboration solution
2001: Decision to replace the old collaboration solution
2003: Decision to use Microsoft SharePoint
2005: Start of rollout of new collaboration solution
2008: Start of project to improve collaboration solution
2010: New solution available

As indicated by Figure 1, OGC has finished upgrading to a new version of MSSP since the data for this research were collected. The new version offers additional features, such as improved collaboration support, social networking functionality and website creation.

3.3 Research Setting

The research in this thesis can be divided into two parts. The first part addresses the overall collaboration infrastructure, including the technology, the intentions and the complexity. The second part focuses on the everyday collaboration and use of the system within a small group of engineers.

As such, the research in this thesis took place in two organisational contexts.

3.3.1 Technology and Research

The first part of the research was conducted within the business unit that focuses on technology, research and development, as well as innovation to some extent (dubbed Technology & Research, or T&R). This business unit consisted of approximately 2,200 employees and housed both the company’s ICT department and its research and development department.

People working within T&R cover a broad range of responsibilities. T&R includes ICT professionals who are responsible for the daily operations of most of OGC’s ICT infrastructure. They also provide technical support, and they plan and implement new ICT-related solutions. The unit also includes information managers who ensure the quality of information available to various user groups, as well as employees responsible for expanding OGC’s operations around the world. The fourth major group is researchers. OGC has three research centres at three different locations in Norway that conduct research on various topics, such as improving oil and gas production, developing new ICT systems, improving the search and exploration for new oil and gas fields, and analysing geologic structures.

OGC is a project-oriented organisation, which means that much of the work is organised as projects. These projects often involve employees from different business units within OGC and from external vendors and suppliers. A typical worker within OGC’s T&R unit works on two to four different projects at a time. The projects typically span several years and involve both internal and external partners. Thus, all members of the project are rarely collocated. The project may involve several different people from different organisations at different locations.

One such project was the acquisition and implementation of a new real-time monitoring system for oil and gas production. The project was run by the ICT department from OGC’s headquarters but involved ICT professionals located at one of the company’s research centres, production engineers at one of their operations centres, and people working for the chosen system provider. In the early stages of the project, the members of the project team met physically for a workshop and to get to know each other, which was considered to be important to establish a good rapport within the team.

After the initial workshop, the team members would typically work from their normal workplace and use video conferences for regular, scheduled meetings every week. During the development and implementation stages, the team had daily SCRUM meetings every morning through video conferences. Team members typically used instant messaging, phone or e-mail to communicate with other members. Information that was needed by all team members was distributed through e-mail. Documents, reports and presentations were stored in an area of the MS SharePoint-based collaboration solution where external partners could be granted access.

Another project conducted from the research centre focused on improving work practices and collaboration within a specific high-level leader group. The project examined how the weekly leader group meetings could be improved because this was an arena where important, and potentially costly, decisions were made on a regular basis. During this project, two researchers from the research centre spent two days per week at the corporate headquarters for several months to analyse the meetings, suggest changes to the agenda and develop guidelines for people presenting issues in the meetings on what to present and how to present their case.


In this phase, I worked on an internal project, run by OGC’s R&D department, aimed at improving collaboration within the business unit. Because employees in this business unit spent exceptionally large amounts of time travelling, the aim of this project was to identify ways to reduce the amount of travel time without reducing the quality of the collaboration and work. Following this project allowed me to access (parts of) OGC’s collaboration infrastructure as well as various documents and reports. I also had the opportunity to talk to engineers and managers within OGC’s IT department and formed strong connections with some of the company’s researchers.

During this phase, I mainly focused on the users’ thoughts about and opinions of the collaboration infrastructure and the intentions behind the system.

3.3.2 Oil and Gas Production

The second part of my research was conducted within a group of engineers working on the production of oil and gas from one particular field. The group consisted of approximately 25 employees from different disciplines, including production engineers, reservoir engineers, geophysicists and geologists. The group was divided into two sub-groups: one focusing on planning and future production and one focusing on the current production. My work focused mostly on the latter sub-group, which works closely with operators offshore on the platform and other land-based organisations.

The field that this group was working on contained approximately a dozen individual oil and gas wells that were connected to one production platform. The group’s responsibility was to bring the optimal amount of oil and gas from the reservoirs below the ocean bed to the platform and to ensure that as much of the oil and gas as possible could be extracted from the reservoirs over time.

The first goal was initially quite easy to meet. The platform was able to process as much oil and gas as the group was able to produce from the reservoirs. However, in the fall of 2009, the group began managing an additional oil and gas field with approximately six wells. The total production from the wells then exceeded the platform’s capacity, which meant that they could no longer produce from the wells at the maximum rates and had to limit production from some of the wells. Thus, this part of their responsibilities became more challenging.
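For readers unfamiliar with this kind of constraint, the small Python sketch below illustrates the situation in the simplest possible terms; the well names, rates and capacity figure are invented, and proportional scaling is just one naive way to respect a platform limit, not a description of how the group actually allocated production:

# Illustrative only: made-up names, rates and capacity; proportional choking
# is a deliberately naive allocation rule.

def cap_to_capacity(max_rates: dict, capacity: float) -> dict:
    """Scale well rates down proportionally if their sum exceeds the platform capacity."""
    total = sum(max_rates.values())
    if total <= capacity:
        return dict(max_rates)      # every well can run at its maximum rate
    factor = capacity / total       # otherwise all wells are choked back
    return {well: rate * factor for well, rate in max_rates.items()}

wells = {f"A-{i}": 1000.0 for i in range(1, 13)}    # original field: about a dozen wells
wells.update({f"B-{i}": 900.0 for i in range(1, 7)})  # additional field: about six wells
allocated = cap_to_capacity(wells, capacity=15000.0)
print(round(sum(allocated.values()), 1))            # 15000.0 -- within the platform limit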

Their second responsibility, i.e., ensuring that they would maximise the total production of oil and gas from the reservoirs, was more challenging. As oil and gas is removed from the reservoir, water flows in to fill the voids. Because water has properties different from those of oil and gas, changes in the water/oil/gas ratio cause changes in the reservoir’s properties, such as pressure and temperature, which can result in insufficient pressure to pump the oil and gas to the platform. If too much water flows to the wrong places in the reservoir, a well can start producing only water and is then considered to have drowned. To prevent this from occurring, the group has to ensure that the various operating parameters of the reservoirs and the individual wells are within given production constraints.
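A minimal sketch of what such a constraint check amounts to follows; the parameter names and limits are hypothetical and chosen only to illustrate the idea of keeping each well within its allowed operating envelope, since the actual constraints monitored by the group are not detailed in this thesis:

# Hypothetical parameters and limits, for illustration only.

CONSTRAINTS = {
    "wellhead_pressure_bar": (150.0, 300.0),
    "temperature_c": (40.0, 90.0),
    "water_cut_fraction": (0.0, 0.85),   # too much water and the well "drowns"
}

def violations(readings: dict) -> list:
    """Return the parameters that are missing or outside their allowed range."""
    out = []
    for name, (low, high) in CONSTRAINTS.items():
        value = readings.get(name)
        if value is None or not (low <= value <= high):
            out.append(name)
    return out

print(violations({"wellhead_pressure_bar": 180.0,
                  "temperature_c": 72.0,
                  "water_cut_fraction": 0.91}))      # ['water_cut_fraction']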

One of the central roles within this group is that of the production coordinator, which is rotated among the production engineers on a monthly basis. The production coordinator has the responsibility of coordinating the group’s activities with the activities of other groups that work on the same field, including those managing the employees working on the platform, those running the pipelines from the platform, those performing maintenance, and those monitoring the environmental impact of the production.

The first task that the production coordinator performs upon arrival is to check various logs to determine what has occurred since the end of work on the previous day. The production coordinator then has a telephone meeting with the platform. In addition to the production coordinator, this meeting includes two people in the control room on the platform, i.e., an operations engineer and an employee from environmental monitoring, who describe what has happened and what is planned for the day. This meeting lasts ten to fifteen minutes and follows a predefined agenda.

The production coordinator then immediately has a video conference meeting with the main office on the platform. This meeting involves the various leaders off- and onshore, including five to six people on the platform and ten to twelve people onshore. This meeting typically lasts for 20 to 30 minutes. The production coordinator uses this meeting to identify activities that will influence the production over short or long time periods. For instance, the replacement of a pump may require production to be stopped for several hours for safety reasons.

The production coordinator subsequently returns to his or her office, which is a room with space for four production engineers that is separated from a large open-plan office by a large sliding glass door. The production coordinator then leads a meeting for the rest of the group. He or she summarises the two previous meetings with a focus on what is relevant for production. The meeting is held in the office with people sitting on chairs along the wall or standing in the doorway. Typically, fifteen to 25 people will attend this meeting, which lasts between three and fifteen minutes.

The production coordinator will then usually focus on coordination activities throughout the day and, if time permits, perform tasks related to production engineering, such as monitoring the wells, planning well activities, planning well tests and evaluating these tests.

I had the opportunity to follow the engineers in their normal, everyday work. I sat amongst them while they worked, I attended the group’s internal meetings, and I was able to follow them to meetings with other parts of the organisation. I also had access to their collaboration system and some of their specialised tools.

Through observations, semi-structured interviews and informal chats during lunch and breaks during my time with the group, I feel that I gained insight into how the group works and interacts, which allowed me to narrow my empirical and analytical focus to how the available information infrastructure is actually used to support collaboration.


4 Research Method

4.1 Research Approach

The aim of this thesis is to investigate how a new collaboration solution has been implemented and accepted in a large organisation. Methodologically, the research is framed as an interpretive longitudinal case study. In contrast to the positivist tradition, which depends on hypothesis testing, quantifiable measures of variables and objective and factual accounts, the interpretive tradition aims to increase understanding of a phenomenon within cultural and contextual situations where the phenomenon has been examined in its natural settings and from the perspective of the participants and where researchers do not impose their a priori understanding on the situation (Orlikowski and Baroudi 1991).

While the positivist tradition assumes “that reality is objectively given and can be described by measurable properties” (Myers and Avison 2002), the ontological position of interpretive research is that the social world is not a “given” but is constructed through language, meanings and artefacts (Klein and Myers 1999). From an epistemological viewpoint, the interpretive tradition embraces rather than eliminates a researcher’s bias and emphasises the significance of engaging in the world to explain it. The interpretive tradition can thus be understood as a flexible research approach and can be particularly relevant for longitudinal explorative studies in which the research focus is not predefined but rather can be adjusted depending on the circumstances and emerging analytical patterns.

However, an interpretive stance should not be seen as a better methodological approach to study organisations. Different approaches are required to highlight different aspects of the same phenomenon. Scholars have not yet developed approaches to combine positivistic and interpretive stances, although several researchers have begun to combine qualitative and quantitative data to focus on multiple aspects of the same phenomenon. A major critique of interpretive research is that such studies are mere “reportages and local narratives” (Carlsson 2003). The relevance of fascinating empirical detail, which is typical of interpretive studies, should be critically evaluated to avoid “ethnographic positivism”. When empirical data that are influenced by the researcher’s orientations, perceptive biases, and methods are taken as the ultimate yardstick for assessing reality, important but not immediately observable aspects of reality are ignored (Kallinikos 2004).

However, it has been suggested that empirical work should be “strategically” motivated to determine the relationship between technology and society at multiple levels and timeframes. Although I agree with this critique, I share the view of Walsham (2005) that “interpretivism” is a label that enables rather than constrains imaginative thought. Several methodological alternatives have been proposed to improve research, yet few accounts solve the problem of “reportages and local narratives”. From this perspective, a method, such as an interpretive case study, is not seen as a set of tools ready to be applied; rather, it is thought of as a set of guidelines (Klein and Myers 1999) that must be mindfully integrated into the overall research project.

The interpretive tradition was identified as being a particularly relevant approach for this research because it emphasises (Walsham 1995; Walsham 2006): a) a varying style of involvement, b) different ways of using theory, c) a non-deterministic research plan, d) multiple data collection and analysis methods, and e) several possibilities for generalisations. These aspects are discussed in the following sections.

4.2 Negotiating Access

Access to the object of study is of crucial importance for any in-depth study. As many researchers have experienced before, gaining access is not always easy; it is often a time-consuming undertaking with an uncertain result. In addition, obtaining access does not guarantee that you will continue to have it. Gaining and maintaining access is an on-going process (Walsham 2006).

I initially gained access to one of OGC’s research centres through my supervisor, who had an established connection with OGC, as well as through my co-supervisor, who works as a senior researcher within the company. These contacts allowed me to obtain physical access to the building. After spending some time within OGC, I was introduced to other researchers who worked on projects that could have been interesting to me. Following this internal research project, I was introduced to other employees within the business unit, including ICT employees and researchers working with the new collaboration infrastructure. Interviewing these employees gave me insights into the overall collaboration system.

However, I wanted to observe “normal” users of the collaboration system and not only ICT professionals and researchers. Because OGC is an oil and gas company, I felt that it would be particularly beneficial to speak with employees in the core business, i.e., oil and gas production. However, this was not easy. Through the contacts I had established, I was put in touch with three different production-related units. After initial positive impressions, I was turned down by all three.

Fortunately, two of OGC’s researchers were approached by a small production unit about helping them to improve the unit’s work. Because these two researchers only had limited time to spend on this project, I was asked to assist and thus gained access. After helping to prepare and facilitate one workshop, I was allowed to continue to follow the group, and thus I obtained access to a group of people working on OGC’s core business.

4.3 Data Collection

Within interpretive research, it is considered useful, if not necessary, to utilise a variety of different methods to obtain diverse perspectives (Klein and Myers 1999) and obtain the broadest possible understanding of the topic at hand. Most interpretative case studies rely on qualitative data sources, such as interviews, observations and document analysis. The primary data for this study were qualitative, but quantitative data, such as statistics regarding the usage of the collaboration infrastructure, have also been useful in contextualising the findings (Walsham 2006). This section outlines the data collection activities.

Data collection began in early 2007, when I was granted access to one of OGC’s research centres to explore their new collaboration infrastructure. A variety of different modes of data collection was used, including semi-structured interviews, participant observations and document analysis. Table 3 summarises the empirical data collected in the two different contexts.

Context: Technology and Research
Data sources:
- Interviews: Seventeen semi-structured interviews were conducted with developers, administrators and managers of the IT infrastructure.
- Focus group meetings: I attended seven focus group meetings with four to seven participants organised by OGC researchers related to the internal project; each meeting lasted approximately one hour.
- Document analysis: Internal documents (e.g., presentations, reports, and statistics) detailing the various aspects of the collaboration infrastructure; internal training material/courses.

Context: Oil and Gas Production
Data sources:
- Interviews: Sixteen semi-structured interviews with various engineers and managers.
- Participant observations: Approximately 120 days spent following the group at OGC’s operations centre; more than 350 meetings observed, ranging from four minutes to an entire day in length, with three to 25 participants each.
- Document analysis: Meeting minutes; best practice documentation/process descriptions.
- Informal conversations: Conversations during lunch, during breaks, while waiting for others and while walking to and from meetings.

Table 3 - Data sources according to context

The interviews lasted between one and three hours and were semi-structured. The initial interviews, in both contexts, were open-ended and focused on identifying areas for further investigation. The later interviews were focused on gaining an understanding of the situation.

Within the Technology and Research context, the focus of the initial interviews was to understand how the collaboration infrastructure was implemented and the possibilities that were thus made available. The later interviews in this context focused on the use of the solution. In total, I conducted seventeen interviews with developers, administrators and managers. I also attended seven focus group meetings conducted by researchers within OGC in four geographical locations on the condition that I helped co-facilitate the meetings and shared my notes from the events with OGC’s researchers. Because these focus group meetings were about improving collaboration within the business unit, I found them to be very interesting. The focus group meetings were organised as group interviews with four to seven participants and lasted approximately one hour.

Document analysis was also useful in this part of the research. I studied a broad range of internal strategic documents related to the planning and implementation of the collaboration infrastructure. These documents ranged from presentations and reports openly available on OGC’s intranet to specifications and documentation e-mailed to me by interviewees. Because many of these documents were restricted to internal OGC use, having access to OGC’s network and an internal e-mail address made it easier for employees to send them to me. There would have been a higher threshold for people to e-mail the same information to my e-mail address at the university.


Within the Oil and Gas Production context, I conducted a total of sixteen interviews. I interviewed ten people; six people were interviewed twice. These interviews were semi-structured and lasted from one to three hours, typically about one hour. During the initial interviews, I focused on understanding what the interviewees considered to be their main task/responsibility, what ICT tools they used and what challenges they faced on a daily basis. Because the managers of this group were focused on improving work processes in preparation for the opening of a new oil field, which began production during my stay with the group, the daily work situation changed drastically. Thus, the later interviews focused on how the situation had changed and on how the tools that the managers used were able to handle these changes.

My first visit with the group of engineers was in March 2009. Over the remainder of the year, I spent approximately 120 days, between three and five days per week, with the group. I had access to a desk and a computer in the open-plan office where the group worked. As such, I was able to observe the engineers in their everyday work without being a burden. I was able to attend the group’s daily status meetings in addition to at least two other daily meetings with other groups. In total, I observed more than 350 different meetings lasting from a few minutes to entire days. Between three and 25 people attended these meetings. Typically, approximately five, twelve, and sixteen people would attend the three daily meetings that I observed.

Spending this much time with a group of engineers was very useful. I gained many insights that I probably would not have obtained if I had relied on interviews only. To some extent, I became a part of the environment. The engineers became used to having me in the office. However, I did not become a part of the group because I did not have the necessary background or knowledge. I did not understand much of what the engineers talked about, especially in the initial phase. However, being a novice in the field of oil and gas production also had its advantages. For instance, I was not threatening. Being observed by an expert who is able to question everything you do and every decision you make may be more intimidating than being observed by someone who will not understand if you say something wrong. Being a novice also afforded me the luxury of asking basic questions. For instance, if the engineers talked about a piece of equipment, I could ask about the equipment without being embarrassed that I did not know the answer.

4.4 Data Analysis

Data collection is meticulous and time consuming. Obtaining large amounts of empirically collected data and relating it to existing theories is not a mundane task, especially in a longitudinal qualitative case study. However, it is important to establish a practice for analysing the data. Several techniques for analysing collected data have been suggested, ranging from rigid/systematic approaches, such as grounded theory (Urquhart 2001; Hughes and Jones 2003), to loose guidelines (Walsham 1995; Klein and Myers 1999; Walsham 2006). I rely upon the latter approach in this study.

I used several steps developed by Miles and Huberman (1994) to analyse the data in my research project:

- giving codes to the initial set of materials obtained from observations, interviews and analyses of documents;
- adding comments and reflections;
- trying to identify similar patterns, themes, relationships, sequences and differences in the materials;
- taking these patterns and themes out to the field to clarify the next wave of data collection;
- gradually elaborating a small set of generalisations to cover the themes discerned in the data;
- linking these generalisations to a formal body of knowledge in the form of constructs or theories.
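To give a concrete, if toy-sized, impression of the first three steps, the following Python sketch codes a handful of invented excerpts and groups them into candidate themes; the excerpts, sources and codes are made up for illustration, and the real analysis was of course done on field notes, transcripts and documents rather than in code:

# Illustrative only: invented excerpts and codes standing in for the coding
# and pattern-finding steps listed above.

from collections import defaultdict

excerpts = [
    ("interview_03", "We mostly e-mail documents instead of using the team site.", "workaround"),
    ("fieldnote_12", "The morning meeting summarised platform activities in ten minutes.", "coordination"),
    ("interview_07", "The spreadsheet macros break when files are moved to the new system.", "workaround"),
]

# Group coded excerpts into candidate themes and inspect their weight.
themes = defaultdict(list)
for source, text, code in excerpts:
    themes[code].append((source, text))

for code, items in themes.items():
    print(f"{code}: {len(items)} excerpt(s)")   # e.g. "workaround: 2 excerpt(s)"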

Although the proposed steps are outlined sequentially, data analysis in longitudinal research is continuous and iterative. As suggested by Klein and Myers (1999), the analytic process can be understood as a hermeneutic circle that relates the whole to the part and the part to the whole. The ‘part’ is not a fixed unit; rather, it is flexible and permits the changing of the unit of analysis for a given purpose. For instance, when writing a paper, a ‘part’ can be an individual engagement with technology, and the ‘whole’ may be an emerging explanation of cross-contextual enactment (Jarulaitis and Monteiro 2009). In contrast, when writing the thesis, a paper becomes a ‘part’ in a larger context. The parts of the hermeneutic circle are discussed in more detail below.

The analytic process for this research began after the first discussions with the OGC actors. As mentioned above, several researchers from my faculty were involved in the same project, which resulted in discussions from the start regarding various issues that attracted our attention. I initiated a more intensive and personal data analysis when I began to navigate through the OGC on my own. During observations and interviews, I made field notes that I would fill in and elaborate on after the observation or interview.

As the intensity of my research increased and I became more comfortable with the field of study, my data analysis changed slightly. As I collected more data, my role changed from being a listener to being more of an active discussion partner. In particular, my goal was to identify multiple perspectives (Klein and Myers 1999) and triangulate different data sources. During the interviews, I would refer to various data sources, such as previous interviews, documents or my impressions from observations, and ask my respondents to elaborate on their perspectives in relation to other views. Thus, the interviews became a place for discussions and analysis.

In addition to data collection activities, writing papers is an integral part of doctoral work. During the period of writing articles, the intensity of data analysis increased greatly. In particular, theory took on a central role. As Walsham (1995) suggests, theory can be used as: i) an initial guide to design and collect data; ii) a part of an iterative process of data collection and analysis; and iii) a final product of the research. When writing a paper, the author must relate, apply and extend existing theory. This process implies balancing inductive and deductive reasoning. Whereas my early papers were more deductive (Hjelle and Jarulaitis 2008), my later papers were intended to show how statements evolve from data. The analysis when writing a paper focused on identifying similarities, differences and relationships among different interviews, observations and documentary evidence and required the establishment of connections across different data sources.

Furthermore, several hermeneutic cycles were required to produce the final version of a paper. Writing a new paper required the execution of the analysis process outlined above from the beginning because “each theoretical stance and each research question open up a unique reading of the transcripts” (Boland 2005). Writing a new paper required reading transcriptions, listening to recordings, producing new codes, identifying similarities and differences, and generating new relationships.

Overall, the analysis was continuous and iterative and followed multiple hermeneutic cycles. Activities such as coding, identifying patterns, triangulation and relating empirical data to theory were central to the analytical process. In addition, discussions with various actors were also crucial and occurred in three different arenas, namely the OGC, my faculty and academic conferences or workshops.

5 Results

This thesis includes the following five papers:

1. Hjelle, T. & Jarulaitis, G. (2008). Changing Large-Scale Collaborative Spaces: Strategies and Challenges. Paper presented at the 41st Hawaii International Conference on System Sciences, Hawaii, USA.


2. Hjelle, T. (2008). Information Spaces in Large-Scale Organizations. Paper presented at the 8th International Conference on the Design of Cooperative Systems, Carry-le-Rouet, Provence, France.

3. Hjelle, T. (2010). The Introduction of a Large Scale Collaboration Solution: A Sensemaking Perspective. Presented at the Norsk konferanse for organisasjoners bruk av informasjonsteknologi, Gjøvik, Norway.

4. Hjelle, T. & Monteiro, E. (2011). Tactics for Producing Actionable Information. Presented at the 2nd Scandinavian Conference on Information Systems, Turku, Finland.

5. Hjelle, T. & Østerlie, T. (2013). Joining a Community: Strategies for Practice-Based Learning. Submitted to the 46th Hawaii International Conference on System Sciences, Hawaii, USA.

The papers included in this thesis were written during different stages of the Ph.D. research project and are listed in the order in which they were published. This order, as well as the different topics investigated within the papers, reflects changes in my analytical thinking as well as changes in my involvement in the field. Because discussions with supervisors, colleagues and engineers and researchers within OGC have been important, all of the papers, including those where I am the sole author, should be considered collective products.

The papers investigate the two research questions outlined in the introduction of this thesis:

RQ1: How does a global collaborative information infrastructure support work practices within a local context?

RQ2: How do users navigate and adapt to the possibilities and limitations of information systems to facilitate their daily work?

Papers 1 and 2 were written in the early stages of the data collection phase and are analytical papers that identify the challenges users face when being introduced to a new large-scale collaborative infrastructure. These papers both relate to RQ1. Paper 3 also addresses RQ1 in a similar manner but also contributes to RQ2 because it is geared towards a specific group of users. Papers 4 and 5 focus in depth on how collaboration occurs within a group of engineers. In these papers, I changed both my theoretical and empirical focus within this Ph.D. research project from examining and analysing systems to investigating a group of users and their daily work in the field of oil and gas production. These papers mainly contribute to RQ2, but they also contribute to RQ1 because the users’ actual usage is examined in relation to the systems themselves.


Paper 1 - Changing Large-Scale Collaborative Spaces: Strategies and Challenges
Theoretical grounding: Common Information Spaces; Integration
Research question: RQ1
Contributions: This paper emphasises the non-technical aspects of integration and suggests that a large-scale CIS is a collection of smaller, overlapping common information spaces with socio-technical arrangements that need to be continually (re-)negotiated by the actors involved.

Paper 2 - Information Spaces in Large-Scale Organisations
Theoretical grounding: Common Information Spaces
Research question: RQ1
Contributions: This paper builds on Paper 1 by using CIS to analyse a large-scale organisation as a single CIS, thus identifying smaller overlapping spaces.

Paper 3 - The Introduction of a Large Scale Collaboration Solution: A Sense-Making Perspective
Theoretical grounding: Information Infrastructures; Sense-making
Research question: RQ1 + RQ2
Contributions: In this paper, we use the theories of sense-making and information infrastructure to explain how users’ understanding and use of an information system relate to their previous knowledge and experience.

Paper 4 - Tactics for Producing Actionable Information
Theoretical grounding: Integration; Decision support; Knowledge creation
Research question: RQ2 (RQ1)
Contributions: The oil and gas industry strives to provide engineers with as much information as possible for them to make sound decisions. In this paper, we show that it is nearly impossible to provide all this information and that this is not desirable because the engineers rely on general knowledge of oil and gas production as well as deep knowledge of the local situation.

Paper 5 - Joining a Community: Strategies for Practice-Based Learning
Theoretical grounding: Community of Practice; Networks of Practice
Research question: RQ2 (RQ1)
Contributions: New engineers undergo a training period when joining a group. In this paper, we show what strategies the engineers utilise to become full-fledged members of this group. The process includes a mixture of generic knowledge-sharing strategies and strategies developed informally within the local context.

Table 4 - Overview of papers and contributions

5.1 P1: Changing Large-Scale Collaborative Spaces: Strategies and Challenges

This paper is theoretically based within the CSCW literature and draws on the notion of CIS. The CIS concept provides a means of evaluating the challenges of establishing a large-scale collaborative space. Rather than assigning deterministic power to collaborative technology, Schmidt and Bannon (1992) argue that CIS can only be achieved through active involvement by the involved parties. In more recent publications (Rolland, Hepso et al. 2006), the authors emphasise the idea of commonality and the need to study large-scale settings.

Based on these insights, we identify a number of challenges that an organisation faces when establishing a large-scale CIS. In our analysis, we address the nature and composition of CIS with respect to flexibility and heterogeneity as well as the management of CIS. Our findings suggest that a large-scale CIS is composed of smaller, overlapping common information spaces that each contain a heterogeneous collection of socio-technical arrangements that need to be continually (re-)negotiated by the actors.

Based on our findings, we conclude by challenging the assumption that collaborative work should only be supported by centralised and tightly integrated knowledge and information systems.

This paper began as an essay for a course and was written jointly by the authors, who were both Ph.D. students. It is difficult to identify each author’s contribution, but the second author mainly contributed to the theoretical portion, whereas the first author primarily contributed to the data analysis. The data collection was conducted equally by both authors.

5.2 P2: Information Spaces in Large-Scale Organisations

The second paper builds on P1 by elaborating upon the use of CIS. Using the seven parameters of CIS, we analyse OGC and build on the framework by assigning importance and relevance to the individual parameters.

Our analysis suggests that the seven parameters of CIS provide a useful framework for analysing collaborative work within a certain context. However, the analysis also suggests that different parameters have different values in different settings. Our results show that in the case of OGC, the degree of distribution and the multiplicity of webs of significance seemed to be of higher than average importance, whereas the immaterial mechanisms of interaction had a lower than average importance.

Our findings were limited because they were obtained from a single case and would have been more robust if also obtained in an additional setting.

5.3 P3: The Introduction of a Large Scale Collaboration Solution: A Sense-Making Perspective

In this paper, we draw upon the theories of sense-making to analyse how the introduction of the new Microsoft SharePoint collaboration solution was received in one of OGC’s business units. The paper identifies several key characteristics of the new collaboration solution and discusses them using relevant theories.

Empirically, the paper was based on findings from a group of people working in the areas of research and ICT within OGC but also included management and HR. Our findings suggest that people’s understandings of the new collaboration solution were largely based upon their previous understanding of similar systems. That is, people tended to use the new system in the same way they had used the older system, and thus they did not utilise much of the new functionality offered by the new solution. Hence, the users did not experience the expected improvement.

5.4 P4: Tactics for Producing Actionable Information

To make sound decisions, it is generally desirable to have complete and accurate information. However, this is nearly impossible within the oil and gas industry. There are practically no limits on how much information can be gathered for any given oil and gas well, process or field because of the large numbers of sensors and gauges that provide information. However, at any given time some of the sensors or gauges are either not working or are providing incorrect data. Thus, engineers who rely upon this information have to determine what information to trust and what information is relevant. If information is not relevant for the situation at hand, it is not important whether or not the information is correct.

This paper follows a group of production engineers as they work to bring the optimal volume of oil and gas from the reservoir below the seabed to the processing plant on the platform. Because perfect information is not available, they must use various tactics to determine what information to trust. In practice, the engineers relied on historical information that had been shown to be reliable and calculated missing information based on known values, local knowledge and experience about the relevant oil and gas wells.
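The following Python fragment is a deliberately simplified sketch of this tactic; the plausibility limits, values and fallback rule are invented here to illustrate the idea of preferring trusted readings and falling back on reliable history, and they are not taken from OGC’s systems or from the paper:

# Illustrative sketch: invented limits and values; real engineers combine far
# richer local knowledge than a simple plausibility range.

def usable_value(current, history, low, high):
    """Prefer a plausible live reading; otherwise fall back on recent reliable history."""
    if current is not None and low <= current <= high:
        return current                                  # trust the live sensor
    plausible = [v for v in history if low <= v <= high]
    if plausible:
        recent = plausible[-5:]                         # last few trusted values
        return sum(recent) / len(recent)
    return None                                         # nothing trustworthy: engineering judgement needed

print(usable_value(None, [212.0, 214.5, 213.8], low=100.0, high=400.0))   # ~213.4 (gauge down)
print(usable_value(9999.0, [212.0, 214.5], low=100.0, high=400.0))        # ~213.3 (implausible reading)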

The paper was mainly written by the first author. The second author contributed some of the theory and gave advice and feedback on the text. The main part of the data collection was conducted by the first author, but the second author participated in some of the interviews.

5.5 P5: Joining a Community: Strategies for Practice-Based Learning

To become a proficient production engineer, a new engineer requires knowledge about the relevant oil and gas wells. Because most production engineers do not have an educational background as production engineers, they need other forms of training to develop the skills of a capable production engineer.

In this paper, we again focus on a small group of production engineers to identify what it takes to become a skilled production engineer. Our findings suggest that although it is important to have a generic understanding of how oil and gas is produced, it is less important that this knowledge is production-specific. The engineer’s general understanding forms a background that the production-specific knowledge builds upon through on-the-job training. New production engineers are introduced to their job through a number of strategies. For instance, new engineers are initially assigned a mentor to follow. The new engineers are then given minor tasks to fulfil. With time, they are also given the responsibility of analysing and following several wells. Through this on-the-job training, new engineers become members of the community of production engineers and thus become knowledgeable production engineers.

The paper was written by the first author. The second author contributed through guidance, advice and feedback on the text.


6 Implications

6.1 Implications for Information Systems Research

Knowledge work is an increasing area of focus within most organisations. Because this type of work requires a knowledgeable workforce, it is important for these organisations to have systems and solutions in place to help train and support their employees. A knowledgeable workforce will make better decisions, which will in turn benefit the organisation.

In a corporate environment, such training and support systems tend to focus on providing workers with a comprehensive set of information. The idea is that when more data and information are available to the workers, better decisions will be made. Management often turns to a corporate-wide collaboration system, such as the MS SharePoint-based solution at OGC, for this purpose. These systems tend to be structured and rigid to support the work tasks of as many users as possible.

However, this research has shown that even with such systems, people utilise the solutions in ways that were not anticipated by the ICT management or the vendor of the system. The research community must be aware of this practice when exploring work in these settings.

Within OGC, the new Microsoft SharePoint collaboration infrastructure was intended to provide a standardised collaboration environment for all employees throughout the organisation. However, this was not the case in practice. Rather, users have had the opportunity to customise and configure the use of the infrastructure to such a degree that it can be considered to represent numerous different adaptations of the Microsoft SharePoint infrastructure. This variation has been an important factor in making the new implementation successful.

It is also important to note that employees in large organisations tend to belong to different sub-communities within the organisation. Not all OGC employees are alike: some work in production, whereas others work in sales, research, and HR. In their daily work, employees work closely with a group of colleagues that make up their community. However, this does not mean that they are only members of such small communities. In fact, employees continuously shift between various sub-groups; some are small, whereas others are large. People make this shift automatically if they have been properly introduced to the organisation.

The framework developed within this research can thus be used to help researchers explore how employees navigate and negotiate complex organisational structures.


6.2 Implications for the Method

Interpretive case studies encourage the investigation of various contexts before conclusions are drawn. Researchers typically focus on one particular setting, which makes it more difficult to draw valid conclusions. However, some research has explored technology across different contexts (Barley 1986; Robey and Sahay 1996). My research follows this latter tradition.

First, we followed the introduction of a new collaboration solution based on MS SharePoint technology. We examined how the new information infrastructure was expected to be used within OGC. Second, we followed a large business unit within OGC. There, we examined how the information infrastructure was actually used in a large setting. Third, the research focused on a small community of engineers and how this group used the system in comparison with the intended use and within the larger business unit. As such, even though the research only followed one company, we conducted research on different kinds of user groups. Thus, although there was no opportunity to conduct research on the same system in different organisations, we argue that this is the second-best option. In addition, because the collaboration system was customised to OGC, it would be impossible to conduct research on this system within any other organisation.

Interpretive case studies usually rely upon qualitative data collection methods. Structured and semi-structured interviews are often the primary method of collecting data. A researcher interviews several central employees and then leaves. During this research, however, most of the time was spent within the organisation. Having the opportunity to spend this time within OGC provided insights and understandings that might not otherwise have been possible. Spending this time within the organisation familiarised the informants with the researcher and built a level of trust that would likely not have been present otherwise. This familiarity also provided the researcher with insights into specific topics, such as oil and gas production, and thus enabled him to ask better questions during interviews.

Unexpectedly, it also appears that not being an oil and gas expert was advantageous. Because I was a complete newcomer, I had to ask the “stupid” questions during informal conversations, such as those that occurred while walking back and forth between meetings, while in line for lunch or while waiting for someone. This appeared to have built a level of confidence amongst the informants and made them more likely to confide in me and be straightforward when asked about various topics. These insights may be useful for other researchers that find themselves in similar situations.


6.3 Implications for ICT Management

When OGC chose to implement a collaboration solution based on MS SharePoint technology, they desired an out-of-the-box solution. However, because such solutions are intended to cater to a broad customer base, this goal proved to be unrealistic. OGC had special requirements that the solution had to meet. Thus, customisation of the system was necessary. The ability to customise the system is a factor that ICT management should be aware of when implementing new systems.

OGC initially did not plan to teach the employees how to use the new solution because it was believed that the system was heavily integrated with the Windows environment that employees were already familiar with. However, management quickly changed its mind and developed various courses on using the system. Because the new MS SharePoint-based solution differed substantially from the previous Lotus Notes-based collaboration solution, the users were not able to use it as intuitively as had been hoped.

In large organisations such as OGC, employees work on a variety of different tasks. Some employees work mainly with reports written in MS Word and presentations made in MS PowerPoint. These people have needs that are completely different from those of engineers who work with complex spreadsheets with numerous macros and dependencies created in MS Excel and enormous datasets from specialist tools. For instance, an engineer may work on spreadsheet A that uses macros to automatically collect information from spreadsheet B. If these spreadsheets are stored in the MS SharePoint-based collaboration system, spreadsheets A and B cannot communicate as they would have if they were both stored in the same folder on a disk. For the engineers, using the new system would be much more cumbersome than having the spreadsheets stored on a shared disk drive. When planning to introduce a new corporate-wide system in a large organisation, it is therefore important to acknowledge that different user groups have different requirements and accept that it is unrealistic to expect the new system to meet all of the users’ needs and requirements. It is, however, important to identify the areas that the system will not support and provide alternative solutions for these users.
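The spreadsheet example can be made concrete with a small, purely hypothetical Python sketch; the file names and locations are invented, and the function only mimics the general behaviour of a macro that resolves its data source relative to the workbook’s own folder:

# Hypothetical illustration: a lookup that works when both files sit in the
# same shared folder fails once the workbook lives behind a SharePoint URL
# whose "folder" the macro cannot browse in the same way.

def resolve_companion(workbook_location, companion_name, reachable_locations):
    """Mimic a macro that expects spreadsheet B to sit next to spreadsheet A."""
    folder = workbook_location.rsplit("/", 1)[0]        # strip the file name
    candidate = f"{folder}/{companion_name}"
    return candidate if candidate in reachable_locations else None

shared_disk = {
    "/shared/field_x/spreadsheet_A.xlsm",
    "/shared/field_x/spreadsheet_B.xlsx",
}
print(resolve_companion("/shared/field_x/spreadsheet_A.xlsm",
                        "spreadsheet_B.xlsx", shared_disk))
# -> /shared/field_x/spreadsheet_B.xlsx: the link resolves on the shared drive.

team_site = {"https://mssp.example/sites/field_x/spreadsheet_A.xlsm"}
print(resolve_companion("https://mssp.example/sites/field_x/spreadsheet_A.xlsm",
                        "spreadsheet_B.xlsx", team_site))
# -> None: the path-based dependency silently breaks in the team site.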

It is also important to find the correct balance between structure and flexibility. When OGC introduced the new collaboration solution, they added metadata functionality to the system. The purpose of these metadata was to improve retrieval of information through the corporate search engine. This functionality required that whenever someone wanted to add a file to the system, they had to tag it with a predefined category and keywords. Because the users did not recognise the value of this information, they often chose random categories and keywords. The users, of course, knew where the information was and did not need the search engine to retrieve it.
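As a hypothetical illustration of why the metadata requirement delivered less than hoped, consider the sketch below; the category names and rule are invented, and the point is simply that a mandatory-field check can force some tag to be supplied but cannot distinguish a considered tag from a random one:

# Invented categories and validation rule, for illustration only.

ALLOWED_CATEGORIES = {"report", "presentation", "minutes", "specification", "other"}

def upload_allowed(category, keywords):
    """The kind of gate a mandatory metadata requirement amounts to in practice."""
    return category in ALLOWED_CATEGORIES and len(keywords) > 0

print(upload_allowed("report", ["well test", "field X"]))   # True: careful tagging
print(upload_allowed("other", ["misc"]))                     # True as well: a throwaway tag
# satisfies the same gate, which is why search gained little from the metadata.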

6.4 Implications for Users

For successful job execution, employees need to be well trained and knowledgeable regarding both their work responsibilities and the organisation. Employees have to find a place both within the organisation at large and amongst the people that they work closely with on a daily basis.

To find their place, employees must become members of the community. Because technology has become a critical part of daily work, both as a tool and as a source of information, it is important to become familiar with the available technology. Becoming familiar with the technology is a way of becoming a community member. At OGC, new employees are introduced to the technology through a variety of computer-based courses, and new employees are required to complete a specific portfolio of courses. However, these courses are basic and are only intended to give the employees a common starting point.

When individuals begin to work within a smaller group at OGC, it is important that they are introduced to the way that relevant technology is used within the particular group. For instance, senior employees can share their web browser favourites/bookmarks with the newcomer, which will quickly give the new employee an overview of what is relevant to the community. In other words, introducing new group members to the technology that is used in the group is an important way of having them quickly become full members of the community.

In organisations such as OGC, employees, and especially engineers, have a great deal of information available. The navigation of this information is challenging. Because the various systems attempt to provide the users with as much information as possible, it is easy for employees to become overwhelmed and get lost in the information. It is important to filter the information provided. Not all of the information provided is relevant for a given situation, and some of the information may be erroneous. Because the systems try to provide large quantities of information that is as complete as possible, it is important that users are aware of their role in filtering and evaluating information.


7 Concluding Remarks

This research was conducted within an oil and gas company and examined how a new integrated collaboration system was implemented, received and understood amongst users with different backgrounds and experiences from different parts of the organisation. The research was conducted within a single organisation, but it relates to the broader context of introducing large-scale integrated collaboration infrastructures.

The empirical material within this thesis relates to different contexts, disciplines and user groups and thus should be relevant to other organisations and companies as well. Our case clearly illustrates how the same system is perceived differently in different contexts within OGC. The different ways that the system is used in different contexts suggest that a common, seamlessly integrated system is difficult to achieve. The financial and resource costs suggest that migrating from an existing system to a new one may not be feasible.

7.1 Limitations

As with other interpretive research, this study could be supplemented in both depth and breadth to strengthen the empirical insights. The empirical data relate to only two organisational contexts and cannot necessarily be applied to contexts that have not been studied. The study has extensively followed one of the two contexts but has only scratched the surface of the other context.

Second, because OGC is an international oil and gas company with activities across the globe, the study of both contexts in the same country is also a limitation. As such, the study does not consider cultural differences or similarities attributed to different work practices in different countries.

Third, because OGC has updated the collaboration solution since the data within this thesis were collected, the research does not fully represent the current status of collaboration within OGC.

7.2 Future Work

This research closely followed one group of engineers who were connected to one specific oil and gas field. Because such groups have traditionally been quite isolated from each other, it would be interesting to follow a similar group that is connected to a different oil and gas field, preferably a field at a different level of maturity, to determine how the same work is performed differently in different contexts.


It would also be interesting to follow oil and gas production groups in other organisations in order to compare organisational cultures and contexts, and to examine the different systems and tools that are available to the employees of those organisations.


References


Appendix: The Papers and statements from co-authors


Paper I


Changing Large-Scale Collaborative Spaces: Strategies and Challenges

Torstein Hjelle

Department of Computer and Information Science
Norwegian University of Science and Technology

[email protected]

Gasparas Jarulaitis
Department of Computer and Information Science
Norwegian University of Science and Technology

[email protected]

Abstract

Introducing new collaboration tools in an organization is difficult and will most often cause side-effects and unforeseen consequences. In this paper we use the concept of Common Information Spaces (CIS) to analyze how information is shared within a large organization. We draw on findings from a large, international oil and gas company to analyze the implementation of a Microsoft SharePoint-based collaboration system within the organization. Analytically, we discuss the drifting nature of large-scale efforts to establish a company-wide centralized and tightly integrated CIS. Acknowledging that introducing information systems that instigate changes in work practices is inherently difficult within any organization, and even more so in large enterprise organizations, and given the diverse perspectives on how to establish CIS within the existing literature, we characterize large-scale common information spaces and identify directions for further research.

1. Introduction

Today, organizations have to transfer information and share knowledge across geographical and organizational boundaries. It is widely assumed that integration of a fragmented infrastructure is a prerequisite rather than an option to achieve such a vision. Despite the existing variety of technological techniques [11], integration activities tend to display emergent properties [29]. According to Kuldeep et al. [8, p.23] “integration has been the holy grail of MIS since the early days of computing in organizations”. This argument is continually supported with empirical studies from various industries, such as health care [5], ship classification [21], e-government [3], and the oil and gas industry [12].

In this paper we employ the concept of common information space (CIS), which is extensively used within the field of Computer Supported Cooperative Work (CSCW) to analyze how actors jointly construct various socio-technical arrangements in order to share information across organizational boundaries. The crucial aspect of CIS is not a seamlessly integrated technological environment, but continuous interpretation work [22].

Consequently, the concept challenges taken-for-granted assumptions of achievable seamless integration: “the notion of ‘a uniform, complete, consistent, up-to-date integration’ of the knowledge in a community handbook is hardly realistic” [22, p.24]. As such, the concept enables analyses of how integration activities unfold in practice.

Our research is motivated by a recent call to explore the dynamics of large-scale common information spaces [20]. We draw on diverse literature on CIS and discuss such aspects as the role of interpretation work [22], openness and closure [1] and heterogeneity [2]. Additionally, we discuss the nondeterministic character of IS implementation processes [19] and how initial plans drift in practice [4]. Furthermore, we adopt the concept of uncertainty, which derives from a recent contribution by Latour [9]. The author vividly conceptualizes inherent uncertainties stemming from an endless web of mediators, where mediators “transform, translate, distort, and modify the meaning or the elements they are supposed to carry” [9, p.39]. We use this concept to expose and discuss inherent uncertainties, rather than rational planning with predictable consequences, in establishing large-scale CIS.

Empirically we illustrate integration activities within a large international oil and gas company (dubbed OGC). Our research is part of a larger, internal OGC project aiming at improving oil and gas production optimization activities. We zoom in and unpack the (re)establishment of the collaborative infrastructure. The old collaborative infrastructure was based on Lotus Notes technologies and aimed to increase organization-wide standardization and cost-effectiveness [13]. This vision resulted in more than 5000(!) unsynchronized databases and an even more fragmented infrastructure. In order to increase consistency, remove fragmentation and ease both information retrieval and sharing, a new collaboration strategy was launched. The material outcome of this strategy was the implementation of a new collaborative infrastructure based on Microsoft SharePoint technologies. Findings from our ongoing research suggest that the new infrastructure constrains institutionalized work practices and tends to produce side effects [14]. Thus, the main purpose of this paper is to characterize large-scale common information spaces in an oil and gas company, and identify directions for further research.


The paper is organized as follows: We start off with a brief description of the history of CIS and identify some important contributions in the area. We then look into some possible problems that can arise when introducing new information systems, that is, unforeseen consequences, before we investigate the role of plans and strategies when introducing new IS tools. Then an outline of our research approach follows, before we introduce the case in context. Following this is a discussion of the case in contrast to the existing literature before we round off with a brief section describing our future research direction as well as a few concluding remarks.

2. Conceptualizing common information spaces

The concept of common information spaces was originally formulated by Schmidt and Bannon [22] to bring focus on an area of “critical importance for the accomplishment of many distributed work activities” [22, p.16] as they believe the area has been neglected within CSCW. CIS is offered as an alternative to the so-called ‘workflow’ perspective, where every actor’s actions can be predefined in advance. The authors draw on Suchman [25] and highlight that in contexts where continuous negotiation and problem solving is required, the ‘workflow’ perspective fails to explain how work is done in practice. Consequently, they argue for an alternative approach, which would “allow the members of a cooperating ensemble to interact freely” [22, p.20]. They argue that cooperative work is not facilitated merely through access to information in a shared database, but also requires a shared understanding of the meaning of this information, as the information always has to be interpreted by human actors. Then they introduce the concept of common information spaces, which seeks to explain how people in a distributed setting are able to work cooperatively through access to common organizational information and a shared understanding of the ‘meaning’ of this information: “a common information space encompasses the artifacts that are accessible to a cooperative ensemble as well as the meaning attributed to these artifacts by the actors” [22, p.21]. While interpretation work and construction of a particular object’s ‘meaning’ is situation dependent, and determined locally within a given context, the coherence is crucial: “in order for work to be accomplished, these personal, or local information spaces must cohere, at least temporarily” [22, p.21].

Bannon and Bødker [1] build on this concept and investigate the dialectical nature of CIS. They acknowledge that there are many forms of common information spaces. Sometimes common information spaces comprise people working at the same time and place, while at other times people collaborate across both time and space boundaries. Though there are various forms of CIS, they recognize that common information spaces have some common identifying properties. In order to conceptualize large-scale CIS, the authors draw on science and technology studies (STS) and identify such concepts as 'immutable mobile' and 'boundary object', which "both can be viewed as being concerned with how communities develop means for sharing items in a common information space" [1, p.84]. Additionally, the authors highlight the relevance of the "community of practice" perspective developed by Lave and Wenger [10] to describe learning and working environments. The central issue then becomes whether information should be circulated within or across communities of practice, in other words, to what extent local information should be malleable and at the same time 'packaged' to have a 'common' meaning in different contexts. Following these conceptualizations, Bannon and Bødker [1] identify the dialectical nature of CIS:

"It is this tension between the need for openness and malleability of information on the one hand, and, on the other, the need for some form of closure, to allow for forms of translation and portability between communities, that we believe characterizes the nature of common information spaces, and leads to difficulties in their characterization. CISs are both open and closed – in a word, they have a dialectical nature" [1, p.85].

Bannon and Bødker [1] illustrate the dialectical nature of CIS with several empirical examples. At one end of the spectrum they discuss coordination rooms where co-located actors manipulate a malleable CIS. At the other end, they examine a heterogeneous, large-scale CIS, the WWW, which is inherently uncertain and dialectical.

Bossen [2] proposes a refinement of the concept of common information spaces and offers a conceptual framework for analyzing cooperative work. His framework is developed through the analysis of a CIS in a hospital ward and results in the identification of 7 parameters he argues are useful for positioning a given CIS. Bossen acknowledges that "it is doubtful whether it will be possible to generate a distinct categorization, i.e. typology, through which specific work settings can be categorized into particular types of CIS" [2, p.185]. He therefore suggests that "it might be better to have a framework through which specific settings can be analyzed" [2, p.185].

Randall [18] is critical of the idea of commonality within CIS. He claims that "the very notion of CIS is radically underspecified" [18, p.17] as, he continues, "it is not possible to distinguish its putative features by reference to technology, to information or to organizational structure" [18, p.17]. The problems with classifying CIS occur in part because CIS ranges "from shared, small groups to complex inter-organizational chains" [18, p.17]. Because "we have to deal with issues that arise out of the complex historical and geographically dispersed range of information resources that might be in use in the large organization, or indeed across different organizations" [18, p.17], it is problematic to identify exactly what is common across various work practices.

Rolland, Hepsø et al. [20] conducted a study of different common information spaces in a major international oil and gas company as well, and their findings suggested that some common information spaces appear to be more situated, momentary and malleable when embedded within an extremely heterogeneous context. They end their paper by acknowledging the “need for more research on large-scale collaborative systems in order to improve current conceptualizations of CIS” [20, p.499]. This is because “most studies within CSCW have been focusing on relatively small-scale systems involving a limited group of users collaborating over small distances” [20, p.499].

3. Research approach and data collection

In our ongoing research project we are aiming to study the transition process from Lotus Notes to MS SharePoint technologies in OGC. We are studying both technological and social complexity and investigate “the interaction between the engineering detail of the technical systems and the related dynamics of the surrounding social arrangements” [4, p.3]. We conceptualize our research as an interpretive case study [26] as we do not predefine dependent and independent variables and “attempt to understand phenomena through the meanings that people assign to them” [7, p.69].

We conceptualize our research design as emergent rather than highly structured. We lean towards an inductive approach and identify grounded theory [15] and ethnography-informed [24] studies as relevant approaches to explore IS implementation activities in real-world contexts, and build our theoretical perspectives on empirical data, rather than analytical constructs.

Data collection and fieldwork started in the beginning of 2007. Since then, we have conducted seven interviews with OGC representatives, who mainly represent the so-called management perspective. The interviews lasted from one to several hours. The first interviews were more open-ended, with the primary focus on current OGC initiatives and plans regarding the use of MS SharePoint technologies. Later interviews were more focused on both technological complexity and interdisciplinary work practices. Besides that, we have extensively studied various documents, including project plans, reports, various presentations and other related documentation. More than 300 pages were gathered and carefully analyzed. Additionally, we have had the opportunity to study several email discussions related to project planning and execution activities. Recently, we have also been granted access to an extensive information source on OGC activities: the intranet portal. The content of the studied documents and conducted interviews has introduced us to the existing socio-technical complexities, the main strategic initiatives with expected deliverables, and current problems. Previous studies on the implementation and use of Lotus Notes in OGC [8, 14-18] were also carefully studied and analyzed in comparison with current problems and challenges.

Further data collection activities will involve document analysis, participant observation and semi-structured interviews. We aim at obtaining in-depth knowledge of both surrounding socio-technical contexts and the diverse perspectives of various actors.

4. Case description

Introducing new systems for computer supported collaborative work is a complicated task in all organizations as it not only introduces new IT tools, but also new ways of working [16]. What makes this even more difficult in large organizations is that they have so many different people working on so many different tasks at a number of different locations. Introducing one single system to support all users at all tasks in all locations is a major challenge.

In 2001, OGC, a major international oil and gas company which today has about 26 000 employees in 34 countries worldwide, formulated a strategy to improve collaboration within the organization. This strategy focuses on collaboration within the company and ranges from so-called collaboration rooms, which are dedicated rooms where experts from various disciplines meet, via systems like video conferencing for collaboration between users at various locations, to more traditional collaboration ICT systems. Collaboration can take place between users at the same or different locations, and at the same or different times. People wishing to collaborate at the same time can choose to use a collaboration room if they are all at the same location, or they can use a video conferencing system if they are located at two or more locations. If they want to collaborate at different times, maybe because they are dispersed around the world in different time zones, they can use a more traditional collaboration system.

One of the results of the strategy was the decision to change the collaboration infrastructure. This decision was made in 2003. OGC had up until then been using a system based on Lotus Notes, but after considerable research it was decided to discontinue the use of Lotus Notes and instead implement a new infrastructure based on Microsoft SharePoint technologies. It was believed that this new infrastructure would better suit the management's vision of collaboration within the organization. In addition, the introduction of this new information system was used as a catalyst for organizational change.

Due to the size of the organization, as well as the nature of the business, each month OGC creates about 70 000 Word documents, 65 000 Excel spreadsheets, 20 000 PowerPoint presentations and 145 000 non-classified documents. That is about 300 000 new documents each month. Keeping track of all these documents requires a robust and scalable system.

The old Lotus Notes-based collaboration system had a few aspects that were considered unfortunate. First, the Lotus Notes infrastructure had grown out of control. In total, the system consisted of more than 5000 different, dispersed Arena databases for document storage. The Arena databases had no central indexing functionality, meaning that it was impossible to retrieve a document by searching if you did not know which database to search. In addition, each user had access to both personal and departmental storage areas. Not all users chose to store their documents in the Arena databases; some instead stored them in one of the aforementioned storage areas. This meant that even if it had been possible to search all databases at once, the search would not have been successful, as not all documents were stored in the databases.
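To illustrate the retrieval problem described above, the sketch below shows why fragmented, unindexed stores force the searcher to know where to look: each store can only be queried individually, and documents kept only in personal or departmental areas remain invisible even to an exhaustive sweep of the databases. This is a minimal, hypothetical illustration; the store names and documents are made up and do not describe the actual Lotus Notes/Arena implementation.

```python
# Hypothetical illustration of retrieval across fragmented, unindexed stores.
# Store names and documents are invented; this is not the Arena implementation.

arena_databases = {
    "arena_db_0001": ["well-report-1998.doc"],
    "arena_db_2345": ["production-plan-2002.xls"],
    # ... several thousand more, with no shared index
}
personal_area = {"jsmith": ["draft-production-plan-2002.xls"]}

def search_one_database(db_name, term):
    """Search a single Arena database; the caller must already know which one."""
    return [doc for doc in arena_databases.get(db_name, []) if term in doc]

def search_everything(term):
    """What a central index would offer: one query across every known database.
    Even this misses documents kept only in personal or departmental areas
    unless those are indexed as well."""
    hits = []
    for db_name in arena_databases:
        hits.extend(search_one_database(db_name, term))
    return hits

print(search_one_database("arena_db_2345", "production"))  # requires knowing the database
print(search_everything("production"))                     # still misses jsmith's draft
```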

It was hoped that implementing the Microsoft SharePoint technology would change the way people worked. According to the strategy, OGC already had a set of general collaboration tools, but "these tools are poorly integrated", and "there is a particular need for better and more integrated coordination tools, better search functionality and improved possibilities for sharing information with external partners" (internal strategy documents). The strategy defined the main challenges to be addressed by the new solution as "information quality, archiving, search/retrieval and proper handling of e-mail". It was believed that the real benefit of introducing the new infrastructure was that it would change the way people worked; they would work more efficiently. The company acknowledged that introducing the new information system was just a minor part of the strategy. More important was the change of work practices. In one document they emphasize this by claiming that the new strategy would be "80% change - 20% IT" (internal strategy document).

As indicated by figure 1, MS SharePoint is, together with the metadata management system and the archive, a core element of the new collaboration solution. This solution would then be integrated with OGC's email system and their in-house search engine. MS SharePoint is a collaboration system that, in addition to being a repository for storing and retrieving documents, also has functionality for checking documents in and out and for version tracking; it has web-based discussion boards, as well as features for managing wikis. MS SharePoint can also be linked to email systems and MS Live Messenger (previously MSN Messenger) for instant messaging.

Central within MS SharePoint is the concept of Team Sites. A Team Site is a virtual workspace shared by people working on the same project, in the same department, within the same discipline, etc. The average user will typically be a member of a handful of different Team Sites. Typically, a Team Site has a limited lifetime and is oriented around a specific task, for instance the drilling of a given well. All documents related to the drilling of this well are gathered within this Team Site.

When a project is initiated, the project leader will typically have a Team Site created. When creating a Team Site, the project manager has to define a set of applicable metadata from a list of available values. The metadata would typically be selected based on the kind of task for which the Team Site was created. When uploading documents to the Team Site later, the members have to assign metadata from the selected set to the documents in order to classify them for easier retrieval. This use of metadata is not a standard feature of MS SharePoint, but a custom feature added by OGC to give added value. As we will later discuss, this is also a source of problems when using MS SharePoint at OGC.

Today, there are three levels of metadata at OGC. Originally there were only two levels, but a third level was added to try to combat the problems that occurred. The first two levels only allow users to select values from a predefined list using drop-down boxes. The third level uses free-text fields, allowing users to define the values they feel are appropriate for a given document.
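To make this metadata scheme concrete, the sketch below models a hypothetical simplification of the arrangement described above: a Team Site carries a metadata set chosen at creation time, the first two levels restrict classification to predefined values, and the third level accepts free text. All field names and values are illustrative assumptions, not OGC's actual configuration.

```python
# Minimal, hypothetical model of the three-level metadata scheme described
# above. Field names and values are illustrative only, not OGC's actual setup.

PREDEFINED_VALUES = {
    "discipline": ["drilling", "geology", "production", "ICT"],  # level 1
    "document_type": ["report", "plan", "minutes", "drawing"],   # level 2
}

class TeamSite:
    def __init__(self, name, level1_values, level2_values):
        # The creator picks the subset of predefined values that uploads
        # to this Team Site will have to use (levels 1 and 2).
        self.name = name
        self.allowed = {
            "discipline": [v for v in level1_values if v in PREDEFINED_VALUES["discipline"]],
            "document_type": [v for v in level2_values if v in PREDEFINED_VALUES["document_type"]],
        }
        self.documents = []

    def upload(self, filename, discipline, document_type, keywords):
        # Levels 1 and 2: only values chosen at creation time are accepted.
        if discipline not in self.allowed["discipline"]:
            raise ValueError(f"'{discipline}' is not in this Team Site's metadata set")
        if document_type not in self.allowed["document_type"]:
            raise ValueError(f"'{document_type}' is not in this Team Site's metadata set")
        # Level 3: free-text keywords, anything goes.
        self.documents.append({
            "file": filename,
            "discipline": discipline,
            "document_type": document_type,
            "keywords": keywords,
        })

# Example: a site created for planning the drilling of a well.
site = TeamSite("Well X drilling", ["drilling", "geology"], ["report", "plan"])
site.upload("well_x_plan.docx", "drilling", "plan", keywords="casing, mud weight")
```

The sketch also hints at the problem discussed later: whoever creates the site fixes the allowed values up front, so two creators of "the same" site will rarely end up with the same metadata set.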

5. Analysis: introducing uncertainties

Given our case at OGC, we have been studying relevant literature within the areas of CSCW and integration to assign meaning to, and build an understanding of, the context. While digging into the case we have been alternating between understanding the context based on the literature and letting our findings guide our literature search. Through this process we have come up with five characteristics of large-scale common information spaces that will guide our further research. These characteristics are described in more detail in the following sections. A summary of these characteristics can be found in table 1.

Figure 1: Integration of collaboration tools


5.1. Common space or spaces?

According to OGC's strategy, the company wishes to implement a single, common collaboration tool, namely a system based on Microsoft SharePoint technologies. They have developed 'best practice' directives and guidelines for the use of this new system, seeking to create consistent usage of the tool throughout the organization. We can look at the use of Microsoft SharePoint as a collaborative tool from two different angles: either it is believed that the entire OGC organization, with all employees, tools and equipment, is one huge common information space that everyone is a part of, or that OGC consists of a vast number of smaller common information spaces that all have the new CSCW tool in common. Consequently, we ask if it is possible to establish one centralized and tightly integrated large-scale CIS in a global organization.

Suggesting that OGC consists of one single CIS is problematic, as it to some extent would imply that all employees have significant aspects of their work in common. In a large, heterogeneous organization like OGC this is at best doubtful. Exactly how much does a geologist's work have in common with the work of a production engineer? Or what does the IT department have in common with Human Resources?

As this seems difficult, it is more reasonable to assume that OGC consists of a vast number of different common information spaces. That is, for instance, production engineers have one view of the work situation while geologists have a completely different view. The common information spaces at OGC, we suggest, are in many ways similar to the situation in an airport, as described by Fields, Amaldi et al. [6, p.21]. They found that they could "regard the airport not as a CIS but a constellation of overlapping interdependent CISs that are articulated through boundary objects" [6, p.21].

5.2. Objects or socio-technical arrangements?

Common information spaces can be seen from two different perspectives: either 1) as a boundary object, or 2) as a socio-technical arrangement. Boundary objects are "entities that are interpreted differently in different communities of practice, yet are stable enough to retain their integrity as objects, thus facilitating working across the boundaries between different communities" [6]. That is, boundary objects are flexible enough to allow local interpretations and, at the same time, rigid enough to be understood similarly enough in different communities of practice. In this way, boundary objects can mediate between different communities. From the socio-technical perspective, a "CIS is not simply a boundary object for different communities of practice, but a socio-technical arrangement that only temporarily on specific occasions are practiced in such ways that give a momentary common understanding" [20, p.494]. This latter perspective, Rolland, Hepsø et al. [20] argue, is particularly relevant when introducing CIS across heterogeneous contexts, where "sharing and negotiating common understanding are much more temporary and fluid than the term boundary object suggests" [20].

Table 1: Summary of findings

Tension: Common space or spaces
Characteristic of large-scale CIS in OGC: Overlapping interdependent CISs
Further research consideration: Is it possible to establish one centralized and tightly integrated large-scale CIS in a global organization?

Tension: Objects or socio-technical arrangements
Characteristic of large-scale CIS in OGC: Fluid and continually negotiated socio-technical arrangements
Further research consideration: What 'common' properties should various collaborative technologies have to enable effective collaboration between different disciplines?

Tension: Flexibility or closure
Characteristic of large-scale CIS in OGC: Closure and minimum flexibility in local contexts
Further research consideration: How can an organization achieve flexibility in use and closure in compliance with internal and external regulations?

Tension: Top-down or bottom-up
Characteristic of large-scale CIS in OGC: Top-down initiative imposing rigid data classification standards
Further research consideration: How, when and to what extent should bottom-up or top-down approaches be employed when implementing large-scale information systems?

Tension: Heterogeneity or homogeneity
Characteristic of large-scale CIS in OGC: Heterogeneous and discipline-specific technologies
Further research consideration: Can a working and effective large-scale CIS be achieved with homogeneous technologies?

In our context, the Microsoft SharePoint technology in OGC can be considered a boundary object, where all users have a similar understanding of what this new tool is, even though they do very different work and use very different terminology in conducting their daily work. Using the technology, on the other hand, can be seen as a socio-technical arrangement, where different users interpret and use the technology in different ways in different situations. For instance, a user can utilize the same technology in different ways in different projects or at different times. As a result, we inquire what 'common' properties various collaborative technologies should have to enable effective collaboration between different disciplines.

5.3. Flexibility or closure?

As mentioned, Bannon and Bødker [1] explore the dialectic nature of openness and closure within CIS. The openness refers to the flexibility and malleability of CIS and indicates the desire for flexible and malleable information systems. Of course most users would prefer such a system when producing information as they would not have to make specific adjustments like special formatting or meta-tagging to upload their information into the system. The system would be able to handle anything.

The old Lotus Notes-based system appears to have had a large degree of this freedom. After all, by the end the system consisted of about 5000 different databases. If a given piece of information did not fit into the existing databases, one would simply create a new database fitting one's requirements.

But to the management of OGC this solution is not satisfactory. It is inefficient, both with regard to workers having to do the same work again because they cannot find documentation showing that others have already done it, and with regard to storage utilization, as the same information is stored more than once.

In the new Microsoft SharePoint-based infrastructure, users would have a more rigid solution. Information would have to be assigned meta-tags and keywords before it could be uploaded. The benefits of this are not necessarily obvious to the average user, and there is a risk that users simply add more or less meaningless meta-tags and keywords.

As mentioned in a previous section, the use of metadata was the cause of problems and frustration among users. The main problem, according to one of our interviewees (a leading engineer), occurred when establishing new Team Sites. It was very difficult to select an appropriate set of metadata up-front; that is, it is difficult to select the appropriate metadata values before one knows what kind of information will actually be stored in the Team Site. The metadata selected when creating the Team Site were the metadata the members of the Team Site would have to choose from when uploading information. Another part of the problem was that no two people would ever select the same set of metadata if they were to create the same Team Site. The available set of metadata would therefore depend on who created the Team Site. As human beings are inconsistent creatures, the same person would also probably select one set of metadata one day and a completely different set the next day. According to our informants, users of SharePoint feel that these problems manifest themselves in the daily use of the technology, which led one source to compare using SharePoint and Team Sites to "conducting an extreme sport".

Another problem with the use of metadata is that the various members of the Team Site have to classify their documents when uploading them. One source commented that they were engineers, and that people study library science for several years to learn to classify information. This is an area that further research will investigate more thoroughly.

We believe that balancing the organization's need for standardized and strict solutions against the users' wish for flexible and open tools is an important area of research that we wish to look deeper into. Accordingly, we ask how to achieve flexibility in use and closure in compliance with internal and external regulations.

5.4. Top-down or bottom-up?

There is a long tradition of promoting participatory design as a way to reduce the design-use gap [23]. This approach represents so-called bottom-up patterns and requires active user involvement in various development and maintenance activities. While the benefits of such methods are widely recognized, we wonder how effectively they function in large-scale projects. For instance, in a recent contribution Ellingsen and Monteiro [5] illustrate how the ambition of seamless integration unfolds over time. The authors present integration activities in a large-scale health care context and illustrate how ordering activities in one context produce disorder in other contexts. Consequently, they question the appropriateness of participatory design in large-scale contexts: "…truly user-led development is impossible to achieve in large-scale integration projects. Furthermore, this increases the possibilities for unintended consequences and disorders…" [5]. This conceptualization underscores that unintended consequences are inherent, and that participatory design techniques will hardly eliminate them. The question then remains to what extent participatory design should be cultivated.

Considering the implementation and use of collaborative technologies in OGC, we identify a similar tension. For instance, with the previous collaborative infrastructure (Lotus Notes), local actors had the ability to participate in the constitution and maintenance of their local information spaces. However, the new collaborative infrastructure based on MS SharePoint technologies imposes quite rigid information classification standards, which reduces the local actors' ability to modify local spaces according to their needs. Such a change illustrates a movement from active participation to compliance. Being aware that truly user-led development and maintenance activities at such a scale (over 25 000 actors) are hardly possible, we wonder how to balance local needs with company-wide standards.

In OGC, the implementation of MS SharePoint has not given the users any opportunity to participate in the development process. It was largely a managerial decision to replace the old infrastructure with a new one. With regard to user participation, it was the definition of the metadata set that involved users, that is, a few persons in managerial positions within the various disciplines and departments in the organization. The average user has had no way to influence the implemented system.

As of now, we do not have any data suggesting that the lack of user involvement during implementation is the cause of the problems OGC has had with metadata and classification, but nevertheless, they do have a problem with these issues. When questioned about what it would take to make the users satisfied with the system, one of our informants (a manager within the IT department) simply answered: "I don't think it's realistic to make the users satisfied". He then stressed that he to some extent said this to provoke, but that there was an element of truth in what he said. This illustrates the difficulties of catering to such a large group of users: no matter what you do, you can't make everybody happy! Therefore, we ask how, when and to what extent bottom-up or top-down approaches should be employed when implementing large-scale information systems.

5.5. Heterogeneity or homogeneity?

In theory, the constitution of CIS is quite explicit and clear: it encapsulates both actor-networks [9] and human enacted structures [21]. An interesting aspect of CIS that is gaining more attention is heterogeneity. We conceptualize our research context as extremely heterogeneous [20], but for analytical purposes we do not discuss the whole context as such; we zoom in on and unpack only one technological actor, the collaborative technologies MS SharePoint and Lotus Notes at OGC.

Both technologies are to some extent homogeneous. They cut across organizational boundaries and impose particular patterns of use. However, MS SharePoint technologies, as outlined above, are more rigid. Additionally, SharePoint is an integrative technology, which integrates seamlessly with other Microsoft products. Thus, MS SharePoint can be conceptualized as a large-scale, homogeneous and integrated monolithic structure, while Lotus Notes is more flexible and customizable to local needs.

Drawing on a recent conceptualization of CIS in heterogeneous contexts [20], we inquire whether a working and effective CIS can be achieved with homogeneous technologies. As illustrated by Rolland et al. [20], arrangements of heterogeneous technologies tend to be more effectively exploited in cross-discipline collaborative environments.

6. Conclusions and further research directions

Our research findings suggest that integration is an inherently complex process involving continuous negotiations between various human and non-human actors. Both technical [29] and socio-technical [4] studies report that integration efforts tend to drift from the initial plans and produce various side effects. Drawing on an interpretive field study in OGC, we have identified the uncertain and drifting nature of CIS as well. In our case, ambitions to establish an 'out of the box', centralized and tightly integrated collaborative infrastructure produced side effects and invoked the development of custom components, which aim to increase flexibility for the end user. As recently outlined by Ellingsen and Monteiro [5], ordering in one context tends to produce disorder in other contexts. This aspect can also be supported by the recent conceptualization that actions are continually other-taken [9]. Such a conceptualization emphasizes the uncertainty and dynamics of use when technologies are put to use. Ambitions to eliminate particular tensions (as discussed in section 5) are rather subjective, because it is difficult or perhaps hardly possible at all to test or evaluate how a particular configuration will function later on.

Our ongoing research suggests that a large-scale CIS is composed of smaller, overlapping common information spaces containing a heterogeneous collection of socio-technical arrangements that need to be continually (re)negotiated by the actors involved. These findings challenge the assumption that a large-scale centralized and tightly integrated, rather than distributed and fragmented, CIS can be achieved in large-scale contexts.

Considering that this paper reports early research findings, our further research activities will explore the questions identified in the discussion section, aiming to gain in-depth knowledge of the implementation and use of collaborative technologies in OGC.

7. Acknowledgements

This research was in part supported by the AKSIO project financed by the Norwegian Research Council (PETROMAKS, pr.nr. 163365/S30). We would like to thank several of our colleagues at NTNU for their comments and suggestions, as well as acknowledge the contribution of the anonymous reviewers at HICSS-41.

8. References

1. Bannon, L. and Bødker, S. Constructing Common Information Spaces. in Hughes, J.A., Prinz, W., Rodden, T. and Schmidt, K. eds. ECSCW ’97. Proceedings of the Fifth European Conference on Computer-Supported Cooperative Work, September 1997, Lancaster, UK, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997, 81-96.

2. Bossen, C. The parameters of common information spaces: the heterogeneity of cooperative work at a hospital ward. Proceedings of the 2002 ACM conference on Computer supported cooperative work, ACM Press, New Orleans, Louisiana, USA, 2002.

3. Ciborra, C. Interpreting e-government and development: Efficiency, transparency or governance at a distance? Information Technology & People, 18 (3). 260.

4. Ciborra, C.U. The labyrinths of information : challenging the wisdom of systems. Oxford University Press, Oxford, 2002.

5. Ellingsen, G. and Monteiro, E. A Patchwork Planet: Integration and Cooperation in Hospitals. Computer Supported Cooperative Work, 12. 71-95.

6. Fields, B., Amaldi, P. and Tassi, A. Representing collaborative work: the airport as common information space. Technical Report, Middlesex University, School of Computing Science, 2003.

7. Klein, H.K. and Myers, M.D. A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly, 23 (1). 67.

8. Kumar, K. and van Hillegersberg, J. Enterprise resource planning: introduction. Commun. ACM, 43 (4). 22-26.

9. Latour, B. Reassembling the social: an introduction to actor-network-theory. Oxford University Press, Oxford, 2005.

10. Lave, J. and Wenger, E. Situated learning: legitimate peripheral participation. Cambridge University Press, Cambridge, 1991.

11. Linthicum, D.S. Next generation application integration : from simple information to Web services. Addison-Wesley, Boston, Mass., 2004.

12. Monteiro, E. and Hepsø, V. Infrastructure Strategy Formation: Seize the Day at Statoil. in Ciborra, C.U., Braa, K., Cordella, A., Dahlbom, B., Failla, A., Hanseth, O., Hepsø, V., Ljungberg, J., Monteiro, E. and Simon, K.A. eds. From control to drift: the dynamics of corporate information infrastructures, Oxford University Press, Oxford, 2000, 148-172.

13. Monteiro, E. and Hepsø, V. Purity and Danger of an Information Infrastructure. Systemic Practice and Action Research, 15 (2). 145-167.

14. Nordheim, S. and Päivärinta, T. Implementing enterprise content management: from evolution through strategy to contradictions out-of-the-box. European Journal of Information Systems, 15 (6). 648.

15. Orlikowski, W.J. CASE tools as organizational change: Investigating incremental and radical changes in systems development. MIS Quarterly, 17 (3). 309.

16. Orlikowski, W.J. Improvising Organizational Transformation Over Time: A Situated Change Perspective. Information Systems Research, 7 (1). 63.

17. Orlikowski, W.J. Material knowing: the scaffolding of human knowledgeability. European Journal of Information Systems, 15 (5). 460.

18. Randall, S. What’s Common about Common Information Spaces?. Workshop on Cooperative Organisation of Common Information Spaces, Technical University of Denmark, 2000.

19. Robey, D. and Boudreau, M.-C. Accounting for the Contradictory Organizational Consequences of Information Technology: Theoretical Directions and Methodological Implications. Information Systems Research, 10 (2). 167-185.

20. Rolland, K.H., Hepsø, V. and Monteiro, E., Conceptualizing common information spaces across heterogeneous contexts: mutable mobiles and side-effects of integration. in ACM conference on Computer supported cooperative work, (Banff, Canada, 2006), ACM Press, 493 - 500

21. Rolland, K.H. and Monteiro, E. The Dynamics of Integrated Information Systems Implementation: Multiplication of Unintended Consequences.

22. Schmidt, K. and Bannon, L. Taking CSCW Seriously: Supporting Articulation Work. CSCW: An International Journal, 1 (1-2). 7-40.

23. Schuler, D. and Namioka, A. Participatory design principles and practices. L. Erlbaum Associates, Hillsdale, N.J., 1993.

24. Schultze, U. A Confessional Account of an Ethnography about Knowledge Work. MIS Quarterly, 24 (1). 3-39.

25. Suchman, L.A. Plans and situated actions : the problem of human-machine communication. Cambridge University Press, Cambridge, 1987.

26. Walsham, G. Interpretive case studies in IS research: Nature and method. European Journal of Information Systems, 4 (2). 74


Paper II



Information Spaces in Large-Scale Organizations

Torstein E. L. Hjelle

Norwegian University of Science and Technology, Department of Computer and Information Science, Sem Sælands vei 7-9, 7491 Trondheim, Norway. [email protected]

Abstract. Common information spaces are sometimes used to help analyse and understand collaborative work. This paper uses this concept and the 7-parameter framework created by Claus Bossen to analyse the collaboration infrastructure in a major international oil and gas company. The paper builds on the framework by weighting the different parameters, classifying their importance into one of three categories in order to identify whether some of the parameters are more or less important than others.

Keywords: Common Information Space, Large-scale IS, Bossen’s parameters, analysis

1 Introduction

Very few people conduct their work in complete solitude. Most people have to interact with other individuals in order to conduct their work. Some are able to do this face-to-face, but most people use some sort of technology-based collaboration tool. Such tools include the phone, text messaging, email and instant messaging. These tools are to some extent pure communication tools that facilitate communication rather than true collaboration. Of course communication is an important part of collaboration, but it is not sufficient for what is known as collaborative work. Hence, understanding collaborative work is an important area of research and the focus of Computer Supported Cooperative Work (CSCW) research.

The notion of a Common Information Space (CIS) is one of the concepts used within the area of CSCW to understand and analyze collaborative work. A CIS acknowledges the importance of the context in which this collaborative effort is conducted, and facilitates the examination of information sharing among the various actors involved. The paper builds on previous work [1] in the area of information spaces in large-scale organizations and uses the notion of CIS, and, more precisely, the 7 parameters of CIS identified by Claus Bossen [2], to analyze the implementation of a new collaboration infrastructure in a large Norwegian-based international oil and gas company (dubbed OGC). Through this exemplification, the paper contributes to a deeper understanding of CIS, as well as an evaluation of the appropriateness of Bossen's 7 parameters as a means of analysis. The paper also extends Bossen's framework by evaluating the relevance of each of his 7 parameters, suggesting that some of the parameters are more important than others.


The paper is organized as follows: It starts off with a look at related work, focusing on the concept of CIS generally and Bossen's parameters specifically. Then the research approach and case are described, before the appropriateness of the framework is discussed. The paper rounds off with a few concluding remarks and suggestions for further research.

2 Related Work

Common Information Space as a notion was initially conceptualized by [3] as an alternative to the so-called workflow perspectives for analyzing work practices. CIS seeks to bring attention to an area of "critical importance for the accomplishment of many distributed work activities" [3] and focuses on the relationship between actors, artefacts and information, as well as the context in which these occur. The role of artefacts in supporting and articulating work in cooperative situations is also important [2-4].

As mentioned, CIS provides an alternative to workflow perspectives for analyzing work practices, and, based on [5], explains how workflow perspectives fail to consider how work is done in practice in contexts where continuous negotiation and problem solving is required. Therefore, [3] argue for an alternative approach that would “allow the members of a cooperating ensemble to interact freely” [3]. According to the authors, cooperation is not facilitated through simply having access to the same information in a shared database, but it also requires a common understanding of this information. This is because information always has to be interpreted by the human actors involved.

[4] then take the concept of CIS one step further and explore the duality of the concept. A common information space has a dualistic nature, being both open and malleable on the one hand, and closed and rigorous on the other. Within this duality, the openness is necessary in order for the individual community of practice to experience the CIS as meeting their needs, and the closure is important in order to be able to share information across different communities of practice. In addition, [4] argue that there are many different types of common information spaces. For instance, people can be working together across different physical locations [6], or they can be working across different times [7]. However, the different types of CIS all have some characteristics in common.

Bossen [2] further refines the concept by introducing 7 parameters that he argues provide a more detailed framework, and thus can be applied to characterize the particularities of a given CIS. The 7 parameters are as follows: 1) The degree of distribution, which focuses on the physical distribution of the collaborative parties. It is believed that the more physically distributed the members of a given community are, the more difficult the collaboration will become. 2) The multiplicity of webs of significance, which relates to the background (culture, language, education, etc.) of the community members. Again, the more diverse backgrounds the community members have, the more difficult the collaboration will be. 3) The level of required articulation work, which looks at how close collaboration is required for a given CIS. The closer people have to work together, the more articulation work is required. 4) The multiplicity and intensity of means of communication, which is about the different channels people use to communicate. Face-to-face communication is generally considered to be the most effective method of communication, but because of the distributed nature of much collaborative work this is not always possible, so video conferences, telephone, email, instant messaging, text messaging, etc. might be necessary. Using a more intense channel like video conferencing will require less work to achieve a common interpretation or understanding of something than a less intense channel like email. For instance, when communicating something using email, it will often be necessary for the recipient to respond with more or less the same content, but presented slightly differently, in order to have their understanding confirmed. Using face-to-face communication, non-verbal signals like looks and gestures will often provide this confirmation. 5) The web of artefacts, which comprises the coordinating mechanisms like plans, strategies, schedules, etc. necessary for the collaboration to be possible. 6) Immaterial mechanisms of interaction, which are more informal than the web of artefacts. Here the focus is more on the work practices within the organization: how the work is really done (as opposed to how the work is described in various workflow models). 7) The need for precision and promptness of interpretation, which relates to how closely together people work, and how important this is. In general, the more safety-critical a task is, the higher the need for precision and promptness becomes. Table 1 lists the 7 parameters.

Table 1. Bossen's 7 parameters

1 The Degree of Distribution
2 The Multiplicity of Webs of Significance
3 The Level of Required Articulation Work
4 The Multiplicity and Intensity of Means of Communication
5 The Web of Artefacts
6 Immaterial Mechanisms of Interaction
7 The Need for Precision and Promptness of Interpretation

3 Research Approach

This paper reports findings from an ongoing interpretive case study [8]. The study looks at both the technological and social complexity of ICT implementation and use in large organizations. In this early phase of the study, the approach has been emergent rather than highly structured. The aim has been to identify areas suitable for further research.

Data has been gathered through document analysis, semi-structured interviews and observations. The document analysis has been very important in order to understand the background and motivation behind the decisions made by the management. Having been given (limited) access to OGC's systems has been important, as it has provided access to internal plans and strategies that have been very useful.

Ten semi-structured interviews have also been conducted. Both ICT professionals (managers and developers) and "normal" users have been interviewed. The interviews have lasted between one and three hours. Only two of the interviews have been recorded, as the other interviewees declined our request to record the interviews. However, as we have been two researchers conducting the interviews, one could focus on the interview while the other could focus on taking notes. In such a situation it is really useful to be two researchers working together.

More recently, an internal research project conducted by OGC's research centre has opened up new possibilities. The project looks at ways to improve collaboration within the organization, and is therefore very relevant to our research. Being able to sit in on (a total of four) meetings where collaboration was discussed has been very useful in providing insight into which aspects of collaboration are important to OGC, and in identifying future interview subjects.

4 Case Description

In 2003, OGC introduced a collaboration system based on Microsoft SharePoint that was to encompass all of the organization's 29,500 employees located in 40 countries worldwide. This system replaced an older system based on Lotus Notes, a system that had grown out of control with more than 5000 different databases without any central management. Nor did the old system have any centralized indexing functionality, meaning that to retrieve any information you would have to know which database to search. This was believed to cause a lot of redundancy, both with regard to the information stored within the system and the work being done (as people had to do work again because they were not able to find evidence of others having done it). The new solution was a "one size fits all" solution and was intended to be used by all parts of the organization, from cleaning staff, via Marketing and Human Resources, to specialists within well drilling and production (e.g. geologists, geophysicists, drilling engineers, production engineers, etc.). They all got the same system with practically no room for customization.

Different groups (based on geographical or organizational location, department, disciplinary belonging, work area, etc.), as well as different multi-disciplinary projects, got their own area, called a Team Site, on the MS SharePoint architecture. However, in essence all team sites were very similar. A project working with drilling planning got the same infrastructure to support their work as an ICT project looking into best practices for the use of video conferencing. The only differences were in the availability of metadata used to categorize and classify information.

By late 2007, the MS SharePoint architecture had about 8000 different team sites. An average employee would typically belong to about 6-8 different team sites. This high number of team sites was not believed to cause the same problems as the 5000 different databases of the Lotus Notes-based system, as the MS SharePoint infrastructure had one central indexing function, meaning that it would be possible to search the entire infrastructure from one single interface. This central indexing and search functionality was, according to one ICT manager, the big selling point for choosing Microsoft SharePoint.

Another important aspect of the solution was the rule that all permanent employees would have read access to all team sites within the infrastructure, even though they were not members of said team sites (of course, some team sites, e.g. some belonging to Human Resources, would have restricted access).
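The access and indexing arrangement described in the two paragraphs above can be summarized with a small sketch. This is a hypothetical simplification, not SharePoint's actual permission model: every permanent employee may read every team site except those explicitly restricted, write access follows membership, and a single index spans all sites the user may read. All names and documents are invented for illustration.

```python
# Hypothetical sketch of the access and indexing rules described above;
# not SharePoint's actual permission model.

class TeamSite:
    def __init__(self, name, members, restricted=False):
        self.name = name
        self.members = set(members)      # members can read and write
        self.restricted = restricted     # e.g. some Human Resources sites
        self.documents = []

    def can_read(self, user, is_permanent_employee=True):
        # All permanent employees may read non-restricted sites;
        # restricted sites are limited to their members.
        if user in self.members:
            return True
        return is_permanent_employee and not self.restricted

    def can_write(self, user):
        return user in self.members

def search_all(sites, user, term):
    """One central index: a single query across every site the user may read."""
    return [
        (site.name, doc)
        for site in sites
        for doc in site.documents
        if site.can_read(user) and term in doc
    ]

drilling = TeamSite("Well X drilling", members={"anna", "ola"})
hr = TeamSite("HR salary review", members={"kari"}, restricted=True)
drilling.documents.append("drilling-programme.docx")
hr.documents.append("salary-proposal.docx")

print(search_all([drilling, hr], user="ola", term="docx"))  # sees the drilling doc, not HR's
```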

5 Discussion

Using the parameters identified by Bossen [2], we will in this section focus on analyzing the common information space created by the introduction of MS SharePoint in OGC. One can argue that this is more of a theoretical exercise with little practical value, as we look at the entire organization and one given technology as one single information space. It can be argued that it would be of more practical value and relevance to look at OGC as a collection of overlapping interdependent common information spaces, where the different communities of practice and organizational units make up a common information space, and analyze these individually. However, this would become an enormous task that, in addition to a lot of time, would require intimate knowledge of all areas of OGC, something we do not have, nor aspire to acquire.

We could, of course, have focused on one specific area of the organization and conducted the analysis from that point of view, but there are aspects of the introduction of the collaboration system that suggest a desire to create one single information space, and we therefore treat it as such. The argument is that since OGC offers one common, standardized and uniform solution to all employees, and all employees have read access to all team sites, the management looks at this as a one-size-fits-all solution, and therefore as one common information space. The fact that research questions this assumption does not diminish the assumption in any way. We also believe that this approach will be more general and more easily applicable to other organizations.

In the following sections we will look into the 7 parameters in more detail and relate them to OGC and their collaboration context. Each of the parameters is then evaluated based on its importance in the setting in question. The level of importance is classified using a three-tier scale: high importance, average importance and low importance.

5.1 The degree of distribution

As mentioned, OGC operates in 40 countries worldwide, and within many of these countries it has a number of different geographical locations. This points to OGC being a quite distributed organization. In itself, this is not an indication of the importance of this parameter. However, when we know that projects within OGC often involve people from various geographical locations, this signifies the importance of this parameter. For instance, the internal research project we have been able to follow lately consists of people from four different locations in Norway, in addition to support people from locations in two other countries.

Another aspect that supports the view that the degree of distribution is of high importance is the following quote from a participant at one of the meetings discussing collaboration that we were able to attend:


"The way the organization is built, with geographical differences, it is important to have simple and readily available tools through which collaboration can take place. This is lacking today." (All quotes are translated from Norwegian by the author.)

The degree of distribution is evaluated to be of high importance.

5.2 The multiplicity of webs of significance

Even though OGC has about 29,500 employees, and they naturally come from a variety of cultural backgrounds, OGC is a company with a mainly Western tradition and philosophy. Of course, a geologist from – and working in – Norway has a completely different background than a driver from – and working in – Nigeria, but this seems to be of less importance. This suggests that even though people have different backgrounds, they all also have a common understanding of what it means to be an employee within OGC.

However, even though cultural and national backgrounds seem less important, there are areas within OGC where the multiplicity of webs becomes evident. In an oil and gas company like OGC there are people with a lot of different professional backgrounds working together. For instance, a geologist and a drilling engineer use completely different vocabularies in their daily work. But when they have to work together in order to plan or coordinate some activity they align their webs of significance in order to collaborate effectively. This is something that happens quite often, so in order to facilitate this process people from different disciplines are often (geographically) co-located.

The multiplicity of webs of significance is evaluated to be of high importance.

5.3 The level of required articulation work

Of course, the level of coordination varies between the various projects within OGC. Looking into the various projects, one would most likely find needs for coordination spanning whatever range one chooses to classify them with. However, on a more generic note, we can say that as people work on projects across organizational units, there has to be coordination between said projects and units. This is recognized as a challenge within OGC, as illustrated by a quote from a different meeting, but we have no findings that indicate that this problem is of high importance.

“Program and project management are detached from the unit – [this] creates difficulties with regards to reporting and resource coordination.”

The level of required articulation work is evaluated to be of average importance.

5.4 The multiplicity and intensity of means of communication

Face-to-face communication is commonly recognized as being the most effective form of communication [2]. This means that, in addition to using the phone and email, people tend to travel a lot because of the distributed nature of the organization. One unit, with about 2,400 employees, compiled statistics on all travel done by its employees over a four-month period at the end of 2007 and the beginning of 2008. The statistics showed that the unit had more than 18,000 trips during this period, more than half of which were one-day trips. People recognize this as a problem and believe that many of these trips could, and should, have been avoided. As one person noted, they have to "balance the need for travel/personal contact against remote collaboration". This has also been recognized by OGC, which is fronting efforts to promote the use of video conferencing as a collaboration tool.

Another interesting finding is the wish for more informal, personal communication equipment. As people are used to using instant messaging with web cameras in their personal lives, they want to use it at work as well. As one person said, getting to work is like "going 5 years back in time technologically", and we "have better collaboration tools at home". Simply put, they want a web camera and a headset at their workplace.

The multiplicity and intensity of means of communications is evaluated to be of average importance.

5.5 The web of artefacts

OGC has production facilities (for instance a refinery plant on land or a production platform at sea) at a number of different locations around the world. Traditionally, these locations have been somewhat isolated islands. Each location has been allowed to evolve in a way that best suits its special situation. Hence, over the years the different locations have developed their own business models, plans and strategies. However, the introduction of the new collaboration infrastructure was seen as a catalyst to help harmonize and standardize the different parts of OGC. As stated in a presentation of the new infrastructure, the new infrastructure is "10% IT, 90% change". This suggests that the goal of the new infrastructure was not only to give the workers a new tool, but also to reduce complexity, i.e. reduce the number of artefacts.

Whether or not the plans to reduce the number of artefacts are successful will not change the fact that OGC still has a lot of plans, strategies, schemas, schedules, etc. that the employees have to comply with. Of course, this is not only due to organizational complexity within OGC, but is also caused by the complexity of being in the oil and gas industry.

The web of artefacts is evaluated to be of average importance.

5.6 Immaterial mechanisms of interaction

Within the different areas of OGC there are a lot of work practices, habits, and informal ways of getting things done. However, narrowing down from all these areas to the ones related to the introduction and use of the MS SharePoint infrastructure, we see that there really are not that many. This is of course mainly because the infrastructure is new and informal work practices have not really been established yet. There are signs indicating that people are not using the metadata to categorize and classify information as intended, nor are they using the search functionality as anticipated, but this appears to be due to a lack of formal practices (aligned with the work processes) rather than the existence of informal practices.

The immaterial mechanisms of interaction are evaluated to be of low importance.

5.7 The need for precision and promptness of interpretation

In some parts of OGC, for instance in the control rooms at refinery plants and on platforms, precise and prompt communication is of the utmost importance. In these settings there are safety-critical systems that ensure the safety of both people and the environment. However, in most everyday work tasks this is not the case (even though HSE (Health, Safety & Environment) is very important in a potentially dangerous industry like the oil and gas industry). We have no findings that suggest that there is any extra importance related to precision and promptness when it comes to the collaboration infrastructure, nor would we expect to find any, since the collaboration infrastructure is not used for safety-critical operations. There are other systems for these operations.

If people for some reason need more clarification about something, or they require a prompt response, our findings suggest that they would rather call the person up – or initiate an instant messaging session.

The need for precision and promptness of interpretation is evaluated to be of average importance.

The discussion above shows how the various parameters relate to the common information space established within OGC with the introduction of the MS SharePoint collaboration infrastructure. Table 2 summarizes the evaluation result for the 7 parameters.

Table 2. Classification of parameters

Parameter: Importance
1 The Degree of Distribution: High
2 The Multiplicity of Webs of Significance: High
3 The Level of Required Articulation Work: Average
4 The Multiplicity and Intensity of Means of Communication: Average
5 The Web of Artefacts: Average
6 Immaterial Mechanisms of Interaction: Low
7 The Need for Precision and Promptness of Interpretation: Average

6 Concluding Remarks

Using CIS and Bossen's framework appears to provide a useful basis for analyzing collaborative work within a given setting. As illustrated through the analysis of the collaboration system at a major oil and gas company, this framework provides useful insight into what is required with regard to communication and collaboration within a certain context.

However, the analysis suggests that the 7 parameters are not all equally important. In different settings, different parameters are more important than others. In the case described here, the degree of distribution and the multiplicity of webs of significance seem to be more important, while the immaterial mechanisms of interaction are less important. This is of course based on the analysis of a single case, and further research on other cases and settings is necessary and is likely to yield different results.

Categorizing the parameters according to importance can be useful as it highlights the focus areas for a given information space. Using a three-level scale seems appropriate and is a compromise between the ease of classification (few categories) and the need to differentiate the result (many categories). Further research may suggest a better classification schema.

References

1. Hjelle, T. and G. Jarulaitis, Changing Large-Scale Collaborative Spaces: Strategies and Challenges, in Hawaii International Conference on System Sciences. 2008: Hilton Waikoloa Village Resort, Hawaii, USA. p. 8.

2. Bossen, C., The parameters of common information spaces: the heterogeneity of cooperative work at a hospital ward, in Proceedings of the 2002 ACM conference on Computer supported cooperative work. 2002, ACM Press: New Orleans, Louisiana, USA.

3. Schmidt, K. and L. Bannon, Taking CSCW Seriously: Supporting Articulation Work. CSCW: An International Journal, 1992. 1(1-2): p. 7-40.

4. Bannon, L. and S. Bødker, Constructing Common Information Spaces, in ECSCW ’97. Proceedings of the Fifth European Conference on Computer-Supported Cooperative Work, September 1997, Lancaster, UK, J.A. Hughes, et al., Editors. 1997, Kluwer Academic Publishers: Dordrecht, The Netherlands. p. 81-96.

5. Suchman, L.A., Plans and situated actions: the problem of human-machine communication. Learning in doing: social, cognitive, and computational perspectives. 1987, Cambridge: Cambridge University Press. XIV, 203 p.

6. Rolland, K.C.H. and E. Monteiro, Balancing the local and the global in infrastructural information systems. Information Society, 2002. 18(2): p. 14.

7. Munkvold, G. and G. Ellingsen, Common Information Spaces along the illness trajectories of chronic patients, in European conference on Computer Supported Cooperative Work. 2007.

8. Walsham, G., Interpretive case studies in IS research: Nature and method. European Journal of Information Systems, 1995. 4(2): p. 74.


Paper III


THE INTRODUCTION OF A LARGE-SCALE COLLABORATION SOLUTION: A SENSEMAKING PERSPECTIVE

Hjelle, Torstein E. L., Norwegian University of Science and Technology (NTNU), Department of Computer and Information Science, Sem Sælands vei 7-9, 7491 Trondheim, Norway, [email protected]

Abstract

Effective and straightforward collaboration is important in most organizations, particularly in large enterprise organizations. With large numbers of people working within the same organization, it is important to have some sort of computer-based collaboration solution. In this paper we draw on an ongoing case study within a large, international oil and gas company and analyze how the introduction and implementation of an MS SharePoint-based collaboration infrastructure has been accepted by the users. We identify a number of important characteristics of the solution and use the theory of sensemaking to explain why the introduction unfolded as it did. Our findings suggest that people do want to use the collaboration solution as intended, but that the divergence between the technology and people's understanding of it causes unexpected results. This, we argue, is because the new technologies challenge the users' sensemaking processes.

Keywords: Sensemaking, organizations, collaboration, large-scale, CSCW.


1 INTRODUCTION

Introducing new collaboration systems can often be especially complicated, as they will compete with already existing systems as well as other more or less complementary technologies. A collaboration system is seldom, if ever, the users' only means of communication, and thus of collaboration. If a user finds a new collaboration system cumbersome or otherwise problematic, he or she will have few difficulties finding other means of getting the job done. There are plenty of alternatives. For instance, different elements of a collaboration system can be replaced by e-mail, telephone, instant messaging, shared disks, etc. Users may choose these applications, even though they are in many respects inferior to the collaboration system, simply because they are familiar and well-known. Users trust these technologies and are therefore more likely to utilize them.

New technologies cannot simply be pushed out to the users. Users must be given the knowledge and opportunity to utilize the technology in the desired way. If users are not able to become familiar with the technology, they will, at best, use it sub-optimally.

In this paper we seek to contribute to the CSCW research community by using sensemaking theories in a large-scale organizational context. The use of sensemaking theories has traditionally focused on either individual sensemaking or organizational sensemaking in small-scale settings. We do so by focusing on the implementation of a new collaboration infrastructure in a large international oil and gas company, motivated by a wish to understand why the introduction did not run as smoothly as hoped (Grudin 1989). Could anything have been done prior to the introduction in order to improve the chances of a successful roll-out? Why do users not use the infrastructure as hoped?

The rest of the paper is organized as follows: First we take a brief look at some important related contributions, before we then explore the theoretical foundations for this paper. This is followed by a presentation of the case setting and our research approach. We follow this section by taking a look at our findings, before the discussion and our concluding remarks.

2 RELATED WORK

Even though sensemaking as a research topic dates back to the 1980s (Dervin 1983), it was not until the 1990s (Dervin, Glazier et al. 1992; Savolainen 1993; Hulland 1994) that it gained interest as a subject for organizational research. Since then, a vast number of publications have appeared using sensemaking as a theoretical framework.

In a recent contribution to the field of CSCW it is argued that "appropriation and adaptation of groupware and other types of advanced CSCW technologies is basically a problem of sensemaking" (Bansler and Havn 2006). In order to successfully introduce groupware systems, technology-use mediation, which facilitates the implementation and use of new CSCW technologies, plays an important role in organizations. Implementing the system is not the end but rather the beginning, and it requires ongoing user support and contextualization by mediators, as the system's position within the context changes over time when users develop new needs, preferences and conditions. The authors show how different users understand, or make sense of, the same system in completely different ways. It is important to note that this applies to technology-use mediators as well, meaning that these mediators will affect users in different ways. There is not necessarily one common understanding of what a system is, because sensemaking in organizations is influenced by the sensemaking of individuals.

Studies have also found that people's preconceived notions about technology help them understand and use new technologies (Orlikowski 1992). A lack of understanding would then deter users from using the system. This is quite similar to our setting. However, in our case the new system has not caused the same level of failure as presented in Orlikowski (1992).


Another contribution in the area of sensemaking and organizations is a study of emergency response workers and their use of mobile phones (Landgren and Nulden 2007). Here the authors use enacted sensemaking to explain the role of the exchange of phone numbers in relation to two serious railroad accidents. They illustrate how mobile phones are important both as a communication tool and as a means of understanding the roles of others involved in the rescue work.

The concept of sensemaking can also fill important gaps in organizational theory (Weick, Sutcliffe et al. 2005). This is illustrated by identifying central features of sensemaking and closely relating this to organizational institutions. Using the case of nursing and childcare, the authors identify a number of important characteristics of sensemaking in organizations.

3 THEORY

The processes of sensemaking are about how people use their experiences, i.e. their beliefs and understandings, to produce meaning. This, again, affects their behavior and performance. One important contributor to the theory of sensemaking is Karl Weick. This paper draws on his work and theoretical framework.

In short, sensemaking is the ability to make sense of an ambiguous situation. More specifically, sensemaking is about the process people go through in order to create meaning and understanding in complex or uncertain situations. Sensemaking is not a finite process, but an ongoing effort. In fact, sensemaking is a "motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively" (Klein, Moon et al. 2006).

The idea behind sensemaking is that people "try to make things rationally accountable to themselves and attempt to produce some kind of stability and order amidst continuing change" (Bansler and Havn 2006). People look for meaning in order to deal with unfamiliar and difficult situations. The central aspect is how people "make sense" of these situations (Weick 1993).

Sensemaking is more than interpretation. "Interpretation focuses on understanding or "reading" some kind of "text." Sensemaking, on the other hand, deals not only with how the text is read, but also how it is created." (Bansler and Havn 2006). In sensemaking one tries to understand the intentions of the author as well.

The concept of "action" is important with regards to sensemaking. A user’s understanding of a context is what encourages him to act like he does. His action would then lead to some sort of result that again would influence his understanding of the context. If the result is what the person would expect, his understanding of the context would be strengthened. Otherwise it would be weakened. This is what Weick refers to as enactment.

Another important property of sensemaking is plausibility. Sensemaking is not about certainty and accuracy. It is more about the continuous reformulation of the context. If an action causes a result that was to be expected, it becomes plausible that the understanding of the context was in fact correct. It is about being "correct enough", and does not provide "an occasion for objective, detached analysis" (Bansler and Havn 2006).

Within social settings sensemaking also plays an important role. Sensemaking is not only something that takes place inside an individual's head; it is also a social process. For instance, two different technology-use mediators can influence their surroundings in completely different ways based on their individual understanding of the technology in question (Bansler and Havn 2006).

With respect to organizational sensemaking, the organizational aspect and the sensemaking aspect are reliant on each other. "Sensemaking starts with chaos" (Weick, Sutcliffe et al. 2005). When an individual encounters a completely new situation there are "a million ways" to understand it. However, nobody starts with a blank slate. We all have backgrounds and experiences. Within organizations we also have formal policies as well as informal practices on how to act when new situations arise within the organization.

Next, in an unfamiliar setting, people are likely to seek out more details in order to "make sense" of the situation. In these early stages of sensemaking, people have to actively immerse themselves in the context to create a common platform that can later be used for communication.

Sensemaking is also about classification. When an unfamiliar situation appears, people will instinctively search for similar situations. For instance, if a rash suddenly appears on an individual's arm, he is likely to go online to search for an explanation of what it is, even though he may never have seen a rash like this before. He is then trying to use other people's experiences in order to make sense of his unfamiliar rash. But as there are a lot of different types of rashes, he might end up identifying the rash as something harmless when in fact it is something serious that he should see a physician about.

This last example also illustrates how sensemaking is about beliefs and guesswork. In everyday life, it is not important to be completely certain. Becoming certain requires additional work and effort that most everyday tasks cannot justify. When receiving some sort of information from a colleague, most people would assume that the information is correct rather than verify it. Exceptions to this are of course very critical situations, for instance decisions regarding life-or-death matters.

4 METHODOLOGY

4.1 Research Setting

In 2001, a major international oil and gas company (dubbed OGC, a pseudonym) with 29,500 employees and activities in 40 countries worldwide formulated a new collaboration strategy in order to improve collaboration and knowledge sharing within the organization.

This new collaboration strategy would replace a roughly 10-year-old solution based on Lotus Notes, a solution that had grown out of control with more than 5000 different databases for document storage without any form of centralized indexing or search functionality. In addition to the Lotus Notes system, each user had access to both a personal and a departmental storage area. This meant that information was spread across different solutions, was often duplicated, and was next to impossible to retrieve in a useful way. A new, centralized infrastructure with stricter control and better search functionality was believed to be the solution. In 2003 the company landed on a solution based on Microsoft SharePoint technologies.

The new solution was intended to be all-encompassing, covering "all users' every need" with regard to collaboration. The MS SharePoint solution would integrate tightly with both the e-mail application MS Outlook and the office tools in MS Office.

The basic unit within MS SharePoint is the team site. A team site is the virtual room where users collaborate, i.e. it is where users save the documents they are sharing. For instance, a team site would typically be created for a project, and all project members, both internal from within OGC and external from partner companies, would have access to the team site. One important aspect of the solution is that, in order to improve knowledge sharing, all permanent employees of OGC would have read access to all team sites, including the ones they are not contributing members of. In addition to the built-in search engine in MS SharePoint, a second, more powerful search engine based on FAST technologies was introduced. Just as Google is the starting point for information retrieval on the Internet for many, it was envisioned that the FAST engine would be the starting point for information retrieval within OGC.

[Figure 1. Model of collaboration solution: e-mail (MS Exchange) and team sites (MS SharePoint) linked by the classification schema, with the search engine (FAST) as a layer on top.]

Figure 1 shows the collaboration solution with the classification schema connecting the e-mail system and the MS SharePoint Team sites, and the search engine acting as another layer binding it all together.
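To make the relationships in Figure 1 concrete, the following minimal sketch models the main building blocks in a few lines of Python. The class names, fields and example values are our own illustrative assumptions and not part of OGC's actual implementation.

```python
# Illustrative sketch only (not OGC's code): documents and uploaded e-mail
# items carry keywords from the classification schema, live in team sites,
# and a search layer looks across all team sites.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Item:
    title: str
    kind: str                                           # "document" or "e-mail"
    keywords: List[str] = field(default_factory=list)   # classification schema values


@dataclass
class TeamSite:
    name: str
    members: Set[str] = field(default_factory=set)      # contributing members
    items: List[Item] = field(default_factory=list)


def search(sites: List[TeamSite], term: str):
    """Very naive search across all team sites (titles and keywords)."""
    hits = []
    for site in sites:
        for item in site.items:
            if term.lower() in item.title.lower() or term in item.keywords:
                hits.append((site.name, item.title))
    return hits


project = TeamSite("Field X drilling project", members={"alice", "bob"})
project.items.append(Item("Well test report", "document", ["Drilling"]))
project.items.append(Item("RE: casing design", "e-mail", ["Drilling"]))
print(search([project], "Drilling"))                    # both items are found
```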

4.2 Data Collection

This paper is founded on an ongoing research project that looks at a collaboration solution within OGC. The project seeks to examine the interface between the technology and the social arrangements within such a large organization in order to better understand the relationship between the two. We have approached the research with inquisitive and knowledge-seeking minds, trying to be as open as possible, in what would be conceptualized as an interpretive case study (Walsham 1995; Klein and Myers 1999; Walsham 2006). Our focus has always been on why rather than how. Table 1 summarizes our data collection methods.

The fieldwork began with interviews at a few different locations within Norway. These initial interviews were semi-structured, open-ended interviews aimed at gaining insight into the organization in order to identify where to focus our research. They mainly focused on managers and IT professionals.

From the beginning of 2008 we had office space at one of OGC's research centres and spent the majority of our time working from there (3-5 days a week) for the next 6 months. Spending this much time at OGC to some extent enabled us to become a part of the research group at the centre. We became "familiar faces" rather than outsiders, which is important in order to get access to the right resources (Randall, Harper et al. 2007). We got involved in their work and got to "tag along" with them. The benefits of being able to tag along with OGC's own researchers are twofold: it has given us access to a variety of different users within different parts of the organization, but, more importantly, it has given us legitimacy in the eyes of the users. Being able to refer to someone at the research centre made it much easier to convince other people to talk to us. We are also left with the impression that contacting people from an OGC e-mail account (which we got when we were given access to the research centre) was more likely to result in a positive response.

The latter part of our interviews has been more geared towards users of the collaboration solution. In this phase we talked with people from different professions, including production engineers, geologists and cybernetics professionals. Like the initial interviews, these interviews have been semi-structured and open-ended. Our focus here has been directed more towards specific aspects of the collaboration solution. These interviews have lasted from 45 minutes to almost 3 hours.

Only a few of the interviews have been recorded. These recordings have mainly been of the broader interviews with managers and IT professionals that gave us insight into the situation, i.e. how, but not much with regard to what we specifically look for, i.e. why. One reason for this reluctance to being recorded might be that very few, if any, of the persons we interviewed were 100% happy with the solution and did not want to "go on record" with their opinions. Of course, having recordings of all interviews would have been beneficial to us as researchers, but on the other hand, going through our notes and recordings we find evidence suggesting that people are more likely to open up and talk freely when not recorded. We believe that the presence of a recorder signals that the situation is more formal and resembles a question-and-answer session more than a conversation or discussion.


As we have, in most of the interviews, been two researchers conducting the interview together, we do not feel that the lack of recordings has been critical. By dividing the work so that one of us focused on interviewing and discussing, while the other wrote quite detailed notes, we believe that we have managed to get good and reliable data. Immediately after the interview sessions we both sat down and wrote down what we remembered as being important from the interview before we compared and coordinated our notes. Given that we could not record the interview sessions, we believe we have achieved the next best thing.

Data source                  Examples                                         Collection and coding
Semi-structured interviews   Interviews with information managers, IT         Some audio recordings transcribed
                             professionals, engineers and other users         and coded; notes
Participant observation      Focus group meetings, informal meetings          Notes, coded
Document analysis            Portal, documentation, e-mail communications,    Coded and classified in relation
                             plans, strategies, training material             to findings
Informal chats               Chatting during lunch, in coffee breaks          Notes

Table 1. Summary of data collection methods

Interviews have only been one of the ways we have gathered our data. Document analysis has also been an important way of increasing our understanding of the context. Having been given access to OGC's computer systems and their collaboration infrastructure, we have had access to a lot of interesting and relevant information. Everything from user manuals, training videos and presentations to plans and strategies has been read and analyzed, giving us a better understanding of OGC's collaboration strategy.

A more unique source of data has been a number of focus group meetings conducted by researchers within OGC. The aim of these focus group meetings was to identify areas for collaboration improvements within one of OGC's divisions (about 2600 employees). A total of 11 of these focus group meetings were conducted at 5 different geographical locations. We got to participate in 4 of these meetings (at 3 different locations). In total, about 100 people with different backgrounds and disciplines participated in these focus group meetings.

Each focus group meeting followed the same template: the participants were divided into groups and discussed a total of three questions, with a plenary summary between each question. The first two questions sought to identify collaboration challenges within the division, on both an individual and a unit level, while the third question asked for solutions to these challenges. During these meetings we both listened to the group discussions and acted as minute takers during the plenary summaries.

All participants in the focus group meetings were given a sheet of paper for each of the three questions on which they wrote down their thoughts and ideas during the discussions. These sheets of paper were collected at the end of the meeting, transcribed and used as a foundation for further work. We got permission to use both the summaries and the transcriptions from these meetings in our work. This has given us valuable input to our understanding of the use of, and challenges related to, the collaboration infrastructure.

The last method of data collection we have used is the most difficult to describe and quantify, and perhaps it is also the most valuable. It is what we have called "informal chats". This is everything that happens during a normal work day, from exchanging a few words when passing in the hallway to chats during coffee breaks and discussions during lunch. Of course, these types of chats are most often about topics that are irrelevant to our research, like the weather, current events in the news, last weekend's football match and such, but every now and then the topic of the organization's collaboration solution is brought up. As everybody uses the collaboration solution to some extent, everybody has experiences with, or opinions about, it. In addition to giving us a "feel" for the organization and the collaboration solution, these informal chats have also given us concrete suggestions about who we should talk with and links to documentation we should look into.

4.3 Data analysis

Data analysis has been a never-ending, continuous process. Being two researchers working on similar topics, it has been very useful to challenge each other with regard to our understanding of the situation. Having different backgrounds, we naturally have different points of view and thoughts about what we see and experience. In addition, having established a close relationship with several of OGC's researchers has given us another arena in which to discuss our findings.

Our data was initially classified into quite broad containers, for instance "technological aspects", "common misunderstandings", "communications" and "search". In the next iteration, new containers appeared as we gained a better understanding of the data and the context. This classification is not able to cover all possible details, nor are there clear dividers between the different containers, but for our qualitative approach it did the job.

The process of verifying the validity of our data has been continuous as well. The nature of our interviews, i.e. semi-structured and open-ended, has made them more of a two-way conversation than a pure question-answer session. If something has been unclear, we have rephrased our question or asked the interviewee to explain further. In addition, our rather unique access to the organization, with work space within their offices, has, as mentioned, opened up for informal chats and discussions. Bringing up something we have found interesting during, for instance, lunch enables us to get other people's opinions and views, thus strengthening or weakening our understanding of the topic at hand.

5 FINDINGS

In a large enterprise organization like OGC the users are a diverse and heterogeneous group. Hence, our findings will not apply to all employees, but to a smaller sub-group.

Our research suggests that the collaboration solution is used differently in different contexts. In some settings the collaboration solution is working very well, and people are satisfied.

When implementing and introducing the system, the management envisioned an "out of the box" solution with very little customization and modification. In reality, however, OGC ended up with a heavily customized and quite closed solution with about 50 custom components. Some of these components were necessary in order to integrate the MS SharePoint-based collaboration solution with other systems and platforms used at OGC, and some of them are today part of newer versions of MS SharePoint. As one IT developer told us, "Norway's largest group of .NET developers were working at OGC". This seems contrary to the desire for an "out of the box" solution.

As mentioned, some of the components were used to integrate the collaboration solution with other systems, but the majority of the components were part of the new classification schema that OGC had introduced as part of the new collaboration solution. The idea behind the classification schema was sound enough: retrieving information, i.e. finding the information one was looking for, would be easier because information would be classified properly.

The flip side is that storing and archiving information become more troublesome, as authors have to classify their documents by adding keywords to them. The keywords have to be selected from a list of approved keywords, and the different divisions and units of OGC all have different keywords available. The major problem with the keyword lists was that the approved values were based on the formal work documentation and procedures, not on the way the work was really done. It ended up with people "selecting what is least wrong" (middle manager). The same manager also told us that he "thinks people have given up [on the classification schema]".

Another problem with the classification schema was that it was too dependent upon the person classifying the document. Two different persons would typically use different keywords for the same document. "It requires more [work] than anticipated, and that those creating it believe" (drilling engineer). Retrieving information based on the classification schema was by one engineer compared to conducting "extreme sports". As one engineer said, there is a reason one can study library science, and "engineers are not librarians".

On the other hand, within some areas the classification schema seemed better suited. However, in order to be successful, a few prerequisites had to be met: 1) "someone" had to take the time to align the keyword lists with the actual work practices, and 2) the tasks being classified had to be both "well-defined and repetitive". An example given by a drilling engineer was the process of drilling a well; this was a process that OGC had been doing for 30 years, and it was well-known and well-defined.
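As a purely illustrative sketch of the retrieval problem described above, the snippet below shows how two authors who both pick "approved" but different keywords for near-identical documents make keyword-based retrieval unreliable. The keyword list, document titles and field names are invented for the example and are not OGC's actual schema.

```python
# Illustration only: two authors classify near-identical reports with
# different approved keywords, so retrieval by one keyword misses the other.
APPROVED_KEYWORDS = {                # hypothetical per-unit keyword list
    "Drilling operations",
    "Well planning",
    "Production optimisation",
}

documents = [
    {"title": "Daily drilling report, well A", "keywords": {"Drilling operations"}},
    {"title": "Daily drilling report, well B", "keywords": {"Well planning"}},
]

# Authors can only pick approved values ("selecting what is least wrong").
assert all(doc["keywords"] <= APPROVED_KEYWORDS for doc in documents)


def retrieve(docs, keyword):
    """Return titles of documents tagged with the given keyword."""
    return [d["title"] for d in docs if keyword in d["keywords"]]


# Only one of the two near-identical reports is found, because the second
# author made a different but equally "approved" keyword choice.
print(retrieve(documents, "Drilling operations"))
```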

In order to promote collaboration and knowledge sharing, OGC had put limitations on the size of the employees' e-mail accounts. Each employee was limited to a maximum of 250 MB. The idea was to limit the number of attachments sent through the e-mail system. Instead, people were expected to upload documents to the appropriate team site and send the link by e-mail. Again, the intentions are understandable, as this would limit the amount of traffic, but also reduce the number of different copies of documents in the system and eliminate versioning problems. Talking to an IT manager, we learned that they wish to reduce the size of the e-mail accounts even further in the future. They believe that most e-mail should be shared within the project groups, a belief that led them to create a custom component for uploading e-mail directly from MS Outlook to the MS SharePoint team site. However, people seem to believe that e-mail communication tends to be personal and not something "others would be interested in". It is not that they consider the e-mail to be private; it is more that they consider the e-mail communication to be part of the work process, and that it is the finished product they want to share.

Though people were not completely satisfied with the collaboration solution, most told us that they were using it on a regular basis. This was not because they had no alternatives; the departmental and personal disk drives were still accessible, and the users could have used these instead. When asked why they used the collaboration solution, most would refer to company policy. The MS SharePoint collaboration solution was what OGC used, and, therefore, so did they. "It is the correct thing to do."

The FAST-based search engine that was envisioned to be the starting point for all information retrieval appears to have been the most disappointing feature of the new collaboration solution. People were not able to find the information they were expecting to find. The search engine is configured to traverse and index a variety of sources. In addition to the MS SharePoint-based collaboration solution, the search engine would also index the corporate intranet, the departmental disk drives and a number of the old Lotus Notes databases (which are still available, but closed for changes). This means that almost all information should be available through the search engine.

It is also important to note that the search engine filtered the content it presented based on who was searching. This means that two persons can search for the same term and get completely different results. This makes complete sense from a business point of view: some information is sensitive and should only be accessible to a limited audience. On the other hand, it is important that this audience has access to that information through the search engine. If some information is only accessible through other channels, this devalues the search engine.
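The effect of this permission-based filtering, often referred to as security trimming, can be sketched as follows. The access model, group names and indexed items are simplified assumptions for illustration and do not describe the actual FAST configuration at OGC.

```python
# Simplified sketch of permission-filtered ("security trimmed") search results.
# Each indexed item carries an access-control list; hits are filtered per user.
INDEX = [
    {"title": "Quarterly production figures", "source": "team site", "acl": {"all"}},
    {"title": "Acquisition plan (restricted)", "source": "team site", "acl": {"mgmt"}},
    {"title": "HSE procedure rev. 4", "source": "intranet", "acl": {"all"}},
]


def search(index, term, user_groups):
    """Return only the hits that the searching user is allowed to see."""
    term = term.lower()
    return [
        item["title"]
        for item in index
        if term in item["title"].lower()
        and (item["acl"] & user_groups or "all" in item["acl"])
    ]


# Two users searching for the same term get different result sets.
print(search(INDEX, "plan", {"engineering"}))   # []
print(search(INDEX, "plan", {"mgmt"}))          # ['Acquisition plan (restricted)']
```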

The problem with the search engine appears to be related to the act of search, i.e. using the search engine to retrieve information. People don't have the necessary skills to effectively use it. The most repeated comment about the search engine was variations of "Why doesn't it work like Google?".

88

One rather strange finding was the fact that OGC had had to disable the search functionality embedded within MS SharePoint. Apparently, this search engine was not able to index the huge amount of documents produced at OGC.

OGC has standardized on MS Windows XP with Internet Explorer and MS Office 2003 in their desktop environments. This is very well suited to the MS SharePoint-based collaboration solution as there is a tight integration between MS SharePoint and MS Office.

However, not all employees use desktop computers. Some, typically people involved in heavy computational work, use workstations, and on the workstations OGC has standardized on Red Hat Linux. As neither Internet Explorer nor MS Office is available on Linux, these users do not benefit from the tight integration between the MS SharePoint collaboration solution and MS Office. In fact, the MS SharePoint collaboration solution only really works with Internet Explorer, and, as such, people using workstations are unable to utilize the collaboration solution.

These workers do, however, get access to their e-mail account and calendar through Outlook Web Access, which is a web interface to the Exchange e-mail server. This actually works very well in, for instance, Mozilla Firefox and does not require Internet Explorer. When needing access to a team site on the MS SharePoint system, these users would typically use some sort of remote desktop software to log on to a Windows XP desktop computer and work from there. This is not a very efficient way of working.

6 DISCUSSION

In this section, the findings presented in this paper will be discussed and elaborated. We show how sensemaking can be applied in order to better understand why people ended up using the collaboration infrastructure the way they did.

Firstly, we will discuss the classification schema. The classification schema was something completely new to the employees of OGC, and using it was unfamiliar to them. To the workers it was difficult to see the value of the extra work needed in order to classify the documents. In their everyday use, retrieval of the information was not a problem, as they and their group members would know where to find it. When one works within one, two or maybe three team sites on a day-to-day basis, keeping track of the information needed to do one's job is not difficult. Most users acknowledged that they understood how classification would be useful in order to retrieve information more efficiently, but that seemed to be on a more theoretical level, and not something they would consider important in a hectic, everyday work situation. In short, the classification schema did not make sense to most users in their everyday work situation. To make sense of the collaboration infrastructure, most users simply ignored the potential benefits of the classification schema and used more or less random values as keywords.

But the classification schema did experience a renaissance within certain groups in OGC, although not in the way intended. One limitation of the MS SharePoint-based collaboration infrastructure is the lack of folders. This, as far as we understand, is a limitation of MS SharePoint: all documents within one team site have to be stored in one single document library. Filters then have to be created in order to sort away irrelevant documents. But as only team site administrators, who typically were project managers with very busy schedules, could create these filters, this was rarely done. Folders with their hierarchical structure were something people were familiar with and wanted; organizing information in folders has been done ever since the early days of computers. By classifying information with specific values, people were able to emulate the missing folder structure. Even though it was against company policy, team sites with up to 9 levels of "fake" folders were found. This is an example of how people approach unfamiliar situations from a sensemaking perspective. As mentioned earlier, when unfamiliar situations appear, people will search for familiar, similar situations. The classification schema was something unfamiliar, but the folder structure was familiar. By connecting those two, people were able to make some sort of sense out of the classification schema. This also illustrates the importance of plausibility. The users knew it was not the correct understanding of the classification schema, but it was "correct enough" for their everyday use of the system.
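A minimal sketch of how such "fake" folders can be emulated with classification values: a path-like value is stored as metadata on each document and filtered by prefix to mimic sub-folders. The field name, values and documents are our own illustrative assumptions, not the actual configuration at OGC.

```python
# Illustration of emulating a folder hierarchy with classification values:
# a path-like metadata value is filtered by prefix to mimic sub-folders.
documents = [
    {"title": "Kick-off minutes.doc",  "pseudo_folder": "Project/Meetings/2008"},
    {"title": "Budget v3.xls",         "pseudo_folder": "Project/Economy"},
    {"title": "Well test results.xls", "pseudo_folder": "Project/Wells/Test data"},
]


def list_folder(docs, prefix):
    """Return documents whose pseudo-folder value starts with the given 'path'."""
    return [d["title"] for d in docs if d["pseudo_folder"].startswith(prefix)]


print(list_folder(documents, "Project/Meetings"))   # ['Kick-off minutes.doc']
print(list_folder(documents, "Project"))            # all three documents
```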

The 250 MB limit on the e-mail account was often referred to as an "annoyance" by our interview subjects. Most informants had quickly reached the limit and had to delete e-mail in order to be able to continue to receive and send e-mail. This limit was very strict, and one would have to have very strong arguments in order to have it increased. Even high-level management had to abide by it.

When reaching the limit, different strategies were used to reduce the size of the mailbox. One simple solution used by many was to sort all e-mail by size and delete the messages with the largest attachments, most often after having stored the attachment on the personal disk drive.

Another solution often used was to move old e-mail messages to one single folder within MS Outlook and then archive this folder. The archived folder would then be stored on the personal disk drive. This way, the archived folder could later be opened from within MS Outlook if needed.

A third solution to overcome the e-mail limit was to create a local e-mail folder from within MS Outlook, but instead of saving it on the MS Exchange e-mail server, it would be saved to the personal disk drive. E-mail in this folder would be accessible from within MS Outlook "as normal", but would "not count" as part of the 250 MB quota since it was not stored on the e-mail server. The drawback of this method was that each time the user logged in on a different computer for the first time, he would have to manually import that e-mail folder. Initially it was quite easy to create these folders from within MS Outlook. However, when management discovered this breach of company policy, they quickly disabled the possibility. But if you knew somebody who already had such an e-mail folder, it was quite easy to have them send an empty folder as an e-mail attachment. Management had only removed the possibility to create such files; opening and using, i.e. modifying, them was no problem.
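The arithmetic behind these workarounds can be sketched as follows: only messages stored on the e-mail server count toward the 250 MB quota, so moving mail to a locally stored archive frees server quota while the mail remains readable from MS Outlook. The message names, sizes and function names below are illustrative assumptions only.

```python
# Illustrative quota arithmetic: only server-stored mail counts toward the
# 250 MB limit, so archiving messages to a local (personal drive) folder
# frees quota even though the mail is still readable from MS Outlook.
QUOTA_MB = 250

server_mailbox = [("status report.msg", 2), ("seismic survey.zip", 180), ("minutes.msg", 1)]
local_archive = []                      # conceptually stored on the personal disk drive


def used_quota(mailbox):
    """Sum of message sizes (MB) currently stored on the server."""
    return sum(size for _name, size in mailbox)


def archive_largest(mailbox, archive):
    """Move the largest message out of the server mailbox into the local archive."""
    largest = max(mailbox, key=lambda m: m[1])
    mailbox.remove(largest)
    archive.append(largest)


print(used_quota(server_mailbox))       # 183 of 250 MB used
archive_largest(server_mailbox, local_archive)
print(used_quota(server_mailbox))       # 3 MB used; the 180 MB item no longer counts
```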

The problem with the 250 MB e-mail quota, and the three ways of coping with it listed above, illustrates the differences in how people see e-mail and e-mail usage. Very few had ever uploaded an e-mail message to a team site; in fact, some did not even know of the possibility to do so. Within OGC, e-mail has traditionally been used for personal communication and has not been something that is normally shared. (Of course, list e-mails sent with information of this and that to all employees or project members are not included here.) With this institutional background it is easy to see how people make sense of and use e-mail. E-mail communication is not something that should be shared. If the information in the e-mail were meant for all project members, it would have been e-mailed to all project members in the first place!

The different approaches to avoid the 250 MB e-mail quota illustrate enactment within sensemaking. Many people want to have their old e-mails available and act accordingly. They understand that if they for instance create an e-mail folder on their personal disk drive they will be able to store as much e-mail as they want. Several interviewees acknowledged that they understood why it was desirable from a corporate point of view to limit the size of the e-mail inboxes. However, this policy did not align with their everyday work.

OGC is involved in a number of projects with external partners. One of the key benefits of the new collaboration infrastructure was supposed to be improved collaboration with these partners.

In one project, involving about 50 project members, 10 from OGC and 40 from the external partner, they had simply not been able to use the MS SharePoint-based collaboration solution. It was too cumbersome to give the external project members access to the team site. Instead, after about 6 months of using OGC's solution, they formally applied for permission to use the partner's collaboration solution, which was based on MS Groove.

Getting permission to use the MS Groove-based solution was a lengthy process. First, they used the MS SharePoint-based solution for the aforementioned 6 months, mainly because it was company policy, all the while acknowledging that it was not really suited for the task. Second, as OGC is a large organization, it is also a quite bureaucratic organization: not until they got permission from the division leader were they able to use the MS Groove-based solution as a sort of pilot.

The project members considered the MS Groove-based collaboration solution quite useful. In most aspects it was considered superior to OGC's MS SharePoint-based solution. The more familiar interface, better file and folder organization, as well as better awareness functionality made MS Groove a better solution in such projects.

The last three paragraphs illustrate two points. First, people want to be loyal. They want to follow company policy, even if it is cumbersome and possibly not the most efficient solution. They understand the value and importance of standardization and uniformity to a large organization like OGC. Not until this standardization severely interferes with and hinders their day-to-day job do they look for alternatives. In such a situation, following company policy no longer makes any sense, and they are willing to abandon the path they were following.

Second, good design can build on the sensemaking processes already carried out. When they began to use the MS Groove-based solution the OGC employees received no training at all, nor did they get any form of assistance from central ICT support at OGC. They were on their own. They just started using it. But because the interface of MS Groove is so similar to what they were already used to from MS Windows XP, making sense of MS Groove did not pose any challenges.

This, again, shows the importance of plausibility. The users have expectations on how the system should be used. When they use the system and it responds as expected their understanding of the solution is strengthened.

Lastly, we will discuss the search engine. Most people we talked to at OGC were familiar with search engines and how to use them to retrieve information; Google and the Internet was the recurring example. People claimed to be able to find information on the Internet using Google. However, using the search engine within OGC was a completely different story.

Many complained about not being able to find what they were looking for when using it. Some believed that the problem was that the search engine did not prioritize the results, i.e. putting the most relevant hits at the top of the first page of the search results. Others believed the reason was that the search engine only indexed some of the information sources, and, thus, information from other sources was impossible to find through the search engine.

Some had simply given up on the whole search engine and relied on colleagues, i.e. networks, in order to find what they needed. Others were very particular about bookmarking locations as soon as they came across something they thought might be useful in the future.

When the search engine was introduced, people believed they would get a tool that would help them find all sorts of information. At first they tried using it. Most believed they had an understanding of how to use a search engine, based on their experiences with, for instance, Google. When the internal search engine did not behave as expected, it no longer made sense to the users. Instead of trying to understand it, i.e. make sense of it, many users simply stopped using it. They had nothing but Internet search engines like Google to compare it to, and it simply became meaningless.

All in all, the search engine and the inability to find useful information with it appear to be just a minor problem in people's everyday work. They were used to not having search engines and, therefore, made do without it, which is somewhat strange considering how dependent most Internet users are upon search engines.


7 CONCLUSIONS

This paper has presented a study of a collaboration solution in a large organization. It has used the theory of sensemaking in order to explain how this collaboration solution is understood by various users.

A number of characteristics of the collaboration solution are identified and described. Then the use of the collaboration solution is discussed with regards to these characteristics using the theories of sensemaking.

People's understanding of a new collaboration solution is largely based upon their previous understanding of similar systems. When the expected results do not occur, the continuous process of sensemaking enables them to gain an understanding of the solution. Every time their expectations of the system are met, users develop a stronger confidence in their understanding (i.e. process of understanding) of this system.

Further work is needed in order to provide a more thorough understanding of the various details outlined in this paper.

References

Bansler, J.P., E. Havn. 2006. Sensemaking in Technology-Use Mediation: Adapting Groupware Technology in Organizations. Comput. Supported Coop. Work 15(1) 55-91.

Dervin, B., J.D. Glazier, R.R. Powell. 1992. From the mind's eye of the user: The sense-making qualitative-quantitative methodology. Qualitative Research in Information Management. Libraries Unlimited, 61-84.

Dervin, B. 1983. An overview of sense-making research: Concepts, methods and results to date. International Communication Association Annual Meeting, Dallas, TX.

Grudin, J. 1989. Why Groupware Applications Fail: Problems in Design and Evaluation. Information Technology & People 4(3) 245.

Hulland, C., H. Munby. 1994. Science, stories, and sense-making: A comparison of qualitative data from a wetlands unit. Science Education 78(2) 117-136.

Klein, G., B. Moon, R.R. Hoffman. 2006. Making Sense of Sensemaking 1: Alternative Perspectives. IEEE Intelligent Systems 21(4) 70-73.

Klein, H.K., M.D. Myers. 1999. A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly 23(1) 67.

Landgren, J., U. Nulden. 2007. A study of emergency response work: patterns of mobile phone interaction. Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, San Jose, California, USA.

Orlikowski, W.J. 1992. Learning from Notes: organizational issues in groupware implementation. Proceedings of the 1992 ACM conference on Computer-supported cooperative work. ACM Press, Toronto, Ontario, Canada.

Randall, D., R. Harper, M. Rouncefield. 2007. Fieldwork for Design: Theory and Practice (Computer Supported Cooperative Work). Springer-Verlag New York, Inc.


Savolainen, R. 1993. The Sense-Making theory: Reviewing the interests of a user-centered approach to information seeking and use. Information Processing and Management 29(1) 13-28.

Walsham, G. 1995. Interpretive case studies in IS research: Nature and method. European Journal of Information Systems 4(2) 74.

Walsham, G. 2006. Doing interpretive research. European Journal of Information Systems 15(3) 320.

Weick, K.E. 1993. The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster. Administrative Science Quarterly 38 628-652.

Weick, K.E., K.M. Sutcliffe, D. Obstfeld. 2005. Organizing and the Process of Sensemaking. Organization Science 16(4) 409.


Paper IV


Tactics for producing actionable information

Torstein E. L. Hjelle and Eric Monteiro

Department of Computer and Information Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway

[email protected], [email protected]

Abstract. Exploitation and production of oil and gas involve significant human, environmental and economical risks. Everyday concerns – Should we drill further down? Maybe inject more water? Why is the well producing these amounts of sand? – rely on vast amounts of structured and unstructured information accessible in different formats, applications and platforms. This information is never perfect: there are missing/ unavailable information, inaccuracies and errors, in short, there are uncertainties tied to the information. Our primary question is, how do engineers – in practice – cope under these conditions? We contribute by (i) identifying tactics for making information credible and trustworthy enough for daily operations i.e. strategies to transform “mere” information into actionable information as well as (ii) drawing out practical implications of our analysis.

Keywords: Integration, trust, interdisciplinary collaboration

1 Introduction

If nothing else, the recent accident with the BP-operated oil drilling rig Deepwater Horizon in the Gulf of Mexico vividly illustrated the threats to human life, the environment and the economy posed by deep sea oil and gas production. Oil and gas companies are extremely information-intensive operations. They rely on truly vast amounts of information, e.g. seismic data, geological models, reservoir models, simulations, drilling logs, production measurements, intervention reports and process parameters, for their everyday operations. In principle, daily operations of an oil and gas field are based on "all" this information. In practice, there are a number of impediments. The amount of information is staggering, not to say prohibitive, for careful analysis. Moreover, the information held in the extensive number of databases, applications and platforms is in part fragmented, redundant and inconsistent, rather than as seamlessly integrated as advocates of technological fixes to integration tout. In addition, the information contains errors and/or inaccuracies, which means it cannot be trusted at face value. What, then, do practitioners do to cope?

The aim of this paper is to (i) identify tactics employed by users when producing actionable information from information that is overwhelming, poorly integrated and often inaccurate and (ii) to draw out practical implications of our analysis.


Empirically, we study the everyday work of two of the principal professional communities involved in oil and gas operations, viz. production and reservoir engineers. They plan and fine-tune the running of oil and gas wells on a daily basis. Our case company, OGC (Oil and Gas Company, a pseudonym for anonymity), is a 30,000-employee Fortune 500 company operating in 40 countries, a large international energy company with a focus on gas and oil production. OGC is the world's largest operator on ocean depths of more than 100 meters. Due to the limited potential for growth in its home region, OGC is actively seeking to grow globally.

The rest of the paper is organized as follows. Section 2 reviews relevant literature. Section 3 describes our research methods and the background for the case. Our analysis in section 4, followed by the discussion in section 5, is organised around strategies for coping with (i) missing or unavailable information, (ii) inaccurate or erroneous information and (iii) situations where previous knowledge does not apply. The last section offers concluding remarks.

2 Making operational sense of information in business organisations

The running of any business is based on extensive information. Recent interest in e.g. business intelligence (see the MISQ Special Issue on Business Intelligence Research [1]) is motivated by the prospect of having relevant information available at your fingertips, as it were. This would enable different types of analyses helpful to the running of the business, with sound actions subsequently based on them. The problem, however, is that such an approach abstracts away from real-world issues with fragmented, potentially inconsistent and inaccurate data. Business intelligence assumes that information quality is adequate, and proceeds to look at ways to massage, manipulate and present views of the information. We are concerned with situations where strong assumptions about information quality do not hold.

A key reason why at least larger business organizations struggle with data quality is the heterogeneity of the information: a large number of applications, modules, formats and databases that have been set up over many years. Such collections of information systems tend to be fragmented for historical and organizational reasons, and they stubbornly resist many attempts at integration. Or, "Rather, the information is spread across dozens or even hundreds of separate computer systems, each housed in an individual function, business unit, region, factory or office", as formulated by [2].

Technical approaches to the integration of large collections or portfolios of information systems dominate. "Integration has been the Holy Grail of MIS since the early days of computers in organizations", as pointed out by [3]. Over the last decades, a rich and expanding repertoire of technical mechanisms for integration has been proposed, from low-level (e.g., database schema integration), to middle-level (e.g., middleware like CORBA and Web services), to high-level (e.g., Service-Oriented Architectures (SOA)) solutions [4]. Yet organizational implementation lags significantly behind these promised returns [5-9]. The traditional approach to integration, in short, remains overly optimistic, prescriptive, and programmatic.

Some scholars have argued that tighter integration of information systems is unattainable, as it merely expresses an inherent socio-technical complexity [10]. A central tenet here is how integration triggers unintended consequences that fuel escalating complexity [11]. In short, this complexity makes information imperfect, or as Law dramatically phrases it: "There are always many imperfections. And to make perfection in one place (assuming such a thing was possible) would be to risk much greater imperfection in other locations... The argument is that entropy is chronic" [12].

Within such a theoretical perspective, the prospects of integrated information systems appear gloomy. In empirical cases, however, users display ingenuity and improvisations rather than despair. How do practitioners cope with these (inherent?) imperfections in the information in their everyday work? The literature points out a number of resources to draw on.

First, information underpinning decisions and action needs to be trusted. Not only does the content of the information matter, but also who produced it, i.e. the identity of the source. This reiterates [13]'s point about the deeply social character of knowledge in the sense that: "Insofar as knowledge comes to us via other people's relations, taking in that knowledge, rejecting it, or holding judgment in abeyance involves knowledge of who these people are. … What of relevance to credibility assessment do we know about them as individuals and as members of some collectivity?".

Second, robust knowledge is produced through collective and manual deliberations with peers. This involves, as [14] point out, an element of validating or sense-making of the different elements: “In summary then, the problem of integration of knowledge in knowledge-intensive firms is not a problem of simply combining, sharing or making data commonly available. It is a problem of perspective taking in which the unique thought worlds of different communities of knowing are made visible and accessible to others”.

Third, the kind of trust involved in knowledge work is not a static entity either present or absent. It is rather the performed achievement of a concerted and highly heterogeneous effort with actors, artefacts and other externalised knowledge representations. As pointed out by [15], “the perceived value of medical information is related to the perceived credibility of the source”. An important aspect of knowledge work, then, is to unpack how disembedded or externalised knowledge representations are rendered credible and trustworthy.

In sum, approaches to the use of operationally relevant information in business, such as business intelligence, tend to gloss over how information, in and of itself, is not immediately trustworthy, i.e. actionable. Users of information draw on social networks to filter the information. Still, a more detailed identification of the tactics employed is lacking for users such as the engineers we study inside OGC.


3 Case

3.1 Setting

Like most industries, the oil and gas industry is interested in maximizing profit. And as OGC has little influence on the price they get per barrel, the only way to maximize profit is to either produce more or reduce costs. As there is a finite number of oil fields available, production can only be increased by draining a larger portion of oil and gas from the existing reservoirs. Another option is to produce oil and gas more efficiently, i.e. produce more oil and gas per euro spent. A report from the national oil industry association indicated a pent-up potential of more than 30 billion Euros. Hence, an initiative to release this potential was kicked off early this century. The ambition was to increase recovery by supporting people in making better decisions faster through improvements in both collaboration and communication.

One outcome of this initiative has been an increased use of collaboration technology. Most meeting rooms have been equipped with big screens, projectors and interactive whiteboards (i.e. Smart Boards). Being able to use technology to get together, share and work on the same information was believed to improve collaboration and lead to better decisions. In addition, many meeting rooms were equipped with video conferencing facilities.

In Northern Europe OGC has 3 operations centres. Each centre operates a number of oil and gas fields in the North Sea. This research has followed a group of people responsible for two such oil and gas fields. This group is responsible for ensuring optimal drainage of the two fields combined with optimal utilization of the available processing facilities. They plan and follow up production both on a day-to-day basis and over longer terms. They are responsible for planning and implementing well operations in order to ensure stable and reliable production, as well as preparing weekly and yearly production plans. In short, they are responsible for getting as much oil and gas as possible from the reservoir under the seabed to the processing facilities either on a platform or onshore – given the limitations of each individual well, a group of wells, processing facilities or corporate policies.

All in all there are about 25 people working in this group. A number of disciplines are included: production engineers, reservoir engineers, petrophysicists, geophysicists and geologists. The focuses of the different disciplines vary. For instance, the production engineers focus more on the everyday production of oil and gas, whereas most of the others focus more on finding more oil and gas in new and existing reservoirs. The reservoir engineers are somewhere in between these extremes and contribute both to long-term planning and to short-term production.

All members of the group are co-located. They work in a large open plan office where the desks are grouped together in threes – with the exception of 4 production engineers who work in a separate collaboration room separated from the rest of the group by a large sliding door. In addition to the 4 workstations, each with three monitors, this room is equipped with 2 projectors, an interactive whiteboard and one 42-46" LCD monitor. The content on any of the monitors at the workstations can be displayed and shared on any of the large screens in the room. The room is also equipped with video conference facilities. This room is used for internal group meetings, as well as for meetings with people from outside the group when the meeting is related to production. Except for during meetings, the sliding door is most often left open.

The reason these production engineers sit in a separate room is mainly that they are responsible for the daily operations and interact more often with people from outside the group. For instance, it is the production engineers who most often interact with the people offshore on the platform or with the people controlling the process plants.

As mentioned, the group is responsible for two oil and gas fields. Both are quite new. The oldest began production in 2005 and is currently producing from 12 individual wells. The field is mainly a gas-producing field and produces very little oil. This field produces from a reservoir with higher pressure and temperature than any other field in the North Sea. The second field began production in the summer of 2009. It currently has 4 production wells in addition to one gas injection well. New wells are still being planned, drilled and put into production. This field produces mainly oil and is expected to be in production until 2029. Both fields are mainly subsurface installations connected to a common platform where the production is processed before being shipped away.

Fulfilling production goals on a week-by-week basis is in many ways what the group gets measured by. It is by far the most visible and easily understandable metric to measure success or failure. Working together to ensure good results is thus important. As mentioned, the production engineers handle the production aspect of the operations. One of their more important tasks is to produce, every week, a production plan for the coming week. Simply put, this plan tells how much oil and gas they expect to produce the upcoming week. Knowing this is important to other parts of OGC, for instance the units responsible for processing, transporting or selling the oil and gas. If they do not know how much oil and gas is to be produced, they will not know how to manage it.

In order to do their job efficiently the production engineers rely on good collaboration with the reservoir engineers in particular. The reservoir engineers create and maintain reservoir models – both for each individual well and for the entire reservoir. These models are continuously updated and modified. The main purpose of the models is to predict the behaviour of the wells and reservoir in order to know how much oil and gas a well can produce and at what rate.

To decide how best to run a well, the production engineers and reservoir engineers have a dedicated management forum that meets about once every two weeks for approximately two hours to discuss the current status of the field, each individual well and how to run it in the upcoming period. As input to these meetings the production engineers bring with them the current status of the field and wells, the production details for the last period and the results of any tests done in the last period. The reservoir engineers bring with them modified and up-to-date reservoir models.


3.2 Methods

This paper reports findings from an ongoing interpretive case study [16] where we look at how people conduct their everyday work in order to assign meaning and understand the rationale behind their actions.

3.2.1 Data Collection

Data collection has consisted of (i) observations, (ii) semi-structured interviews and (iii) document analysis and has been conducted from early 2007 to August 2010.

Observations

Observations began in earnest in March 2009 when we got access to a group of engineers working with oil and gas production. During our observation period we visited OGC on about 110 days. During the observations we followed their daily work routine. We were allowed to sit in during meetings, both internal meetings within the group and meetings with external partners. In total, more than 375 meetings, ranging from 3-minute-long status updates to day-long work sessions, were observed. Table 1 summarizes the meetings observed.

Table 1. Summary of types of meetings observed

Meeting type | Frequency | Duration | Participants | Purpose
Control room meeting | Daily | 15-25 minutes | 8-12 | Production related events last and coming 24 hours
Platform meeting | Daily | 15 minutes | 18-22 | Platform related events last and coming 24 hours
Petroleum technology | Daily | 3-15 minutes | 15-30 | Summary of the two previous meetings with focus on petroleum technology
Production meeting | Weekly | 1-2 hours | 15-20 | Planning activities and operations for the coming week
Reservoir meeting | Bi-weekly | 2 hours | 6-12 | Status of fields and wells, planning activities for the next period
Various meetings | Occasional | 5 minutes – 6 hours | 4-20 | E.g. reservoir drainage strategy workshop

When not in meetings we were given access to work stations in the engineers' open plan office where we could work while still being able to be a part of the surroundings. This way we got the opportunity to observe how the engineers worked in their everyday work.

During observations, handwritten notes were taken. The notes were then written out either after the meetings or at the end of the day. Thoughts and reflections made during the observations were written down in a separate column in our notes. Questions were asked to clarify and elaborate findings, as deemed important by [17]. In order not to disturb meetings, these questions were most often asked while walking to or from meetings, or during lunch or breaks.

During these meetings, our role was purely observational, with one exception: the bi-weekly reservoir meetings. This forum was established during our period of observation, and the group leaders wanted our input in order to make the forum as good as possible. So, at the end of each meeting, we spent a few minutes commenting on the meeting structure, organization, flow, etc.

Interviews

The second method of data collection has been semi-structured interviews. The initial interviews were quite open-ended and targeted at ICT professionals and lower to middle management. The aim was to get an understanding of various tools and systems available within the collaboration infrastructure and the rationale behind the decisions to implement and roll out the solutions they did.

The latter part of the interviews has been with the engineers at the operations centre. Here we have interviewed a variety of both junior and senior engineers within the disciplines of production, reservoir and process.

In total we have conducted 26 formal interviews, 14 in the first part and 12 in the latter, lasting from 1 to 3 hours. Only 8 of the interviews have been recorded, but as more than one researcher was present in most interviews, we divided the task so that one person focused solely on writing down what was being said, and thus we have to some extent compensated for the lack of recordings. Immediately after each non-recorded interview, we went through the notes together and clarified uncertainties.

Document Analysis

The third data collection method has been document analysis. We carried out an extensive study of presentations, formal descriptions of work processes, plans and strategies, both related to the collaboration infrastructure and to oil and gas production. This analysis gave us a good understanding of the information infrastructure and the possibilities and limitations it sets.

3.2.2 Data Analysis

Our data analysis is a never-ending, continuous process. By overlapping data analysis and data collection we have been able to achieve added flexibility [18]. Our field notes have been very important. As suggested by [19], we have separated our 'raw' data from our own comments, reflections and questions. As researchers we are influenced by our previous experiences and backgrounds, and this has influenced our analysis. Through inductive work, our analytical categories have materialized from internal discussions, discussions with researchers at OGC, and readings of the field notes.

The first-order conceptualizing [19] began at the field site. Often, our reflections on the observations became the basis for further discussions and issues to pursue. Our data was manually coded and in turn categorized, similar to the process of constant comparison found in grounded theory. Using a bottom-up categorization strategy focusing on functions of quality practices, we developed the interpretive template shown in Table 2, which we use in the subsequent section.

Discussing early findings with both the engineers and other researchers provided valuable validation as the engineers could confront our interpretations. These sessions have helped ensure that we have understood the engineers’ world as thoroughly as possible.

4 Analysis

Even though the production engineers and reservoir engineers have access to a lot of different tools and systems that provide them with data, they still have challenges they need to sort out in order to do their job as well as possible. To meet these challenges the group has developed a series of strategies and workarounds.

4.1 Filling the Gaps

During our time with the engineers we observed them having a number of problems with various types of measurement equipment. One such occasion was shortly after production began on the newest oil and gas field. Just a few weeks after start-up, contact with the subsea measurement equipment on two wells was lost. As these two wells were connected to the same production line (together with two other wells), there was no way of determining the individual properties of the two wells without closing down at least one well – and thus losing production. For instance, they were not able to measure how much oil and gas the two wells produced respectively, nor could they measure the pressure or temperature of the two wells individually. As the wells had only been in production for a couple of weeks, they had very little historical data. This meant that they had very little knowledge of the situation that they could use to compensate for the broken equipment.

Table 2. Our interpretative template derived through a combination of bottom-up, open coding and classification of data with deductive elements

Construct | Evidence, example
Filling the gaps | "We probably don't have a clue what the gas-oil ratio is, and at what rate."
Coping with inaccuracies | "Just ignore this point. It is an inaccurate test."
Dealing with the unexpected | "Well W3 doesn't produce water", when faced with measurements suggesting so.

As it is important for the reservoir engineers to know how much oil and gas each well is producing in order to create and maintain accurate reservoir models, the lack of information was often discussed during meetings.

In one reservoir management meeting a couple of weeks after the loss of the equipment, the leading reservoir engineer addressed the problem of the faulty equipment and how to compensate for it. He wanted to know how long it would take to get the equipment repaired or replaced. The production coordinator informed him that, according to the last information he had, it would take several months.

The discussion then revolved around how they would cope until the situation was mended. As the leading reservoir engineer pointed out, the time shortly after start-up is important to the reservoir engineers in order to see if things are evolving as expected. If not, they would have to update their models and projections. "We probably don't have a clue what the gas-oil ratio is, and at what rate." He then suggested closing down one of the wells with the faulty equipment for a short time in order to get the missing data. If they closed down one of the two wells, they would at least be able to measure how much oil and gas the wells produced: taking the total for the three open wells and subtracting the production from the two wells with functioning equipment would give the production rate for the third well, and taking the total production from all four wells and subtracting the total for the three wells would give the production from the fourth well.
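To make the arithmetic behind this suggestion explicit, the following is a minimal sketch with purely hypothetical rates and variable names (no such script figured in the meeting); note that a back-calculation of this kind only recovers flow rates, not the downhole pressures and temperatures.

```python
# Minimal sketch (hypothetical numbers and names) of the back-allocation logic the
# engineers described: with one of the two un-metered wells shut in, individual
# rates for both can be recovered by subtraction from the line totals.

# Measured quantities (illustrative values, e.g. standard cubic metres per day)
total_all_four = 5200.0   # total at the shared line, all four wells open
total_three = 3900.0      # total with one faulty-equipment well shut in
rate_well_a = 1400.0      # well with working subsea metering
rate_well_b = 1100.0      # well with working subsea metering

# The still-open faulty-equipment well is the remainder of the three-well total.
rate_faulty_open = total_three - (rate_well_a + rate_well_b)

# The shut-in well's rate is the difference between the four-well and three-well totals.
rate_faulty_shut = total_all_four - total_three

print(f"Estimated rate, faulty well kept open: {rate_faulty_open:.0f}")
print(f"Estimated rate, faulty well shut in:   {rate_faulty_shut:.0f}")
```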

However, the leading production engineer argued that the benefits would not outweigh the production loss: they would not get any information about the pressures and temperatures in the wells, and that was the most important information. He ended by saying "We are too hung up on details on [field two]. We have a lot of data. If [field two] had been a field without down-hole gauges and multiphase meters, nobody would have worried about this."

The production coordinator, whose screen was being displayed on a projection screen, used instant messaging to ask the operations engineer whether some maintenance planned for the next week would cause any production restrictions. The operations engineer answered that it was likely. The production coordinator then suggested that they use this opportunity to close down one of the wells and get the measurements. The leading reservoir engineer accepted this as a compromise, but insisted that they should push to have the equipment repaired or replaced quickly.

4.2 Coping with Inaccuracies

For a period of several months the group had problems with detectors registering that one of the wells was producing sand. Sand in this setting is pieces of solid matter, i.e. tiny fragments of rock from the reservoir. Getting sand into the production is problematic because of the sand's eroding properties. Sand passing through the pipes at high speed would over time grind down the insides of the pipes, eventually breaking through a pipe and causing a leak. Depending on the location of the leak, this, in turn, could possibly lead to a dangerous situation onboard the platform, or a major spill with severe environmental consequences.

During one of the reservoir management meetings the group tried to figure out what was going on. This meeting was held in a room (see Figure 1) with 6 workstations, all facing three projection screens on a wall. Each workstation had 2 monitors. All monitors could be displayed on each of the projection screens. The leader, the production coordinator, two reservoir engineers and two production engineers occupied the workstations. Two more reservoir engineers and one production engineer sat on chairs between the people at the workstations.

The production coordinator began by summarizing what had been done to solve the problem: the well had been run on a separate line of the processing plant and no sand had been found. The detector had been reconfigured – to no avail. Then the detector had been replaced, but they still got indications of sand being produced.

Next, the production engineer responsible for that well pulled up a PowerPoint presentation on one of the projection screens. This presentation contained the analyses of the results of all tests conducted on that well since it began production. The presentation contained about 80 slides, but the production engineer quickly skipped to the last slide. Here he had inserted a screen dump from one of the production engineer's tools. The slide showed a graph with a number of lines (see excerpt in Figure 2). Annotated arrows pointed to specific positions on the graphs. The production engineer referred to various points on the graphs and explained that nothing else, i.e. temperature, pressure, oil-to-gas ratio or water content, had changed since the apparent occurrence of sand. One point on the graph did, however, not match the rest, but he dismissed it saying "Just ignore this point. It's an inaccurate test." If the well had in fact been producing sand, he would have expected some changes.

Figure 1. Layout of meeting room


The leader running the meeting then intervened and said “Maybe we are trying too hard to prove that it is not producing sand. Maybe we should try to prove that it is producing sand?” before asking if anybody else had something that could shed light upon the situation.

Then the reservoir engineer responsible for the reservoir area in question put his monitor on the projection screen. He showed a model of the reservoir and explained that, according to their models, there were no reasons for the well to produce sand. Other wells at similar depths in the same area did not produce any sand.

The production coordinator then summarized the situation, stating that since they had not found any physical sand from the well in the process plant, and since they had no other indications of sand production, they should conclude that there was no actual sand production from the well and that they had a false positive caused by some sort of interference. However, they would have to pay attention to the situation if something changed. Nobody disagreed.

4.3 Dealing with the Unexpected

Most of the time the engineers utilize their previous understanding of and knowledge about their oil and gas field in order to conduct their everyday work. However, sometimes their understanding and knowledge is either not comprehensive enough or simply not correct. When tweaking and adjusting information and models does not yield a sufficient understanding they have to use more drastic measures.

The reservoir engineers create, modify and maintain various models of both the entire reservoir and the individual wells. They also have models of various parts of the reservoir.

Figure 2. Excerpt of slide presented by production engineer

During a particularly intense reservoir management meeting where the engineers had difficulties agreeing on how to run the field in the upcoming period, one of the more senior production engineers commented upon a graph in one of the reservoir engineers' models that showed water production from well W3 (a pseudonym). "Well W3 doesn't produce water", he said. The reservoir engineer responsible for this area of the reservoir then explained that there was something wrong with the reservoir models for that area. Their initial prognoses showed that the reservoir would be so depleted within 4 to 6 months after start-up that W3 would begin producing water. This had not happened. Whenever the models for the well were adjusted, the graph for water production was simply shifted forward in time (see Figure 3).

A senior reservoir engineer then said that W3 was not the only well that did not produce water as anticipated. This was also the case with other wells in this part of the reservoir. But, as he said, "The longer it takes before we begin producing water, the better". He then said that the normal history matching, or fine tuning, that they did on a regular basis did not manage to cater for this inaccuracy and that they needed to redo the reservoir models for the wells. However, they had not yet had the time to do it as other tasks had had to be prioritized.

As this meeting happened to be the first reservoir management meeting a newly hired production engineer had attended, the leader running the meeting asked the senior reservoir engineer to say a bit about this task and the possible reasons for doing it.

Figure 3. Graph showing plot indicating water breakthrough

The reservoir engineers' task in such cases was to continuously adjust the various reservoir models by fine tuning them. Fine tuning a model involved matching the model with historical production data. Simply put, they would plot the various production data on a graph and then tweak, knead and stretch the models to fit those data as well as possible. However, sometimes it would be impossible to match actual production data with the model, as in the case of water production on W3. In order to correct for this, they would have to re-work the entire model. This was a much bigger task and required more time than they had available at the moment. If they needed to have it done quickly, it had to be initiated by management.
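As a loose illustration of what fine tuning against historical production data amounts to, the sketch below fits a single parameter of a toy exponential-decline model to invented rate data; it is only a stand-in for the far richer reservoir simulation models the engineers actually work with.

```python
# Illustrative history-matching sketch: adjust one model parameter so simulated
# production tracks historical data. The decline model and all numbers are hypothetical.

import numpy as np
from scipy.optimize import minimize_scalar

# Historical daily oil rates (illustrative)
days = np.array([30, 60, 90, 120, 150, 180])
observed_rate = np.array([980, 910, 860, 800, 755, 710])

def simulated_rate(decline_rate, t, initial_rate=1050.0):
    """Simple exponential-decline proxy for a full reservoir simulation."""
    return initial_rate * np.exp(-decline_rate * t)

def mismatch(decline_rate):
    """Sum of squared differences between model output and production history."""
    return np.sum((simulated_rate(decline_rate, days) - observed_rate) ** 2)

# "Tweak" the parameter until the model fits the history as well as possible.
result = minimize_scalar(mismatch, bounds=(1e-5, 0.05), method="bounded")
print(f"History-matched decline rate: {result.x:.5f} per day")
```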

He further explained that there were many reasons why the models could be wrong. It could be that the initial models were simply wrong, i.e. they had an erroneous understanding of the reservoir's physical properties and had created a model with the wrong parameters. It could also be the case that the production from the reservoir had not been as anticipated. For instance, the start-up of other production wells may have been delayed, and thus there would be more oil and gas left in the reservoir at the point in time when the model suggested the well should start producing water.

In this case, however, the senior reservoir engineer said that they suspected there were some unexpected connections between two parts of the reservoir initially believed to be completely independent. When they had started injecting gas into this other part of the reservoir, they had gotten indications of a pressure build-up in the area with W3 as well. That is, there had to be some movement of fluids between the two areas. This was totally unexpected and not included in the initial models.

5 Discussion

As shown in the analysis above, oil and gas production is highly knowledge intensive work. The engineers rely on data and information from various sources and use their knowledge and understanding accumulated through experience.

The engineers need a variety of information in order to do their job. Through various ICT tools and systems they get access to this information. In an ideal world they would have access to a complete and accurate set of data they could use in order to produce oil and gas optimally. However, the information the ICT systems provide is neither complete nor accurate. In order to compensate, the engineers utilize various strategies and tactics to ensure the quality and reliability of the information.

As it is nearly impossible to avoid equipment breaking down, especially in difficult environments like oil and gas reservoirs where equipment has to withstand high fluctuations in both temperature and pressure, one cannot simply rely on the reliable availability of information. It is important to be able to cope when the information is not available. Though the systems do not provide redundancy as such, through domain knowledge and experience the engineers are able to calculate or estimate the missing information well enough to get the job done. Being able to do so ensures that the engineers are able to run the wells in a safe and secure way, jeopardizing neither the well integrity nor the safety of people or the environment.

Due to the complexity of oil and gas production it is difficult to ensure that the equipment produces accurate information. For instance, identical sensors connected to the same pipe can yield different results simply based on the placement of the sensors, for example whether the sensors are placed before or after a bottleneck or on the inside or outside of a bend. This means that the engineers cannot routinely trust the information they get. Instead they have to validate the information before trusting it. As it is impossible to ensure 100 per cent validity or reliability, the engineers learn through experience and domain knowledge when they have sufficiently validated the information.

Hands-on experience with an oil and gas reservoir with high temperatures and pressures deep below the seafloor is impossible. Thus models and simulations provide the engineers with their understanding of what is going on in the reservoir. As the understanding changes when they get more experience with the reservoir it is natural that the models need tweaking and at times major revisions.

6 Conclusions

Having all information, as it were, at your fingertips and trustworthy at face value seems a highly unlikely scenario as Porter convincingly argued [20]. This image abstracts away from the shortcuts, imperfections and glitches so typical for real-world knowledge-based, information intensive work. There are accordingly two ways to read the implications of our study.

On the one hand, the efforts to create perfectly accurate, available, consistent and complete information that decision- and action-taking can automatically trust seem futile [12]. Information is captured and stored in information systems for targeted purposes, which makes reuse of the same information later for different purposes challenging. For instance, during drilling you log details about drill pace and sediments, whereas in the later stages of production and reservoir planning you are interested in the exact location where the well enters the reservoir.

On the other hand, the users are highly skilled and competent in working out what we in this paper describe as socio-technical strategies for assessing the uncertainties tied to the information. The futility of the efforts towards "perfect" information pointed out above does not represent as bleak a prospect as it may appear. The users cope quite well with imperfect information, even with complex and risky decisions; the strategies or heuristics employed compensate for the shortcomings of the information to create a socio-technically robust system for risk assessment that goes well beyond nominal notions of "managing" risk. As part of their everyday practices, the production and reservoir engineers in our study balance the conflicting pressures stemming from increased short-term productivity, safety norms and professional judgment; they "shoulder the risk", to use Perin's phrase [21], by competently juggling the competing agendas.

In short, in order to conduct their work the engineers’ knowledge must consist of general knowledge about oil and gas production, as well as local knowledge of the specific reservoirs and wells [22].


References

1. Chen, H., R.H.L. Chiang, and V.C. Storey, eds. Business Intelligence Research. MISQ Special Issue. Forthcoming.

2. Davenport, T.H., Putting the enterprise into the enterprise system. Harvard Business Review, 1998. 76(4): p. 121.

3. Kumar, K. and J.v. Hillegersberg, Enterprise resource planning: introduction. Commun. ACM, 2000. 43(4): p. 22-26.

4. Chari, K. and S. Seshadri, Demystifying Integration. Communications of the ACM, 2004. 47(7): p. 59.

5. Goodhue, D.L., M.D. Wybo, and L.J. Kirsch, The Impact of Data Integration on the Costs and Benefits of Information Systems. MIS Quarterly, 1992. 16(3): p. 293.

6. Hanseth, O., C.U. Ciborra, and K. Braa, The Control Devolution: ERP and the Side-effects of Globalization. The DATA BASE for Advances in Information Systems, 2001. 32(4): p. 34-46.

7. Kallinikos, J., Farewell to Constructivism: Technology and Context-Embedded Action, in The Social Study of IT. 2004.

8. Pollock, N. and J. Cornford, Customising industry standard computer systems for universities: ERP systems and the university as a 'unique' organisation. Information Technology & People, 2004. 17(1): p. 31-52.

9. Singletary, L.A., Applications Integration: Is it Always Desirable?, in Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04) - Track 8 - Volume 8. 2004, IEEE Computer Society. p. 80265.1.

10. Hanseth, O. and C. Ciborra, eds. Risk, complexity and ICT. 2007, Edward Elgar Publishing.

11. Perrow, C., Normal accidents: Living with high-risk technologies. 1999, Princeton, N.J.: Princeton University Press. x, 451 pp.

12. Law, J. Ladbroke Grove, or How to Think About Failing Systems. 2003; Available from: http://www.lancs.ac.uk/fass/sociology/papers/law-ladbroke-grove-failing-systems.pdf.

13. Shapin, S., A Social History of Truth: Civility and Science in Seventeenth-Century England (Science and Its Conceptual Foundations series). 1995: University Of Chicago Press.

14. Boland, R.J., Jr. and R.V. Tenkasi, Perspective making and perspective taking in communities of knowing. Organization Science, 1995. 6(4): p. 350.

15. Cicourel, A.V., The integration of distributed knowledge in collaborative medical diagnosis, in Intellectual teamwork: social and technological foundations of cooperative work. 1990, L. Erlbaum Associates Inc. p. 221-242.

16. Walsham, G., Doing interpretive research. European Journal of Information Systems, 2006. 15(3): p. 320.


17. Klein, H.K. and M.D. Myers, A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly, 1999. 23(1): p. 67-93.

18. Eisenhardt, K., Building Theories from Case Study Research. Academy of Management Review, 1989. 14(4): p. 532-551.

19. Maanen, J.V., Tales of the Field. 1988: The University of Chicago Press. xvi,173pp.

20. Porter, T.M., Trust in numbers: the pursuit of objectivity in science and public life. 1995, Princeton, N.J.: Princeton University Press. xiv, 310 s.

21. Perin, C., Shouldering Risks: The Culture of Control in the Nuclear Power Industry. 2004: Princeton University Press. 352.

22. Perby, M.-L., Computerization and Skill in Local Weather Forecasting, in Knowledge, Skill and Artificial Intelligence, B. Göranzon and I. Josefson, Editors. 1988, Springer-Verlag Berlin Heidelberg. p. 39-52.


Paper V



Joining a Community: Strategies for Practice Based Learning

Torstein Hjelle, Nord-Trøndelag University College, Steinkjer, Norway, [email protected]

Thomas Østerlie, NTNU Social Research, Trondheim, Norway, [email protected]

Abstract

Oil and gas production involves potential danger to human lives, the environment and the economy. Everyday decisions and actions can lead to catastrophe. Production engineers are a group of people responsible for getting oil and gas from the subsurface reservoir to a processing plant. In order to do this in an efficient and safe manner, the engineers require a set of skills that we argue they cannot get from education alone. In order to become proper production engineers, they have to become members of a community of practice. A knowledgeable production engineer must acquire a contextual understanding of the world of oil and gas production. However, this world changes from oil field to oil field. Hence, it is important that new engineers are initiated into the existing practices of the organization. Our primary question is: how are new members introduced to the community practices in order to become knowledgeable engineers? We contribute by characterizing strategies and mechanisms for helping newcomers become knowledgeable members of a community.

1. Introduction

The disaster with the BP-operated drilling rig Deepwater Horizon in the Gulf of Mexico in April 2010 clearly illustrated how oil and gas production can be a potentially dangerous undertaking with risks to human lives, the environment and the economy. Oil and gas production is an extremely information-intensive operation, and faulty or unreliable information can lead to severe consequences. Engineers working with oil and gas production rely on a vast amount of information, ranging from theoretical models, simulations and logs to intervention reports and production measurements. In order to make good decisions the engineers should ideally consider all this information. However, this information is not readily available to them. The information is dispersed across a huge number of databases, applications and platforms in a fragmented, redundant and inconsistent maze of systems. To make decisions, the engineers must first retrieve the available information and filter it before they can begin to render the data in order to build the knowledge necessary for sound decision making. When a new engineer is introduced to these tasks, he/she has to be introduced to the practices used by the rest of the group in order to do the job.

How to organize or mobilize for problem solving and learning across geographically distributed settings and communities has long been an important issue for both research and practice. Within the oil industry this is commonly referred to as integrated operations (IO). Given that knowledge in many companies today is increasingly distributed across technological systems, people and organisational boundaries, IO denotes a commitment towards creating radically new and more effective ways of working and learning. A fundamental issue in this respect is how to collaborate across different boundaries [1] or Communities of Practice [2], and what kinds of technical and social arrangements provide a better context for learning and working to take place. Our perspective on learning addresses how a network of people and tools (i.e. material entities) may change as a co-construction. This perspective, from Science and Technology Studies [3], is combined with a pragmatic view on knowledge. Learning and working in this perspective emphasize the network of actors, human and non-human, from which knowledge is created and shared, rather than individuals, methods, or particular systems. Efforts to establish new arenas for sharing knowledge and solving problems have to foster a process which is iterative and continuously evolving, where members interact with each other, share experiences and take action.

The key question in our study is: how are new members introduced to the community practices in order to become knowledgeable engineers? We illustrate how socio-technical strategies in combination with organizational learning are used in such settings.

The empirical setting for this paper is a group of highly specialized engineers working with oil and gas production within a large international oil and gas company (dubbed OGC for anonymity).

The rest of the paper is organized as follows: Section 2 reviews the challenges of knowledge sharing within groups of highly specialized experts. Section 3 describes our research method and approach. Section 4 introduces the case. In section 5 we present our analysis centered around strategies for initiating someone into a community in order to provide them with the foundation for creating context specific knowledge. In section 6 we discuss our findings, while section 7 offers our concluding remarks.

2. Knowledge Work in Specialized Communities

On the one hand, we are becoming increasingly aware of the important role knowledge plays in everyday work [4-5]. On the other hand, new technologies open up for increased codification and physical fragmentation, and the potential of distributing the overall knowledge of work across several actors and locations [6-9]. A fundamental question then is what mechanisms are established to enable the sharing of knowledge when existing work practices are facing new technologies.

From a technological point of view, sharing knowledge is a question of capturing and codifying the content of knowledge. Only then can it be made usable across contexts. Typically, knowledge management tools such as experience factories, semantic web systems and organizational intranets have been applied to enable knowledge sharing. Such a perspective, often underlying the design and development of 'new technologies', has however been widely criticized as it neglects the interactive and narrative side of knowledge (see e.g. [4, 10-13]).

The problem with the technological perspective mentioned above is that it downplays, to the point of non-existence, the contextual side of knowledge (see e.g. [14-15]). In the same way, the human interaction perspective tends to disregard the role of codified representations of knowledge [16]. In this paper we do not engage in a debate about one or the other, but appreciate both as important to the knowledge sharing discourse [17-19]. We take a pragmatic approach and conceptualize knowledge as the ability to act (see e.g. [11, 20]), and explore how 'heterogeneous' representations of knowledge (i.e. both codified and narrative forms) are brought together in specific practices.

2.1 Communities of Practice

There is a discrepancy between the way people conduct their actual, everyday work and the way the organization describes the same work in training, formal descriptions, organizational charts and job descriptions [17]. The concept of Communities of Practice [2] is an often-used approach to increase understanding of the activities and processes taking place in work, as well as to put focus on the kinds of social engagements required.

The concept of Communities of Practice was based on the fundamental belief that separating theory from practice is unfortunate [21]. Instead it is argued “that learning should be contextualized, by acknowledging its presence and allowing it to continue to an integrated part of work” [22].


According to [23], "Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly." That is, a group of engineers working together within a business organization to get the optimal quantity of oil and gas from a reservoir below the seabed to a platform would constitute a CoP – given that the three crucial characteristics of a CoP are met:

1. The Domain: A CoP is not just any group of people. The group must have an identity through a common domain of interest, or goal. That is, the members of the community must be committed to this goal.

2. The Community: In order to achieve their goals the members of the community must interact with each other; they must engage in joint activities and discussions, in addition to helping each other and sharing information. Their interpersonal relationships enable them to learn from each other. It is, however, important to note that the community members do not have to work together on a daily basis, but that their interactions are vital in making them a community of practice.

3. The Practice: To constitute a community of practice the members must be practitioners. Through their interactions; their experiences, tools and views, the members of the community develop a shared practice over time. This is an ongoing and continuous process. If the group stops interacting, their practices will in time deteriorate.

Developing this shared practice is done through a series of activities like solving problems, sharing information, utilizing expertise, reusing resources, coordinating, discussing and documenting.

A community of practice relies heavily upon each individual member's understanding of who the members of the community are, what behaviour is acceptable within the community, what roles the various members have and what conventions apply. Each member's understanding of the community is an ongoing process that evolves as the community evolves through collaboration and experience. Through mediation the community settles on a shared understanding based on accumulated knowledge and experience.

2.2 A practice based perspective on knowledge sharing and learning

Practice implies doing and is the situatedness of all human action [24]. It is fundamentally different from the way organizations describe that work in manuals, organizational charts, and job descriptions [17]. Emphasis is on the Communities of Practice [2] where knowledge sharing takes place, rather than on individuals, methods, particular systems or single projects. According to [15], strategies for supporting knowledge sharing, even in large-scale communities, cannot discount the interactional human-to-human processes through which it is nurtured. For instance, in practice, information from the electronic patient record, clinical specific systems and other systems is often copied and printed out on paper to become usable in the everyday work [25].

In this paper we apply a practical and contextual perspective on knowledge [13], highlighting the active and productive processes of knowledge as in "sense-making, in which the unique thought worlds of different communities of knowing are made visible and accessible to others" [10]. By this we do not imply that technologies to embed knowledge entities are misplaced. Rather, our argument is that these are always dematerialized knowledge entities. People's ability to make sense of them is thus intrinsically tied to the specific socio-technical setting through which they are recorded and actually used. As argued by [26]:

“Medical technologies and artefacts are located ethnographically and historically in the practice of designing or using the technology. Distinct from other related theories, where technology is considered to be a passive mediator of human action”

We lend ourselves to a socio-technical perspective and consider knowledge as a network of interdependent entities where "individual pieces [of knowledge] are linked together into complex structures in various ways" [27]. Knowledge sharing, then, is a collective, heterogeneous and ongoing accomplishment, distributed, delegated and coordinated across time and space (see e.g. [28-29]). Making knowledge sensible across contexts requires work, articulation work. As argued by [30]:

“(…) disentangling the data from their primary contexts is possible; however, this involves a translation from one context to another, and this translation requires active work”.

Our approach thus holds that i) knowledge sharing is a process of translation, because knowledge entities always undergo change when used in different contexts; ii) these are heterogeneous processes; and consequently, iii) the boundary between computer-based and paper-based technologies is blurred.

3. Method

This paper reports from a longitudinal research project that began in early 2007. Our research can be classified as an interpretive case study [31] as we "attempt to understand phenomena through the meanings that people assign to them" [32].

We began our data collection activities in early 2007, seeking to explore the changes introduced into OGC by the implementation of a new collaboration solution based on the Microsoft SharePoint platform. Through semi-structured interviews, observations and document analysis we gained insights into and understanding of the technological complexity, as well as the overwhelming size, of OGC's collaboration infrastructure. In this phase of our research our main informants were IT managers, administrators and developers. The majority of this research was conducted at one of OGC's three research centres, where we were granted access both to the building, where we were given office space, and to the people working there. We got to interview people in roles such as technology managers, human resources staff and researchers – within both technology and organizational development.

In March 2009 we were introduced to a group of production engineers working at a nearby operations centre through a workshop at OGC's research centre. Shortly afterwards we were granted access to this operations centre. We were given the opportunity to visit the operations centre and observe the production engineers in their daily work. During the next 15 months we visited the operations centre on about 110 days. We were allowed to sit in during meetings, both internal meetings within the group and meetings with external partners. In total, more than 375 meetings, ranging from 3-minute-long status updates to day-long work sessions, were observed.

When not in meetings we were given access to work stations in the engineers’ open plan office where we could work while still being able to be a part of the surroundings. This way we got the opportunity to observe how the engineers worked in their everyday work.

During observations, handwritten notes were taken. The notes were then written out either after the meetings or at the end of the day. Thoughts and reflections made during the observations were written down in a separate column in our notes. Questions were asked to clarify and elaborate findings, which is very important [32]. In order not to disturb meetings, these questions were most often asked while walking to or from meetings, or during lunch or breaks.

During these meetings our role was only to observe, with one exception: the bi-weekly reservoir meetings. This forum was established during our period of observation, and the group leaders wanted our input in order to make the forum as good as possible. So, at the end of each meeting, we spent a few minutes commenting on the meeting structure, organization, flow, etc.

The second method of data collection has been semi-structured interviews. The initial interviews were quite open-ended, while the later interviews have been more targeted at specific situations and challenges.

In total we have conducted 26 interviews lasting from 1 to 3 hours. Only 8 of the interviews have been recorded, but as more than one researcher was present in most interviews, we divided the task so that one person focused solely on writing down what was being said, and thus we have to some extent compensated for the lack of recordings. Upon completion of the non-recorded interviews, we immediately went through the notes together in order to clarify uncertainties.

The third data collection method has been document analysis. We carried out an extensive study of presentations, formal descriptions of work processes, plans and strategies, both related to the collaboration infrastructure and to oil and gas production. This analysis gave us a good understanding of the information infrastructure and the possibilities and limitations it sets.

When it comes to data analysis, this is a never-ending, continuous process. Being able to discuss findings with researchers working on similar topics has been very useful for challenging each other with regard to our understanding of the situation at hand. Having different backgrounds, we naturally have different points of view and thoughts about what we see and experience. Also, having established a close relationship with several of OGC's researchers has given us another arena in which to discuss our findings.

Our data was initially classified into quite broad containers, for instance "technological aspects", "common misunderstandings", "communications" and "numbers". In the next iteration new containers would appear as we gained a better understanding of the data and the context. This classification is not able to cover all possible details, nor does it provide a clear divide between the different containers, but to us – with our qualitative approach – it did the job.

The process of verifying the validity of our data has been continuous as well. The nature of our interviews, i.e. semi-structured and open-ended, has opened up for more of a two-way conversation, rather than a pure question-answer session. If something has been uncertain we have rephrased our question or asked the interviewee if he/she could explain further. In addition, our rather unique access to the organization, with work space within their offices, has, as mentioned, opened up for informal chats and discussions. Bringing up something that we have found interesting during, for instance, lunch enables us to get other people's opinions and views, thus strengthening or weakening our understanding of the topic at hand.

4. Case

4.1 Context and History

OGC was established in the early 1970s and has since grown from being a small regional operator in Northern Europe to become a large Fortune 500 company with about 20000 employees and operations in 34 countries across 4 continents. The growth of the company has been both organic and through mergers and acquisitions. Due to the limited growth potential in the home market, OGC is currently expanding internationally.

As the company has grown in size, so has the need for a good information infrastructure, good tools and good collaboration solutions. A number of corporate-wide initiatives to improve communication and collaboration have been undertaken. In the early 1990s the information infrastructure had become decentralized and fragmented to such a degree that a project to improve the situation was implemented [33]. This implementation was based on a Lotus Notes collaboration solution.

The Lotus Notes solution was widely used within OGC – and the Lotus Notes Arena databases in particular were successful in facilitating collaboration within projects. However, one major challenge with the Lotus Notes infrastructure was communication across different projects. The Arena databases had no centralized indexing functionality, meaning it was impossible to retrieve a document by searching if one did not know exactly which database to search. Internal estimates suggested that within 10 years of operation OGC had more than 5000 Arena databases within their Lotus Notes infrastructure. OGC also produced more than 300 000 new documents each month. In such an environment, finding a piece of information was definitely non-trivial.

In 2001 a new strategy to improve collaboration and communication was introduced to combat the limitations of Lotus Notes. In 2003 a decision was made to implement a new collaboration solution based on Microsoft SharePoint technologies. During the next 2 years the new solution was implemented throughout the company.

Initially, OGC wanted an out-of-the-box solution that would require little or no user training. However, they quickly realized that an out-of-the-box implementation would not fit their needs. They chose to make it as generic as possible in order for it to fit most contexts, but also introduced a custom classification schema in order to facilitate future information retrieval.

The core element of the new infrastructure was a team site, i.e. a virtual arena for collaboration. This is where people would store their documents, relevant emails and other information relevant to the various tasks and projects. In many ways, a team site would equal an Arena database in the old system. A built-in search engine would help people retrieve information within the SharePoint architecture. In addition, a search engine based on FAST technologies was introduced. This search engine would, in addition to the SharePoint infrastructure, also cover old Arena databases, the corporate intranet, archive, disk drives and other sources, making information retrieval even more efficient.
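As a rough, purely illustrative sketch of what such federated retrieval across heterogeneous sources involves (the source names, documents and scoring below are invented and do not reflect OGC's actual SharePoint or FAST configuration):

```python
# Hypothetical sketch of federated enterprise search: one query fanned out over
# several content sources, with results tagged by origin and merged into one list.

from dataclasses import dataclass

@dataclass
class Hit:
    source: str   # e.g. "teamsite", "arena", "intranet"
    title: str
    score: float

def search_source(source_name, documents, query):
    """Naive keyword match standing in for a real index."""
    query_terms = query.lower().split()
    hits = []
    for title in documents:
        score = sum(term in title.lower() for term in query_terms)
        if score:
            hits.append(Hit(source_name, title, float(score)))
    return hits

def federated_search(sources, query):
    """Query every registered source and merge results into one ranked list."""
    merged = []
    for name, documents in sources.items():
        merged.extend(search_source(name, documents, query))
    return sorted(merged, key=lambda h: h.score, reverse=True)

# Illustrative content collections
sources = {
    "teamsite": ["Well test report W3", "Weekly production plan"],
    "arena": ["Drilling programme 2005", "Reservoir model update notes"],
    "intranet": ["Work process: production planning"],
}
for hit in federated_search(sources, "production plan"):
    print(f"[{hit.source}] {hit.title} ({hit.score:.0f})")
```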

4.2 Production Engineers at Work

In 2003, OGC discovered a new oil and gas field in the North Sea. A new unit was established within OGC to be responsible for running this new field. Within this unit, a number of engineers with different petroleum technology backgrounds were put together to form a division responsible for getting the oil and gas from the reservoir below the sea bottom to the processing plant on the platform. This division consisted of about 35 people from a number of disciplines: production engineers, reservoir engineers, petrophysicists, geophysicists and geologists.

When our research began, there were 5 production engineers within this division. Their experience ranged from recent graduates with less than 6 months on the job to people who had worked within OGC for 6-8 years. Only one of the engineers had an educational background as a production engineer, while the others had different petroleum technology backgrounds. During our research period a total of 9 production engineers have been part of the group for longer or shorter periods of time. At the time of writing there are 7 production engineers in this group.

The production engineers' main task is to get the optimal amount of oil and gas from the reservoir to the platform at any given time. To achieve this, the engineers have to run the 12 individual wells at an optimum. In the long run, they want to get as much of the oil and gas out of the reservoir as possible, but in their daily work there are limitations and restrictions preventing them from simply running the individual wells at maximum rate. For instance, the total production from the 12 wells might be higher than the capacity of the pipelines connecting the wells to the platform. If that is the case, the production engineers have to limit the production from one or more wells in order not to exceed this limitation.
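The following is a minimal sketch, with invented rates and capacity, of the choke-back logic described above: if the combined well potential exceeds the pipeline capacity, production is scaled back so the total stays within the limit. Actual allocation of course weighs many more constraints, such as well integrity, processing capacity and corporate policies.

```python
# Hypothetical well potentials (e.g. Sm3/day) and pipeline capacity
well_potential = {"W1": 900.0, "W2": 750.0, "W3": 1200.0, "W4": 600.0}
pipeline_capacity = 3000.0

total_potential = sum(well_potential.values())
if total_potential <= pipeline_capacity:
    planned_rates = dict(well_potential)          # no constraint active
else:
    scale = pipeline_capacity / total_potential   # simple proportional choke-back
    planned_rates = {well: rate * scale for well, rate in well_potential.items()}

for well, rate in planned_rates.items():
    print(f"{well}: produce {rate:.0f} of potential {well_potential[well]:.0f}")
print(f"Total: {sum(planned_rates.values()):.0f} (capacity {pipeline_capacity:.0f})")
```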

At all times, one of the production engineers also has the role of production coordinator. Two of the production engineers alternate in this position every month. The production coordinator is responsible for coordinating the tasks within the petroleum technology group with tasks from other parts of the organization. The production coordinator is also the one who, every week, makes the production plan: a prediction of the production for the upcoming week. The role is also responsible for going to various meetings and bringing relevant information back to the group. In short, the production coordinator is the petroleum technology group's connection point with the rest of the organization.

All the various engineers within the petroleum technology group are co-located in a large open office area. At one end of this area is a separate room, a collaboration room, where some of the production engineers work. In the rest of the area, office desks are grouped together into "islands" of three people. The collaboration room, or "glass cage" as it is sometimes referred to, is separated from the rest of the area by a large, sliding glass door. This collaboration room is, in addition to 4 workstations, equipped with 2 projectors, 1 interactive whiteboard and 1 42" LCD monitor. The content on any of the workstations can be displayed and shared on any of the large screens in the room. The room is also equipped with video conferencing facilities. The group uses this room for internal meetings, as well as for meetings with people from outside the group if the meeting is production related. Except during meetings, the sliding door is most often left open.

The rationale for placing the production engineers in their own room is that they largely work on a different time horizon than the rest of the group: they monitor the daily activities both in the reservoir and on the platform, while the rest of the group has a more long-term focus. If something production-related happens to the reservoir, the wells, the pipelines or the platform, it is the production engineers who have to handle it. The production engineers, and especially the production coordinator, also interact with other parts of the organization more frequently than the others, and doing so in an open office area would be more likely to interrupt the others.

5. Becoming a Production Engineer

New engineers go through an on-the-job training period shortly after joining OGC or after being transferred to a new unit within the organization. During this phase, the engineers are introduced to the organization, their future tasks and responsibilities, and the tools and systems they will need in their new position.

5.1 The master-apprentice relationship

When a new engineer joins the group, he/she is paired with a more experienced, often senior, engineer. This mentor/mentee relationship benefits the new engineer: by following the senior engineer to meetings and interactions with other parts of OGC, the newcomer is introduced to the rest of the organization.

“[Name 1] and [Name 2] were production coordinators this fall, but I attended all meetings.” – Production engineer

This is in sharp contrast to when one of the other engineers began shortly after the field began production. He had no experience as a production engineer, nor did he have other production engineers to rely on:

“I relied on people around me that did not have this type of responsibility. ...[Newly employed engineer] began in a better setting. Fewer problems. Better to sit with the others.” – Production engineer

As the oil and gas field has matured, routines for introducing new members into the group have been established as well. The ad hoc practices of the initial phases have been replaced by a plan for bringing in new engineers.

“I got a job offer, began in August with a 6-months plan: Gradual training to become a production coordinator. Good to know what they wanted to use me for.” - Production engineer

5.2 Learning by doing

Another way of incorporating new engineers into the community is to get them producing as quickly as possible. New engineers are quickly given responsibilities and tasks; quite simple ones at first, but they become more complex over time.

“[I] got some small tasks that soon became bigger.” – Production engineer

As one of the main responsibilities of a production engineer is to monitor the various wells in depth, each engineer has special responsibility for a handful of specific wells. New production engineers are given responsibility for wells and have to follow them with regard to production rates, changes in temperature and pressure, and sand and/or water production. To do this, the engineers have to get to know the wells.

“Before Christmas I got the responsibility for the [group of wells]. [I] had to get to know their history from production began, as well as production data. ... I searched through [available] information myself. Documents from Teamsites. [System] has data back to 1. January 2008. [Name] has a different system for older data.” – Production engineer

Retrieving, sorting through and understanding this kind of information is very important to a production engineer. In order to understand how and why a specific well is behaving, they need to understand its history.

Knowing the history is paramount in becoming a part of the community of practice that the production engineers constitute.
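As a purely illustrative example of what “following” a well can mean in terms of data, the sketch below screens a synthetic production history for sudden pressure drops or rising water cut against a 30-day rolling baseline. The column names, thresholds and data are hypothetical and do not reflect the systems mentioned by the informants.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a single well's daily production history.
days = pd.date_range("2008-01-01", periods=120, freq="D")
history = pd.DataFrame({
    "date": days,
    "pressure_bar": np.linspace(250, 235, 120) + np.random.normal(0, 1.0, 120),
    "water_cut": np.clip(np.linspace(0.02, 0.12, 120) + np.random.normal(0, 0.005, 120), 0, 1),
})

# Compare each day with a 30-day rolling baseline and flag sudden changes.
baseline = history[["pressure_bar", "water_cut"]].rolling(30, min_periods=10).mean()
flags = history[
    (history["pressure_bar"] < 0.95 * baseline["pressure_bar"])
    | (history["water_cut"] > baseline["water_cut"] + 0.03)
]
print(flags[["date", "pressure_bar", "water_cut"]].head())
```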

5.3 Peer-based learning

Production engineering within OGC is versatile. The production engineers, whose primary responsibility is to get oil and gas out of the reservoir and to the processing plant, need to know a little about everything due to their coordinating role. For instance, a production engineer needs some understanding of reservoir engineering as well as of the processing plant. Because of this, it is nearly impossible to learn everything needed before starting in the position. A consequence is that people who do not have an education specifically within production engineering can assume the role. For instance, among the initial group of production engineers, only one had an education specifically related to production engineering. One had an educational background in chemistry and experience from processing, while another had previously worked on the drilling of wells.

“[I’ve] never worked on production earlier. [I had] no knowledge of production optimization.” – Production engineer

Because of the diverse educational backgrounds of the people working within production engineering, on-the-job training becomes even more important.

“I finished my education in ’97, everything I need I’ve learnt after school. Lots of courses – both here and at [previous employer]. The education is just the background.” – Production engineer

Production engineering does require specialized and knowledgeable workers. However, as it is such a multifaceted discipline, it is impossible to expect new engineers to possess all the required skills when they begin in the position. Thus, giving them the required knowledge through on-the-job training becomes even more important. A strong community of practice is, in that respect, paramount.

When a new production engineer is introduced to the community, he must be adopted into the group. Even if he knows a lot about production engineering, and perhaps has years of experience, there are still reservoir- and well-specific elements he must be taught. Each reservoir and well has specific characteristics with regard to, for instance, temperature, pressure and the ratio between oil and gas. The wells are also planned and completed differently.

“There is a lot of documents and history. [I] used a bit of time on [field name] and well [number] regarding completion, initial plans and final well trajectory.” – Production engineer

The collaboration room itself can also be seen as a tool that assists the learning process. The room provides more than just co-location; through its design, it facilitates learning by lowering the threshold for asking for help.

“No doubt things have improved with the new room. ... Easy to put things on the big screens and check with colleagues.” – Production engineer

The room also plays a role as a place where discussions are held and decisions are made. Even for the engineers who are not directly involved in a discussion, witnessing it means that they are aware of what its focus has been and what was decided. This is especially important for the production coordinator, as he/she at all times needs to have a big-picture understanding of what is going on.

For new production engineers as well, being able to witness discussions between senior engineers is valuable in giving them insight into how problems are solved. In other settings, we can imagine problems being discussed and solved around a single computer monitor, effectively preventing anybody else from being included.

Some of the time, the production engineers work alone with their computer desktops displayed on one of the projectors. At times this appears to signal to others that they are very busy and do not want to be disturbed, while at other times, for instance while making the production plan, it signals that they would like input from others.


6. Discussion

As our analysis has shown, being a production engineer within oil and gas production is an intricate position. The engineers have a variety of responsibilities and have to interact with multiple disciplines.

Production engineering is a core activity within oil and gas production. The engineers sit with their fingers on huge streams of revenue and can, if they do their job poorly, easily kill a well and cost their company millions in lost revenue. Still, production engineering is to some degree an entry-level position within oil and gas production. As our analysis has shown, production engineers do not have a uniform background or education; in fact, most of the production engineers in this group do not have an education within production engineering.

Our findings suggest that production engineers are the generalists within oil and gas production. Unlike, for instance, reservoir engineers or geologists, who are more strongly focused on their own specific worlds, the production engineers have to interact with significantly more disciplines. The production engineers need to know a little about everything: a little about what the reservoir engineers do, a little about what the processing engineers do, a little about what the operations engineers do, a little about what the geologists do, and so on.

Because of this versatility, you do not have to be a production engineer by training or education in order to become a production engineer. You do, however, need an overall understanding of oil and gas production in order to fill the role: you need to understand how oil and gas behave within a reservoir with given properties, you need to know how oil and gas are processed at the processing plant, you need to know how a well is constructed and completed, and you need to know how to treat the well if something happens.

As there are significant differences between the wells on one field and the wells on other fields, both with regard to production conditions such as pressure and temperature and with regard to how the wells are drilled and completed, as well as the tools and systems used, a production engineer cannot easily “just switch fields”. Joining a new field requires the engineer to go through a period of training, no matter how experienced he/she is from other fields, before he/she can become a productive production engineer. Although production engineering can, on an abstract level, be seen as a rather uniform activity, i.e. getting oil and gas from a reservoir somewhere below the surface to some processing plant, in reality it is not.

Thus, industry initiatives like Integrated Operations are faced with a number of challenges when they seek to standardize and generalize oil and gas production. How can they succeed in standardizing such a heterogeneous activity as production engineering?

7. Conclusion

Due to the inherent complexity of oil and gas production, newcomers who want to become production engineers must be initiated into the community of practice through extensive training. Experience from other oil and gas fields is to some degree of limited relevance, as the differences between two fields can be substantial. Engineers with experience from other fields do have the benefit of already knowing what to do, but they still need to learn how to do it when coming to a new field, as different fields use different systems offshore, have different tools, or simply use the same tools differently. The reservoirs also often have very different characteristics with regard to temperature, pressure and permeability, and there are differences in how the wells are designed and constructed, as well as between the various processing plants. All this suggests that, in such complex settings, training is extremely important for becoming a member of a community of practice.

8. References

1. Star, S.L. and G.C. Bowker, Work and Infrastructure. Communications of the ACM, 1995. 38(9): p. 41.


2. Wenger, E., Communities of practice: learning, meaning, and identity. Learning in Doing: Social, Cognitive and Computational Perspectives. 1998, Cambridge: Cambridge University Press. XV, 318 s.

3. Latour, B., Technology is Society Made Durable, in A Sociology of Monsters: Essays on Power, Technology and Domination, J. Law, Editor. 1991, Routledge. p. 103-131.

4. Blackler, F., Knowledge, knowledge work and organizations: An overview and interpretation. Organization Studies, 1995. 16(6): p. 1021.

5. Davenport, T. and L. Prusak, Working Knowledge: How Organizations Manage What They Know. 1998, Cambridge, MA: Harvard University Press.

6. Hutchins, E., Cognition in the Wild. 1995, Cambridge, MA: MIT Press.

7. Berg, M., Of forms, containers, and the electronic medical record: Some tools for a sociology of the formal. Science, Technology & Human Values, 1997. 22(4): p. 403.

8. Becker, M.C., Towards a consistent analytical framework for studying knowledge integration – Communities of practice, interaction, and recurrent interaction patterns, in 3rd European conference on organizational learning, knowledge and capabilities. 2002: Athens, Greece.

9. Aanestad, M., et al. Knowledge as a barrier to learning: a case study from medical R&D. in 4th European Conference on Organisational Knowledge, Learning and Capabilities. 2003. IESE Business School, Barcelona, Spain.

10. Boland, R.J., Jr. and R.V. Tenkasi, Perspective making and perspective taking in communities of knowing. Organization Science, 1995. 6(4): p. 350.

11. Cook, S.D.N. and J.S. Brown, Bridging Epistemologies: the Generative Dance Between Organizational Knowledge and Organizational Knowing. Organization Science, 1999. 10(4): p. 381 - 400.

12. Alvesson, M., Knowledge work: ambiguity, image and identity. Human Relations, 2001. 54(7): p. 863 - 886.

13. Walsham, G., Making a world of difference: IT in a global context. Wiley series in information systems. 2001, Chichester: Wiley. XVI, 272 s.

14. Desouza, K.C., Facilitating Tacit Knowledge Exchange. Communications of the ACM, 2003. 46(6): p. 85 - 88.

15. Fitzpatrick, G., The Locales Framework - Understanding and Designing for Wicked Problems. Computer Supported Cooperative Work Series. 2003: Springer.

16. Nonaka, I. and H. Takeuchi, The Knowledge-Creating Company. 1995, Oxford, UK: Oxford University Press.

17. Brown, J. and P. Duguid, Organizational Learning and Communities of Practice: Toward a Unified View of Working, Learning, and Innovation. Organization Science, 1991. 2(1): p. 40-57.

18. Atkinson, P., Medical Talk and Medical Work. 1995, London: SAGE Publications Ltd.

19. Orr, J.E., Talking About Machines: An Ethnography of a Modern Job. 1996, Ithaca, NY: Cornell University Press.

20. Orlikowski, W.J., Knowing in practice: Enacting a collective capability in distributed organizing. Organization Science, 2002. 13(3): p. 249.

21. Lave, J. and E. Wenger, Situated Learning. 1991: Cambridge University Press.

22. Berntsen, K., G. Munkvold, and T. Østerlie, Community of Practice versus Practice of the Community: Knowing in collaborative work. ICFAI Journal of Knowledge Management, 2004. 2(4): p. 7-20.

23. Wenger, E. Communities of practice - a brief introduction. 2006 [cited 2011-03-10]; Available from: http://www.ewenger.com/theory/communities_of_practice_intro.htm.

24. Suchman, L.A., Plans and situated actions: the problem of human-machine communication. Learning in Doing: Social, Cognitive, and Computational Perspectives. 1987, Cambridge: Cambridge University Press. XIV, 203 s.

25. Hardey, M., S. Payne, and P.G. Coleman, 'Scraps': Hidden nursing information and its influence on the delivery of care. Journal of Advanced Nursing, 2000. 32: p. 208-214.

26. Berg, M. and S. Timmermans, Orders and Their Others: On the Constitution of Universalities in Medical Work. Configurations, 2000. 8(1): p. 31-61.

27. Hanseth, O., Knowledge as Infrastructure, in The Social Study of Information and Communication Technology. 2004.

28. Berg, M., Accumulating and Coordinating: Occasions for Information Technologies in Medical Work. Computer Supported Cooperative Work, 1999. 8(4): p. 373-401.

29. Ellingsen, G. and E. Monteiro, Mechanisms for producing working knowledge: enacting, orchestrating and organizing. Information and Organization, 2003. 13(3): p. 203-229.

30. Berg, M. and E. Goorman, The contextual nature of medical information. Journal of Medical Informatics, 1999. 56(1): p. 51-60.

31. Walsham, G., Interpreting information systems in organizations. John Wiley series in information systems. 1993, Chichester: Wiley. XV, 269 s.

32. Klein, H.K. and M.D. Myers, A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly, 1999. 23(1): p. 67-93.

33. Monteiro, E. and V. Hepsø, Purity and Danger of an Information Infrastructure. Systemic Practice and Action Research, 2002. 15(2): p. 145-167.


To whom it may concern

Statement

Re. authorship to publications included in Torstein Elias Løland Hjelle’s ph.d. thesis (cf. the ph.d. regulations § 7.4, section 4)

As co-author of the following paper(s) included in the ph.d. thesis of Torstein Elias Løland Hjelle :

1. Hjelle, T. & Østerlie, T. (2013). Joining a Community: Strategies for Practice-Based Learning. Submitted to the 46th Hawaii International Conference on System Sciences, Hawaii, USA.

I hereby confirm that the candidate’s contribution(s) to this (these) paper(s) are correctly identified, and I consent to Torstein Elias Løland Hjelle including it (them) in her/his ph.d. dissertation.

07.11.2012

……………………………………………………………… name (sign.)
