WHAT ARE YOU LOOKING AT? A RIVALRY NETWORK EXTENSION TO
COMPETITIVE DYNAMICS
Sruthi Thatchenkery University College London [email protected]
Riitta Katila Department of Management Science and Engineering
Stanford University [email protected]
The dissertation from which this paper draws was the winner of the STR Division Outstanding Dissertation Award in 2018 and the Best Paper Award at the Smith Competitive Dynamics Conference. We appreciate the thoughtful feedback from seminar audiences at Boston University, Harvard Business School, McGill University, New York University, Tulane, and University of North Carolina, and audiences at the Academy of Management, DRUID, INFORMS, Strategic Management Society, and West Coast Research Symposium. We thank Gautam Ahuja, Charles Eesley, Kathleen Eisenhardt, JungYun Han, Martin Kilduff, Jason Rathje, Ron Tidhar and Bala Vissa for helpful discussions, feedback, and comments. Rae Sloane, Aisha Shafi, Hannah Warner, Deanna Lee, and Melissa Du provided research assistance. The generous research support of the National Science Foundation Graduate Research Fellowship for the first author is gratefully acknowledged.
ABSTRACT
We examine how a firm’s positioning in rivalry networks relates to the firm’s awareness of
opportunities and is moderated by the firm’s motivation and capability to act on those
opportunities. Using a data set on 121 enterprise infrastructure product firms over an 18-year
period, we find that monitoring from positions that span structural holes is positively associated,
and monitoring of peripheral competitors negatively associated, with a significant competitive
action: a firm’s product introductions. We also find that firms that are motivated to compete -
that are themselves targets of intense monitoring - can amplify the benefits of structural holes
spanning. More competitively capable firms, meanwhile, can more effectively thwart the risks of
peripheral competitor monitoring and turn them into an asset. Overall, our findings contribute
to competitive dynamics research by incorporating a networks perspective. We also add to
increasing evidence that strategies that isolate organizations from competition can backfire,
especially in innovative industries.
Keywords: competitive actions, competitor identification, competitive dynamics, product
introductions
According to the Awareness-Motivation-Capability (AMC) framework (Chen and Miller,
2012; Ferrier, Smith, and Grimm, 1999), competitive awareness is a prime determinant of the
firm’s ability to effectively engage competitors. Anecdotal evidence suggests that limited
awareness – i.e. “missing” or underestimating competitive threats – may hurt performance (WSJ,
2018). Empirical research offers further insight, emphasizing the importance of awareness of
“local” competitors (such as leader-follower pairs or strategic groups within an industry), and its
links to improved market share (Ross and Sharapov, 2015; Smith et al., 1991). Awareness of the
second-largest firm’s product moves, for example, protects market leaders from dethronement
(Ferrier et al., 1999), and the lack of such awareness hampers the firm’s ability to
respond to competition (Chen and Miller, 2012). Overall, this stream makes a strong case for
monitoring1 local competitors to benefit short-term performance.
Despite this progress, the focus on local competitors limits the insights that the
AMC framework can provide. First, while research on competitive dynamics emphasizes
pairwise comparisons of rivals such as leader-follower dyads, as noted above, several scholars
have recently urged research to examine positioning within “structures of competition” created
by the broader network of competitive relationships among rival firms (Chen and Miller, 2012;
Hoffmann et al., 2018). Firms vary in precisely which competitors they choose to monitor, and
extant research on networks suggests - but does not directly examine - that such
variance in monitoring may allow some firms to adopt network positions that allow them to “see
further” and to escape rigidities and network lock-in (e.g., Burt, 2010). We address this gap by
expanding the AMC analysis to incorporate the firm’s position in the broader network of
competitive relationships that we label rivalry networks.
1 We draw from Ruef (2002: 431) to use the term monitoring to describe purposeful awareness of particular competitors, i.e. “directed ties [that] involve a unilateral monitoring of discourse and activities on the part of other actors.” Competitor monitoring can also be labeled as competitor identification.
Second, although AMC work emphasizes the benefits of competitive awareness, there is
reason to believe that not all awareness is beneficial, and we address this gap. For example,
monitoring of particular competitors in rivalry networks can be unhelpful or even harmful
because the information obtained may be incomplete or untrustworthy. Similarly, the influence
of spanning structural holes is an open question as there are arguments for either negative (Tsai,
Su, and Chen, 2011) or positive (Chen and Miller, 2012) influence. Overall, whether all
awareness in rivalry networks is productive, and what happens if the firm pays attention to the
“wrong” competitor(s), is not yet understood. Altogether, to address these unexamined questions
on competitive awareness, we focus on product introductions as one of the primary ways in
which the focal firm can improve its competitive stance (Miller and Chen, 1994), and ask: How
is monitoring of competitors in rivalry networks associated with firm product introductions?
We investigate the research question using a panel of public firms in enterprise
infrastructure software. Infrastructure software is used to manage and maintain critical
information technology assets in a corporation, including cybersecurity and system management.
We build a novel, hand-collected dataset on 121 infrastructure firms, their competitor
monitoring, and 8,502 product introductions over an eighteen-year period from 1995 to 2012. A
core strength is our comprehensive coverage of the entire population of U.S. firms that
developed infrastructure software during the study’s timeframe, including gathering data on the
complete rivalry network (cross-referenced with analyst calls and fieldwork) to avoid sample
bias. We also compile fine-grained measures of actual competitor awareness and in-depth hand-
collected measures of product introductions. Another strength is our effort to enhance causal
inference by careful examination of whether endogeneity is present, and the use of a government
intervention (the U.S. v. Microsoft antitrust ruling) to instrument for competitor monitoring. We
develop an understanding of the mechanisms underlying our quantitative results with first-hand
interviews with 21 industry informants. In particular, these interviews suggest several
mechanisms through which monitoring’s insights can be distilled into new products.
There are several contributions. First, we introduce positioning in rivalry networks to
offer a more accurate and nuanced view of competitive dynamics. The introduction of rivalry
networks extends the view of competitive awareness within the awareness-motivation-capability
(AMC) framework. Controlling for intensity of competition in the sample firm’s markets and for
inter-firm heterogeneity, we find evidence that spanning structural holes in rivalry networks is
positively tied with the focal firm’s ability to alter its competitive stance. Intriguingly, we also
show that not all awareness in rivalry networks is positive: attempts to add differentiation by
monitoring competitors in the periphery of the network exacerbate competitive inertia and are
negatively tied with the firm’s ability to compete (i.e. introduce products). We also find that
firms that are motivated to compete – i.e. firms that are themselves targets of intense monitoring
– particularly benefit from spanning structural holes while more competitively capable firms are
better able to thwart the risks of monitoring peripheral competitors. Overall, our findings
contribute to competitive dynamics research by expanding the theoretical reach of the awareness-
motivation-capability (AMC) framework to examine broader patterns in awareness across an
industry and the implications for competitive action.
THEORETICAL BACKGROUND
Prior Work on Competitor Awareness and Competitive Actions
Competitive actions – i.e. “externally directed, specific, and observable competitive
moves initiated by a firm to enhance its relative competitive position” (Smith, Ferrier, and
Ndofor, 2001: 321) – are central to the competitive dynamics literature. Research in competitive
dynamics shows that a firm’s ability to carry out competitive actions – in particular service or
product portfolio upgrades - is central to improving performance, including market share, and the
lack of such competitive actions hampers the firm’s ability to respond to competition (Ferrier et
al., 1999; Smith, Ferrier, and Grimm, 2001a). Sears, a retail chain on the brink of bankruptcy, is a
case in point. “Unless we believe we will receive an adequate return on investment, we will not
spend money on capital expenditures to build new stores or upgrade our existing base simply
because our competitors do,” wrote the company’s CEO in 2007. In the next 10 years, Sears lost
the competitive battle to rivals like Target, Home Depot and Wal-Mart that embraced
competitive actions of rivals such as e-retailing together with upgrades to physical stores (WSJ,
2018). Altogether, the firm’s competitive actions and its response – or lack thereof – to rivals’
actions are at the heart of competitive dynamics research.
The AMC framework pays particular attention to three factors that enable firms to
undertake competitive actions: awareness, motivation, and capability (Miller and Chen, 1994).
The first driver of competitive action is competitive awareness – i.e., identification and
monitoring of a set of rivals. The key idea is that awareness increases the focal firm’s chances of
outcompeting a rival by helping the focal firm understand the rival’s strategic priorities, its
vulnerabilities, and its general patterns of competitive behavior (Chen and Miller, 2012).
Building up knowledge of rivals through competitive awareness thus enables the firm to plan its
strategy and launch its own attacks effectively.
The AMC perspective generally emphasizes a positive view of competitive awareness.
Studies suggest that when competitive awareness is low, firms fall prey to competitive inertia
and engage in fewer competitive actions such as product introductions (Miller and Chen, 1994).
Conversely, high awareness goes hand in hand with more competitive actions such as new
products (Chen, Su, and Tsai, 2007). As noted above, however, the focus of empirical work is
typically on “local” competitors within well-understood competitive clusters. An exemplar is
Giachetti and colleagues’ (2016) study of U.K. mobile phone manufacturers that showed how
rapid incorporation of local rivals’ product features into the focal firm’s phones increased the
quantity of phones sold, and, as such, provided evidence of awareness’ ability to reduce
competitive inertia. Overall, this stream emphasizes the benefits of awareness but has paid less
attention to positioning in rivalry networks and to its potential pitfalls.
An emerging stream of AMC studies has suggested that there is a need to examine the
structural patterns of awareness, i.e. the network structure of competitive relationships beyond
firm-rival dyads. However, most extant AMC work that has started to examine networks has
examined ties to collaboration partners rather than purely competitive relationships. For
example, Gimeno (2004) studied how rival airlines select alliance partners and Madhavan and
colleagues (2004) examined how rival steel producers entered into strategic alliances with one
another. But this empirical work on collaboration does not help us answer questions about
positioning in rivalry networks – i.e. questions that we address in this paper.
A second gap in prior work is that the handful of studies that have started to examine the
network structure of competition (rather than collaboration) have inferred competitive ties from
overlap in output markets (i.e., product overlap in geographic markets or product segments), but
direct measures of monitoring are missing. For example, Hsieh and Vermeulen (2014) defined a
competition tie to exist between two drug producers who manufactured active ingredients in the
same category. In another insightful study on major U.S. airlines, Tsai, Su, and Chen (2011)
created a competition network using overlap in airline routes. However, when product overlap is
used as a measure, collaborative and competitive relations may be mixed, and inferences about
purely competitive relationships become difficult.
In contrast, we isolate competitor monitoring relationships that are idiosyncratic to each
firm, focused on purposeful and unilateral monitoring of a rival (such that asymmetry can exist
between a pair of firms), and form a rivalry network through aggregation of these monitoring
relationships. Drawing from prior work on networks and innovation (Ahuja, 2000; Kumar and
Zaheer, 2018), we propose that two types of positioning in rivalry networks are particularly
relevant for the competitive awareness that we study: monitoring across structural holes and
monitoring the periphery of the network.
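To make the first of these positions concrete, consider a toy sketch (our own illustration, not the paper's measurement code; the firm labels and the simple ego-density operationalization are hypothetical simplifications of standard structural-holes measures such as Burt's constraint):

```python
# Toy directed rivalry network: firm -> set of competitors it monitors.
# Firm names and ties are hypothetical.
MONITORS = {
    "A": {"B", "C", "D"},   # A monitors B, C, and D
    "B": {"C"},             # B monitors only C
    "C": set(),
    "D": set(),
}

def spans_structural_holes(focal: str, monitors: dict) -> float:
    """Share of possible monitoring ties among the focal firm's monitored
    competitors that are ABSENT (1.0 = maximal spanning). A simple
    ego-density proxy, not Burt's full constraint measure."""
    alters = monitors[focal]
    n = len(alters)
    if n < 2:
        return 1.0  # no ties possible among alters, trivially spanning
    possible = n * (n - 1)  # ordered pairs, since monitoring is directed
    present = sum(1 for i in alters for j in alters
                  if i != j and j in monitors.get(i, set()))
    return 1.0 - present / possible

print(spans_structural_holes("A", MONITORS))  # only B->C among A's alters
```

Here firm A spans structural holes to the extent that the competitors it monitors (B, C, D) do not monitor one another; with only the B-to-C tie present, A's score is 5/6.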
Insert Figure 1 about here
The AMC framework also emphasizes that the performance effect of awareness is likely
to be moderated by two characteristics of the focal firm: its competitive motivation and its
competitive capability, and so we incorporate these two aspects in our model as well. Increased
motivation to compete is triggered in important part by the intensity with which competitors attack the
focal firm (Ferrier, 2001). That is, rivals’ competitive actions provide the motivation for the focal
firm to consider the rival(s) to be in direct competition with the firm (Chen et al., 2007). In
contrast, such motivation may be missing if the firm is shielded from competition. The firm’s
capability to compete is in turn defined by the extent of its competitive experience
(Miller and Chen, 1994). The importance of motivation and capability is echoed in research on
networks; for example, Burt (2010: 29) argues that “sloth” (i.e. lack of motivation) and
“incompetence” (i.e. lack of capability) are reasons why, at the individual level, managers differ
in the extent to which they act (or not) upon opportunities provided by network connections.
Figure 1 outlines our research framework that integrates insights from network research into the
awareness-motivation-capability perspective on competition.
Research Context: Infrastructure Software
Our research context, software, provides a relevant setting to examine the linkages
between positioning in rivalry networks and competitive action. Exploratory interviews in the
industry (details provided in methods) provided us a preliminary understanding of the process of
competitor monitoring in these firms. They also helped us understand how the firms turned
insights from monitoring into action – into product development and introductions of products on
the market to improve the firm’s competitive stance. Details of this process, as described below,
provide a foundation for the hypotheses and for the measures that we present in the next section.
First, the interviews confirmed that competitive awareness was central to the top
executives we interviewed. A recurring theme was that identifying relevant competition was a
key task and influenced the kinds of competitive actions that were implemented. An experienced
executive explained that “how [executives] respond to the competition, once they figured out that
there is a competitive issue,” was crucial for the firm to decide. “What kind of strategic moves do
they make? Is it building up their product offerings? Their product portfolio. Or is it a matter of
cutting their prices so they're the cheapest ones out there to buy from?” One typical pathway for
this process was an executive roundtable that raised awareness of a new competitor or re-
confirmed focus on an existing one. One interviewee pointed out that top executives in her
company often debated which competitors posed the most potent threats, and “it was good
because it led us to consider a richer set of product features and capabilities.”
The interviews also confirmed that firms in software commonly reacted to competitive
pressures by introducing new products to improve competitive position vis-à-vis rivals (Chen,
Lin, and Michel, 2010; Young, Smith, and Grimm, 1996). Several interviewees described to us
how competitive awareness trickled down to product development through resource allocation
decisions. When a particular product development project, and a competitor, was prioritized by
the top executives, it got more resources. As one product manager explained: “[Our product] was
the CEO’s pet project because he wanted to compete with [a particular competitor]…We got our
own design team, our own engineering team, and everyone else had to share.” We also learned of
several examples of projects that were initiated because the top team was personally vested in
beating a particular competitor. A product manager traced the origins of her project as follows:
“The CEO and a couple other top executives at our company looked at [a new competitor’s first
product] and were like, ‘Dude why don’t we have this, we could totally do this.’ …Not because this was something we were already doing, or, like, something we needed to do. I think he saw [the new competitor] and thought we could just swoop in and grab most of the market…. So he wanted [a similar product] done, and it was done.”

We also asked our interviewees how they identified and monitored competitors. They
told us that they routinely followed market analyst reports and attended Gartner (and other major
industry) events and trade shows to follow relevant competition. One interviewee told us that
particularly in the 1990s and early 2000s, “retail sales and industry reports” were the “lifeline of
data” and were used by the top management teams to analyze “the products of the firms that were
closely behind or closely ahead of us in market share.” More recently, a CTO whom we
interviewed illustrated, “One of our marketing people puts together a few competitive feeds every
week. Like a competitive digest. What competitors have been up to. He shares it with product and
engineering and with the top people.” When asked whether there was any connection between the
feeds and the firm’s technology, he said that the idea to apply machine learning to a specific area
of web interfaces (the company’s second product) could be possibly attributed to exposure to
different ideas from the two communities.
Other interviewees also mentioned that customers told them about new competitor
offerings as well as about features that the focal firm was missing in its own portfolio. One
interviewee described to us how his analysis of customer survey data motivated the executive
team to rethink the product portfolio. “It made us appreciate a feature in another firm’s product
that we had previously thought as irrelevant.” As a consequence, the executive team decided to set
aside resources to beat the competitor by developing a new product.
We also asked what mistakes our interviewees had made in monitoring their competitors.
A board member recounted the time his company had lost a major deal with an enterprise
customer. The customer offered to walk the focal firm through a “side-by-side comparison” of
the firm’s and the competitor’s product “that caused us to start spending a lot more money on the
user interface. And that was a redirection of the R&D organization.” When asked why the firm
didn’t modify the product sooner, he said, “we were observing [the other firm that won the
contract], but were very dismissive of it as competitor. We missed the fact that, even though they had
fewer features [than us], the CFO's office could get [the product] up and running with very little
training.” Interviewees also noted that because rivalry was intense and there were many
competitors, top executives needed to “prioritize” and focus on the most relevant ones. “You
would drive yourself nuts trying to develop against the entire field,” said a former CEO. Another
interviewee offered that paying attention to “wrong” competitors rapidly undermines the firm's
strategy, “You may spend a lot of time looking at the competition but you do not have a good way of
filtering or of prioritizing your competition… So if you keep focusing on the wrong thing, you're
missing the market.” Given the significance of competitive awareness, and its suggested links to
product introductions, we now proceed to discuss our hypotheses, method and results.
HYPOTHESES
Extending the awareness-motivation-capability framework, we propose hypotheses
linking product introductions and positioning in rivalry networks. We propose effects of
spanning structural holes (H1), of monitoring peripheral competitors that are sparsely monitored
by others (H3), and their interaction (H5). We also propose that competitive motivation (H2) and
competitive capability (H4) moderate the effects of structural holes spanning and monitoring
peripheral competitors, respectively. Figure 2 provides an illustration.
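The overall specification can be sketched in regression form (an illustrative sketch of our own; the coefficient signs follow the hypothesized directions, while the linear functional form, variable names, and coefficient values are hypothetical):

```python
# Illustrative linear predictor for expected product introductions.
# Signs follow the hypotheses: b1 > 0 (H1), b2 < 0 (H3), and positive
# moderation terms b4 (H2), b5 (H4), b6 (H5). All values are made up
# for illustration only, not estimates from the paper.
def expected_product_intros(struct_holes, peripheral, motivation, capability,
                            b0=1.0, b1=0.8, b2=-0.6, b3=(0.1, 0.1),
                            b4=0.5, b5=0.4, b6=0.3):
    main = b1 * struct_holes + b2 * peripheral
    controls = b3[0] * motivation + b3[1] * capability
    interactions = (b4 * struct_holes * motivation      # H2: motivation amplifies H1
                    + b5 * peripheral * capability      # H4: capability dampens H3
                    + b6 * struct_holes * peripheral)   # H5: positions are complements
    return b0 + main + controls + interactions
```

For example, raising `struct_holes` increases the predicted count (H1), raising `peripheral` decreases it (H3), and the structural-holes effect is larger when `motivation` is high (H2).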
Structural Holes Spanning and Focal Firm Product Introductions
The first type of network positioning that is likely to be relevant for competitive
awareness is the focal firm’s ability to span structural holes, i.e. the extent to which the
competitors that the focal firm monitors are (or are not) monitored by each other (see Figure 2a).
In the arguments that follow, we propose that spanning structural holes in rivalry networks is
related to early awareness and a push for the firm to respond to competition through product
introductions, for several reasons.
Insert Figure 2 about here
First, brokerage positions in rivalry networks provide early awareness of diverse
information about competitors, which can help the firm spot opportunities to pre-empt rival
actions. A firm that spans structural holes in its networks is more likely to gain faster access to
relevant pieces of knowledge about its potential competitors–e.g., early awareness of a number
of different competitive threats and early signals of where the industry is going–compared to
firms that span few structural holes (Burt, 2010). Note that although Tsai and colleagues (2011)
argue that dense networks (rather than sparse networks with structural holes) facilitate getting to
know a particular rival better, our argument focuses on implications for more broadly-targeted
competitive actions. We argue that spanning structural holes is likely to differentiate the
information gained relative to other firms, creating early opportunities to pre-empt rival actions.
Second, knowledge that can be gained from spanning structural holes is likely to enhance
awareness of opportunities that can be responded to specifically by altering the firm’s product
portfolio (Galunic and Rodan, 1998; Schumpeter, 1934). This is because the knowledge gained
expedites the firm’s ability to recombine distinct pieces of knowledge into new
combinations. In contrast, a firm that monitors competitors within a dense local cluster (i.e.
tightly-connected competitors) may find it difficult to spot opportunities to outcompete rivals
through product introductions because it lacks unique early access to diverse information.
Moreover, if the focal firm is part of a dense cluster of competitor monitoring, i.e. where the
firm’s competitors have many connections to each other, the new information that reaches the
focal firm also reaches other firms that monitor the same competitors, which is likely to push the
firm toward alternative competitive actions, such as price wars.
Third, spanning clusters of competitors increases the usability of information that can be
extracted by the focal firm. Unlike alliance relationships, competitors do not work to solve
problems together. Firms typically do not make any effort to help competitors learn from them
and instead are likely to try to block or distract the monitoring efforts. If, however, a firm
monitors a competitor that in turn is part of a cluster of firms (who monitor one another), there
are likely to be more independent confirmations of the same information within each cluster and
more opportunities to understand aspects of information that may be more tacit. In other words,
monitoring competitors that are part of clusters adds scrutiny, more viewpoints and thus more
usability to the information that can be extracted. Overall, spanning clusters helps the firm
assemble and act on diverse and usable information about competitors in a timely manner.
Hypothesis 1. An increase in the degree to which a firm spans structural holes in rivalry networks is positively related to product introductions.
Contingent Effects of Structural Holes Spanning
Hypothesis 2 focuses on the moderating effect of competitive motivation on structural
holes spanning, drawing from the awareness-motivation-capability framework. Prior work
suggests that a firm’s managers will be more motivated to carry out competitive actions if the
rivals engage the focal firm, forcing the firm to defend its turf (Chen et al., 2007; Ferrier, 2001).
As a consequence, we expect that the influence on product introductions by the focal firm’s
spanning of structural holes is altered depending on the firm’s motivation to compete.
In particular, we expect that motivation will positively moderate the relationship between
spanning structural holes and product introductions. As noted above, firms in brokerage positions
are better positioned to make more sophisticated competitive actions such as new product
introductions rather than simpler actions such as price changes. However, because these more
complex actions also require more incentive to carry them out (in addition to information), we
expect that firms that are more motivated to compete are more likely to take advantage of the
information advantages of network positioning and actually implement these more complex
competitive actions. We thus propose that firms are particularly likely to take advantage of
the knowledge gained through brokerage positions by carrying out a new product move when
they themselves are more motivated to compete.
Hypothesis 2. The positive relationship between the focal firm’s spanning of structural holes in
rivalry networks and product introductions is positively moderated by its competitive motivation: the greater the competitive motivation, the stronger the positive relationship
between structural holes spanning and product introductions.
Peripheral Competitor Monitoring and Focal Firm’s Product Introductions
Hypothesis 3 examines the effects of monitoring competitors in the periphery of the
network, i.e. competitors that are monitored by the focal firm but only sparsely monitored by
other firms in the same market (see Figure 2b). A peripheral competitor’s “marginal” status in
monitoring networks implies that fewer firms are using its product or competitive behaviors as
templates, and there are fewer replications and repetitions of its ideas by others. Unlike in
collaboration networks where a peripheral partner is likely to provide a differentiating factor for
the focal firm (Rodan and Galunic, 2004), it is unresolved whether monitoring a peripheral
competitor in a rivalry network similarly facilitates competitive action.
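As a concrete sketch (again our own illustration; the firm names and the in-degree operationalization are hypothetical), the peripherality of a firm's monitoring targets can be proxied by how many other firms also watch them:

```python
# Hypothetical directed rivalry network: firm -> set of competitors it monitors.
MONITORS = {
    "A": {"B", "E"},
    "B": {"C"},
    "C": {"B"},
    "D": {"B"},
    "E": set(),
}

def mean_alter_scrutiny(focal: str, monitors: dict) -> float:
    """Average number of OTHER firms that also monitor each competitor the
    focal firm watches; lower values mean the focal firm is monitoring
    more peripheral (sparsely watched) competitors."""
    alters = monitors[focal]
    if not alters:
        return 0.0
    def others_watching(target):
        return sum(1 for f, targets in monitors.items()
                   if f != focal and target in targets)
    return sum(others_watching(a) for a in alters) / len(alters)

print(mean_alter_scrutiny("A", MONITORS))  # B is watched by C and D; E by no one
```

Firm B, which monitors only the peripheral C (watched by no other firm), scores 0.0, while firm D, monitoring the intensely watched B, scores 2.0; the hypothesis concerns the former, low-scrutiny pattern.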
First, unlike alliance networks which feature active cooperation to facilitate knowledge
transfer, competitor networks transmit information that may not be properly vetted. In particular,
the more peripheral the monitored competitor is, the less the focal firm (i.e. the monitor) can take
cues from (or "read into") how other firms react to that competitor, e.g. whether the ideas are
worth learning from or responding to in the first place. Peripheral firms also miss out on a
"selection process" that pushes intensely monitored competitors to respond to others’ monitoring
by improving their ideas and products. By contrast, a peripheral firm is left with
minimal reaction. Thus, unlike within clusters of competitors where monitoring and the ensuing
extra scrutiny by other firms may help the focal firm better understand information gleaned from
a competitor, such monitoring and vetting is by definition missing for peripheral firms. In the
extreme, monitoring of peripheral competitors may mislead. A CEO related a story of a
peripheral competitor who, “baited [one of our other competitors] into following their lead, where
their lead is nothing more than a dead-end alley…they got them to commit resources and marketing
resources and other resources [to a product development project] because of their [misleading]
claims.”
Second, monitoring a peripheral competitor may create a counterproductive distraction.
As our interview evidence illustrates, peripheral firms that are viewed as doing something
“new and different” can become a tempting distraction (Chen, 1996) that prompts the focal firm
to divert R&D resources away from the core business and towards new ideas that may not be
relevant. This creates the opportunity for a peripheral firm to further engage in judo strategy by
luring the focal firm into “combat on [the peripheral firm’s] terms” while avoiding “tit-for-tat”
battles in the established areas of the market in which the focal firm is more entrenched (Yoffie
and Kwak, 2002: 11). An interviewee recalled an instance in which his firm had chased a
peripheral competitor: “to catch up [with a peripheral competitor] you pull all these resources off
what you were doing before…off supporting your current customers. So now you're trying to run two
businesses…You're basically toast because you're in no man's land straddling those two worlds.”
Thus, monitoring peripheral competitors may create a distraction that makes it more difficult to
effectively carry out competitive actions, especially when in conflict with the firm’s existing
resource commitments. Altogether, because peripheral targets of monitoring may yield less
trustworthy information, and because the knowledge that can be gained from such firms is likely
harder to absorb into the firm’s own competitive strategy and product moves, we hypothesize:
Hypothesis 3. An increase in the degree to which firms monitor peripheral (rather than prominent)
competitors in rivalry networks is negatively related to product introductions.
Contingent Effects of Peripheral Competitor Monitoring
Given that monitoring a peripheral competitor is a challenge, a question arises about
when firms would engage in monitoring of such marginal actors in the first place. The competitive
dynamics literature and the awareness-motivation-capability framework suggest a possible
answer: not all firms have an equal capability to deal with competitive threats, and some may be
more capable than others.
In particular, we propose that the firm’s competitive capability is particularly relevant in
rivalry networks in which ties may provide information that is not always fully vetted. One of
our interviewees explained that competitors “will tell you stuff …all the time that actually doesn't
work. But that's where you've got to know the market and the products better.” Peripheral
competitors (that others do not pay attention to) can be valuable targets of monitoring and a
source of potential differentiation, but, as noted above, the knowledge that can be learned carries
the risk that it is less tested, has faced less scrutiny, and is possibly less readily integrated into the
firm’s existing competitive strategy. By having more competitive capability, firms can
potentially mitigate these risks of untested knowledge.
Hypothesis 4. The negative relationship between the firm’s monitoring of peripheral
competitors in rivalry networks and product introductions is positively moderated by its
competitive capability: the greater the competitive capability, the weaker the negative relationship between monitoring of peripheral competitors and product introductions.
Thus far we have examined types of positioning in rivalry networks (structural holes,
peripherality) separately, but have not discussed how they can influence product introductions in
combination. These dynamics are germane because firms are likely to derive their ideas for
competitive action from multiple sources and may monitor peripheral competitors while also
spanning structural holes in rivalry networks.
In hypothesis 5 we propose that the two types of network positioning are complements,
such that spanning structural holes will mitigate the drawbacks to monitoring peripheral
competitors. As noted above, peripheral firms (as actors that are distant from the ‘core’ of a
network) can be sources of particularly novel ideas. If a firm combines spanning structural holes
with monitoring peripheral competitors, the benefits to gaining diverse and particularly novel
information may begin to outweigh the drawbacks of that information coming from less vetted
sources. In other words, the firm is adopting a network position that maximizes both the diversity
and uniqueness of the information gleaned through competitor monitoring, allowing the strengths
of the two types of network positioning to build on each other (Ruef, 2002). In contrast,
monitoring peripheral firms while not spanning structural holes is likely to be particularly
harmful, as firms must deal with the drawbacks to receiving unvetted and possibly distracting
information from peripheral competitors while also receiving redundant information from
competitors connected to each other within a single cluster. We propose:
Hypothesis 5. The negative relationship between the firm’s monitoring of peripheral competitors
and product introductions is mitigated by the firm’s spanning of structural holes.
METHODS
We test the hypotheses on a novel hand-collected dataset of 121 public U.S. firms in the
enterprise infrastructure software industry between 1995 and 2012. Infrastructure software forms
the backbone of enterprise computing and serves critically important functions for enterprise
clients. Prototypical examples of firms in this industry include Computer Associates (network
and system management), Symantec (security), and Forte Software (application development).
Infrastructure software products are typically used to manage and maintain complex information
technology (IT) systems, encompassing a wide range of functions such as data backup, virus
protection, and system performance.
Enterprise infrastructure software is a particularly relevant context for the study. The
industry is neither too concentrated, such that competitors would be few and identical across
firms, nor too fragmented, such that competitors would be treated as interchangeable (as is the
case of commodities markets) or too difficult to identify. This balance between concentration and
fragmentation provides rich but tractable variation in the data. We also chose public
infrastructure software firms because, as one of our interviewees noted, it is easy for firms that
have gone through an IPO to become complacent about competition, making competitor
monitoring and competitive action (vs. inertia) a significant strategic decision for our sample
firms (e.g. Chen and Hambrick, 1995). The setting was also appropriate because unlike the
coopetitive logics in many craft industries (Hoehn-Weiss and Pahnke, 2018), our setting is
representative of many technology-intensive industries in which competitive actions can be
isolated from coopetitive ones - providing clarity of argument and testing (Gulati and Singh,
1998). Finally, the setting is relevant because product introductions are key to competitive
advantage. Enterprise customers have come to expect sophisticated and robust infrastructure
tools to manage their increasingly expansive and critical enterprise IT systems. As the CEO of
security software firm McAfee explained, “Discounting will not win competitive business of this
scale, you need superior solutions” (McAfee, 2006).
We began our sample in 1995 to coincide with the transition from centralized to
distributed (aka “networked”) computing. This transition marked a fundamental shift in the
technical architecture of enterprise IT and created the need for more sophisticated infrastructure
tools. We ended the sample at the time when the industry underwent its next major technology
shift, the advent of cloud computing in 2012. The core strength of our data is its comprehensive
coverage of the entire population of public U.S. firms that developed software in the five
enterprise infrastructure software markets for all three operating systems since the widespread
adoption of distributed computing. Comprehensive data on the full population is particularly
important for accurately documenting rivalry networks, and for avoiding sampling bias.
Our primary dataset is a hand-collected longitudinal study of 121 firms, supplemented by
in-depth fieldwork. In order to better understand how monitoring is related to changes in the
firm’s product portfolio, we interviewed individuals in the software industry (both enterprise and
consumer software). The 21 interviews were unstructured but all featured the following open-
ended questions: “What does the product development process look like in your firm? Who is
involved (different functions, different hierarchical levels)? Who do you see as a rival? How do
you find out about new competitors? Tell me about a competitor that you wish you had stopped
paying attention to.” We refrained from asking leading questions that would either support or
negate our hypotheses.
Sample Construction
Because infrastructure software is not distinguished from other types of software in standard
industrial classifications, we took several steps (outlined below) to identify the firms that
operated in the industry. We also took care to triangulate between multiple sources to improve
the coverage and to create a comprehensive dataset.
We started by compiling a list of all public software firms in the United States. Consistent
with prior work, we defined a “software firm” as any firm with either a primary or secondary2
classification under SIC code 7372 (“prepackaged software”). Between 1995 and 2012,
there were 1,206 public software firms in the U.S. After excluding the 390 firms that developed
products only for consumers (to focus on enterprise software), we compared each firm’s product
portfolio with Gartner Research’s IT Glossary (a standard industry source), which provides a
comprehensive list of infrastructure software product categories. Gartner Research’s
list has been found to provide a detailed and accurate description of the industry (Pontikes,
2012). We classified a firm as an infrastructure software company if the majority of its product
portfolio matched the Gartner keywords.3
2 The most common primary classifications for infrastructure software firms with 7372 as a secondary classification were 7371 (programming services) and 7373 (integrated computer systems).
We triangulated this information with The Software
Catalog (an annual listing of software products) to ensure a comprehensive sample. We also
consulted two industry experts for suggestions of companies to include. Cross-validation of these
sources yielded a final sample that consists of 121 firms and 823 firm-year observations between
1995 and 2012. Our firms exhibit expected patterns of regional concentration typical of software,
with 31% of the sample firms headquartered in the San Francisco Area, and 11% and 9%
headquartered in the Los Angeles and Boston areas, respectively.
Data Sources
Competitive actions: Product introductions. We used several sources to build the
dataset. For product introductions, we assembled data using a "literature-based innovation output
indicator" method (Coombs, Narandren, and Richards, 1996); specifically, through a careful
examination of company press releases. Press releases are the most common way in which
enterprise software firms announce new products and the standard source for product data in the
industry (Thatchenkery, 2017). We searched LexisNexis using the combination of the names of
our sample firms and product-related keywords (e.g., new product, announce, launch, release,
version) to identify potentially relevant announcements (Li et al., 2013b). Our initial search
returned over 118,000 press releases. Next, we used text analysis in Python to filter out
duplicates (i.e. the same press release issued to multiple newswires) as well as announcements
about unrelated topics such as new executive hires or international expansions. This automatic
text analysis returned roughly 42,000 unique articles with possibly relevant information about
new products.
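The duplicate-filtering step described above can be sketched in Python using only the standard library. The field name ("body") and the normalization rules are our illustrative assumptions, not the authors' actual pipeline; the idea is simply to hash a normalized article body so that the same press release reissued to multiple newswires collapses to one record.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so reissues of the same
    press release on different newswires hash identically."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def deduplicate(articles):
    """Keep the first copy of each press release body; drop verbatim
    (or whitespace-only-variant) duplicates."""
    seen, unique = set(), []
    for article in articles:
        digest = hashlib.sha1(normalize(article["body"]).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(article)
    return unique
```

In practice one would also filter on keywords in the headline (e.g., "announce," "launch") to drop announcements about unrelated topics such as executive hires.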
We then engaged in a painstaking manual review of the 42,000 articles. The first author
categorized all articles and a team of trained coders conducted a second, independent review of
the entire sample.
3 We primarily relied on the 2012 version of the glossary but cross-referenced against descriptions of infrastructure software categories found in older Gartner reports and found them to be consistent.
We first read through the headlines only and identified about 9,000 as relevant
to our firms' product introductions for further inspection and categorization. We then reviewed
those articles in more detail and, for those that were about product releases, recorded each
product’s release date, name, version number, and a brief description. Infrastructure software
products were distinguished from other software products (e.g. services, applications) by
comparing product descriptions with Gartner keywords. Inter-rater reliability was 92%,
indicating high agreement between coders. Disagreements were resolved
through discussion between coders. Overall, our careful content analysis of over 118,000 press
releases yielded data on 8,502 unique infrastructure software products.
Competitor Awareness: Monitoring networks. Building on and extending prior work,
we used 10-K filings as a source for competitor monitoring (Hoberg and Phillips, 2016;
Lewellen, 2013; Li, Lundholm, and Minnis, 2013a), and verified the accuracy of this source
using analyst calls and fieldwork, as described in detail below. All public U.S. firms must file a
yearly 10-K report that updates shareholders on the company’s strategy, structure, and
performance. We examined the mandatory “competition” section in Item 1, in which the firm
describes the competitive conditions it faces, including specifically naming competitors. Details
about this data source are included below. We defined a unilateral monitoring “tie” to exist each
year the focal firm lists another as a competitor in its 10-K. Ties are directed and thus sample
firms may monitor firms that do not monitor them in return.
If a firm offered products outside infrastructure software (such as software services), we
only included competitors that were listed in the 10-K’s section on infrastructure software to
maintain consistency across the firms in our sample. We also focused on public competitors in
10-Ks. This was important so that variation in monitoring across organizations reflects differing
beliefs about which particular competitors are relevant, not differences in how easy they are to
notice in the first place (Miller and Chen, 1994). In a sensitivity analysis, we assessed the impact
of all competitors (public and private listed in the 10-K) with consistent results. Correlation
between the measures based on all competitors versus public competitors is high (ρ=0.88).
10-Ks are an appropriate source of data on competitors for several reasons. 10-K
listings are (1) comprehensive records (given the mandatory nature of, and incentives tied to, SEC
filings), (2) shown to capture monitoring more accurately than traditional SIC-based
measures, and (3) particularly well suited to tracking monitored competitors in the software
industry (based on our qualitative fieldwork and examination of analyst calls). We discuss each
of these points in detail below.
First, 10-K filings are particularly appropriate for capturing the range of competitors that
are monitored. Li, Lundholm, and Minnis (2013) note that 10-K filings capture “competition
from many different sources that are hard to identify empirically [otherwise], such as…potential
new entrants.” (Li et al., 2013a: 402). Hoberg and Phillips (2016) similarly note that 10-Ks are
informative regarding “firms that managers themselves perceive to actually be rivals” (Hoberg
and Phillips, 2016: 1448). This comprehensiveness and accuracy are at least partly due to the fact
that firms have an incentive to be accurate in how they describe competition in the 10-K filings
because the qualitative descriptions of the firm and its business in the SEC filings are
consequential for the firm. Research finds that investors respond to changes in the textual
portions of the 10-K even after controlling for changes in financial results (Brown and Tucker,
2011). Given careful investor attention to the text of the 10-K filing, then, it is critical for firms
to not leave the impression that they do not understand the competitive environment. The
regulation governing the 10-K filing also requires that firms not include names of competitors
that would be “misleading” to investors, reducing the likelihood that firms would include
competitors that they do not genuinely believe to be relevant. Thus, firms are incentivized to
document the particular firms that they monitor as competitors accurately and comprehensively
in their disclosures.
Second, an analysis of 10-Ks has been shown to create valid measures of competition,
even surpassing several traditionally used measures. Research in finance and accounting has
found that measures of competition based on textual analysis of SEC filings, including the
competition section of 10-Ks, are more accurate than traditionally used SIC-code-based
measures (Hoberg and Phillips, 2016; Lewellen, 2013; Li et al., 2013a; Rauh and Sufi, 2012).
For example, Rauh and Sufi (2012) found that listed
competitors in the firm’s SEC filings provided a 40% improvement in explanatory power over
more traditional SIC-based measures. Kim, Gopal, and Hoberg (2016) provided similar evidence
of measures of product market competition based on textual analysis of 10-Ks in the information
technology industry. Another case in point is Dedman and Lennox's (2009) survey of managers
in a cross-industry sample of firms that found little relation between managers’ perceptions of
their competitive environment and traditional economic measures of competition such as
industry concentration or numbers of firms in a market (that have been traditionally used as
proxies of relations with competitors; Greve and Taylor, 2000). Altogether, management’s
discussion of competition in 10-Ks allows us to create measures that meaningfully capture
variation in who firms in enterprise software monitor as their competitors.
Third, we also carefully verified that the competitor listings in the 10-Ks were an
appropriate source for our industry setting, i.e. infrastructure software. The first step was
comparison to analyst calls. We compared the competitors listed in the 10-K to the competitors
mentioned in analyst calls. For 32 sample firms, we inspected transcripts of the year end call that
aligned with the filing of the 10-K for at least three years per firm. Competitors mentioned by
executives in analyst calls were consistently found listed in the 10-Ks, which provides
independent confirmation that the 10-Ks are a comprehensive source of data on monitoring.
As a second step, our expert interviews and examination of the 10-Ks indicated that
naming competitors is the norm in infrastructure software. Firms in our sample listed a minimum
of one competitor and an average of 6-7 competitors in the 10-K. These numbers are highly
consistent with prior work on competitor identification, which finds that managers focus on
between two and nine competitors (Clark and Montgomery, 1999; Porac et al., 1995).
As a third step, we validated the use of 10-Ks as a data source by interviewing current
and former executives in enterprise software. During interviews, we showed each executive a list
of firms in enterprise software, and asked the executive to rate each firm on a scale of 1 (not a
competitor) to 10 (intense competitor). We also asked if there were any competitors that were not
listed but were relevant to that particular firm. Overwhelmingly, comparison of these surveys
with each firm’s 10-K filings confirmed that executives viewed the competitors that their firm
listed in the 10-K as significant competitive threats, and we did not find any instances of major
competitors that were omitted. Furthermore, our executive interviews also confirmed that top
executives, including the CEO, were involved in both competitor monitoring and product
development strategy in infrastructure software firms. Interviewees also confirmed that
competitive analysis in public firms that we study was typically done “across a 12-month
timeframe” matching the frequency of 10-K reports that we observe.
As a final step, we compared the use of 10-Ks as a data source on competition monitoring
with collaboration ties in enterprise software, and excluded collaborations to focus on purely
competitive ties. Our interview data indicated that because of the nature of enterprise software, it
was typical to ally with partners in other parts of the enterprise IT ecosystem such as platform
owners, with software firms in complementary markets, and with software service firms. For
example, Symantec, a leading security software firm, entered into partnerships with
complementors such as database firm Informix, platform owner IBM, telecommunications firm
AT&T, and professional services firm KPMG. In contrast, collaboration with other security
software firms was less common. We verified these patterns by excluding partnerships listed in
10-Ks, and by cross-checking a subset of our monitoring data against alliance data downloaded
from SDC Platinum. Overall, the evidence that we gathered showed that within infrastructure
software, the 10-Ks’ lists of competitors were accurate representations of monitoring of the
firm’s competition.4
Finally, we collected data on executive team characteristics with a comprehensive search
and triangulation of several sources: LexisNexis, Thomson ONE, Compustat, SEC filings, and
LinkedIn (Smith et al., 1994).5 We used triangulation of multiple sources, particularly proxy
statements and Compustat, to identify executives in our sample firms because it has been shown
to provide an accurate list (Hambrick, Humphrey, and Gupta, 2015). We also collected data on
firm and competitor financial indicators, including S&P index membership, firm size, financial
performance, R&D expenditures, and merger and acquisition activity from Compustat, Thomson
ONE, CapitalIQ, and SDC Platinum.
Measures
Dependent variable. We measured product introductions using new infrastructure
software products introduced by each firm yearly, collected from press releases. Product
introductions are an appropriate measure of competitive actions (Lee et al., 2000; Ndofor,
Vanevenhoven, and Barker, 2013; Young et al., 1996), particularly for public
firms in our setting. As one of our interviewees noted, “If you’re a public company, you need to keep
growing and the way to do that is to…create new products.”
4 One of our expert interviewees noted that industries differ in the extent to which competitors are listed in 10-Ks, but the norm in infrastructure software is to provide an accurate list of specific competitors.
5 The SEC requires that public firms disclose their principal executive officers, including “any other officer who performs a policy making function,” in major filings (Securities Act of 1933, Rule 501(f), 17 C.F.R. § 230.501(f)). Prior work has used SEC filings as a source of individuals at a firm who are making executive-level decisions, including about competition strategy.
As noted above, we carefully cross-referenced product descriptions with keywords from the Gartner IT Glossary in order to identify
enterprise infrastructure software products and excluded all consumer products and enterprise
application products. We also only included products that were confirmed to have shipped (i.e.
we excluded planned releases that never materialized). To qualify as new, we counted brand-new
products only (e.g., 1.0) and excluded ports of existing products to new operating systems.6
Independent variables. We measured structural holes spanning by the extent to which a
focal firm monitors competitors that do not monitor each other, presenting a bridging opportunity
for the focal firm. We used Burt's (1992) constraint measure of structural holes7 and reverse
coded the measure by multiplying by negative 1 in order to examine the effects of less
constrained network positions, i.e. structural holes spanning. In an alternate version of the
variable, we removed, per Burt (2010), from the network those competitors with only one or only
two indegree connections (i.e. competitors that only 1-2 firms in the entire network are
monitoring) to ensure that brokering opportunities are occurring between clusters of competitors
rather than between isolated competitors. Results using this alternative measure are highly
consistent with our main results (available from authors).
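The constraint computation described above can be sketched as follows. This is a minimal stdlib-Python illustration, not the authors' actual code: it assumes the monitoring network is stored as a firm-to-monitored-competitors mapping, that p_ij uses outgoing ties only, and that all ties are equally weighted.

```python
def constraint(edges, i):
    """Burt's (1992) constraint C_i = sum_j (p_ij + sum_q p_iq * p_qj)^2,
    where p_ij is the share of firm i's monitoring ties directed at j.
    `edges` maps each firm to the set of competitors it monitors
    (illustrative data layout; equal tie weights assumed)."""
    targets = edges.get(i, set())
    if not targets:
        return 0.0
    p = {j: 1.0 / len(targets) for j in targets}  # p_ij for each monitored j
    total = 0.0
    for j in targets:
        # Indirect exposure to j through the other monitored competitors q
        indirect = sum(
            p[q] * (1.0 / len(edges[q]) if j in edges.get(q, set()) else 0.0)
            for q in targets if q != j
        )
        total += (p[j] + indirect) ** 2
    return total
```

A firm whose monitored competitors are disconnected from one another (spanning structural holes) gets low constraint; monitoring a tightly interconnected cluster yields high constraint. The paper's spanning variable is this value multiplied by negative 1.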
We measured monitoring of peripheral competitors by examining whether each competitor
listed in the focal firm’s 10-K was also listed by other firms in the focal firm’s market(s), using
the lack of monitoring by firms other than the focal firm as the measure. In line with Greve and Taylor
(2000) and prior work’s measure of “outlier” competitors (Reger and Huff, 1993) we calculated
peripheral competitors as $\sum_j 1/d_j$, where $d_j$ is the number of firms in the focal firm’s markets that
listed competitor j in their 10-K, aggregated across all competitors listed by the focal firm.8
6 During the sample timeframe, there were three major enterprise server operating systems (Windows, Linux, and UNIX). Multi-homing, i.e. releasing a version of each product for every platform, is typical for our sample firms.
7 Constraint is measured as $C_i = \sum_{j \neq i} \left( p_{ij} + \sum_{q \neq i,j} p_{iq} p_{qj} \right)^2$ where $C_i$ is the constraint of firm i, $p_{ij}$ is the proportion of firm i's total ties invested in competitor j, and $p_{iq}$ and $p_{qj}$ are defined analogously for competitors j and q. The lower end of the theoretical range approaches but does not reach zero when every monitored competitor is disconnected from every other monitored competitor. The upper end of the theoretical range exceeds 1 in dense networks. While this is not a concern in our sparse monitoring networks, we test an alternate measure that standardizes constraint (i.e. divides by the maximum possible value) (Burt, 2004) with consistent results.
Because the primary measure is dependent on the number of firms listed in the 10-K, we also
tested a proportional measure, calculated as the proportion of monitored competitors that were
listed as a competitor by the focal firm and at most one other firm in the same markets. We also
tested a measure based on monitored competitors’ size (i.e. firms below a certain threshold in
sales) (Chen et al., 2007). Results were consistent.
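The peripherality score (the sum of 1/d_j over monitored competitors) can be illustrated with a short sketch. The variable names and data layout are hypothetical; consistent with the measure's definition, d_j counts the firms in the focal firm's market(s), including the focal firm itself, that list competitor j.

```python
def peripheral_monitoring(focal, listings, market_firms):
    """Sum over monitored competitors j of 1/d_j, where d_j is the
    number of firms in the focal firm's markets (focal included) that
    list j in their 10-K. `listings` maps firm -> set of competitors
    named in its 10-K; `market_firms` is the set of firms active in the
    focal firm's market(s). Illustrative data layout."""
    score = 0.0
    for j in listings[focal]:
        d_j = sum(1 for f in market_firms if j in listings.get(f, set()))
        score += 1.0 / d_j  # d_j >= 1 because the focal firm lists j
    return score
```

A competitor no one else lists contributes a full 1 to the score; a prominent competitor listed by many firms contributes close to 0, so higher scores indicate a more peripheral monitoring portfolio.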
Because competitive motivation is triggered by the intensity with which competitors pay
attention to the focal firm (Ferrier, 2001), we measured firm competitive motivation using
indegree centrality: c/(m-1) where c is the number of public infrastructure software firms that
listed the focal firm as a competitor (i.e. the number of inward ties to the focal firm) and m is the
total number of firms in the industry (i.e. the number of nodes in the network).
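The indegree-centrality measure c/(m-1) follows directly from the same 10-K listings data; a sketch with illustrative names:

```python
def competitive_motivation(focal, listings, m):
    """Indegree centrality c/(m-1): the share of the other m-1 firms in
    the industry that list the focal firm as a competitor in their 10-K.
    `listings` maps firm -> set of competitors named in its 10-K
    (illustrative data layout)."""
    c = sum(1 for f, comps in listings.items() if f != focal and focal in comps)
    return c / (m - 1)
```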
Because a firm’s competitive capability is defined by the extent of the firm’s competitive
experience (Miller and Chen, 1994), we used the number of years since the firm’s initial public
offering as a measure. Firms that have been public for longer have more competitive experience
to draw upon (private-firm competitive experiences are likely to be significantly different;
Hoehn-Weiss and Karim, 2014; Hoehn-Weiss and Pahnke, 2018). As a robustness check, we
also tested alternative measures of competitive capability including firm size (since larger firms
have more resources with which to compete), with broadly consistent results.
8 If the focal firm is the only firm to list competitor j, dj takes a value of 1. The upper end of the theoretical range is m, where m is the total number of competitors monitored by the focal firm, and would only occur if every monitored competitor was not monitored by anyone else in the focal firm’s markets. The lower end of the theoretical range approaches but does not reach zero when all monitored competitors are prominent i.e. monitored by a high number of other firms. We also tested the results with an alternative measure of this variable that only included monitored competitors in the same markets as the focal firm. Results were broadly consistent.
Controls. We included several controls. Because firm diversification may influence
product introductions by creating internal opportunities for cross-pollination (Ahuja and Katila,
2001), we controlled for firm scope, measured as the number of infrastructure software markets
in which the focal firm developed products. We also controlled, in alternative tests, for firm
monitoring intensity (measured as the number of competitors monitored by the focal firm divided
by the maximum possible number of competitors i.e. the firm's standardized outdegree
centrality), with consistent results. As an additional measure, we also controlled for firm size by
the number of corporate employees yearly (in thousands), logged to mitigate skew, and by the
firm’s annual revenues (in millions of U.S. dollars), inflation-adjusted, with consistent results.
Because firm scope was highly correlated with the other measures, we used scope as the primary
control.
Because declining performance may make firms either less likely or, conversely, more
likely to introduce new products (performance may influence the urgency to out-compete
rivals), we controlled for firm performance, measured as return on sales (Young et al.,
1996). In a sensitivity test, we alternatively controlled for firm growth (measured as number of
employees in year t divided by number of employees in year t-1) and revenue growth (measured
as revenue in year t divided by revenue in year t-1), with consistent results.
Because investment in R&D is likely to influence new products, we also controlled for
firm R&D intensity, measured by dividing R&D expenditure by total sales annually. In a
sensitivity analysis we used R&D expenditures (inflation-adjusted and logged), with consistent
results.
We controlled for top executive team turnover of each firm because prior research has
tied top management team composition to competitive agility (Hambrick, Cho, and Chen, 1996).
We measured turnover by the number of executives who joined or departed the executive team
yearly. We counted joining and departing as separate events because team size and roles were
not fixed and not all executives who departed were replaced. As in prior work, we defined the
firm’s top executives as those employees listed as executive officers in the firm’s proxy
statements, cross-validated with Compustat (Hambrick et al., 1996).
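The turnover count, with joins and departures as separate events, amounts to a symmetric-difference computation over yearly team rosters; a minimal sketch (rosters as sets of names is our illustrative assumption):

```python
def executive_turnover(prev_team, curr_team):
    """Yearly top-team turnover: joiners plus departures, counted as
    separate events, since team size was not fixed and departing
    executives were not always replaced. Teams are sets of executive
    identifiers (illustrative)."""
    joined = curr_team - prev_team
    departed = prev_team - curr_team
    return len(joined) + len(departed)
```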
Because firms with products in multiple markets are likely to face more competition, and
likely to enact more competitive moves, we controlled for competitor density by the total number
of public firms that developed products in the focal firm’s market(s). This measure aggregated
competitors that launched competitive actions across all the firm’s markets, but competitors that
overlapped with the firm in more than one market were only counted once. We also tested
several alternatives, including a logged version of the measure. Because firms that are similar in
size may exert stronger competitive pressures (Chen et al., 2007), we alternatively controlled for
density of similarly-sized competitors in the firm’s markets, counting a firm as a competitor if its
annual revenues were in the same quartile as the focal firm’s. We alternatively controlled for
density of large competitors as the number of competitors in the firm’s markets with annual
revenues over $1 billion, because large firms may exert more intense pressure than small firms
do. Results are consistent across all measures.
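The density measure, counting each overlapping competitor only once across the focal firm's markets, is a set union over market memberships; a sketch with an illustrative data layout:

```python
def competitor_density(focal, market_members):
    """Number of distinct public firms developing products in any of the
    focal firm's market(s); a competitor overlapping the focal firm in
    several markets is counted once. `market_members` maps each market
    to the set of firms active in it (illustrative)."""
    focal_markets = [m for m, firms in market_members.items() if focal in firms]
    competitors = set().union(*(market_members[m] for m in focal_markets)) - {focal}
    return len(competitors)
```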
We also controlled for any unobserved market effects with five market segments in which
our sample firms operated. We included controls for five standard markets based on Gartner’s IT
Glossary—developer tools, integration and middleware, database management, security, network
and system management—setting each binary variable to one if the firm had at least one product
in that market in a year and zero otherwise. Firms in the sample entered and exited market
segments during the time period of our study, creating variation over time.
We also controlled for three geographic regions with high numbers of enterprise software
firms, i.e., San Francisco, Boston, and Los Angeles (Orange County) because knowledge
spillovers within a region can enhance product development (Owen-Smith and Powell, 2004).
Region effects drop out from the fixed effects Poisson regressions that we report in the tables but
are included in sensitivity analyses (e.g., random effects or negative binomial regressions).
We controlled for macroeconomic variation as well as possible year-to-year fluctuations
in technologies which may influence product opportunities by including unreported year fixed
effects. Lastly, we included a two-year lagged dependent variable (i.e. new products in year t-2)
to control for time-variant firm heterogeneity and to further enhance causal inference (Heckman
and Borjas, 1980). Because standard errors can be artificially reduced in models with a lagged
dependent variable, we also ran a model excluding the lagged dependent variable, with
consistent results. Furthermore, the use of a two-year rather than one-year lag helps reduce the
potential bias on standard errors. Standard errors were highly consistent across models with or
without the lagged dependent variable.
Statistical Method
Because our dependent variable is a count variable, our main analysis used fixed effects
Poisson models. Fixed effects models help control for any baseline (i.e. time-invariant)
heterogeneity between firms, and were preferred over random effects by the Hausman test
(Hausman, 1978). We also ran several alternative specifications as robustness checks. Because
fixed effects models drop any firms with two or fewer observations or firms that do not exhibit
variation in the dependent variable over time, we also report random effects Poisson results.
Because our dependent variable exhibits signs of over-dispersion, we also ran fixed effects
negative binomial regressions, with consistent results (available from the authors). While most
firms introduce at least one new product each year, a few firms introduce none, and so we also
verified our results with zero-inflated Poisson models, which produced consistent results.
While we controlled for as many relevant factors as possible in our models, there may be
unobservable variables, such as firm quality, that also influence new products and thus could bias
our results. We attempted to reduce potential bias in several ways. First, as noted above, we
included firm fixed effects in all models, to control for unobserved, time-invariant variation
between firms. We also included several variables to control for time-variant firm characteristics
and a lagged dependent variable. To further account for unobserved firm heterogeneity, we ran a
model in which we included a presample products variable (Blundell, Griffith, and Van Reenen,
1995) —that is, we controlled for products introduced by the firm three years prior to the study
period. Because it is plausible that firms that have been active in product development in the past
will continue to be so, the presample variable accounts for such unobserved heterogeneity that
may otherwise influence the results (Heckman and Borjas, 1980).
Causal Inference
To further facilitate causal inference, we lagged all independent and control variables by
one year. We also examined the extent to which potential endogeneity exists in our sample by
running Durbin and Wu-Hausman tests for both of our main explanatory variables (Durbin,
1954; Hausman, 1978; Wu, 1973). Although the tests (reported below) did not detect
endogeneity, and our main estimates using observed data are consistent with our instrumental
variables results, we added an instrumental variables analysis out of an abundance of caution and
tested several alternative explanations for our findings (detailed below).
Through instrumental variables analysis we attempted to control for unobserved factors
that are simultaneously related to how firms monitor competition and to new products (e.g.
“high-quality” firms may monitor more selectively and introduce more products). Running a
two-stage instrumental variables analysis that we document below allows us to provide more
assurance that differences in monitoring of particular competitors are specifically related to
differences in new products.
We instrument for competitor monitoring using an interaction between a major regulatory
event and firm visibility. The regulatory event is the landmark United States v. Microsoft Corp
antitrust case, in which Microsoft was found to have obstructed competition in software. A few
of our interviewees mentioned that firms were pushed to consider competition more thoughtfully
and more broadly following the case. In particular, U.S. v. Microsoft was the first major
regulatory action taken against a software firm, which led to concern over increased enforcement
in the future (Liebeler, 2002).
However, the effects of the Microsoft case were not equal among all software firms.
Rather, it was apparent from field evidence that more visible firms were likely to feel more
vulnerable to increased antitrust enforcement. Because inclusion in an S&P index draws
increased attention from investors and other stakeholders (Aghion, Van Reenen, and Zingales,
2009), we followed Aghion and colleagues (2009) and Clay (2002) and measured visibility with
a firm’s membership in a major Standard & Poor’s stock index (i.e., the S&P 1500, which
includes the S&P LargeCap 500, the S&P MidCap 400, and
the S&P SmallCap 600). Greater attention to a firm’s stock draws greater attention to the firm
itself, increasing visibility. We measured Member of S&P 1500 as a binary variable set to 1 if the
firm is a member of the S&P 1500. S&P index membership tracks a firm’s visibility but is
unlikely to be correlated with product performance because inclusion is based not on
expectations of strong future performance but rather on the extent to which a stock contributes to a
balanced representation of the overall economy (Standard & Poor's, 2013).
Because we expect S&P index membership (i.e. visibility) to be related to monitoring of
competitors in the years following the Microsoft antitrust ruling, we instrument our explanatory
variables with an interaction between the focal firm’s S&P index membership and the timing of
the Microsoft ruling in June 2000 (Post-Microsoft ruling measured as a binary variable set to 1 if
the year is after 2000). Thus, the interaction of these variables (which serves as the instrument)
takes a value of 1 when the year is after 2000 and the firm is a member of an S&P 1500 index.
Because our dependent variable is a count of new products, we ran instrumental variables
Poisson regressions (reported in table A3). The first stage uses the instrumental variables and
control variables to predict the potentially endogenous variable using a linear OLS model. The
second stage uses the predicted values of the endogenous variable to predict counts of new
products, using a Poisson model.
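The two-stage procedure described above can be sketched on synthetic data. This is a simplified illustration with hypothetical names, effect sizes, and a single instrument (not our actual estimates): stage one projects the endogenous monitoring variable onto the instrument with OLS, and stage two fits a Poisson model on the fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical data-generating process: u is an unobserved confounder (e.g.
# "firm quality") that drives both monitoring and new products; z mimics the
# instrument (post-ruling x S&P membership) and affects products only
# through monitoring.
u = rng.normal(size=n)
z = rng.binomial(1, 0.25, n).astype(float)
monitoring = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = rng.poisson(np.exp(0.2 + 0.3 * monitoring - 0.5 * u))

def fit_poisson(X, y, iters=50):
    """Poisson regression via IRLS (Newton-Raphson for the log link)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        work = X @ beta + (y - mu) / mu
        XW = X * mu[:, None]
        beta = np.linalg.solve(X.T @ XW, XW.T @ work)
    return beta

ones = np.ones(n)

# Stage 1: linear (OLS) projection of the endogenous variable on the instrument
Z1 = np.column_stack([ones, z])
gamma, *_ = np.linalg.lstsq(Z1, monitoring, rcond=None)
monitoring_hat = Z1 @ gamma

# Stage 2: Poisson regression of the count outcome on the fitted values
beta_iv = fit_poisson(np.column_stack([ones, monitoring_hat]), y)

# Naive Poisson (ignoring the confounder), for comparison
beta_naive = fit_poisson(np.column_stack([ones, monitoring]), y)
print(beta_naive[1], beta_iv[1])
```

In this setup the naive slope is biased downward by the confounder, while the instrumented slope lands near the true value, which conveys the intuition for why the two-stage estimates provide assurance against unobserved firm quality.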
RESULTS
Table 1 reports descriptive statistics and correlations. The average firm in the population
develops products in 1-2 markets, and faces about 36 other firms across markets. Out of these 36
potential competitors, the focal firm monitors about 6 on average. Our data are thus consistent
with prior work that shows that executives monitor a limited number of competitors, and
specifically about 6-7 competitors on average (Clark and Montgomery, 1999). As expected,
spanning structural holes is relatively uncommon in the data (the mean constraint measure is 0.32), which
indicates that an average firm monitors clusters of competitors that are moderately connected
rather than completely disconnected. Of the competitors that the firm monitors, on average 14%
are “peripheral” i.e. mostly ignored by other firms in the same markets.9 It is also noteworthy
that 67% of monitoring ties in our data are unidirectional, i.e. the focal firm monitors a
competitor that does not monitor the focal firm in return. The average firm that we study releases
2-3 new products per year.
All three measures of competitor monitoring exhibit high variation. Among explanatory
variables that are included in regression models simultaneously, correlations are mostly low to
moderate. Variance inflation factors (VIFs) for most independent variables (Menard, 2001),
9 Of the monitoring ties that are bidirectional, there is no pattern of peripheral firms monitoring other peripheral firms (i.e. dyads of network isolates are not common in the data).
including our hypothesized variables of peripheral monitoring and spanning structural holes10,
were less than the recommended cut-off value of 5.0. The only exceptions were controls for firm
performance, firm R&D intensity, and competitive density. We tested the results with and
without these controls, with consistent results. To address potential multicollinearity between
main effects and interaction terms, we mean-centered the variables (Cronbach, 1987).
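Both diagnostics can be sketched on synthetic data (variable names and distributions here are hypothetical). The VIF for predictor j is 1/(1 - R²_j), where R²_j comes from regressing predictor j on the others; mean-centering before forming an interaction term reduces its correlation with the main effects:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

def vif(X):
    """Variance inflation factor for each column of X: 1 / (1 - R^2_j), where
    R^2_j comes from regressing column j on an intercept plus the others."""
    n_obs, k = X.shape
    out = np.empty(k)
    for j in range(k):
        yj = X[:, j]
        Z = np.column_stack([np.ones(n_obs), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, yj, rcond=None)
        resid = yj - Z @ coef
        tss = ((yj - yj.mean()) ** 2).sum()
        out[j] = 1.0 / (resid @ resid / tss)   # = 1 / (1 - R^2_j)
    return out

# Hypothetical predictors with nonzero means, plus their interaction
holes = rng.normal(1.0, 0.5, n)       # e.g. structural holes spanning
motiv = rng.normal(2.0, 0.5, n)       # e.g. competitive motivation
raw = np.column_stack([holes, motiv, holes * motiv])

# Mean-centering before forming the interaction term
hc, mc = holes - holes.mean(), motiv - motiv.mean()
centered = np.column_stack([hc, mc, hc * mc])

print(vif(raw))       # raw interaction is highly collinear with main effects
print(vif(centered))  # VIFs fall back toward 1 after centering
```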
Regression analysis
Firm fixed effects analysis. We first ran a Hausman test to determine whether fixed
effects or random effects models were more appropriate (Hausman, 1978). The Hausman
specification test revealed systematic differences in coefficients when estimating fixed versus
random effects, indicating that fixed effects were appropriate. Fixed effects Poisson panel
regression results are reported in Table 2 models 1-6 and random effects in model 7. Results for
control variables (Model 1) are in line with expectations. More diversified, higher-performing,
and more R&D intensive firms introduce more new products.
Table 2 and figures 3-5 about here
Hypothesis 1 predicted that spanning structural holes in competitor monitoring is
positively-related to product introductions. The coefficient is positive and significant across
models, supporting hypothesis 1. A one standard deviation increase from the mean in spanning
structural holes (which roughly equates to one additional structural hole) yields one additional
product per year.
Hypothesis 2 predicted that competitive motivation would strengthen the relationship
between structural holes spanning and product introductions. The interaction is positive and
significant across models, supporting hypothesis 2. Because point significance in non-linear
10 VIFs for structural holes spanning and peripheral competitors are 3.73 and 3.47, respectively, indicating the collinearity is unlikely to affect our results. Moreover, our results are robust to a stricter measure of spanning structural holes that specifically excludes ties between peripheral competitors (Burt, 2010).
models does not always imply significance over the whole range of data (Hoetker, 2007), Figure
3 plots the effect of structural holes on new products for two levels of firm competitive
motivation. The results demonstrate that spanning structural holes has a significantly more
positive effect on new products at high levels of competitive motivation: at high levels, a one
standard deviation increase in structural holes spanning (i.e. an additional structural hole) yields roughly
three additional products per year (3.2 products). At low levels of competitive motivation,
however, additional structural holes do not yield any additional products.
Hypothesis 3 predicted that monitoring peripheral competitors is negatively-related to
product introductions. The coefficient is negative and significant in the full model, supporting
the hypothesis. A one standard deviation increase from the mean in monitoring peripheral
competitors, which roughly equates to adding two competitors that no other firm is monitoring,
decreases product introductions by roughly half a product per year (0.4 products), or roughly one
fewer product every two years.
Hypothesis 4 predicted that competitive capability would amplify the relationship
between monitoring peripheral competitors and product introductions. The coefficient on the
interaction term is positive and significant, supporting hypothesis 4. Figure 4 plots the effect for
two levels of competitive capability. The results demonstrate that monitoring peripheral
competitors has a particularly negative influence on product introductions among the least
experienced firms, but a mildly positive influence for more experienced firms. At low levels of
competitive capability, a one standard deviation increase in monitoring peripheral competitors
decreases product introductions by 0.4 products per year. At high levels of competitive
capability, however, a one standard deviation increase in monitoring peripheral competitors
increases product introductions by 0.5 products per year.
Hypothesis 5 predicted that structural holes spanning would weaken the negative
relationship between monitoring peripheral competitors and product introductions. The
coefficient on the interaction term is positive and significant. To aid in interpretation, Figure 5
plots the effect of monitoring peripheral competitors on product introductions at high and low
levels of structural holes spanning. The results demonstrate that the negative relationship
between peripheral competitors and new products is even stronger when monitoring dense,
tightly-connected clusters of competitors. At low levels of structural holes spanning, a one
standard deviation increase from the mean in monitoring peripheral competitors reduces product
introductions by 0.8 products per year. In contrast, at high levels of structural holes spanning the
effect of monitoring peripheral firms starts to turn mildly positive. Hypothesis 5 is therefore
supported. Random effects results for all hypotheses (model 7) are consistent.
Instrumental variables analysis. We first examined the extent to which potential
endogeneity exists in our sample. We ran Durbin and Wu-Hausman tests for both of our main
explanatory variables (Durbin, 1954; Hausman, 1978; Wu, 1973). For structural holes, the
Durbin and Wu-Hausman test statistics are both 0.24 (p=.63). For peripheral competitors, the
Durbin and Wu-Hausman test statistics are both 0.04 (p=.84). The lack of significance suggests
that bias from unobserved firm heterogeneity is not a concern for either of our main explanatory
variables and that our main results’ estimates using observed data are preferred to instrumental
variables results.
However, for comprehensiveness, we also ran an instrumental variables analysis (details
in Methods). Our field interviews and archival research documented that in the software
industry, the Microsoft antitrust case sparked an increase in attention to competition. Our
interviewees noted that, to avoid antitrust scrutiny, firms felt an increased urgency to keep
software markets competitive, providing face validity for the instrument.
Tables A1-A2 report descriptive comparisons of product introductions and competitor
monitoring for the treatment and control groups in the three years before and after the Microsoft
ruling and show descriptive evidence that the trial had little effect on competitor monitoring in
the control group, while intensifying competitor monitoring for the treated firms (table A1), as
expected. Given the timing of the Microsoft case in 2001, an obvious concern with the
instrument is the influence of macroeconomic trends. It is noteworthy that the downturn in the
U.S. economy in 2001 reduced product introductions for both the treated and the control groups,
and did so significantly more for the treated than for the control group (a 32% vs. 20% decrease
in table A2), making our estimates more conservative. Thus, the data in table A2 potentially
reduce the concern that changes in the economy, independent of the antitrust case, would be
confounding.
Tables A1 and A2 about here
We then examined the relevance and validity of our instruments. First, we examined
whether the instrument was relevant (i.e. had an effect in the first stage) using a Stock-Yogo test
(Stock and Yogo, 2005). The first-stage F-statistic for structural holes spanning is 4.61, while
the F-statistic for peripheral competitors is 10.92. This indicates that our instruments are somewhat
weak (a typical challenge in organizations research) and so instrumental variables results should
be interpreted with caution.
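For intuition, the first-stage (instrument-relevance) F-statistic compared against Stock-Yogo critical values is a nested-model F-test on the excluded instruments. A sketch on synthetic data (hypothetical effect sizes; the conventional rule-of-thumb threshold for a strong instrument is an F of about 10):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

def first_stage_F(x, instruments, controls):
    """F-statistic for the excluded instruments: compare the first-stage
    regression of the endogenous variable x with and without them."""
    def rss(A):
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        r = x - A @ coef
        return r @ r
    n_obs = x.shape[0]
    restricted = np.column_stack([np.ones(n_obs), controls])
    unrestricted = np.column_stack([restricted, instruments])
    q = instruments.shape[1]                  # number of excluded instruments
    df = n_obs - unrestricted.shape[1]
    return ((rss(restricted) - rss(unrestricted)) / q) / (rss(unrestricted) / df)

ctrl = rng.normal(size=(n, 1))                # a hypothetical control variable
z = rng.normal(size=(n, 1))                   # a hypothetical instrument
noise = rng.normal(size=n)

x_strong = 0.5 * z[:, 0] + ctrl[:, 0] + noise   # instrument explains a lot
x_weak = 0.02 * z[:, 0] + ctrl[:, 0] + noise    # instrument explains little

print(first_stage_F(x_strong, z, ctrl))   # well above the rule-of-thumb 10
print(first_stage_F(x_weak, z, ctrl))     # much smaller: a weak instrument
```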
Second, we examined whether the instrument was valid (i.e. uncorrelated with the error
term in the second stage). A review of the antitrust literature indicated that there was no evidence
of a systematic effect of the Microsoft antitrust case on software firms’ product development
(Page and Childers, 2007; Pitofsky, 2001). Economic modeling further suggests that, in fast-
moving technology industries, antitrust enforcement has both positive and negative influences on
innovation incentives that are ultimately likely to cancel each other out (Segal and Whinston,
2007). Our quantitative tests using Hansen’s J-statistic for nonlinear models (Hansen and
Singleton, 1982) provided confirmation. The test statistic for structural holes spanning is 3.46
(p=.18) and the test statistic for peripheral competitors is 6.23 (p=.04), indicating that our
instrument is valid for structural holes but not valid for peripheral competitors. In light of these
results, we ran the instrumental variables analysis only for hypothesis 1.
Table A3 about here
Two-stage instrumental variables Poisson regression results for spanning structural holes
are reported in table A3. Model 1 reports results from a first-stage linear regression. As noted
above, we instrument structural holes spanning with an interaction between the Microsoft ruling
and S&P index membership (just under 25% of the sample firms were part of the S&P 1500). As
expected, the coefficient on the interaction term between Post-Microsoft ruling and Member of
S&P 1500 is positive and significant (p=.007). Models 2-3 report second-stage Poisson results,
with the instrumented value for structural holes spanning added in Model 3. The coefficient is
positive and significant (p=.02), lending further support for hypothesis 1. Altogether, although
the tests reported above (Durbin, 1954; Hausman, 1978; Wu, 1973) suggest that endogeneity is not
a concern in our data, our additional instrumental variables analyses provide further confidence
in our main findings.
Sensitivity analyses. One alternative explanation for our findings is that firms may be
both more conscious of opportunities to span structural holes and more likely to introduce new
products when they diversify to new markets. We therefore tested the robustness of our results to
dropping years prior to firm entries into new infrastructure software markets, by excluding one
year and two years prior to the release of a product in a new market. Results (available from the
authors) are consistent, indicating that the observed effects are probably not simply driven by
plans for expansion.
A related question is whether firms in general, as they consider a significant strategic
move, such as a market entry, have a tendency to strategically hide their monitoring of certain
competitors, so as not to reveal their intentions. However, hiding would make it harder for us to
find results (as new competitors that are monitored would not be listed) and thus is not a likely
explanation. Another somewhat related question is whether some firms would strategically over-
monitor lower-performing firms to appear relatively stronger to their investors (and as a
consequence low performance of monitored competitors rather than network position would
potentially explain the results on peripherality). However, again, this seems an unlikely
explanation as the correlation between peripherality and financial performance is very low,
indicating that peripheral (i.e. less-monitored) firms are not necessarily low performers. In fact,
combined with the observation from our first-stage models predicting peripheral-firm monitoring
that higher- rather than lower-performing firms are more likely to monitor a peripheral
competitor, these additional analyses raise an interesting question for future work: who are the
peripheral firms, and might they be newcomers to the network?
Another alternative explanation for our findings is that executives may be more likely to
monitor peripheral competitors or monitor competitors from disparate clusters (span structural
holes) when they are interested in acquiring those firms. In other words, monitoring of non-local
competitors and product introductions go hand in hand because firms monitor acquisition targets,
and acquisitions in turn boost the product introductions of the acquiring firm (Ahuja and Katila,
2001). We tested for this alternative explanation by removing competitors that the focal firm
later acquired (within one year, within two years, or at any later point) from our measures of
structural holes and peripheral competitors. Results were again consistent.
DISCUSSION
We started the paper with the observation that despite the rich insights on awareness of
local competitors, the AMC perspective does not yet incorporate understanding of how
positioning in rivalry networks may influence competitive action. Our purpose was to extend the
AMC framework, and to show that not all competitive awareness is beneficial. Drawing on
research on competitive dynamics, and on a longitudinal analysis of 121 infrastructure software
firms from 1995 to 2012, a key finding is that positioning within competitive networks matters:
monitoring some competitors but not others can be a source of advantage over rivals facing the
same competitive environment. Controlling for intensity of competition in the firm’s markets,
and for firm heterogeneity, we find that product introductions are increased when firms span
structural holes in the rivalry network, and typically reduced when peripheral competitors are
added. Effect sizes are substantial. For example, adding one additional structural hole is related
to one additional product per year, while adding a peripheral competitor is related to roughly one
fewer product every two years, in a context where the average firm only introduces 2-3 products
per year. At high levels of competitive motivation the effect is even more dramatic, as an
additional structural hole yields three additional products per year. Altogether, our findings
indicate that competitive awareness is not simply a matter of who the firm decides to monitor,
but how those choices position the firm within the wider network of competitor monitoring
within an industry. These findings have implications for research on competitive dynamics.
Contributions to Competitive Dynamics
We make several contributions to research on competitive dynamics. First, we highlight
previously-unexamined variation in how firms think about competition and so offer a more
accurate and nuanced view of competitive dynamics. Most frameworks on competition imply
that ties between competing organizations form such that they aggregate into relatively
homogenous “strategic groups” (Smith et al., 1997). As a result, prior work has focused on
awareness of rivals that are “local” or in dense (i.e. tightly-connected) clusters (Chen et al.,
2007; Ferrier et al., 1999). In contrast, we expand to those monitored competitors that are
“further afield,” e.g. on the periphery of the network or spanning structural holes. Further, we
find that firms facing the same competitive environment vary in their propensity to monitor these
“non-local” competitors, which creates variation in network positioning. As one industry
informant noted, there is never “definitive data” about competition, just “a lot of built-in
assumptions.” Highlighting the existence of these varied positions in rivalry networks is
therefore one of the core contributions of our work.
We further contribute by examining the implications of these varied network positions for
competitive actions. Prior research suggests multiple, contrasting influences of spanning
structural holes in competitive networks (Chen and Miller, 2012; Tsai et al., 2011). Tsai et al.
(2011) argue for a negative influence, as closed networks help the focal firm understand the
behaviors of its primary competitors better. In contrast, we find that spanning structural holes has
a positive influence on competitive action, by prompting the firm to respond to competition
through product introductions. An exemplar case is two similar security software firms in our
data: Axent Technologies and Cyberguard. Axent, which spanned structural holes, introduced
more new products and grew rapidly. In contrast, Cyberguard, which monitored a denser, closed
network, struggled to launch products and grew more slowly.
We also contribute by showing that not all positions in rivalry networks are beneficial; in
particular, monitoring peripheral firms can be harmful. In rivalry networks, the objectives of
monitored firms are to compete rather than collaborate, and the knowledge that is obtained in
particular from peripheral firms (that are not monitored by many other firms) is not “kept in
check” by other network members, and so is less likely to be thoroughly vetted. Perhaps for these
reasons, we find that monitoring of peripheral firms in rivalry networks is detrimental. Further,
we find that more competitively experienced firms can alleviate these negative effects, possibly
because they are better able to distinguish trustworthy knowledge or better able to connect the
dots in case of missing information. Overall, then, the competitive objectives of network
members introduce an additional constraint on positioning and explain why effective positioning
in rivalry networks differs from positioning in collaboration networks.
Finally, we contribute by specifically examining networks of monitoring relationships
that we label as rivalry networks, and by using fine-grained new measures of awareness. Prior
work on rivalry networks has typically conceptualized competitive relationships through product
or geographic overlap (Hsieh and Vermeulen, 2014; Tsai et al., 2011). From this perspective,
competition is a characteristic of the firm’s environment that is generally symmetric (i.e. mutual)
between firms. In contrast, we build a network in which “ties” represent unilateral monitoring
relationships, allowing for variation between firms facing the same environment (e.g. firms in the
same product markets) and asymmetry (monitoring is not always reciprocated), and in fact
highlight the intriguing pattern that a vast majority (67%) of competitive ties, at least in
enterprise software, are uni- rather than bi-directional. Analyzing a rivalry network thus gives
us richer insight into which firms are truly “peripheral” in the eyes of other firms within an
industry, and lets us identify where a lack of monitoring between clusters of firms creates
opportunities for brokered information flows.
There are several avenues for future work. We examined unilateral monitoring through 10-
Ks and analyst calls. Future work could expand to bilateral types of monitoring such as board
interlocks or exchange relationships (Vissa, 2011), or expand to specifically examine cases
where competitor monitoring is symmetric (both parties monitor each other) vs not. Further,
while our study focused on the consequences of monitoring, future work could also investigate
what drives some firms to think more broadly about competition in the first place. Prior work on
the drivers of competitor monitoring has focused on firm characteristics, such as firm size or
geographic location (Chen et al., 2007). Future work could investigate, for instance, whether
variation in executive team or board characteristics is associated with differences in monitoring
(c.f., Connelly, Ferrier et al., 2017). Finally, we focused on when firms respond to competition
by building up their product offerings. Future work could examine when firms instead respond
by cutting prices to become the lowest-cost option, or by increasing spending on sales and
marketing to stay ahead of competitors in the eyes of customers. These are interesting avenues
for future research.
CONCLUSION
While research has uncovered several “levers” that executives use to facilitate new
product introductions, the firm’s understanding of competition has received less attention than it
deserves. Our analysis suggests that executives can improve competitive actions by thinking
about positioning of their firm within the wider network of competitor monitoring relationships
within an industry. Competition is thus not merely an obstacle to overcome but rather a pathway
to strategic advantage. As one CEO put it, “Celebrate competition…It’s good to have a bad guy.”
REFERENCES
Aghion P, Van Reenen J, Zingales L. 2009. Innovation and institutional ownership. NBER Working Paper, NBER Working Paper.
Ahuja G. 2000. Collaboration networks, structural holes, and innovation: A longitudinal study. Administrative Science Quarterly 45(3): 425–455.
Ahuja G, Katila R. 2001. Technological acquisitions and the innovation performance of acquiring firms:A longitudinal study. Strategic Management Journal 22(3): 197–220.
Blundell R, Griffith R, Van Reenen J. 1995. Dynamic count data models of technological innovation. The Economic Journal 105(429): 333–344.
Brown S V, Tucker JW. 2011. Large-sample evidence on firms’ year-over-year MD&A modifications. Journal of Accounting Research 49(2): 309–346.
Burt RS. 1992. Structural holes: The social structure of competition. Harvard University Press: Cambridge, MA.
Burt RS. 2004. Structural holes and good ideas. American Journal of Sociology 110(2): 349–399. Burt RS. 2010. Neighbor Networks: Competitive Advantage Local and Personal. Oxford
University Press: Oxford, UK. Chen M-J. 1996. Competitor analysis and interfirm rivalry: Toward a theoretical integration.
Academy of Management Review 21(1): 100–134. Chen M-J, Hambrick D. 1995. Speed, stealth, and selective attack: How small firms differ from
large firms in competitive behavior. Academy of Management Journal 38(2): 453–482. Chen M-J, Lin H, Michel JG. 2010. Navigating in a hypercompetitive environment: The role of
action aggressiveness and TMT integration. Strategic Management Journal 31(13): 1410–1430.
Chen M-J, Miller D. 1994. Competitive attack, retaliation, and performance: An expectancy-valence framework. Strategic Management Journal 15: 85–102.
Chen M-J, Miller D. 2012. Competitive dynamics: Themes, trends, and a prospective research platform. Academy of Management Annals 6: 1–83.
Chen M-J, Su K-H, Tsai W. 2007. Competitive tension: The awareness-motivation-capability perspective. Academy of Management Journal 50(1): 101–118.
Clark B, Montgomery D. 1999. Managerial identification of competitors. Journal of Marketing 63(3): 67–83.
Clay DG. 2002. Institutional ownership and firm value. SSRN Working Paper. Connelly B, Tihanyi L, Ketchen D, Carnes C, Ferrier, W. 2017. Competitive repertoire
complexity: Governance antecedents and performance outcomes. Strategic Management Journal, 38: 1151-1173.
Coombs R, Narandren P, Richards A. 1996. A literature-based innovation output indicator. Research Policy 25(3): 403–413.
Cronbach LJ. 1987. Statistical tests for moderator variables: Flaws in analyses recently proposed. Psychological Bulletin 102(3): 414–417.
Dedman E, Lennox C. 2009. Perceived competition, profitability and the withholding of information about sales and the cost of sales. Journal of Accounting and Economics. Elsevier 48(2–3): 210–230.
Durbin J. 1954. Errors in variables. Review of the International Statistical Institute 22(1): 23–32. Ferrier WJ. 2001. Navigating the competitive landscape: The drivers and consequences of
competitive aggressiveness. Academy of Management Journal 44(4): 858–877. Ferrier WJ, Smith KG, Grimm CM. 1999. The role of competitive action in market share
erosion: A study of industry leaders and challengers. Academy of Management Journal
44
42(4): 372–388. Galunic DC, Rodan S. 1998. Resource Recombinations in the firm: Knowledge structures and
the potential for Schumpeterian innovation. Strategic Management Journal 19(12): 1193–1201.
Gimeno J. 2004. Competition within and between networks: The contingent effect of competitive embeddedness on alliance formation. Academy of Management Journal 47(6): 820–842.
Greve H, Taylor A. 2000. Innovations as catalysts for organizational change: Shifts in organizational cognition and search. Administrative Science Quarterly 45: 54–80.
Gulati R, Singh H. 1998. The architecture of cooperation: Managing coordination costs and appropriation concerns in strategic alliances. Administrative Science Quarterly 43(4): 781–814.
Hambrick D, Cho T, Chen M. 1996. The influence of top management team heterogeneity on firms’ competitive moves. Administrative Science Quarterly 41(4): 659–684.
Hambrick D, Humphrey S, Gupta A. 2015. Structural interdependence within top management teams: A key moderator of upper echelons predictions. Strategic Management Journal 36: 449–461.
Hansen L, Singleton K. 1982. Generalized instrumental variables estimation of nonlinear rational expectations models. Econometrica 50(5): 1269–1286.
Hausman J. 1978. Specification tests in econometrics. Econometrica 46(46): 1251–1271. Heckman J, Borjas G. 1980. Does unemployment cause future unemployment? Definitions,
questions, and answers. Economica 47(187): 257–283. Hoberg G, Phillips G. 2016. Text-based network industries and endogenous product
differentiation. Journal of Political Economy 124(5): 1423–1465. Hoehn-Weiss M, Karim S. 2014. Unpacking functional alliance portfolios: How signals of
venture viability affect new-venture outcomes. Strategic Management Journal 35(9): 1364-1385.
Hoehn-Weiss M, Pahnke EC. 2018. Competition and cooperation in the craft chocolate industry. Paper presented at the Strategic Management Society Annual Meetings.
Hoetker G. 2007. The use of logit and probit models in strategic management research: Critical issues. Strategic Management Journal 28: 331–343.
Hoffmann W, Lavie D, Reuer J, Shipilov A. 2018. The interplay of competition and cooperation. Strategic Management Journal 39(12): 3033–3052.
Hsieh K-Y, Vermeulen F. 2014. The structure of competition: How competition between one’s rivals influences imitative market entry. Organization Science 25(1): 299–319.
Kim K, Gopal A, Hoberg G. 2016. Does product market competition drive CVC investment? Evidence from the U.S. IT industry. Information Systems Research 27(2): 259–281.
Kumar P, Zaheer A. 2018. Ego-network stability and innovation in alliances. Academy of Management Journal: 1–53.
Lee H, Smith KG, Grimm CM, Schomburg A. 2000. Timing, order, and durability of new product advantages with imitation. Strategic Management Journal 21: 23–30.
Lewellen S. 2013. Executive compensation and peer effects. Working paper, Yale University.
Li F, Lundholm R, Minnis M. 2013a. A measure of competition based on 10-K filings. Journal of Accounting Research 51(2): 399–436.
Li Q, Maggitti P, Smith K, Tesluk P, Katila R. 2013b. Top management attention to innovation: The role of search selection and intensity in new product introductions. Academy of Management Journal 56(3): 893–916.
Liebeler L. 2002. Comments of the Computing Technology Industry Association on the revised proposed final judgment in United States v. Microsoft.
Madhavan R, Gnyawali DR, He J. 2004. Two's company, three's a crowd? Triads in cooperative-competitive networks. Academy of Management Journal 47(6): 918–927.
McAfee. 2006, February 10. McAfee Inc FY2005 4th quarter earnings call.
Menard S. 2001. Applied Logistic Regression Analysis. SAGE Publications.
Miller D, Chen M-J. 1994. Sources and consequences of competitive inertia: A study of the U.S. airline industry. Administrative Science Quarterly 39: 1–23.
Ndofor H, Vanevenhoven J, Barker V. 2013. Software firm turnarounds in the 1990s: An analysis of reversing decline in a growing, dynamic industry. Strategic Management Journal 34: 1123–1133.
Owen-Smith J, Powell W. 2004. Knowledge networks as channels and conduits: The effects of spillovers in the Boston biotechnology community. Organization Science 15(1): 5–21.
Page W, Childers S. 2007. Software development as an antitrust remedy: Lessons from the enforcement of the Microsoft communications protocol licensing requirement. Michigan Telecommunications and Technology Law Review: 77–136.
Pitofsky R. 2001. Challenges of the new economy: Issues at the intersection of antitrust and intellectual property. Antitrust Law Journal 68(3): 913–924.
Pontikes E. 2012. Two sides of the same coin: How ambiguous classification affects multiple audiences’ evaluations. Administrative Science Quarterly 57(1): 81–118.
Porac J, Thomas H, Wilson F, Paton D, Kanfer A. 1995. Rivalry and the industry model of Scottish knitwear producers. Administrative Science Quarterly 40(2): 203–227.
Rauh JD, Sufi A. 2012. Explaining corporate capital structure: Product markets, leases, and asset similarity. Review of Finance 16(1): 115–155.
Reger R, Huff A. 1993. Strategic groups: A cognitive perspective. Strategic Management Journal 14(2): 103–123.
Rodan S, Galunic C. 2004. More than network structure: How knowledge heterogeneity influences managerial performance and innovativeness. Strategic Management Journal 25(6): 541–562.
Ross JM, Sharapov D. 2015. When the leader follows: Avoiding dethronement through imitation. Academy of Management Journal 58(3): 658–679.
Ruef M. 2002. Strong ties, weak ties and islands: Structural and cultural predictors of organizational innovation. Industrial and Corporate Change 11(3): 427–449.
Schumpeter J. 1934. The theory of economic development. Harvard University Press: Cambridge, MA.
Segal I, Whinston MD. 2007. Antitrust in innovative industries. American Economic Review 97(5): 1703–1730.
Smith K, Ferrier W, Grimm C. 2001a. King of the hill: Dethroning the industry leader. Academy of Management Executive 15(2): 59–70.
Smith KG et al. 1994. Top management team demography and process: The role of social integration and communication. Administrative Science Quarterly 39(3): 412–438.
Smith KG, Ferrier WJ, Ndofor H. 2001b. Competitive dynamics research: Critique and future directions. In Handbook of Strategic Management, Hitt MA, Freeman RE, Harrison JS (eds). Blackwell Publishers: Malden, MA: 315–361.
Smith KG, Grimm C, Wally S, Young G. 1997. Strategic groups and rivalrous firm behavior: Towards a reconciliation. Strategic Management Journal 18: 149–157.
Smith KG, Grimm CM, Gannon MJ, Chen M-J. 1991. Organizational information processing, competitive responses, and performance in the U.S. domestic airline industry. Academy of Management Journal 34(1): 60–85.
Standard & Poor's. 2013. S&P 500 Fact Sheet. S&P Dow Jones Indices.
Stock JH, Yogo M. 2005. Testing for weak instruments in linear IV regression. In Identification and Inference for Econometric Models, Andrews D, Stock J (eds). Cambridge University Press: Cambridge, England: 80–108.
Thatchenkery S. 2017. Competitive intelligence: Drivers and consequences of executives’ attention to competitors. Stanford University. Ph.D. Dissertation.
Tsai W, Su K-H, Chen M-J. 2011. Seeing through the eyes of a rival: Competitor acumen based on rival-centric perceptions. Academy of Management Journal 54(4): 761–778.
Vissa B. 2011. A matching theory of entrepreneurs’ tie formation intentions and initiation of economic exchange. Academy of Management Journal 54(1): 137–158.
Wall Street Journal Editorial Board. 2018, October. How Sears lost its mojo. The Wall Street Journal: 1–3.
Wu D-M. 1973. Alternative tests of independence between stochastic regressors and disturbances. Econometrica 41(4): 733–750.
Yoffie D, Kwak M. 2002. Mastering balance: How to meet and beat a stronger opponent. California Management Review 44(2): 1–3.
Young G, Smith KG, Grimm C. 1996. ‘Austrian’ and industrial organization perspectives on firm-level competitive activity and performance. Organization Science 7(3): 243–254.
Figure 1. Model of Competitor Monitoring and New Product Introductions
Figure 2a. Redundant versus spanning ties in competitor monitoring
[Panels: Redundant | Spanning]
Figure 2b. Monitoring of prominent versus peripheral competitors
[Panels: Prominent | Peripheral]
Each node represents a potential competitor, and directed ties between nodes indicate that the originating firm monitors the target firm. Inward ties to the focal firm (i.e., indegree centrality) are excluded for clarity.
Figure 3. Interaction between firm's structural holes spanning and firm competitive motivation (H2)
The x-axis displays the largest part of the range of the data (i.e., the 5th to 95th percentiles).
Figure 4. Interaction between firm's monitoring of peripheral competitors and firm competitive capability (H4)
Figure 5. Interaction between firm's monitoring of structural holes and peripheral competitors (H5)
[Figure 3 plot: New Products (y-axis, 0–6) versus Structural Holes Spanning (x-axis, -0.6 to -0.2), with separate lines for the 5th and 95th percentiles of Firm Competitive Motivation.]
[Figure 4 plot: New Products (y-axis, 0–6) versus Peripheral Competitors (x-axis, 1–7), with separate lines for the 5th and 95th percentiles of Firm Competitive Capability.]
[Figure 5 plot: New Products (y-axis, 0–4) versus Peripheral Competitors (x-axis, 1–7), with separate lines for the 5th and 95th percentiles of Structural Holes Spanning.]
Table 1. Descriptive statistics and correlations

 #  Variable                       Mean   S.D.     1     2     3     4     5     6     7     8     9    10    11    12    13
 1  Product introductions          2.95   3.37
 2  Structural holes spanning¹    -0.32   0.17   0.25
 3  Peripheral competitors         2.22   1.91   0.12  0.49
 4  Firm competitive motivation    0.04   0.06   0.41  0.26  0.15
 5  Firm competitive capability²   1.73   0.85   0.15  0.07 -0.02  0.42
 6  Firm scope                     1.62   0.81   0.27  0.15 -0.07  0.37  0.38
 7  Firm performance              -0.39   2.72   0.05  0.12  0.03  0.08  0.05  0.05
 8  Firm R&D intensity             0.42   3.28  -0.02 -0.05 -0.05 -0.04 -0.09 -0.04  0.31
 9  Team turnover                  0.36   0.31  -0.03 -0.04 -0.07 -0.15 -0.18 -0.13 -0.05  0.01
10  Competitor density            36.63  17.47   0.26  0.17 -0.08  0.06 -0.06  0.51 -0.01  0.03  0.02
11  Developer tools                0.28   0.45   0.05 -0.06 -0.08  0.02  0.13  0.46 -0.02 -0.02  0.02  0.13
12  Integration and middleware     0.32   0.47  -0.08 -0.04 -0.07 -0.02  0.03  0.43 -0.02 -0.03 -0.07  0.23  0.27
13  Databases                      0.25   0.43   0.15  0.12  0.01  0.18  0.27  0.50  0.05 -0.03 -0.09  0.12  0.20  0.08
14  Security                       0.25   0.43   0.15  0.14  0.14  0.35  0.13  0.01  0.02 -0.03  0.00 -0.19 -0.32 -0.36 -0.17

121 firms, 823 firm-years. Correlations above .07 are significant at p < .05.
¹ Burt's constraint measure, multiplied by -1. ² Logged.
Table 2. Fixed effects Poisson models predicting number of product introductions
DV: Product introductions           1              2              3              4              5              6              Random effects
Structural holes spanning                          0.60 *         1.31 ***       1.50 ***       1.45 ***       2.24 ***       2.66 ***
                                                   (0.29) {0.04}  (0.33) {0.000} (0.37) {0.000} (0.38) {0.000} (0.46) {0.000} (0.40) {0.000}
Firm competitive motivation                                       -1.84 †        -2.07 *        -1.26          -0.37          0.69
                                                                  (1.00) {0.07}  (1.03) {0.04}  (1.07) {0.24}  (1.11) {0.74}  (0.94) {0.46}
Structural holes spanning x
  Firm competitive motivation                                     31.83 ***      33.52 ***      29.43 ***      22.18 **       22.29 ***
                                                                  (7.09) {0.000} (7.30) {0.000} (7.39) {0.000} (7.72) {0.004} (6.79) {0.001}
Peripheral competitors                                                           -0.02          -0.05 †        -0.14 ***      -0.14 ***
                                                                                 (0.02) {0.29}  (0.02) {0.06}  (0.04) {0.000} (0.03) {0.000}
Firm competitive capability                                                                     0.24 †         0.26 †         0.19 *
                                                                                                (0.14) {0.09}  (0.15) {0.07}  (0.09) {0.03}
Peripheral competitors x
  Firm competitive capability                                                                   0.06 *         0.07 **        0.06 **
                                                                                                (0.02) {0.02}  (0.02) {0.004} (0.02) {0.005}
Structural holes spanning x
  Peripheral competitors                                                                                       0.59 ***       0.65 ***
                                                                                                               (0.19) {0.001} (0.17) {0.000}
Controls
Firm controls
Firm scope                          0.68 ***       0.64 ***       0.46 *         0.47 *         0.49 **        0.37 †         0.33 *
                                    (0.18) {0.000} (0.18) {0.000} (0.19) {0.02}  (0.19) {0.01}  (0.19) {0.010} (0.20) {0.06}  (0.14) {0.01}
Firm performance                    0.15 *         0.16 *         0.18 **        0.18 **        0.16 *         0.17 **        0.11 *
                                    (0.06) {0.02}  (0.06) {0.01}  (0.06) {0.004} (0.06) {0.004} (0.06) {0.01}  (0.06) {0.009} (0.05) {0.04}
Firm R&D intensity                  0.59 *         0.63 *         0.70 **        0.70 **        0.64 *         0.65 **        0.29
                                    (0.25) {0.02}  (0.25) {0.01}  (0.25) {0.004} (0.25) {0.005} (0.25) {0.01}  (0.25) {0.010} (0.21) {0.16}
Team turnover                       0.06           0.08           0.09           0.09           0.08           0.08           0.000
                                    (0.11) {0.60}  (0.11) {0.46}  (0.11) {0.41}  (0.11) {0.43}  (0.11) {0.45}  (0.11) {0.47}  (0.11) {1.00}
Market controls
Competitor density                  -0.001         -0.001         -0.003         -0.004         -0.01          -0.01          -0.01
                                    (0.01) {0.89}  (0.01) {0.87}  (0.01) {0.51}  (0.01) {0.45}  (0.01) {0.21}  (0.01) {0.23}  (0.004) {0.18}
Developer tools                     -1.25 ***      -1.21 ***      -0.82 *        -0.80 *        -0.79 *        -0.70 *        -0.09
                                    (0.31) {0.000} (0.31) {0.000} (0.32) {0.01}  (0.33) {0.01}  (0.32) {0.02}  (0.32) {0.03}  (0.15) {0.57}
Integration and middleware          -0.62 †        -0.55          -0.29          -0.33          -0.36          -0.23          -0.19
                                    (0.37) {0.09}  (0.37) {0.14}  (0.38) {0.45}  (0.38) {0.39}  (0.38) {0.35}  (0.38) {0.55}  (0.15) {0.22}
Databases                           -0.89 ***      -0.85 ***      -0.64 *        -0.62 *        -0.60 *        -0.45 †        -0.32 *
                                    (0.26) {0.000} (0.26) {0.000} (0.27) {0.02}  (0.27) {0.02}  (0.27) {0.02}  (0.27) {0.10}  (0.16) {0.04}
Security                            -0.24          -0.20          -0.02          -0.04          -0.07          -0.04          -0.01
                                    (0.23) {0.30}  (0.23) {0.39}  (0.23) {0.92}  (0.23) {0.87}  (0.23) {0.75}  (0.23) {0.86}  (0.15) {0.96}
Firm fixed effects                  Y              Y              Y              Y              Y              Y              Y
Year fixed effects                  Y              Y              Y              Y              Y              Y              Y
Chi-squared                         184.2          187.9          206.9          207.4          218.8          226.5          254.5
Standard errors in parentheses, p-values in braces. Two-tailed significance tests: † p < .10; * p < .05; ** p < .01; *** p < .001.
121 firms, 823 firm-years. All models include firm and year effects and a two-year lagged dependent variable.
Appendix

Table A1. Average number of listed competitors pre- and post-Microsoft trial

             Treated (S&P index members)    Control (not S&P index members)
Pre-trial    6.1                            6.9
Post-trial   7.2                            6.6

Table A2. Average number of product introductions pre- and post-Microsoft trial

             Treated (S&P index members)    Control (not S&P index members)
Pre-trial    6.0                            3.0
Post-trial   4.1                            2.4

Comparisons are for the three years before and after the Microsoft trial.
Table A3. Two-stage instrumental variables models predicting number of product introductions

                                       1st stage (OLS):             2nd stage (Poisson):
                                       Structural holes spanning    Number of product introductions
Instruments
Post-Microsoft ruling                  -0.06 ***
                                       (0.02) {0.001}
Member of S&P 1500                     -0.01
                                       (0.02) {0.60}
Post-Microsoft ruling x
  Member of S&P 1500                   0.08 ***
                                       (0.03) {0.001}
Instrumented explanatory variable
Structural holes spanning                                           6.73 ***
                                                                    (1.69) {0.000}
Controls
Firm controls
Firm scope                             0.01                         0.15
                                       (0.01) {0.33}                (0.11) {0.18}
Firm performance                       0.01 ***                     -0.02
                                       (0.003) {0.001}              (0.03) {0.44}
Firm R&D intensity                     -0.005 ***                   0.01
                                       (0.001) {0.000}              (0.01) {0.28}
Team turnover                          -0.01                        0.07
                                       (0.02) {0.59}                (0.16) {0.65}
Market controls
Competitor density                     0.001 *                      0.004
                                       (0.000) {0.02}               (0.005) {0.42}
Developer tools                        -0.03 †                      0.37 *
                                       (0.02) {0.08}                (0.16) {0.02}
Integration and middleware             -0.01                        -0.27 *
                                       (0.02) {0.57}                (0.13) {0.03}
Databases                              0.04 **                      -0.23
                                       (0.02) {0.003}               (0.16) {0.15}
Security                               0.06 ***                     -0.06
                                       (0.02) {0.000}               (0.18) {0.71}
Standard errors in parentheses, p-values in braces. Two-tailed significance tests: † p < .10; * p < .05; ** p < .01; *** p < .001. 121 firms, 823 firm-years. All models include year effects.