marketing research notes
TRANSCRIPT
Your company may find the following approach useful. It involves screening potential markets,
assessing the targeted markets, and drawing conclusions.
1. Screen Potential Markets
Step 1.
Obtain export statistics that indicate product exports to various countries. Published export
statistics provide a reliable indicator of where U.S. exports are currently being shipped. The
U.S. Census Bureau provides these statistics in a published format. Trade statistics also can
be obtained using the National Trade Data Bank (NTDB).
Step 2.
Identify five to ten large and fast-growing markets for the firm's product. Look at them over
the past three to five years. Has market growth been consistent year to year? Did import
growth occur even during periods of economic recession? If not, did growth resume with
economic recovery?
Step 3.
Identify some smaller but fast-emerging markets that may provide ground-floor
opportunities. If the market is just beginning to open up, there may be fewer competitors
than in established markets. Growth rates should be substantially higher in these countries
to qualify as up-and-coming markets, given the lower starting point.
Step 4.
Target three to five of the most statistically promising markets for further assessment.
Consult with a Department of Commerce Export Assistance Center (see www.doc.gov),
business associates, freight forwarders, and others to further evaluate targeted markets.
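The screening steps above can be sketched in code. The country names and export figures below are made-up illustrations, not real trade statistics; the sketch simply computes year-over-year growth from published export series and flags markets that grew consistently, as Steps 2-4 suggest.

```python
# Sketch of the market-screening steps, using illustrative export figures
# (country names and dollar values are assumptions, not real NTDB data).

export_stats = {
    # country: annual exports (USD millions), oldest year first
    "Country A": [120, 135, 150, 171, 190],
    "Country B": [300, 290, 310, 305, 320],
    "Country C": [20, 28, 40, 55, 78],
}

def yearly_growth(series):
    """Year-over-year growth rates for a list of annual export values."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

candidates = []
for country, series in export_stats.items():
    rates = yearly_growth(series)
    consistent = all(r > 0 for r in rates)      # grew every single year?
    avg_growth = sum(rates) / len(rates)
    candidates.append((country, avg_growth, consistent))

# Target the most promising markets: consistent growers first, fastest first.
candidates.sort(key=lambda c: (not c[2], -c[1]))
for country, growth, consistent in candidates:
    print(f"{country}: avg growth {growth:.1%}, consistent={consistent}")
```

On these assumed numbers, the small but fast-emerging Country C ranks first (the "ground-floor opportunity" of Step 3), while Country B, whose imports shrank in some years, ranks last.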
2. Assess Targeted Markets
Step 1.
Examine trends for company products, as well as related products that could influence
demand. Calculate overall consumption of the product and the amount accounted for by
imports. The National Trade Data Bank (NTDB) and the National Technical Information
Service (NTIS) offer Industry Sector Analyses (ISAs), Country Commercial Guides
(CCGs), and other reports that give economic backgrounds and market trends for each
country. Demographic information (such as population and age) can be obtained from
World Population (Census) and Statistical Yearbook (United Nations).
Step 2.
Ascertain the sources of competition, including the extent of domestic industry production
and the major foreign countries the firm is competing against in each targeted market by
using ISAs and competitive assessments. This information is available from the NTDB and
the NTIS. Look at each competitor's U.S. market share.
Step 3.
Analyze factors affecting marketing and use of the product in each market, such as end-user
sectors, channels of distribution, cultural idiosyncrasies, and business practices. Again, the
ISAs and Customized Market Analyses (CMAs) offered by the Department of Commerce
are useful.
Step 4.
Identify any foreign barriers (tariff or nontariff) for the product being imported into the
country (see Service Exports for an analysis of tariff and nontariff barriers). Identify any
U.S. barriers (such as export controls) that affect exports to the country.
Step 5.
Identify any U.S. or foreign government incentives that promote exporting of your
particular product or service (see Service Exports).
3. Draw Conclusions
After analyzing the data, the company may conclude that its marketing resources would be
applied more effectively to a few countries. In general, if the company is new to exporting,
then efforts should be directed to fewer than ten markets. Exporting to one or two countries
will allow the company to focus its resources without jeopardizing its domestic sales
efforts. The company's internal resources should determine its level of effort.
Market Research Approaches
This document is intended to help business owners better understand
market research and its importance. You will find a list of the more
commonly used research approaches that can support your market
research efforts. For each approach, there is a description, its best uses
and some methodological considerations you should ponder, should you
decide to use that approach.
Surveys
Focus Groups
Personal Interviews
Contextual Inquiry
Task Analysis
Usability Testing
Pareto analysis
Pareto analysis is a statistical technique in decision making that is used for selection of a limited
number of tasks that produce significant overall effect. It uses the Pareto principle – the idea that
by doing 20% of work, 80% of the advantage of doing the entire job can be generated. Or in
terms of quality improvement, a large majority of problems (80%) are produced by a few key
causes (20%).
Pareto analysis is a formal technique useful where many possible courses of action are
competing for attention. In essence, the problem-solver estimates the benefit delivered by each
action, then selects a number of the most effective actions that deliver a total benefit reasonably
close to the maximal possible one.
Pareto analysis is a creative way of looking at causes of problems because it helps stimulate
thinking and organize thoughts. However, it can be limited by its exclusion of possibly important
problems which may be small initially but grow with time. It should therefore be combined with
other analytical tools such as failure mode and effects analysis and fault tree analysis. The
technique helps identify the top 20% of causes that need to be addressed in order to resolve 80%
of the problems. Once those causes are identified, tools such as the Ishikawa (fishbone) diagram
can be used to find their root causes. Applied to risk management, Pareto analysis allows
management to focus on the 20% of risks that have the most impact on the project.[1]
Steps to identify the important causes using Pareto analysis
Step 1: Form a table listing the causes and their frequency as a percentage.
Step 2: Arrange the rows in the decreasing order of importance of the causes (i.e., the most
important cause first)
Step 3: Add a cumulative percentage column to the table
Step 4: Plot the causes on the x-axis and the cumulative percentage on the y-axis
Step 5: Join the plotted points to form a curve
Step 6: On the same graph, plot a bar chart with causes on the x-axis and percent frequency on
the y-axis
Step 7: Draw a line at 80% on the y-axis, parallel to the x-axis, and drop a vertical line from its
intersection with the curve down to the x-axis. This point on the x-axis separates the important
causes (on the left) from the trivial causes (on the right)
Step 8: Review the chart to ensure that at least 80% of the causes are captured
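The table-building steps above can be sketched directly in code. The cause names and frequency percentages below are illustrative assumptions; the sketch sorts the causes, cumulates the percentages, and separates the "vital few" at the 80% line.

```python
# The Pareto-analysis steps sketched in code: a table of causes with
# frequency percentages (Step 1), sorted (Step 2), cumulated (Step 3),
# and cut at the 80% line (Step 7). The data are illustrative.

causes = {
    "Late deliveries": 38,
    "Wrong item shipped": 27,
    "Damaged packaging": 16,
    "Billing errors": 9,
    "Missing documentation": 6,
    "Other": 4,
}

# Steps 2-3: rank causes by frequency, then add a cumulative column.
ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
cumulative, running = [], 0
for cause, pct in ranked:
    running += pct
    cumulative.append((cause, pct, running))

# Step 7: causes that begin to the left of the 80% line are the vital few.
vital_few = [c for c, pct, cum in cumulative if cum - pct < 80]

for cause, pct, cum in cumulative:
    print(f"{cause:<22} {pct:>3}%  cumulative {cum}%")
print("Vital few:", vital_few)
```

On these assumed figures the first three causes account for 81% of the problems, satisfying the Step 8 check that at least 80% of the effect is captured by the selected causes.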
Ishikawa diagram
Ishikawa diagrams (also called fishbone diagrams, cause-and-effect diagrams, or Fishikawa)
are diagrams that show the causes of a specific event, created by Kaoru Ishikawa (1990).[1]
Common uses of the Ishikawa diagram are product design and quality defect prevention, to
identify potential factors causing an overall effect. Each cause or reason for imperfection is a
source of variation. Causes are usually grouped into major categories to identify these sources of
variation. The categories typically include:
People: Anyone involved with the process
Methods: How the process is performed and the specific requirements for doing it, such as
policies, procedures, rules, regulations and laws
Machines: Any equipment, computers, tools etc. required to accomplish the job
Materials: Raw materials, parts, pens, paper, etc. used to produce the final product
Measurements: Data generated from the process that are used to evaluate its quality
Environment: The conditions, such as location, time, temperature, and culture in which the
process operates
Overview
[Figure: an Ishikawa diagram, in fishbone shape, showing the factors Equipment, Process,
People, Materials, Environment, and Management, all affecting the overall problem; smaller
arrows connect the sub-causes to major causes.]
Ishikawa diagrams were proposed in the 1960s by Kaoru Ishikawa,[2] who pioneered quality
management processes in the Kawasaki shipyards and in the process became one of the
founding fathers of modern management.
The diagram is considered one of the seven basic tools of quality control.[3] It is known as a
fishbone diagram because of its shape, similar to the side view of a fish skeleton.
Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car,
where the required result was "Jinba Ittai" or "Horse and Rider as One". The main causes
included such aspects as "touch" and "braking" with the lesser causes including highly granular
factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door". Every
factor identified in the diagram was included in the final design.
Causes
Causes in the diagram are often categorized, for example into the 8 Ms described below. Cause-
and-effect diagrams can reveal key relationships among various variables, and the possible
causes provide additional insight into process behavior.
Causes can be derived from brainstorming sessions. These groups can then be labeled as
categories of the fishbone. They will typically be one of the traditional categories mentioned
above but may be something unique to the application in a specific case. Causes can be traced
back to root causes with the 5 Whys technique.
The 8 Ms (used in manufacturing)
Machine (technology)
Method (process)
Material (Includes Raw Material, Consumables and Information.)
Man Power (physical work)/Mind Power (brain work): Kaizens, Suggestions
Measurement (Inspection)
Milieu/Mother Nature (Environment)
Management/Money Power
Maintenance
The 8 Ps (used in service industry)
Product=Service
Price
Place
Promotion
People
Process
Physical Evidence
Productivity & Quality
The 4 Ss (used in service industry)
Surroundings
Suppliers
Systems
Skills
Questions to ask while building an Ishikawa diagram
Man
– Was the document properly interpreted?
– Was the information properly disseminated?
– Did the recipient understand the information?
– Was the proper training to perform the task administered to the person?
– Was too much judgment required to perform the task?
– Were guidelines for judgment available?
– Did the environment influence the actions of the individual?
– Are there distractions in the workplace?
– Is fatigue a contributing factor?
– How much experience does the individual have in performing this task?
Machine
– Was the correct tool used?
– Are files saved with the correct extension to the correct location?
– Is the equipment affected by the environment?
– Is the equipment being properly maintained (i.e., a daily/weekly/monthly preventative
maintenance schedule)?
– Does the software or hardware need to be updated?
– Does the equipment or software have the features to support our needs/usage?
– Was the machine properly programmed?
– Is the tooling/fixturing adequate for the job?
– Does the machine have an adequate guard?
– Was the equipment used within its capabilities and limitations?
– Are all controls, including the emergency stop button, clearly labeled and/or color coded or
size differentiated?
– Is the equipment the right application for the given job?
Measurement
– Does the gauge have a valid calibration date?
– Was the proper gauge used to measure the part, process, chemical, compound, etc.?
– Was a gauge capability study ever performed?
– Do measurements vary significantly from operator to operator?
– Do operators have a tough time using the prescribed gauge?
– Is the gauge fixturing adequate?
– Does the gauge have proper measurement resolution?
– Did the environment influence the measurements taken?
Material (includes raw material, consumables, and information)
– Is all needed information available and accurate?
– Can information be verified or cross-checked?
– Has any information changed recently / do we have a way of keeping the information up to
date?
– What happens if we don't have all of the information we need?
– Is a Material Safety Data Sheet (MSDS) readily available?
– Was the material properly tested?
– Was the material substituted?
– Is the supplier's process defined and controlled?
– Were quality requirements adequate for part function?
– Was the material contaminated?
– Was the material handled properly (stored, dispensed, used & disposed)?
Milieu
– Is the process affected by temperature changes over the course of a day?
– Is the process affected by humidity, vibration, noise, lighting, etc.?
– Does the process run in a controlled environment?
– Are associates distracted by noise, uncomfortable temperatures, fluorescent lighting, etc.?
Method
– Was the canister, barrel, etc. labeled properly?
– Were the workers trained properly in the procedure?
– Was the testing performed statistically significant?
– Was the data tested for the true root cause?
– How many "if necessary" and "approximately" phrases are found in this process?
– Was this a process generated by an Integrated Product Development (IPD) Team?
– Was the IPD Team properly represented?
– Did the IPD Team employ Design for Environment (DFE) principles?
– Has a capability study ever been performed for this process?
– Is the process under Statistical Process Control (SPC)?
– Are the work instructions clearly written?
– Are mistake-proofing devices/techniques employed?
– Are the work instructions complete?
– Is the tooling adequately designed and controlled?
– Is handling/packaging adequately specified?
– Was the process changed?
– Was the design changed?
– Was a process Failure Modes and Effects Analysis (FMEA) ever performed?
– Was adequate sampling done?
– Are features of the process critical to safety clearly spelled out to the Operator?
Discriminant Analysis
Discriminant Analysis may be used for two objectives: either we want to assess the adequacy of
classification, given the group memberships of the objects under study; or we wish
to assign objects to one of a number of (known) groups of objects. Discriminant Analysis may
thus have a descriptive or a predictive objective.
In both cases, some group assignments must be known before carrying out the Discriminant
Analysis. Such group assignments, or labelling, may be arrived at in any way. Hence
Discriminant Analysis can be employed as a useful complement to Cluster Analysis (in order to
judge the results of the latter) or Principal Components Analysis. Alternatively, in star-galaxy
separation, for instance, using digitised images, the analyst may define group (stars, galaxies)
membership visually for a conveniently small training set or design set.
Methods implemented in this area are Multiple Discriminant Analysis, Fisher's Linear
Discriminant Analysis, and K-Nearest Neighbours Discriminant Analysis.
Multiple Discriminant Analysis
MDA is also termed Discriminant Factor Analysis and Canonical Discriminant
Analysis. It adopts a similar perspective to PCA: the rows of the data matrix to be
examined constitute points in a multidimensional space, as also do the group mean
vectors. Discriminating axes are determined in this space, in such a way that optimal
separation of the predefined groups is attained. As with PCA, the problem becomes
mathematically the eigenreduction of a real, symmetric matrix. The eigenvalues represent
the discriminating power of the associated eigenvectors. The nY groups lie in a space of
dimension at most nY - 1. This will be the number of discriminant axes or factors
obtainable in the most common practical case when n > m > nY (where n is the number of
rows, and m the number of columns of the input data matrix).
Linear Discriminant Analysis
This is the 2-group case of MDA. It optimally separates two groups, using the Mahalanobis
metric or generalized distance. It also gives the same linear separating decision surface
as Bayesian maximum likelihood discrimination in the case of equal class covariance
matrices.
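The 2-group case can be sketched with NumPy. The two synthetic point clouds below are illustrative assumptions; the sketch follows Fisher's construction (discriminant direction from the within-group covariance and the group means), which with equal class covariances matches the Bayesian maximum-likelihood boundary mentioned above.

```python
# Minimal sketch of 2-group linear discrimination (Fisher's construction).
# The two sample clouds are synthetic, illustrative data.
import numpy as np

rng = np.random.default_rng(0)
group1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group2 = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))

m1, m2 = group1.mean(axis=0), group2.mean(axis=0)
# Sum of the within-group covariance estimates (proportional to the pooled
# covariance for equal group sizes); its inverse gives the Mahalanobis metric.
Sw = np.cov(group1, rowvar=False) + np.cov(group2, rowvar=False)
w = np.linalg.solve(Sw, m1 - m2)          # discriminant direction
threshold = w @ (m1 + m2) / 2             # midpoint between projected means

def classify(x):
    """Assign a point to group 1 or 2 by its projection onto w."""
    return 1 if w @ x > threshold else 2

print(classify([0.2, -0.1]), classify([2.8, 3.1]))
```

The decision surface is linear because only the projection w·x is compared against a single threshold; quadratic discrimination would arise if each group kept its own covariance matrix.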
K-NNs Discriminant Analysis
Non-parametric (distribution-free) methods dispense with the need for assumptions
regarding the probability density function. They have become very popular especially in
the image processing area. The K-NNs method assigns an object of unknown affiliation
to the group to which the majority of its K nearest neighbours belongs.
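The K-NNs rule is short enough to sketch in plain Python, reusing the star/galaxy example from the introduction; the labelled coordinates are made up for illustration.

```python
# K-nearest-neighbours assignment: an unknown object goes to the group
# holding the majority among its K nearest labelled neighbours.
# The labelled points are illustrative, not real image measurements.
from collections import Counter
import math

labelled = [
    ((1.0, 1.0), "star"), ((1.2, 0.8), "star"), ((0.9, 1.1), "star"),
    ((4.0, 4.2), "galaxy"), ((4.1, 3.9), "galaxy"), ((3.8, 4.0), "galaxy"),
]

def knn_classify(point, k=3):
    """Majority vote among the k nearest neighbours (Euclidean distance)."""
    by_distance = sorted(labelled, key=lambda item: math.dist(point, item[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(knn_classify((1.1, 0.9)))   # near the "star" cluster
print(knn_classify((3.9, 4.1)))   # near the "galaxy" cluster
```

Note that no density function is estimated anywhere, which is exactly the distribution-free property described above.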
There is no best discrimination method. A few remarks concerning the advantages and
disadvantages of the methods studied are as follows.
Analytical simplicity or computational reasons may lead to initial consideration of linear
discriminant analysis or the NN-rule.
Linear discrimination is the most widely used in practice. Often the 2-group method is
used repeatedly for the analysis of pairs of multigroup data (yielding decision
surfaces for k groups).
Estimating the parameters required in quadratic discrimination demands more computation
and data than linear discrimination does. If there is not a great difference between the group
covariance matrices, linear discrimination will perform as well as quadratic
discrimination.
The k-NN rule is simply defined and implemented, especially if there is insufficient data
to adequately define sample means and covariance matrices.
MDA is most appropriately used for feature selection. As in the case of PCA, we may
want to focus on the variables used in order to investigate the differences between
groups; to create synthetic variables which improve the grouping ability of the data; to
arrive at a similar objective by discarding irrelevant variables; or to determine the most
parsimonious variables for graphical representational purposes.
COOL HUNTING
One of the most interesting and effective recent primary research tools used in marketing
research is cool hunting. Cool hunting is the process of trying to determine what is new
and 'cool' in society so that marketing efforts may take advantage of new ideas before
they become mainstream. The process of cool hunting requires that 'cool hunters' observe
what is going on in society, knowing that new trends (or fads) often start with one or two
trendsetting people doing (or having) something unique.
Cool hunting is a term coined in the early 1990s referring to a new breed
of marketing professionals, called coolhunters. It is their job to
make observations and predictions in changes of new or existing cultural trends. The
word derives from the aesthetic of "cool".
In this they resemble the intuitive fashion magazine editors of the 1960s such as Nancy
White (Harper's Bazaar 1958–1971). Coolhunters operate most notably in the world
of street fashion and design, but their work also blurs into that of futurists such as Faith Popcorn.
Many webloggers now serve as online coolhunters, in a variety of cultural and technological
areas. Pattern Recognition, a 2003 novel by William Gibson, features a coolhunter as its main
character.
Business
Cool hunters are found in many different places. The most popular are:
Firms
A cool hunting firm is a marketing agency whose exclusive purpose is to conduct research of
the youth demographic in the areas listed above. They then compile their data and produce
reports detailing emerging and declining trends in youth culture as well as predictions for future
trends. These reports are then sold to various companies whose products target the youth
demographic. They also offer consulting services. Cool hunting firms often provide services for
some of the largest corporations in the world.
In-house
Rather than outsourcing their market research, some companies opt for in-house youth culture
marketing divisions. These divisions act in much the same way as a coolhunting firm but the
reports and data collected remain within the company and are used solely to promote its
products. A company will often prefer this form of coolhunting as a way to gain an advantage in
the valuable youth market since the research conducted by coolhunting firms is available to
anyone willing to pay for it. A prime example of a company that employs in-house
coolhunting is Viacom's MTV television network.
Methods and practices
Coolhunting is much more than simple market research because of the nature of the subjects. The
teen and preteen market is often referred to as a "stubborn" demographic in that they do not
respond well to blatant advertising and marketing campaigns targeted at them. Coolhunters
therefore must be more stealthy in their methods of gathering information and data.
Focus groups
Focus groups, though quite obvious in their attempts at gathering information, are very popular
among coolhunters as they provide direct insight into the thoughts and feelings of their
target demographic. Coolhunters will typically gather a group of randomly selected individuals
from their target demographic. While one or more market researchers interact with the group,
the session is often monitored and recorded by a non-visible team: coolhunters want not only
to hear what their subjects have to say but also to observe their mannerisms.
Depending on the nature of the study, the methods of the information-gathering during a focus
group interview may be extremely broad, with questions relating to lifestyle and youth culture, or
more specific, like comparing certain brands and determining which brands the group is most
responsive to.
Participants in focus groups are usually rewarded for their participation, whether it be a cash
amount, free products, or other rewards.
Undercover coolhunters
Coolhunters will often seek out individuals from within their target demographic who are
regarded as leaders or trendsetters. They will then hire these individuals to be undercover
coolhunters, who gather information secretly among their peers and report their findings back to
their employers. This is a popular method of coolhunting as it provides insight into their target
demographic within their natural environment.
Online coolhunting
There are a wide variety of methods for conducting market research online. Popular examples
are online surveys where, upon completion, the participant usually receives a prize or
monetary compensation. Other times coolhunters will enter chatrooms and webgroups posing as
individuals within the target demographic and gathering information.
Market Segmentation Research
Satisfying people's needs and making a profit along the way is the purpose of marketing.
However, people's needs differ and therefore satisfying them may require different approaches.
Identifying needs and recognising differences between groups of customers is at the heart of
marketing. We cannot do everything, we cannot satisfy everybody; resources do not stretch that
far. This means we have to be clever in targeting our offers at people who really do want and
need them, and we have to be strong in setting aside those who do not. This early observation is
fundamental, as it requires us to think as hard about where we don't want to sell our product as
where we do.
In business-to-business markets the aim of market segmentation is to arrive at clusters of like-
minded companies so as to allow your marketing/sales programme to focus on the subset of
prospects that are "most likely" to purchase your offering. There is a very strong pressure to use
segmentation in business-to-business markets to win a competitive advantage as there is often
little to differentiate one product from another. Segmentation therefore links strongly with a
strategy to achieve a sustainable differentiated position.
The benefits of market segmentation are not hard to grasp. After all, the top 20% of customers in
a business may generate as much as 80% of the company’s profit, half of which is then lost
serving the bottom 30% of unprofitable customers.
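The arithmetic behind that claim is easy to make concrete. Assuming a profit pool of 100 units purely for illustration:

```python
# Worked version of the 80/20 claim above, on an assumed pool of 100 profit
# units: the top 20% of customers generate 80 units, and half of that gain
# is then lost serving the unprofitable bottom 30%.

total_profit_units = 100
top_20_pct_contribution = 0.80 * total_profit_units    # 80 units
lost_to_bottom_30_pct = top_20_pct_contribution / 2    # 40 units

net = top_20_pct_contribution - lost_to_bottom_30_pct
print(f"Top 20% generate {top_20_pct_contribution:.0f} units; "
      f"{lost_to_bottom_30_pct:.0f} are lost on the bottom 30%; net {net:.0f}.")
```

In other words, on these assumptions half the value created by the best customers is consumed by the worst, which is why identifying both groups matters.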
The challenge is arriving at the most effective groupings. Segmentation can take the form of a
'demographic' segmentation, sometimes referred to as 'firmographics' in business-to-business
markets. This type of segmentation is based on geography of location, size of company and
standard industrial classifications (SICs). However, firmographics do not offer a sustainable
competitive advantage, since any competitor can copy such a segmentation.
A more challenging market segmentation is one based on behaviour or needs. Behavioural
segmentation segments on what companies buy, what companies produce and how companies
produce it. Needs-based segmentation is obviously the most difficult to assess: what do
companies want, and what drives them in their actions?
B2B International and market research can help segment your market and so enable an effective
marketing strategy, leading to competitive advantage in the marketplace. We will show you how
to:
Differentiate products/services in line with your customers
Improve your competitive positioning
Shape your product offering and pricing strategy to fit the markets with most potential
Provide focus on your customers so that you can:
o Concentrate on providing profitable products or services
o Target marketing and selling effort
To learn more about how B2B International can help you get the most from your customers –
and in return make your customers feel they are getting more value from you, contact our
Segmentation Research Team.
Concept testing
Concept testing is the process of using quantitative methods and qualitative methods to evaluate
consumer response to a product idea prior to the introduction of a product to the market.
It can also be used to generate communication designed to alter consumer attitudes toward
existing products. These methods involve the evaluation by consumers of product concepts
having certain rational benefits, such as "a detergent that removes stains but is gentle on fabrics,"
or non-rational benefits, such as "a shampoo that lets you be yourself." Such methods are
commonly referred to as concept testing and have been performed using field surveys, personal
interviews and focus groups, in combination with various quantitative methods, to generate and
evaluate product concepts.
The concept generation portions of concept testing have been predominantly qualitative.
Advertising professionals have generally created concepts and communications of these concepts
for evaluation by consumers, on the basis of consumer surveys and other market research, or on
the basis of their own experience as to which concepts they believe represent product ideas that
are worthwhile in the consumer market.
The quantitative portions of concept testing procedures have generally been placed in
three categories:
(1) concept evaluations, where concepts representing product ideas are presented to
consumers in verbal or visual form and then quantitatively evaluated by consumers by
indicating degrees of purchase intent, likelihood of trial, etc.,
(2) Positioning, which is concept evaluation wherein concepts positioned in the same
functional product class are evaluated together, and
(3) product/concept tests, where consumers first evaluate a concept, then the
corresponding product, and the results are compared.
Shortcomings of traditional concept testing
The traditional system of concept testing has been inadequate as a means to identify and quantify
the criteria upon which consumer preference of one concept over another was based. These
methods were insufficient to ascertain the relative importance of the factors responsible for or
governing why consumers, markets and market segments reacted differently to concepts
presented to them in the concept tests. Without such information, market researchers and
advertisers could, with their expertise, only generalize on the basis of a concept test as to how
consumers might react to the actual products or to variations of the tested concepts.
Communication of the concept, as embodied in a new product, has generally been left to the
creativity of the advertising agency. No systematic quantitative method was known, however,
which could accurately identify the criteria on which the consumer choices were based and the
contribution or importance of each criterion to the purchase decision. Therefore, previous
concept testing methods have failed to provide market researchers with the complete information
necessary for them to create products specifically tailored to satisfy a consumer group balance of
purchase criteria.
Moreover, traditional concept testing methods have failed to accurately quantify the relationships
between consumer response to concepts and consumer choice of existing products which
compete in the same consumer market. Thus, they were unable to provide a communication of
the benefits of a consumer product, closely representing the tested concept, to a high degree of
accuracy.
These problems of concept testing have been identified in business and marketing journals. For
example, William Moore (1982), in a literature survey and review of concept testing
methodology, points out that concept tests have failed to account for changes between the concept
tested and the communication describing the benefits of the product which embodies the concept.
The Moore article reports that "no amount of improvement in current concept testing practices
can remedy these problems." This is reflective of the fact that none of the traditional methods
provided a quantitative means for ascertaining the relative importance of the underlying criteria
of concept choices as a means for identifying the visual and verbal expressions of the concepts
which best communicate the benefits sought by the consumer. Nor did the traditional methods
quantify the relationships between concepts and existing products offered in the same consumer
market.
The ability of a method to ameliorate or overcome the above shortcomings would provide
substantial improvement in communication of the concepts identified in testing and offered to
the market as a product.
One such method is conjoint analysis; another is choice modelling.
Modern concept testing
Today, with the advent of the Internet, concept testing has experienced a resurgence. Armed with
the ability to show thousands of respondents images of an actual concept, many market
researchers, and organizations, have had their faith restored in this once questionable method.
Online survey takers now have the ability to view a potential product in a similar manner to how
they would view the same product in a retail environment. In addition, with online retailing
becoming increasingly prominent, many online respondents are also online consumers. Thus, they
are able to easily place themselves in the mindset of a consumer looking to buy goods or
services. Since the arrival of these methods, market researchers have been able to make better,
more accurate, suggestions to their clients regarding the decision to move forward, revise, or
start over with a product concept. Online Choice Modelling for example can produce
detailed econometric models of demand for various attributes of the new product such as feature,
packaging and price.
Question: what is "brand equity", and why would you measure it?
Answer: brand equity research has two elements:
1. Brand profiling - where your brand and its competitors are profiled against a set of
indicators and attributes. The indicators are usually fixed within the model, but attributes
may be specific to the brand or its category
2. Conversion model - where the model assesses the degree of strength or vulnerability you
have in your customer base in relation to competition
The usual core measures relate to:
Awareness
Familiarity
Favorability
Usage
Loyalty
Individual brand/category attributes
These measures can be researched cheaply and effectively using only four sets of
questions:
1. Brand/category usage
2. Brand stature
3. Brand intimacy
4. Brand attributes
In more detail:
Brand equity research is an attempt to put a value on the strength of a brand in the market, in the
same way that a share price puts a value on the strength of a corporation in the eyes of
investors. Indeed, brand equity research has shown that the two are related - growth in brand
equity correlates with growth in stock value, as well as with sales, profits, price premiums and
employee satisfaction. Given that brand value often accounts for a very significant proportion of
the value of the total company (75% for Ford, 51% for the Coca-Cola Corporation), and strong
brands drive profitability in several ways (additional sales, reduced costs, referrals to new
customers), this does make sense.
Brand equity research has two elements:
1. Brand profiling - where your brand and its competitors are profiled against a set of
indicators and attributes. The indicators are usually fixed within the model, but attributes
may be specific to the brand or its category.
2. Conversion model - where the model assesses the degree of strength or vulnerability you
have in your customer base in relation to the competition. Credit card companies use this to
identify which competitors' customers they should approach, as those customers are open to
alternative offers, and which they should not waste time on because they are loyal to
their existing suppliers.
The usual core measures relate to:
Awareness
Familiarity
Favorability
Usage
Loyalty
Individual brand/category attributes
Many of the brand equity research agencies have a tendency to over-complicate this type of
research, turning a 10-15 minute questionnaire into a 20-30 minute one. This usually leads to a
great deal more data than the audience can cope with, and considerably higher costs. However, it
can be very cheap to run if kept to a core number of questions. Indeed, by asking four sets of
questions, you can get a reading on:
1. Brand/category usage - from usage questions
2. Awareness/familiarity - from brand trust and intimacy questions
3. Brand stature - from a brand stature question
4. Brand intimacy - from a brand intimacy question
5. A categorisation of your customers and potential customers (suspect, potential, lost
customer, customer, advocate) - from brand stature/intimacy and usage questions
6. Performance against your key brand attributes - from brand attribute questions
7. Drivers of loyalty - by correlating your brand attribute performance scores with your
combined brand stature/intimacy scores
8. Association between brand loyalty and usage - by tabulating brand stature/intimacy scores
against usage categories (or using categorical statistical techniques)
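The correlation step behind the "drivers of loyalty" reading above can be sketched in a few lines. Everything here is hypothetical: the respondent scores, the attribute names, and the 1-10 scale are invented for illustration; a real study would use a proper survey sample and a statistics package.

```python
from statistics import mean

# Hypothetical survey data: per-respondent scores on a 1-10 scale.
stature_intimacy = [8, 6, 9, 4, 7, 5]  # combined brand stature/intimacy score
attributes = {
    "value for money": [7, 5, 9, 3, 6, 4],
    "innovative":      [5, 6, 4, 6, 5, 6],
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Attributes whose scores correlate strongly with the combined
# stature/intimacy score are candidate drivers of loyalty.
drivers = {name: pearson(scores, stature_intimacy)
           for name, scores in attributes.items()}
```

In this toy data, "value for money" correlates strongly with stature/intimacy while "innovative" does not, so the former would be flagged as the loyalty driver.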
These four sets of questions are:
Brand/category usage:
Which brands are you currently buying/using [within this sort of
environment/situation/mood]?
Brand stature:
How would you rate the quality of the products/services of this brand?
Brand intimacy:
To what extent is this your kind of brand?
Performance against brand attributes:
To what extent do you agree with the following statements about these brands?
Imagine achieving so much more for so much less.
We can help you in two ways - we have a mass of smart strategic brand marketing tools,
processes and workshop techniques for you to use, and a mass of smart brand marketing agencies
as members across the world with niche knowledge and experience to support you thereafter.
Brand equity
Brand equity refers to the marketing effects and outcomes that accrue to a product with its brand name
compared with those that would accrue if the same product did not have the brand name [1][2][3][4]. At
the root of these marketing effects is consumers' knowledge: what consumers know about
a brand leads manufacturers and advertisers to respond differently or adopt appropriately tailored
measures for the marketing of the brand [5][6]. The study of brand equity is increasingly popular, as some
marketing researchers have concluded that brands are one of the most valuable assets a company has [7]. Brand
equity is one of the factors that can increase the financial value of a brand to the brand owner, although
not the only one [8]. Elements that can be included in the valuation of brand equity include (but are not limited
to) changing market share, profit margins, consumer recognition of logos and other visual
elements, brand language associations made by consumers, consumers' perceptions of quality, and other
relevant brand values.
Measurement
There are many ways to measure a brand. Some measurement approaches are at the
firm level, some at the product level, and still others at the consumer level.
Firm Level: Firm level approaches measure the brand as a financial asset. In short, a calculation is made
regarding how much the brand is worth as an intangible asset. For example, if you were to take the value
of the firm, as derived by its market capitalization, and then subtract tangible assets and "measurable"
intangible assets, the residual would be the brand equity.[7] One high-profile firm-level approach is by the
consulting firm Interbrand. To do its calculation, Interbrand estimates brand value on the basis of
projected profits discounted to a present value. The discount rate is a subjective rate determined by
Interbrand and Wall Street equity specialists and reflects the risk profile, market leadership, stability and
global reach of the brand[9].
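The residual calculation just described fits in three lines. All figures below are invented, not any real company's accounts:

```python
# Illustrative firm-level residual calculation (hypothetical figures).
market_cap = 50_000_000_000             # firm value from market capitalization
tangible_assets = 30_000_000_000        # plant, inventory, cash, etc.
measurable_intangibles = 8_000_000_000  # e.g. patents and licenses

# Whatever value is left over is attributed to the brand.
brand_equity = market_cap - tangible_assets - measurable_intangibles
```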
Product Level: The classic product-level brand measurement example is to compare the price of a no-
name or private label product to an "equivalent" branded product. The difference in price, assuming all
else equal, is due to the brand[10]. More recently a revenue premium approach has been advocated [4].
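A sketch of that product-level comparison, with made-up shelf prices:

```python
# Hypothetical price-premium comparison (prices invented for illustration).
branded_price = 4.49        # branded product, per unit
private_label_price = 2.99  # "equivalent" no-name product, per unit

# Assuming all else equal, the difference is attributed to the brand.
price_premium = branded_price - private_label_price
premium_pct = price_premium / private_label_price
```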
Consumer Level: This approach seeks to map the mind of the consumer to find out what associations
with the brand the consumer has. It measures awareness (recall and recognition)
and brand image (the overall associations that the brand has). Free association tests and projective
techniques are commonly used to uncover the tangible and intangible attributes, attitudes, and intentions
about a brand[5]. Brands with high levels of awareness and strong, favorable, and unique associations are
high-equity brands[5].
All of these calculations are, at best, approximations. A more complete understanding of the brand can
occur if multiple measures are used.
Positive brand equity vs. negative brand equity
Brand equity is the positive effect of a brand on the difference between the price a consumer
is willing to pay when the brand is known and the value of the benefit received.
There are two schools of thought regarding the existence of negative brand equity. One perspective states
that brand equity cannot be negative, hypothesizing that only positive brand equity is created by marketing
activities such as advertising, PR, and promotion. A second perspective is that negative equity can exist,
due to catastrophic events befalling the brand, such as a widespread product recall or continued negative
press attention (Blackwater or Halliburton, for example).
Colloquially, the term "negative brand equity" may be used to describe a product or service where a brand
has a negligible effect on a product level when compared to a no-name or private label product. The
brand-related negative intangible assets are called “brand liability”, compared with “brand equity” [11].
Family branding vs. individual branding strategies
The greater a company's brand equity, the greater the probability that the company will use a family
branding strategy rather than an individual branding strategy. This is because family branding allows
them to leverage the equity accumulated in the core brand. Aspects of brand equity include brand
loyalty, awareness, association, and perceived quality.
Examples
In the early 2000s in North America, the Ford Motor Company made a strategic decision to brand all new
or redesigned cars with names starting with "F". This aligned with the previous tradition of naming all
sport utility vehicles since the Ford Explorer with the letter "E". The Toronto Star quoted an analyst who
warned that changing the name of the well-known Windstar to the Freestar would cause confusion and
discard built-up brand equity, while a marketing manager believed that a name change would highlight
the new redesign. The aging Taurus, which became one of the most significant cars in American auto
history, would be abandoned in favor of three entirely new names, all starting with "F": the Five
Hundred, Freestar and Fusion. By 2007, the Freestar was discontinued without a replacement. The Five
Hundred name was thrown out and Taurus was brought back for the next generation of that car in a
surprise move by Alan Mulally. "Five Hundred" was recognized by fewer than half of the people
surveyed, but an overwhelming majority were familiar with the "Ford Taurus".
Brand Name Testing
The ultimate objective of branding research is to deliver information that helps you better
understand your brand position, and then enhance that position in national and international
markets. As a well-established and well-equipped branding research agency, we deliver
customized brand studies that give direction and insight and uncover opportunities to boost
your competitive position. We view brand development as a blend of creativity and marketing
information to uncover brand positioning opportunities in market spaces generally cluttered
with brand noise. We offer
branding services such as brand positioning, demand analysis, brand extension, brand
identity & image, brand audit, market development, and sales promotion strategies. Through
all these means we provide qualitative brand advertising and sales promotion services.
Branding research studies begin with Brand Base research, followed by Brand
Qualitative research and targeted quantitative Brand Screening Survey studies. In the Brand
Base research, we gauge the landscape, evaluating the existing available branding research,
client and competitive advertising, and the brand name architecture. We endeavor to uncover
the existing comparative brand equity marketing information and knowledge. As part of
this brand equity discovery process, we conduct a multitude of wide-ranging interviews with
client management, field sales, product development, and customer service staff.
Naming Consultants with a Research Edge
Whether you're naming a business or researching the perfect product name, our
proprietary Name DNA® methodologies can pinpoint the one brand name that
Fits the concept
Can be pronounced easily
Is durable and elastic across time, cultures and categories
Brand Name Research Components
Choosing a name is a delicate balance of art and science. As your naming consultants, we rely on
a proprietary algorithm that establishes a composite Name DNA Validation® score, utilizing the
following key measures:
Emotional Bonding Power - When naming a business or developing a product name,
are you connecting with your target market? Are you measuring the name's emotional
bonding power?
Memorability - Brand name research shows memorability is the true litmus test of
exceptional names. Can your target market recall the new product name after seeing it just
once?
Latent Association - What negative and positive associations exist with your new
corporate name or product name? What barriers have to be overcome with negative latent
associations? How does sound symbolism or phonosemantics (the meaning of sounds)
affect the evaluation of a name's latent association?
Fit to Concept - Which brand name candidate best positions your company, product or
service offering?
Pronounceability - Brand name research reveals what should be common sense: if your
target market can't pronounce the brand name, they won't ask for it. Can you quantify your
product name's pronounceability?
Sound Symbolism - Does the product name sound right to your target customers? If
you're naming a business, does the name sound powerful and established?
Five Best Practices for Brand Name Research
1. Do not rely on focus groups alone. Company names or product names that are
tested in focus groups tend to weed out the weak names, but do not guarantee selection of
the winning names.
2. Get emotional feedback, not just facts and figures. Even among the target
market, naming can be a popularity contest. But since research imitates life, the popularity
of company names and product names is usually due to a variety of emotional reasons. Our
proprietary quantitative Name DNA Validation® research technique makes the most of this
emotional intelligence.
3. Go for quality, not quantity. Testing a large number of corporate or product
names with your target market is a powerful temptation. Yet experience
shows you will get the most relevant results with only a handful of strong name
candidates.
4. Test the name with the right audience. Your new brand name must resonate
with your target market. Not the marketing department. Not upper management. Not the
founding fathers.
5. Use the latest available brand name research techniques. Get the most out of
your company name or product name investment by using state-of-the-art brand name
research techniques to effectively measure and project emotional bonding power,
memorability, latent association, fit to concept, pronounceability and sonorous properties.
Eye tracking
What is Eye Tracking?
Eye tracking is a technique used to determine where a person is looking. The concepts
underlying eye tracking are deceptively simple: track the movements of the user's eyes and note
what the pupils are doing while the user is looking at a particular feature. In practice, however,
these measures are difficult to achieve and require high-precision instruments as well as
sophisticated data analysis and interpretation. The equipment used to do this is called an eye
tracker. Eye movements made during reading and picture identification provide useful
information about the processes by which people understand visual input and integrate it with
knowledge and memory. Researchers have used eye tracking for studying how people read, solve
problems, look at pictures, scan instrument panels, and perform complex tasks.
Eye Tracking Research
To date, we use eye trackers for several studies funded by the Department of Defense. One looks
at how naval officers use a new display designed to assist them in making tactical decisions; the
second investigates how novices acquire the critical knowledge needed to make tactical decisions;
and a third focuses on psychophysiological measures of cognitive workload.
Commercial Eye Tracking
Commercial eye tracking is now available through eyeTracking.com! eyeTracking.com is a joint
venture with Dr. Sandra Marshall, her research team, and San Diego State University. For the
past 15 years, Dr. Marshall has had research support from the U.S. Department of Defense to
examine important questions about how individuals process information. Her recent work has
incorporated eye tracking, and in the course of carrying out her theoretical research, she has
made a number of exciting breakthroughs in measuring and interpreting eye data. These
breakthroughs are protected by patent and are licensed exclusively to eyeTracking.com, the
Internet Eye Tracking Company™. To find out more about obtaining services from
eyeTracking.com please call Tim Drapeau at (619) 594-0370 or click here to email.
Eye tracking is the process of measuring either the point of gaze ("where we are looking") or the
motion of an eye relative to the head. An eye tracker is a device for measuring eye positions
and eye movement. Eye trackers are used in research on the visual system, in psychology,
in cognitive linguistics and in product design. There are a number of methods for measuring eye
movement. The most popular variant uses video images from which the eye position is extracted.
Other methods use search coils or are based on the electrooculogram.
Tracker types
Eye trackers measure rotations of the eye in one of several ways, but principally they fall into
three categories:
One type uses an attachment to the eye, such as a special contact lens with an embedded
mirror or magnetic field sensor, and the movement of the attachment is measured with the
assumption that it does not slip significantly as the eye rotates. Measurements with tight fitting
contact lenses have provided extremely sensitive recordings of eye movement, and magnetic
search coils are the method of choice for researchers studying the dynamics and underlying
physiology of eye movement.
The second broad category uses some non-contact, optical method for measuring eye motion.
Light, typically infrared, is reflected from the eye and sensed by a video camera or some other
specially designed optical sensor. The information is then analyzed to extract eye rotation from
changes in reflections. Video based eye trackers typically use the corneal reflection (the
first Purkinje image) and the center of the pupil as features to track over time. A more sensitive
type of eye tracker, the dual-Purkinje eye tracker[21], uses reflections from the front of the cornea
(first Purkinje image) and the back of the lens (fourth Purkinje image) as features to track. A still
more sensitive method of tracking is to image features from inside the eye, such as the retinal
blood vessels, and follow these features as the eye rotates. Optical methods, particularly those
based on video recording, are widely used for gaze tracking and are favored for being non-
invasive and inexpensive.
The third category uses electric potentials measured with electrodes placed around the eyes.
The eyes are the origin of a steady electric potential field, which can also be detected in total
darkness and if the eyes are closed. It can be modelled to be generated by a dipole with its
positive pole at the cornea and its negative pole at the retina. The electric signal that can be
derived using two pairs of contact electrodes placed on the skin around one eye is
called Electrooculogram (EOG). If the eyes move from the centre position towards the
periphery, the retina approaches one electrode while the cornea approaches the opposing one.
This change in the orientation of the dipole and consequently the electric potential field results in
a change in the measured EOG signal. Conversely, by analysing these changes, eye movement
can be tracked. Due to the discretisation given by the common electrode setup, two separate
movement components – a horizontal and a vertical – can be identified. A third EOG component
is the radial EOG channel[22], which is the average of the EOG channels referenced to some
posterior scalp electrode. This radial EOG channel is sensitive to the saccadic spike potentials
stemming from the extra-ocular muscles at the onset of saccades, and allows reliable detection
of even miniature saccades[23].
Potential drifts and the variable relation between EOG signal amplitudes and
saccade sizes make it challenging to use EOG for measuring slow eye movement and detecting
gaze direction. EOG is, however, a very robust technique for measuring saccadic eye
movement associated with gaze shifts and detecting blinks. Contrary to video-based eye-
trackers, EOG allows recording of eye movements even with eyes closed, and can thus be used
in sleep research. It is a very light-weight approach that, in contrast to current video-based eye
trackers, only requires very low computational power, works under different lighting conditions
and can be implemented as an embedded, self-contained wearable system [24]. It is thus the
method of choice for measuring eye movement in mobile daily-life situations and REM phases
during sleep.
Technologies and techniques
The most widely used current designs are video-based eye trackers. A camera focuses on one
or both eyes and records their movement as the viewer looks at some kind of stimulus. Most
modern eye-trackers use contrast to locate the center of the pupil and use infrared and near-
infrared non-collimated light to create a corneal reflection (CR). The vector between these two
features can be used to compute gaze intersection with a surface after a simple calibration for
an individual.
Two general types of eye tracking techniques are used: Bright Pupil and Dark Pupil. Their
difference is based on the location of the illumination source with respect to the optics. If the
illumination is coaxial with the optical path, then the eye acts as a retroreflector as the light
reflects off the retina creating a bright pupil effect similar to red eye. If the illumination source is
offset from the optical path, then the pupil appears dark because the retroreflection from the
retina is directed away from the camera.
Bright Pupil tracking creates greater iris/pupil contrast, allowing more robust eye tracking with
all iris pigmentation, and greatly reduces interference caused by eyelashes and other obscuring
features. It also allows tracking in lighting conditions ranging from total darkness to
very bright. Bright pupil techniques are not effective for tracking outdoors, however, as
extraneous IR sources interfere with monitoring.
Eye tracking setups vary greatly; some are head-mounted, some require the head to be stable
(for example, with a chin rest), and some function remotely and automatically track the head
during motion. Most use a sampling rate of at least 30 Hz. Although 50/60 Hz is most common,
today many video-based eye trackers run at 240, 350 or even 1000/1250 Hz, which is needed in
order to capture the detail of the very rapid eye movement during reading, or during studies of
neurology.
Eye movement is typically divided into fixations and saccades, when the eye gaze pauses in a
certain position, and when it moves to another position, respectively. The resulting series of
fixations and saccades is called a scanpath. Most information from the eye is made available
during a fixation, not during a saccade. The central one or two degrees of the
visual angle (the fovea) provide the bulk of visual information; the input from larger eccentricities
(the periphery) is less informative. Hence, the locations of fixations along a scanpath show which
information loci on the stimulus were processed during an eye tracking session. On average,
fixations last around 200 ms during the reading of linguistic text and 350 ms during the
viewing of a scene. Preparing a saccade towards a new goal takes around 200 ms.
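The fixation/saccade split described above can be sketched with a simple velocity threshold (the idea behind I-VT classification). The sampling rate, threshold value, and gaze samples below are invented for illustration; production algorithms add smoothing, merging, and minimum-duration rules:

```python
# Classify consecutive gaze samples as fixation or saccade by
# thresholding angular velocity.

SAMPLE_INTERVAL_MS = 4    # e.g. a 250 Hz tracker (hypothetical)
SACCADE_THRESHOLD = 30.0  # deg/s, a common rule-of-thumb cutoff

# Horizontal gaze angle (degrees) per sample: steady, a jump, steady.
samples = [10.0, 10.02, 10.01, 10.03, 14.0, 18.0, 18.01, 18.02]

def classify(samples):
    labels = []
    for prev, cur in zip(samples, samples[1:]):
        velocity = abs(cur - prev) / (SAMPLE_INTERVAL_MS / 1000.0)
        labels.append("saccade" if velocity > SACCADE_THRESHOLD
                      else "fixation")
    return labels

labels = classify(samples)
```

In this toy trace, the two large jumps in the middle are labelled saccades and the steady stretches on either side form fixations.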
Scanpaths are useful for analyzing cognitive intent, interest, and salience. Other biological
factors (some as simple as gender) may affect the scanpath as well. Eye tracking in HCI
typically investigates the scanpath for usability purposes, or as a method of input in gaze-
contingent displays, also known as gaze-based interfaces.
Eye tracking vs. gaze tracking
Eye trackers necessarily measure the rotation of the eye with respect to the measuring system.
If the measuring system is head mounted, as with EOG, then eye-in-head angles are measured.
If the measuring system is table mounted, as with scleral search coils or table mounted camera
(“remote”) systems, then gaze angles are measured.
In many applications, the head position is fixed using a bite bar, a forehead support or
something similar, so that eye position and gaze are the same. In other cases, the head is free
to move, and head movement is measured with systems such as magnetic or video based head
trackers.
For head-mounted trackers, head position and direction are added to eye-in-head direction to
determine gaze direction. For table-mounted systems, such as search coils, head direction is
subtracted from gaze direction to determine eye-in-head position.
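The bookkeeping in the two cases above is simple angle arithmetic. In sketch form, with invented angles in degrees:

```python
# Head-mounted case: the tracker reports eye-in-head direction,
# so gaze = head direction + eye-in-head. (All angles hypothetical.)
head_direction = 15.0  # from a head tracker
eye_in_head = -5.0     # from a head-mounted eye tracker

gaze_direction = head_direction + eye_in_head

# Table-mounted case: the tracker reports gaze directly, so
# eye-in-head is recovered by subtracting head direction.
eye_in_head_recovered = gaze_direction - head_direction
```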
Eye tracking in practice
A great deal of research has gone into studies of the mechanisms and dynamics of eye rotation,
but the goal of eye tracking is most often to estimate gaze direction. Users may be interested in
what features of an image draw the eye, for example. It is important to realize that the eye
tracker does not provide absolute gaze direction, but rather can only measure changes in gaze
direction. In order to know precisely what a subject is looking at, some calibration procedure is
required in which the subject looks at a point or series of points, while the eye tracker records
the value that corresponds to each gaze position. (Even those techniques that track features of
the retina cannot provide exact gaze direction because there is no specific anatomical feature
that marks the exact point where the visual axis meets the retina, if indeed there is such a
single, stable point.) An accurate and reliable calibration is essential for obtaining valid and
repeatable eye movement data, and this can be a significant challenge for non-verbal subjects
or those who have unstable gaze.
Each method of eye tracking has advantages and disadvantages, and the choice of an eye
tracking system depends on considerations of cost and application. There is a trade-off between
cost and sensitivity, with the most sensitive systems costing many tens of thousands of dollars
and requiring considerable expertise to operate properly. Advances in computer and video
technology have led to the development of relatively low cost systems that are useful for many
applications and fairly easy to use. Interpretation of the results still requires some level of
expertise, however, because a misaligned or poorly calibrated system can produce wildly
erroneous data.
Eye tracking of younger and elderly people in walking
Elderly subjects depend more on foveal vision than younger subjects during walking. Their
walking speed is decreased by a limited visual field, probably caused by a deteriorated
peripheral vision.
Younger subjects make use of both their central and peripheral vision while walking. Their
peripheral vision allows faster control over the process of walking.[28]
Choosing an eye tracker
One difficulty in evaluating an eye tracking system is that the eye is never still, and it can be
difficult to distinguish the tiny, but rapid and somewhat chaotic movement associated with
fixation from noise sources in the eye tracking mechanism itself. One useful evaluation
technique is to record from the two eyes simultaneously and compare the vertical rotation
records. The two eyes of a normal subject are very tightly coordinated and vertical gaze
directions typically agree to within +/- 2 minutes of arc (RMS of vertical position difference)
during steady fixation. A properly functioning and sensitive eye tracking system will show this
level of agreement between the two eyes, and any differences much larger than this can usually
be attributed to measurement error.
Applications
A wide variety of disciplines use eye tracking techniques, including cognitive
science, psychology (notably psycholinguistics and the visual world paradigm), human-computer
interaction (HCI), marketing research, and medical research (neurological diagnosis). Specific
applications include tracking eye movement in language reading, music reading,
human activity recognition, the perception of advertising, and the playing of sport.[29] Uses
include:
Cognitive Studies
Medical Research
Human Factors
Computer Usability
Translation Process Research
Vehicle Simulators
In-vehicle Research
Training Simulators
Virtual Reality
Adult Research
Infant Research
Adolescent Research
Geriatric Research
Primate Research
Sports Training
fMRI / MEG / EEG
Commercial eye tracking (web usability, advertising, marketing, automotive, etc.)
Finding good clues
Communication systems for the disabled
Improved image and video communications
Computer Science: Activity Recognition [30] [31] [32]
Conjoint analysis
Conjoint analysis is a statistical technique used in market research to determine how people value
different features that make up an individual product or service.
The objective of conjoint analysis is to determine what combination of a limited number of attributes is
most influential on respondent choice or decision making. A controlled set of potential products or
services is shown to respondents and by analyzing how they make preferences between these products,
the implicit valuation of the individual elements making up the product or service can be determined.
These implicit valuations (utilities or part-worths) can be used to create market models that estimate
market share, revenue and even profitability of new designs.
Conjoint originated in mathematical psychology and was developed by marketing professor Paul Green at
the University of Pennsylvania and Data Chan. Other prominent conjoint analysis pioneers include
professor V. “Seenu” Srinivasan of Stanford University who developed a linear programming (LINMAP)
procedure for rank ordered data as well as a self-explicated approach, Richard Johnson (founder
of Sawtooth Software) who developed the Adaptive Conjoint Analysis technique in the 1980s and Jordan
Louviere (Ph.D., University of Iowa) who invented and developed Choice-based approaches to conjoint
analysis and related techniques such as MaxDiff.
Today it is used in many of the social sciences and applied sciences including marketing, product
management, and operations research. It is used frequently in testing customer acceptance of new product
designs, in assessing the appeal of advertisements and in service design. It has been used in product
positioning, but there are some who raise problems with this application of conjoint analysis (see
disadvantages).
Conjoint analysis techniques may also be referred to as multiattribute compositional modelling, discrete
choice modelling, or stated preference research, and is part of a broader set of trade-off analysis tools
used for systematic analysis of decisions. These tools include Brand-Price Trade-Off, Simalto, and
mathematical approaches such as evolutionary algorithms or Rule Developing Experimentation.
Conjoint Design
A product or service area is described in terms of a number of attributes. For example, a television may
have attributes of screen size, screen format, brand, price and so on. Each attribute can then be broken
down into a number of levels. For instance, levels for screen format may be LED, LCD, or Plasma.
Respondents would be shown a set of products, prototypes, mock-ups, or pictures created from a
combination of levels from all or some of the constituent attributes and asked to choose from, rank or rate
the products they are shown. Each example is similar enough that consumers will see them as close
substitutes, but dissimilar enough that respondents can clearly determine a preference. Each example is
composed of a unique combination of product features. The data may consist of individual ratings, rank
orders, or preferences among alternative combinations.
As the number of combinations of attributes and levels increases the number of potential profiles
increases exponentially. Consequently, fractional factorial design is commonly used to reduce the number
of profiles that have to be evaluated, while ensuring enough data is available for statistical analysis,
resulting in a carefully controlled set of "profiles" for the respondent to consider.
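The combinatorial explosion is easy to see in code. The attributes and levels below are invented for a hypothetical television study:

```python
from itertools import product

# Hypothetical television attributes, three levels each.
attributes = {
    "screen size":   ['32"', '42"', '50"'],
    "screen format": ["LED", "LCD", "Plasma"],
    "brand":         ["A", "B", "C"],
    "price":         ["$299", "$499", "$699"],
}

# The full factorial design enumerates every combination of levels:
# 3 * 3 * 3 * 3 = 81 profiles, far too many to show each respondent,
# which is why a fractional factorial subset is used in practice.
full_factorial = list(product(*attributes.values()))
n_profiles = len(full_factorial)
```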
Types of conjoint analysis
The earliest forms of conjoint analysis were what are known as Full Profile studies, in which a small set
of attributes (typically 4 to 5) are used to create profiles that are shown to respondents, often on individual
cards. Respondents then rank or rate these profiles. Using relatively simple dummy-variable regression
analysis, the implicit utilities for the levels can be calculated.
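For a balanced full-profile design, the regression result can be sketched as each level's mean rating minus the grand mean, which is equivalent to an effects-coded dummy-variable fit in this balanced case. The profiles and ratings below are invented:

```python
# Hypothetical full-profile ratings for two attributes, two levels each.
profiles = [
    # (brand, price, rating on a 1-10 scale)
    ("A", "$299", 9),
    ("A", "$499", 7),
    ("B", "$299", 6),
    ("B", "$499", 4),
]

grand_mean = sum(p[2] for p in profiles) / len(profiles)

def part_worth(attr_index, level):
    """Part-worth utility: mean rating at this level minus grand mean."""
    ratings = [p[2] for p in profiles if p[attr_index] == level]
    return sum(ratings) / len(ratings) - grand_mean
```

Here brand A carries a part-worth of +1.5 and the lower price +1.0; part-worths for the levels of one attribute sum to zero by construction.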
Two drawbacks were seen in these early designs. Firstly, the number of attributes in use was heavily
restricted. With large numbers of attributes, the consideration task for respondents becomes too large and
even with fractional factorial designs the number of profiles for evaluation can increase rapidly.
In order to use more attributes (up to 30), hybrid conjoint techniques were developed. The main
approach was to combine some form of self-explication before the conjoint tasks with some form of
adaptive computer-aided choice over the profiles to be shown.
The second drawback was that the task itself was unrealistic and did not link directly to behavioural
theory. In real-life situations, the task would be some form of actual choice between alternatives rather
than the more artificial ranking and rating originally used. Jordan Louviere pioneered an approach that
used only a choice task, which became the basis of choice-based conjoint and discrete choice analysis.
This stated preference research is linked to econometric modeling and can be linked to revealed
preference, where choice models are calibrated on the basis of real rather than survey data. Originally
choice-based conjoint analysis was unable to provide individual level utilities as it aggregated choices
across a market. This made it unsuitable for market segmentation studies. With newer
hierarchical Bayesian analysis techniques, individual level utilities can be imputed back to provide
individual level data.
Information collection
Data for conjoint analysis is most commonly gathered through a market research survey, although
conjoint analysis can also be applied to a carefully designed configurator or to data from an appropriately
designed test market experiment. Market research rules of thumb apply with regard to statistical sample size
and accuracy when designing conjoint analysis interviews.
The length of the research questionnaire depends on the number of attributes to be assessed and the
method of conjoint analysis in use. A typical Adaptive Conjoint questionnaire with 20-25 attributes may
take more than 30 minutes to complete. Choice-based conjoint, by using a smaller profile set distributed
across the sample as a whole, may be completed in less than 15 minutes. Choice exercises may be
displayed as a store front type layout or in some other simulated shopping environment.
Analysis
Any number of algorithms may be used to estimate utility functions. These utility functions indicate the
perceived value of the feature and how sensitive consumer perceptions and preferences are to changes in
product features. The actual mode of analysis will depend on the design of the task and profiles for
respondents. For full-profile tasks, linear regression may be appropriate; for choice-based tasks, maximum
likelihood estimation, usually with logistic regression, is typically used. The original methods were
monotonic analysis of variance or linear programming techniques, but these are largely obsolete in
contemporary marketing research practice.
In addition, hierarchical Bayesian procedures that operate on choice data may be used to estimate
individual level utilities from more limited choice-based designs.
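For choice-based tasks, the logistic (multinomial logit) model referred to above converts estimated utilities into choice probabilities. A minimal sketch with hypothetical utilities:

```python
import math

# Hypothetical part-worth utilities already estimated for the three
# alternatives in a choice set (illustrative numbers only).
utilities = {"product_A": 1.2, "product_B": 0.4, "product_C": -0.5}

# The multinomial logit model turns utilities into choice
# probabilities: P(i) = exp(U_i) / sum_j exp(U_j)
denom = sum(math.exp(u) for u in utilities.values())
shares = {name: math.exp(u) / denom for name, u in utilities.items()}

for name, p in shares.items():
    print(name, round(p, 3))
print(round(sum(shares.values()), 3))  # probabilities sum to 1.0
```

Maximum likelihood estimation fits the utilities so that these predicted probabilities best match the choices respondents actually made.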
Advantages
estimates psychological tradeoffs that consumers make when evaluating several attributes together
measures preferences at the individual level
uncovers real or hidden drivers which may not be apparent to the respondent themselves
realistic choice or shopping task
able to use physical objects
if appropriately designed, the ability to model interactions between attributes can be used to develop
needs based segmentation
Disadvantages
designing conjoint studies can be complex
with too many options, respondents resort to simplification strategies
difficult to use for product positioning research because there is no procedure for converting
perceptions about actual features to perceptions about a reduced set of underlying features
respondents are unable to articulate attitudes toward new categories, or may feel forced to think about
issues they would otherwise not give much thought to
poorly designed studies may over-value emotional/preference variables and undervalue concrete
variables
does not take into account the number of items per purchase, so it can give a poor reading of market
share
Segmenting and positioning
A marketing strategy is based on expected customer behavior in a certain market. In order to know the
customer and its expected buying behavior, a process of segmenting and positioning is needed. These
processes are chronological steps which depend on each other. The processes of market segmentation
and positioning are described separately; this topic elaborates on the dependency and relationship
between these processes.
The process-data model
Below, a generic process-data model is given for the whole process of segmenting and positioning as
a basis for deciding on the most effective marketing strategy and marketing mix.
This model consists of three main activities: segmenting, targeting and positioning. It shows the
chronological dependency of the different activities. On the right side of the model, the concepts resulting
from the activities are shown. The arrows show that one concept results from one or more previous
concepts; a concept cannot be formed when the previous activities have not taken place. Below, the three
main activities are briefly described, as well as their role as a basis for the next step and their dependency
on the previous step.
Targeting
After the most attractive segments are selected, a company should not directly start targeting all
these segments -- other important factors come into play in defining a target market. Four
sub-activities form the basis for deciding on which segments will actually be targeted.
The four sub-activities within targeting are:
1. defining the abilities of the company and resources needed to enter a market
2. analyzing competitors on their resources and skills
3. considering the company’s abilities compared to the competitors' abilities
4. deciding on the actual target markets.
The first three sub-activities are described under the topic of competitor analysis. The last sub-activity,
deciding on the actual target market, is a comparison of the company's abilities with those of its competitors.
The results of this analysis lead to a list of segments which are most attractive to target and have a good
chance of leading to a profitable market share.
Obviously, targeting can only be done when segments have been defined, as these segments allow firms
to analyze the competitors in this market. When the process of targeting is ended, the markets to target are
selected, but the way to use marketing in these markets is not yet defined. To decide on the actual
marketing strategy, knowledge of the differential advantages of each segment is needed.
Positioning
When the list of target markets is made, a company might want to start deciding on a good marketing
mix directly. But an important step before developing the marketing mix is deciding on how to create an
identity or image of the product in the mind of the customer. Every segment is different from the others,
so each contains different customers with different ideas of what they expect from the product. In the process of
positioning the company:
1. identifies the differential advantages in each segment
2. decides on a different positioning concept for each of these segments. This process is described under the
topic of positioning, where the different concepts of positioning are given.
The process-data model shows the concepts resulting from the different activities before and within
positioning. The model shows how the predefined concepts are the basis for the positioning statement.
The analyses done of the market, competitors and abilities of the company are necessary to create a good
positioning statement. When the positioning statement is created, one can start on creating the marketing
mix.
B2C and B2B
The process described above can be used for both business-to-consumer and business-to-business
marketing. Although most variables used in segmenting the market are based on customer characteristics,
business characteristics can be described using variables that do not depend on the type of
buyer. There are, however, methods for creating a positioning statement for both B2C and B2B segments.
One of these methods is MIPS: a method for managing industrial positioning strategies by Muhlbacher,
Dreher and Gabriel-Ritter (1994).
BRAND POSITIONING:
Product positioning is an important strategy for achieving differential advantage. Positioning reflects the
"place" a product occupies in a market or segment. A successful position has characteristics that are
both differentiating and important to consumers.
Every product has some sort of position — whether intended or not. Positions are based upon
consumer perceptions, which may or may not reflect reality. A position is effectively built by
communicating a consistent message to consumers about the product and where it fits into the market
— through advertising, brand name, and packaging.
Positioning is inextricably linked with market segmentation. You can’t define a good position until you
have divided the market into unique segments and selected your target segments. Three key research
issues must be addressed:
What is your current position?
What does the "space" look like — what are the most important dimensions in the
category?
What are the other products in that space and where are they?
What are the gaps, unfilled positions or "holes" in the category?
Which dimensions are most important?
How do these attitudes differ by market segment?
What position do you want to have?
Some of the positioning opportunities for a product include:
Finding an unmet consumer need — or at least one that’s not being adequately met
now by competition
Identifying a product strength that is both unique & important
Determining how to correct a product weakness and thereby enhance a product’s
appeal. (e.g., legitimate "new & improved")
Changing consumer usage patterns to include different or additional uses for the
product
Identifying market segments that represent the best targets for a product
How do you create a new positioning?
Creating a new positioning can come from two sources:
Physical product differences
Communications — finding a memorable and meaningful way to describe the product (e.g.,
calling 7-Up the "Uncola"). As Ries and Trout point out, "Positioning is not what you do to a
product; positioning is what you do to the mind of the prospect."
Pricing research
Pricing is one of the more technical areas of market research. There are four main approaches: the Gabor-
Granger technique, Van Westendorp, Brand Price Trade-Off and conjoint analysis. Many companies sell
branded pricing research packages that are just variations on one of these techniques; however, selecting the
right technique ultimately depends on the problem you are trying to solve.
Market context and positioning are also extremely important in setting prices. In technology markets, prices
are typically falling over time. Historically the price of PCs has been dropping at about 2% per month. In
business markets "value-in-use" or "total cost" may be more important than absolute price.
Price modelling and market models are a fundamental part of pricing research.
Gabor-Granger (direct or likelihood of purchase pricing)
Gabor-Granger pricing research is named after the economists who invented it in the 1960s.
Customers are asked to complete a survey where they are asked to say if they would buy a
product at a particular price. The price is changed and respondents again say if they would buy or
not. From the results we can work out what the optimum price is for each individual. By taking a
sample of customers we can work out what levels of demand would be expected at each price
point across the market as a whole (the demand curve in the following graph). Using this
estimate of demand, the price elasticity (or expected revenue) can be calculated and so the
optimum price-point in the market established. Note that a revenue optimum may be different
from a profit optimum. The ability to model dynamically is extremely valuable in pricing studies.
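The arithmetic behind the demand curve and the revenue and profit optima can be sketched as follows; all prices, purchase proportions and the unit cost are hypothetical:

```python
# Hypothetical Gabor-Granger results: at each tested price point,
# the proportion of sampled customers who said they would buy.
demand = {5.0: 0.80, 7.5: 0.65, 10.0: 0.45, 12.5: 0.25, 15.0: 0.10}

# Expected revenue per customer at each price point
revenue = {price: price * share for price, share in demand.items()}
best_revenue_price = max(revenue, key=revenue.get)
print(best_revenue_price, revenue[best_revenue_price])

# With an assumed unit cost, the profit optimum can differ from
# the revenue optimum.
unit_cost = 4.0  # assumed cost per unit (illustrative)
profit = {p: (p - unit_cost) * s for p, s in demand.items()}
best_profit_price = max(profit, key=profit.get)
print(best_profit_price, profit[best_profit_price])
```

In this invented data the revenue optimum sits at a lower price than the profit optimum, which is exactly the distinction the text warns about.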
Gabor-Granger output
A weakness of Gabor-Granger is that customers may understate the price they will pay (there are
also circumstances in which they will overstate it). Consequently, the phrasing of the
"would you buy" question is extremely important, as are other contextual questions that place the
customer in a buying frame of mind. Typically, Gabor-Granger is only used when considering
one product in isolation, whereas in real life customers would face a choice about which product to
buy.
Van Westendorp
A more sophisticated variation is Van Westendorp pricing, which asks respondents to rate each price on
a scale from too cheap to too expensive. The resultant price "space" helps to determine the
options available, and hence the pricing tactics. This technique is better suited to price
positioning studies than to estimating optimum prices.
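A minimal sketch of the idea, using hypothetical respondent thresholds and a crude 50% cut-off in place of the usual curve intersections:

```python
# Hypothetical Van Westendorp responses: each respondent's
# "too cheap" and "too expensive" price thresholds.
too_cheap = [3, 4, 4, 5, 6]
too_expensive = [10, 11, 12, 12, 14]

def pct_too_cheap(price):
    # share of respondents who would call this price too cheap
    return sum(t >= price for t in too_cheap) / len(too_cheap)

def pct_too_expensive(price):
    # share of respondents who would call this price too expensive
    return sum(t <= price for t in too_expensive) / len(too_expensive)

# Scan candidate prices for the band where neither objection
# dominates -- a crude stand-in for the usual curve intersections.
acceptable = [p for p in range(1, 20)
              if pct_too_cheap(p) <= 0.5 and pct_too_expensive(p) <= 0.5]
print(min(acceptable), max(acceptable))
```

The resulting band is the price "space" within which pricing tactics can be positioned.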
Conjoint analysis
The major technique for pricing is based on conjoint analysis and is more sophisticated
and more reliable than the other research techniques. Conjoint is excellent at
understanding how choices are made and, consequently, the importance of price. For some,
conjoint analysis is the only way of carrying out pricing research. However, conjoint analysis is a
more technical form of research and requires higher levels of design skill. If pricing research is to be
conducted, it is often advantageous to include it as part of a broader conjoint study into product and
service features.
In conjoint analysis, customers trade off price against other product features. By looking at how
customers make decisions, the economic impact of price changes can be assessed, as can
"balanced-value" positions for price positioning.
Brand Price Trade-offs (BPTO)
For brand-specific studies, measures of brand equity, and category management, Brand Price
Trade-Off (BPTO) studies can be used. Here customers evaluate a range of products, and prices
are adjusted until customers stop purchasing.
For some markets where prices are very visible, or where there is a large amount of internal
pricing data, it is possible to use econometric methods to examine the impact of price and to
understand price elasticities. Using pricing tests, discounts and advanced statistical analysis the
impact of price can be assessed live in the real world.
The most common approach to pricing research is to rely on market intelligence and
follow-my-leader pricing using a competitor as a benchmark. However, a me-too approach leads to high
levels of competition, and it is important to consider the strategic impact of pricing as well as the
short term sales impact.
Some caution is needed when conducting pricing studies. Statistically speaking, if you are
looking to optimize prices through relatively small price changes of 5-10%, you
will need larger-than-normal sample sizes to get the statistical accuracy you need. For many
companies this can make pricing research expensive unless it is combined with a range of other
measurements.
Retail Audit
What is a Retail Audit?
It is a tool that opens up new options for strategic moves in the market. It
helps the marketer find an optimum brand/product portfolio for a
target segment, together with the best communication vehicle and flexible interiors
for high product accessibility.
How to perform a Retail Audit?
To perform a retail audit, the marketer should keep the following views in
mind.
1. Psychographics of consumers: to draft the retail value chain
2. Brand portfolio: to fill the retail value chain draft with brands and products
3. Retail format: to fill the retail value chain draft with resources
4. Service blueprints: to connect the resources and brands/products in the retail
value chain
Psychographics of consumers: The value chain blueprints of retail must be designed with
consumer psychographics in mind. The following questions must be answered while designing
the value chain blueprints.
What is the lifestyle of your consumers?
How much time do they plan for shopping?
What is the key driver for shopping (passing time, money)?
What vehicle do they usually use to travel?
What is their disposable income?
Are they brand, price or value conscious, or a subset of these (BPV
analysis)?
The POC (psychographics of consumers) helps to draft the value chain blueprint
for a retail business. The next step is to check
what brand/product mix is available to fill that draft.
Brand Portfolio: This defines the brand/product span available to attract consumers. Every
retail product line must have a combination of products serving two basic strategic
purposes.
a. Penetration builders
b. Profit builders
Most brand/product retail portfolios are based on proliferation, but in the long term
this can cause product placement insufficiency. The symptoms are increasing footfall
but a stagnant conversion rate.
Just ask yourself one question:
Are my products placed in the right places inside my shop?
A complete psychological process starts when a consumer enters the shop
and lasts until he buys and leaves. Your products and brands
are effectively advertising themselves during that time. In the first two minutes
the consumer views a range of products, and the combined brand positions
create a mindset. For instance, a showroom might display Rado's
and Omega's latest wristwatches at the entrance (consider the brand image
this creates for your shop).
The second question for a marketer is what the retail
product mix of penetration builders and profit builders should be, and
whom to target.
If you believe in buzz marketing, then target for penetration those
consumers who can create a tipping point in your footfall.
Retail Format: The retail format fills the retail value chain with resources. Its main objective
should be to make the consumer's shopping process comfortable. The factors which affect the
process are:
a. People involved
b. System Involved
c. Infrastructure (showroom space, parking space, interiors, seating
arrangement)
Service Blueprints: These connect the brands or products with the resources, making them accessible to
consumers. Service is needed not only inside the showroom but also outside (parking facilities and
so on).
How to analyze the performance of a retail audit?
The end result of a retail audit must be an increase in profit per square foot.
Behind this figure, you can assess the mechanism of each unit
and factor within the system.
Advertising research
Advertising research is a specialized form of marketing research conducted to improve the
efficiency of advertising. According to MarketConscious.com, “It may focus on a specific ad or
campaign, or may be directed at a more general understanding of how advertising works or how
consumers use the information in advertising. It can entail a variety of research approaches,
including psychological, sociological, economic, and other perspectives.” [1]
Types of advertising research
There are two types of research, customized and syndicated. Customized research is conducted
for a specific client to address that client’s needs. Only that client has access to the results of the
research. Syndicated research is a single research study conducted by a research company with
its results available, for sale, to multiple companies. [15] Pre-market research can be conducted
to optimize advertisements for any medium: radio, television, print (magazine, newspaper or
direct mail), outdoor billboard (highway, bus, or train), or Internet. Different methods would be
applied to gather the necessary data appropriately. Post-testing is conducted after the advertising
(either a single ad or an entire multimedia campaign) has been run in-market. The focus is on what
the advertising has done for the brand: for example, increasing brand awareness, trial, or frequency
of purchasing.
Pre-testing
Pre-testing, also known as copy testing, is a form of customized research that predicts in-market
performance of an ad, before it airs, by analyzing audience levels of attention, brand
linkage, motivation, entertainment, and communication, as well as breaking down the ad’s Flow
of Attention and Flow of Emotion.[16] Pre-testing is also used on ads still in rough form – e.g.,
animatics or ripomatics. Pre-testing is also used to identify weak spots within an ad to improve
performance, to more effectively edit 60’s to 30’s or 30’s to 15’s, to select images from the spot
to use in an integrated campaign’s print ad, to pull out the key moments for use in ad tracking,
and to identify branding moments.[17]
Campaign pre-testing
Campaign pre-testing is a new area of pre-testing, driven by the realization that what works on TV does not
necessarily translate to other media. Greater budgets allocated to digital media in particular have driven the
need for campaign pre-testing. The first to market with a product to test integrated campaigns was OTX, in
association with Sequent Partners, with the introduction of Media CEP. The latest generation of this product
incorporates one of the leading media planning tools, developed by the media modeling and software
company Pointlogic. The addition of a media planning tool to this testing approach allows advertisers to test
the whole campaign, creative and media, and measures the synergies expected with an integrated campaign.[18]
Post-testing
Post-testing/Tracking studies provide either periodic or continuous in-market research
monitoring a brand’s performance, including brand awareness, brand preference, product usage
and attitudes. Some post-testing approaches simply track changes over time, while others use
various methods to quantify the specific changes produced by advertising—either the campaign
as a whole or by the different media utilized.
Overall, advertisers use post-testing to plan future advertising campaigns, so the approaches that
provide the most detailed information on the accomplishments of the campaign are most valued.
The two types of campaign post-testing that have achieved the greatest use among major
advertisers include continuous tracking, in which changes in advertising spending are correlated
with changes in brand awareness, and longitudinal studies, in which the same group of
respondents are tracked over time. With the longitudinal approach, it is possible to go beyond
brand awareness, and to isolate the campaign's impact on specific behavioral and perceptual
dimensions, and to isolate campaign impact by medium [19].
Copy testing
Copy testing is a specialized field of marketing research. It is the study of advertisements
prior to running them, and is defined as research to determine an ad’s
effectiveness based on consumers’ responses to the ad. It covers all media, including print,
TV, radio, the Internet, etc. Although also known as copy testing, pre-testing is considered the more
accurate, modern name (Young, p.4) for the prediction of how effectively an ad will perform,
based on the analysis of feedback gathered from the target audience. Each test will either
qualify the ad as strong enough to meet company action standards for airing or identify
opportunities to improve the performance of the ad through editing. (Young, p.213)
Pre-testing is also used to identify weak spots within an ad campaign, to more effectively edit
60-second ads to 30-second ads or 30’s to 15’s, to select images from the spot to use in an
integrated campaign’s print ad, to pull out the key moments for use in ad tracking, and to identify
branding moments. [1]
Features of a Good Copy Testing system
In 1982, a consortium of 21 leading advertising agencies, including N.W. Ayer, D’Arcy, Grey,
McCann-Erickson, Needham Harper & Steers, Ogilvy & Mather, J. Walter Thompson and Young &
Rubicam, among others, released a public document in which they laid out the PACT (Positioning Advertising
Copy Testing) Principles on what constitutes a good copy testing system. According to PACT, a
good copy testing system is one that meets the following criteria:
1. Provides measurements which are relevant to the objectives of the advertising
2. Requires agreements about how the results will be used in advance of each specific
test.
3. Provides multiple measurements – because single measurements are generally
inadequate to assess the performance of an advertisement.
4. Is based on a model of human response to communications – the reception of a stimulus,
the comprehension of the stimulus and the response to the stimulus.
5. Allows for consideration of whether the advertising stimulus should be exposed more
than once.
6. Recognizes that the more finished a piece of copy is, the more soundly it can be
evaluated and requires, as a minimum, that alternative executions be tested in the same
degree of finish.
7. Provides controls to avoid the biasing effects of the exposure context.
8. Takes into account basic considerations of sample definition.
9. Demonstrates reliability and validity.
Readership and viewership surveys:
Audience measurement measures how many people are in an audience, usually in relation
to radio listenership and television viewership, but also in relation to newspaper and
magazine readership and, increasingly, web traffic on websites. Sometimes, the term is used
as pertaining to practices which help broadcasters and advertisers determine who is listening
rather than just how many people are listening. This broader meaning is also called audience
research.
Measurements are broken down by media market, which for the most part corresponds
to metropolitan areas, both large and small.
Methods
Diaries
The diary was the first and, until recently, the only method of recording information. However, this method
is prone to mistakes and forgetfulness, as well as subjectivity. Data is also collected down to the
level of listener opinion of individual songs, cross referenced against their age, race, and
economic status in listening sessions sponsored by oldies and mix formatted stations.
Electronic
More recently, technology has been used to track listening and viewing
habits. Arbitron's Portable People Meter uses a microphone to pick up and record subaudible
tones embedded in broadcasts by an encoder at each station or network. It has even been used
to track in-store radio.
Software
There are software applications being developed to monitor cable TV operators, with full
passive and permissive viewer measurement functionality, in order to monitor television channel ratings.
The system tracks every time the channel is changed and records it accordingly. It records what
was being viewed at the time and which channel the viewer changed to. This information allows
operators, broadcasters and advertising media to monitor audience TV usage habits.
New media
Nielsen//NetRatings measures Internet and digital media audiences through a telephone and
Internet survey. Nielsen BuzzMetrics measures consumer-generated media. Other companies
collecting information on internet usage include comScore and Hitwise, who measure hits on
internet pages. Companies like Visible Measures focus on measuring specific types of media; in
the case of Visible Measures, they measure online video consumption and distribution across all
video advertising and content. TruMedia, Quividi, and CognoVision provide real-time audience
data including size, attention span and demographics by using video analytics technology to
automatically detect, track and classify viewers watching digital displays. Networked Insights
measures online audiences, and released a report ranking television shows based on people's
interactions within social media. The study showed that half of the shows on Networked
Insights' top 10 list did not appear on the Nielsen list.
Ratings point
Ratings point is a measure of viewership of a particular television program.
One single television ratings point (Rtg or TVR) represents 1% of viewers in the surveyed area
in a given minute. As of 2004, there are an estimated 109.6 million television households in
the United States. Thus, a single national household ratings point represents 1%, or 1,096,000
households for the 2004-05 season. When used for the broadcast of a program, the average
rating across the duration of the show is typically given. Ratings points are often used for
specific demographics rather than just households. For example, a ratings point among the key
18-49 demographic is equivalent to 1% of all 18-49-year-olds in the country.
A Rtg / TVR is different from a share point in that it is the percentage of all possible viewers,
while a share point is 1% of all viewers watching television at the time. Hence the share of a
broadcast is often significantly higher than the rating, especially at times when overall TV
viewing is low.
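The rating and share arithmetic can be sketched as follows; the sets-in-use and programme audience figures are assumed for illustration, while the total household count is the 2004-05 figure cited above:

```python
# Rating vs share for one minute of television (illustrative).
tv_households = 109_600_000    # all US TV households (2004-05 figure)
sets_in_use = 40_000_000       # households watching any TV now (assumed)
watching_show = 8_768_000      # households tuned to this programme (assumed)

rating = watching_show / tv_households * 100  # % of ALL TV households
share = watching_show / sets_in_use * 100     # % of households watching TV now

print(round(rating, 2))  # 8.0 rating points
print(round(share, 2))   # 21.92 share -- higher than the rating
```

Because the share divides by the smaller sets-in-use base, it always comes out at least as high as the rating, and much higher when overall viewing is low.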
Ad Tracking
Ad tracking, also known as post-testing or ad-effectiveness tracking, is in-market research
that monitors a brand’s performance, including brand and advertising awareness, product trial
and usage, and attitudes about the brand versus its competition.
Depending on the speed of the purchase cycle in the category, tracking can be done
continuously (a few interviews every week) or it can be “pulsed,” with interviews conducted in
widely spaced waves (e.g., every three or six months). Interviews can either be conducted with
separate, matched samples of consumers, or with a single (longitudinal) panel that is
interviewed over time.
Since the researcher has information on when the ads launched, the length of each advertising
flight, the dollars spent, and when the interviews were conducted, the results of ad tracking can
provide information on the effects of advertising.
Purpose of Ad Tracking
The purpose of ad tracking is generally to provide a measure of the combined effect of the
media weight or spending level, the effectiveness of the media buy or targeting, and the quality
of the advertising executions or creative.
Advertisers use the results of ad tracking to estimate the return on investment (ROI) of
advertising, and to refine advertising plans. Sometimes, tracking data are used to provide inputs
to Marketing Mix Models which marketing science statisticians build to estimate the role of
advertising, as compared to pricing, distribution and other marketplace variables on sales of the
brand.
Methodology
Today, most ad tracking studies are conducted via the Internet. Some ad tracking studies are
conducted continuously and others are conducted at specific points in time (typically before the
advertising appears in market, and then again after the advertising has been running for some
period of time). The two approaches use different types of analyses, although both start by
measuring advertising awareness. Typically, the respondent is either shown a brief portion of a
commercial or a few memorable still images from the TV ad. Other media are typically cued
using either a branded or de-branded visual of the ad. Then respondents answer three significant
questions:
1. Do you recognize this ad? (recognition measure)
2. Please type in the sponsor of this ad. (unaided awareness measure)
3. Please choose from the following list, the sponsor of this ad. (aided awareness measure)
The continuous tracking design analyzes advertising awareness over time, in relation to ad
spending; separately, this design tracks brand awareness, and then develops indices of
effectiveness based on the strength of the correlations between ad spending and brand
awareness.
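A minimal illustration of this index-building step, using invented monthly figures; the data and the use of a plain Pearson correlation as the "effectiveness index" are assumptions for the sketch:

```python
import numpy as np

# Hypothetical monthly tracking data: ad spend ($k) and the share of
# respondents aware of the brand's advertising that month.
spend     = np.array([0, 50, 80, 80, 40, 0, 0, 60, 90, 90], dtype=float)
awareness = np.array([0.12, 0.18, 0.25, 0.28, 0.24, 0.20, 0.16, 0.22, 0.30, 0.33])

# Pearson correlation between spend and awareness over time - one simple
# effectiveness index of the kind the continuous design relies on.
r = np.corrcoef(spend, awareness)[0, 1]
print(f"spend/awareness correlation: {r:.2f}")
```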
The most popular alternate approach to the continuous tracking design is
the Communicus System longitudinal design, in which the same people are interviewed at two
points in time. Changes in brand measures (for example, brand purchasing and future purchase
intentions) exhibited among those who have seen the advertising are compared to the changes
in brand measures that occurred among those unaware of advertising. By means of this
method, the researchers can isolate those marketplace changes that were produced by
advertising versus those that would have occurred without advertising.
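The logic of that comparison can be sketched as a difference-in-differences on invented two-wave panel scores. The data are hypothetical, and a real study would also control for selection effects among the ad-aware group.

```python
# Hypothetical two-wave panel: purchase-intent scores (0-10) for the same
# respondents before and after the campaign, split by whether they proved
# aware of the advertising at wave 2.
aware_wave1   = [5, 6, 4, 7, 5, 6]
aware_wave2   = [7, 7, 6, 8, 6, 8]
unaware_wave1 = [5, 6, 5, 6, 4, 5]
unaware_wave2 = [5, 7, 5, 6, 5, 5]

def mean(xs):
    return sum(xs) / len(xs)

# Change among the ad-aware minus change among the unaware isolates the
# lift attributable to advertising from changes that would have occurred anyway.
lift = (mean(aware_wave2) - mean(aware_wave1)) - (mean(unaware_wave2) - mean(unaware_wave1))
print(f"advertising lift in purchase intent: {lift:.2f} points")  # about 1.17
```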
Internet tracking
Several tools exist to track the different types of online ads: banner ads, pay-per-click (PPC)
ads, pop-up ads, and others. Some online advertising companies, such as Google, offer their
own ad tracking service so that advertisers can use the platform to generate a positive ROI.
Third-party ad tracking services are commonly used by affiliate marketers, who frequently lack
access to the merchant's order page and therefore cannot rely on the merchant's own tracking.
Many companies have created tools to track commissions accurately and so optimize profit
potential. The information shows the marketer which advertising methods are generating
income and which are not, allowing the marketer to allocate the budget in the best possible
way.
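A toy version of this kind of per-channel bookkeeping; the channel names and all figures are invented for illustration:

```python
# Hypothetical per-channel tracking results: ad spend and the revenue
# attributed to each online channel.
channels = {
    "banner": {"spend": 1000, "revenue": 1500},
    "ppc":    {"spend": 2000, "revenue": 5200},
    "pop_up": {"spend": 500,  "revenue": 300},
}

# ROI per channel shows which methods generate income and which do not,
# so the budget can be shifted toward the profitable ones.
rois = {name: (c["revenue"] - c["spend"]) / c["spend"] for name, c in channels.items()}
for name, roi in sorted(rois.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ROI {roi:+.0%}")
```

Here the pop-up channel loses money and would be the first candidate for reallocation.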
Measures
Here is a list of some of the data a post-test might provide:
Top of mind awareness
Unaided brand awareness
Aided brand awareness
Brand fit
Brand image ratings
Brand trial
Repeat purchase
Frequency of use
Purchase intent
Price perceptions
Unaided advertising awareness
Aided advertising awareness
Unaided advertising message recall
Aided advertising message recall
Aided commercial recall
Ad wear out
Promotion awareness and usage
Market segment characteristics
Media habits
Lifestyle/Psychographics
Demographics
Viral Marketing Research
Viral Marketing Research is a subset of marketing research that measures and compares the
relative Return On Investment (ROI) of advertising and communication strategies designed to
exploit social networks.
Algorithms are used to derive respondent-level coefficients of Social Networking
Potential (SNP). These coefficients are integrated with respondent-level data measuring
1. the selling effectiveness of specific communications and
2. the Viral Marketing Potential of those communications within specific media (e.g.,
Internet video, texting, print ads, television).
Results identify strategies that are likely to drive sales among the target audience and be
distributed throughout relevant social networks.
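One way such an integration might be sketched: weight each communication's selling effectiveness by the respondent's SNP coefficient. The scoring rule and all figures here are assumptions for illustration, not the proprietary algorithm the text describes.

```python
# Hypothetical respondent-level data: each record holds a Social Networking
# Potential (SNP) coefficient and selling-effectiveness scores for two
# candidate media.
respondents = [
    {"snp": 0.9, "effectiveness": {"video": 0.7, "print": 0.4}},
    {"snp": 0.2, "effectiveness": {"video": 0.5, "print": 0.6}},
    {"snp": 0.6, "effectiveness": {"video": 0.8, "print": 0.3}},
]

# Weighting effectiveness by SNP favors strategies that both sell and are
# likely to spread through respondents' social networks.
def viral_score(medium):
    return sum(r["snp"] * r["effectiveness"][medium] for r in respondents) / len(respondents)

scores = {m: viral_score(m) for m in ("video", "print")}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```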
Examples
An electronics manufacturer is about to launch a new video console and wants to maximize the
new product's potential. In advance of the launch, Viral Marketing Research is used to compare
the relative ROI of several strategies among high-SNP (Social Networking Potential)
respondents within the target audience. Results help the manufacturer maximize sales by
identifying what needs to be communicated and through which media (e.g., print ads, Internet
videos, texting, television).
A pharmaceutical company has developed a new drug for an existing drug category and needs
to build brand recognition. Viral Marketing Research could be conducted among physicians or
patients to identify which communication strategies are most likely to be spread by word-of-
mouth, and which are likely to induce physicians/patients to prescribe/request the new drug.
The search engine Viral Sauce uses viral marketing research and statistical methods in order to
rank the results of search queries by the perceived viral potential of content (using a measure
which they call vRank), rather than by content's current popularity. In doing so they claim to be
able to retrieve content which is more interesting to niche users.
Marketing effectiveness
Marketing effectiveness is a measure of how well marketers go to market, with the goal of
optimizing their spending to achieve good results over both the short and long term. It is also
related to Marketing ROI and Return on Marketing Investment (ROMI).
Introduction
Marketing effectiveness has four dimensions:
Corporate – Each company operates within different bounds. These are determined by their
size, their budget and their ability to make organizational change. Within these bounds
marketers operate along the five factors described below.
Competitive – Each company in a category operates within a similar framework, as described
below. In an ideal world, marketers would have perfect information on how they act as well as
how their competitors act. In reality, marketers in many categories have reasonably good
information through sources such as IRI or Nielsen, while in many other industries competitive
marketing information is hard to come by.
Customers/Consumers – Understanding and taking advantage of how customers make
purchasing decisions can help marketers improve their effectiveness. Groups of consumers act
in similar ways, leading to the need to segment them. Within these segments, consumers make
choices based on how they value the attributes of a product and the brand, in return for the
price paid. Consumers build brand value through information, which reaches them through
many sources, such as advertising, word-of-mouth and the (distribution) channel; this process is
often characterized with the purchase funnel, a McKinsey & Company concept. Lastly,
consumers consume and make purchase decisions in certain ways.
Exogenous Factors – There are many factors outside of our immediate control that can
impact the effectiveness of our marketing activities. These can include the weather, interest
rates, government regulations and many others. Understanding the impact these factors can
have on our consumers can help us to design programs that can take advantage of these
factors or mitigate the risk of these factors if they take place in the middle of our marketing
campaigns.
There are five factors driving the level of marketing effectiveness that marketers can achieve:
1. Marketing Strategy – Improving marketing effectiveness can be achieved by employing
a superior marketing strategy. By positioning the product or brand correctly, the
product/brand will be more successful in the market than competitors’ products/brands.
Even with the best strategy, marketers must execute their programs properly to achieve
extraordinary results.
2. Marketing Creative – Even without a change in strategy, better creative can improve
results. Without a change in strategy, AFLAC achieved stunning results with the
introduction of its Duck (AFLAC) campaign. With this new creative concept, the
company's growth rate soared from 12% before the campaign to 28% after it. (See
references below, Bang)
3. Marketing Execution – By improving how they go to market, marketers can achieve
significantly greater results without changing their strategy or their creative execution. At
the marketing mix level, marketers can make small changes in any or all of the 4 Ps
(Product, Price, Place and Promotion) and, without altering the strategic position or the
creative execution, improve their effectiveness and deliver increased revenue. At the
program level, marketers can improve their effectiveness by managing and executing
each of their marketing campaigns better. It is commonly known that consistency of a
creative strategy across various media (e.g. TV, radio, print and online), not just within
each individual media message, can amplify and enhance the impact of the overall
marketing campaign. Other examples include improving direct mail through a better
call-to-action, or editing web site content to improve its organic search results; in each
type of program, such changes improve marketing effectiveness. A
growing area of interest within Marketing Strategy and Execution is the interaction of
traditional marketing (e.g. TV or Events) with online consumer activity (e.g. Social
Media). (See references below, Brand Ecosystems) Not only direct
product experience, but also any stimulus provided by traditional marketing, can
become a catalyst for a consumer brand "groundswell" online as outlined in the
book Groundswell.
4. Marketing Infrastructure (also known as Marketing Management) – Improving the
business of marketing can lead to significant gains for the company. Management of
agencies, budgeting, motivation and coordination of marketing activities can lead to
improved competitiveness and improved results. The overall accountability for brand
leadership and business results is often reflected in an organization's brand
management department.
5. Exogenous Factors - Generally out of the control of marketers, external or exogenous
factors also influence how marketers can improve their results. Taking advantage of
seasonality, interests or the regulatory environment can help marketers improve their
marketing effectiveness.
To start to make these types of measurements, you need to carry out stimulus-response
measurements. These can use techniques such as conjoint analysis prior to market launch, or
test-and-control or fractional factorial design experiments after market launch.
For much of this type of work we need to know far better how marketing works. Traditional models
such as Awareness-Interest-Desire-Action don't work well enough, or fail to capture enough of the
reality of decision-making. Our pattern-cascade model fits far better with the idea of stimulus-
response and leads into research and measurement techniques that look at how and what decisions
are made and the patterns that consumers link to their purchases, which we explore using techniques
like our sensory-emotion process.
True "customer satisfaction" is an organization's ability to attract and retain customers and enhance the
customer relationship over time. It is not simple, and the answer cannot be collapsed into a single
"customer satisfaction index." Every interaction a customer has with a company's products and services
is a reflection on quality.
Customer satisfaction measurement (CSM) is a management information system that continuously captures the
voice of the customer through the assessment of performance from the customer's point of view. This
information provides a platform for the strategic alignment of organizational resources to deliver whatever is
most important to customers.
Customer satisfaction measurement is an evolving tool that is moving beyond early, basic measures of
satisfaction toward approaches that enable a business to compete more effectively in its targeted market.
Simple approaches to assessing customer satisfaction fail to measure:
Perceptions of non-customers
Tracking "market satisfaction" requires input from non-customers as well as customers.
Performance relative to competitors
Customers judge your product/service offering relative to offerings of your key competitors. If your
performance is improving, but your competitors are improving faster, your relative perceived quality
would actually decline.
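A small numeric illustration of that point, with invented scores: both firms improve in absolute terms, yet the ratio of your score to the competitor's falls each year.

```python
# Hypothetical mean satisfaction scores (0-10) over three years for your
# firm and its key competitor.
ours   = [7.0, 7.4, 7.8]
theirs = [6.8, 7.5, 8.4]

# Relative perceived quality: your score divided by the competitor's.
# Your absolute score rises, but the competitor improves faster.
ratios = [u / t for u, t in zip(ours, theirs)]
for year, r in enumerate(ratios, start=1):
    print(f"year {year}: relative quality {r:.2f}")
```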
In contrast, market-perceived quality versus competitors involves a dramatic shift in focus — from
satisfying your current customers to beating competitors through customer value management. Firms
that succeed in holding onto their customer relationships:
Seek out features that are both unique and worth a lot to customers
Differentiate their product/service offerings to meet differing segment needs better than their
competitors
Actively communicate these benefits, building a conviction by the customer that they are better
off continuing their relationship
True customer value management entails integration of total quality management with the company’s
classic management systems (strategic planning, budgeting & control, capital investment, competitive
analysis, performance measures & reward) to ensure that companies enter and invest only in businesses
where they can be quality & value leaders.
The payoff from customer satisfaction measurement comes from its ability to define & direct a company’s
quality improvement efforts, and its quality/value position in the marketplace. Customer satisfaction
measurement and quality impact profits by:
Reducing costs
Preventing erosion in revenues over time
Increasing market share
Increasing gross margins
Mystery shopping
Mystery shopping or a mystery consumer is a tool used by Mystery Shopping Providers
and market research companies to measure the quality of retail service or to gather specific
information about products and services. Mystery shoppers posing as normal
customers perform specific tasks—such as purchasing a product, asking questions, registering
complaints or behaving in a certain way—and then provide detailed reports or feedback about
their experiences.
Mystery shopping was standard practice by the early 1940s as a way to measure employee
integrity. Tools used for mystery shopping assessments range from simple questionnaires to
complete audio and video recordings. Mystery shopping can be used in any industry, with the
most common venues being retail stores, hotels, movie theaters, restaurants, fast food chains,
banks, gas stations, car dealerships, apartments, health clubs and health care facilities. In
the UK mystery shopping is increasingly used to provide feedback on customer services
provided by local authorities, and other non-profit organizations such as housing
associations and churches.[1]
Methodology
When a client company hires a company providing mystery shopping services, a survey model
will be drawn up and agreed to which defines what information and improvement factors the
client company wishes to measure. These are then drawn up into survey instruments and
assignments that are allocated to shoppers registered with the mystery shopping company.
The details and information points shoppers take note of typically include:
number of employees in the store on entering
how long it takes before the mystery shopper is greeted
the names of the employees
whether or not the greeting is friendly, ideally according to objective measures
the questions asked by the shopper to find a suitable product
the types of products shown
the sales arguments used by the employee
whether or how the employee attempted to close the sale
whether the employee suggested any add-on sales
whether the employee invited the shopper to come back to the store
cleanliness of store and store associates
speed of service
compliance with company standards relating to service, store appearance, and
grooming/presentation
Shoppers are often given instructions or procedures to make the transaction atypical to make
the test of the knowledge and service skills of the employees more stringent or specific to a
particular service issue (known as scenarios). For instance, mystery shoppers at a restaurant
may pretend they are lactose-intolerant, or a clothing store mystery shopper could inquire about
gift-wrapping services. Not all mystery shopping scenarios include a purchase.
While gathering information, shoppers usually blend in to the store being evaluated as regular
shoppers. They may sometimes be required to take photographs or measurements, return
purchases, or count the number of products, seats, or people during the visit. A timer or a
stopwatch may be required. In some states in the USA, mystery shoppers must also be licensed
as private investigators in order to perform some of the tasks.
After the visit the shopper submits the data collected to the mystery shopping company, which
reviews and analyzes the information, completing quantitative or qualitative statistical analysis
reports on the data for the client company. This enables measurement against the previously
defined criteria.
Statistics
The mystery shopping industry had an estimated value of nearly $600 million in the United
States in 2004, according to a 2005 report commissioned by the Mystery Shopping Providers
Association (MSPA). Companies that participated in the report experienced an average growth
of 11.1 percent from 2003 to 2004, compared to an average growth of 12.2 percent. The report
estimates more than 8.1 million mystery shops were conducted in 2004. The report represents
the first industry association attempt to quantify the size of the mystery shopping industry.
Similar surveys are available for European regions where mystery shopping is becoming more
embedded into company procedures.[2]
As a measure of its importance, customer/patient satisfaction is being incorporated more
frequently into executive pay. A study by a U.S. firm found more than 55% of hospital chief
executive officers surveyed in 2005 had "some compensation at risk," based on patient
satisfaction, up from only 8% to 20% a dozen years ago.[3]
CBC Television's news magazine program Marketplace ran a segment on this topic during a
January 2001 episode.[4]
Market analysis
A market analysis is a documented investigation of a market that is used to inform a firm's
planning activities, particularly around decisions of inventory, purchases, workforce
expansion/contraction, facility expansion, purchases of capital equipment, promotional
activities, and many other aspects of a company.
Dimensions of market analysis
David A. Aaker outlined the following dimensions of a market analysis:
Market size (current and future)
Market growth rate
Market profitability
Industry cost structure
Distribution channels
Market trends
Key success factors
The goal of a market analysis is to determine the attractiveness of a market, both now and in
the future. Organizations evaluate the future attractiveness of a market by gaining an
understanding of evolving opportunities and threats as they relate to that organization's own
strengths and weaknesses.
Organizations use the findings to guide the investment decisions they make to advance their
success. The findings of a market analysis may motivate an organization to change various
aspects of its investment strategy. Affected areas may include inventory levels, workforce
expansion/contraction, facility expansion, purchases of capital equipment, and promotional
activities.
Elements
Market size
The most common measure of market size is the sum of the revenues of its participants. The
following are examples of information sources for determining market size:
Government data
Trade association data
Financial data from major players
Customer surveys
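Once participant revenues have been compiled from sources like those above, the summation itself is straightforward; a sketch with hypothetical figures:

```python
# Hypothetical revenues ($M) of the market's main participants, compiled
# from financial filings and trade-association data.
participant_revenues = {"Firm A": 120.0, "Firm B": 80.0, "Firm C": 45.5, "others": 30.0}

# The most common market-size measure is the sum of participant revenues.
market_size = sum(participant_revenues.values())
print(f"estimated market size: ${market_size:.1f}M")  # -> $275.5M
```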
Market trends
Changes in the market are important because they often are the source of new opportunities
and threats. Moreover, they have the potential to dramatically affect the market size.
Examples include changes in economic, social, regulatory, legal, and political conditions and in
available technology, price sensitivity, demand for variety, and level of emphasis on service and
support.
Market growth rate
A simple means of forecasting the market growth rate is to extrapolate historical data into the
future. While this method may provide a first-order estimate, it does not predict important turning
points. A better method is to study market trends and sales growth in complementary products.
Such drivers serve as leading indicators that are more accurate than simply extrapolating
historical data.
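A first-order extrapolation of the kind described might look like this, with hypothetical figures; note that a straight-line fit, by construction, cannot anticipate turning points:

```python
# Hypothetical market sizes ($M) for the past five years.
years = [2006, 2007, 2008, 2009, 2010]
sizes = [100.0, 108.0, 117.0, 126.0, 136.0]

# Least-squares straight line through the history, extrapolated one year out.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(sizes) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, sizes))
         / sum((x - mean_x) ** 2 for x in years))
forecast = mean_y + slope * (2011 - mean_x)
print(f"2011 forecast: ${forecast:.1f}M")  # -> $144.4M
```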
Important inflection points in the market growth rate sometimes can be predicted by constructing
a product diffusion curve. The shape of the curve can be estimated by studying the
characteristics of the adoption rate of a similar product in the past.
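The Bass diffusion model is one common way to construct such a curve. In this sketch the innovation coefficient p and imitation coefficient q are assumed values; in practice they would be estimated from an analogous product's adoption history.

```python
import math

# Bass diffusion curve with illustrative coefficients.
p, q = 0.03, 0.38  # innovation and imitation coefficients (assumed)

def adoption_share(t):
    """Cumulative fraction of the market that has adopted by time t (years)."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

# The curve's inflection point - the moment of peak adoption growth -
# falls at t* = ln(q/p) / (p + q).
t_peak = math.log(q / p) / (p + q)
print(f"peak adoption growth around year {t_peak:.1f}")
```

The inflection point t* is exactly the kind of turning point that naive extrapolation of historical data misses.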
Ultimately, many markets mature and decline. Some leading indicators of a market's decline
include market saturation, the emergence of substitute products, and/or the absence of growth
drivers.
Market segments
Markets are not uniform. Therefore it is also important for investors to identify and evaluate the
various segments that make up the total market. This analysis helps organizations determine
which areas account for the greatest share of the market's growth and are more susceptible to
change. This information, in turn, helps them pinpoint the most promising opportunities within
the overall market and guides the choice of specific investments.
Market profitability
While different organizations in a market will have different levels of profitability, they are all
subject to similar market conditions. Michael Porter devised a useful framework for evaluating
the attractiveness of an industry or market. This framework, known as Porter's five forces,
identifies five factors that influence the market profitability:
Buyer power
Supplier power
Barriers to entry
Threat of substitute products
Rivalry among firms in the industry
Industry cost structure
The cost structure is important for identifying key factors for success. To this end, Porter's value
chain model is useful for determining where value is added and for isolating the costs.
The cost structure also is helpful for formulating strategies to develop a competitive advantage.
For example, in some environments the experience curve effect can be used to develop a cost
advantage over competitors.
Distribution channels
Examining the following aspects of the distribution system may help with a market analysis:
Existing distribution channels - can be described by how direct they are to the customer.
Trends and emerging channels - new channels can offer the opportunity to develop a
competitive advantage.
Channel power structure - for example, in the case of a product having little brand equity,
retailers have negotiating power over manufacturers and can capture more margin.
Success factors
The key success factors are those elements that are necessary in order for the firm to achieve
its marketing objectives. A few examples of such factors include:
Access to essential unique resources
Ability to achieve economies of scale
Access to distribution channels
Technological progress
It is important to consider that key success factors may change over time, especially as the
product progresses through its life cycle.
Applications
The literature defines several areas in which market analysis is important. These include: sales
forecasting, market research, and marketing strategy. Not all managers will need to conduct a
market analysis. Nevertheless, it is important for managers who use market analysis data to
understand how analysts derive their conclusions and what techniques they use to do so.
Sales Analysis Methods
By Charles Pearson, eHow Contributor
updated: August 3, 2010
1. Choosing sales representatives and sales strategies plays a large role in determining a company's success.
Sales managers conduct sales analysis to decide how companies can effectively increase
sales. Sales analysis involves analyzing markets, examining the sales process, recruiting sales
representatives, evaluating the sales skills needed to be effective and determining the
appropriate size of the sales force.
Recruiting
2. Sales managers must first recruit sales representatives before analyzing their potential for
success, according to the University of West Florida. Recruiting sources include sales
conferences, professional development organizations for salespeople and job boards for sales
representatives.
Staff Size
3. Sales managers must determine how many sales representatives they need. Methods of
determining this involve assessing the number of customers the company expects to contact
and how long it takes for sales pitches to reach customers, according to the University of West
Florida. However, another method is to estimate how much sales volume each representative
will generate and to compare that to how much the sales representative will cost the company.
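Both sizing methods can be sketched numerically; every figure below is hypothetical:

```python
# Workload method: size the force from expected customer contacts and
# each representative's call capacity.
accounts = 1200                 # customers the company expects to contact
calls_per_account = 12          # sales visits per account per year
hours_per_call = 1.5
selling_hours_per_rep = 1500    # annual selling hours available per rep

total_hours = accounts * calls_per_account * hours_per_call
reps_needed = -(-int(total_hours) // selling_hours_per_rep)  # ceiling division
print(f"representatives needed: {reps_needed}")

# Alternative breakeven method: a rep's expected sales volume must cover
# the rep's cost (hypothetical 30% gross margin on sales).
rep_cost, gross_margin = 90_000, 0.30
breakeven_sales_per_rep = rep_cost / gross_margin
print(f"breakeven sales per rep: ${breakeven_sales_per_rep:,.0f}")
```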
Job Description
4. A job description is usually needed when recruiting a sales representative, regardless of
whether he is hired through social networking or a job posting, according to the University of
West Florida. In selecting candidates to interview, the sales manager uses criteria such as
experience, education and people skills.
Evaluation
5. Several methods can be used to analyze how successful a sales representative is. The amount
of sales she generates can be determined by the average sales quota for the region. This
analysis gives the company realistic expectations of how many sales the representative will
likely make, according to the University of West Florida. Also, the representative's sales can be
compared to the industry average, which is more feasible when the sales representative's
coverage extends beyond territories.
Sales Process
6. The effectiveness of the sales process needs to be assessed for training and mentoring
purposes. Successful sales methods can be determined by comparing the success of past
marketing techniques.
Market Research
7. Market research is necessary to make sales decisions. Market research can be conducted
through phone interviews and surveys of customers. Market research can also be conducted by
studying sales statistics. Products that continue to sell or sell out might need to be expanded.
Products that are waning in sales might need a more aggressive advertising campaign or need
to be discontinued so sales representatives can focus on selling other products.
The main types of qualitative research are:
Depth Interviews
interview is conducted one-on-one, and lasts between 30 and 60 minutes
best method for in-depth probing of personal opinions, beliefs, and values
very rich depth of information
very flexible
probing is very useful at uncovering hidden issues
they are unstructured (or loosely structured) - this differentiates them from survey interviews in
which the same questions are asked of all respondents
can be time consuming and responses can be difficult to interpret
requires skilled interviewers - expensive - interviewer bias can easily be introduced
there is no social pressure on respondents to conform and no group dynamics
start with general questions and rapport establishing questions, then proceed to more purposive
questions
laddering is a technique used by depth interviewers in which you start with questions about
external objects and external social phenomena, then proceed to internal attitudes and feelings
hidden issue questioning is a technique used by depth interviewers in which they concentrate
on deeply felt personal concerns and pet peeves
symbolic analysis is a technique used by depth interviewers in which deeper symbolic
meanings are probed by asking questions about their opposites
Focus Groups
an interactive group discussion led by a moderator
unstructured (or loosely structured) discussion where the moderator encourages the free flow of
ideas
usually 8 to 12 members in the group who fit the profile of the target group or consumer but may
consist of two interviewees (a dyad) or three interviewees (a triad) or a lesser number of
participants (known as a mini-group)
usually last for 1 to 2 hours
usually recorded on video/DVD
may be streamed via a closed streaming service for remote viewing of the proceedings
the room usually has a large window with one-way glass - participants cannot see out, but the
researchers can see in
inexpensive and fast
can use computer and internet technology for on-line focus groups
respondents feel a group pressure to conform
group dynamics is useful in developing new streams of thought and covering an issue thoroughly
see focus group for a more detailed description
Projective Techniques
these are unstructured prompts or stimuli that encourage respondents to project their
underlying motivations, beliefs, attitudes, or feelings onto an ambiguous situation
they are all indirect techniques that attempt to disguise the purpose of the research
examples of projective techniques include:
word association - say the first word that comes to mind after hearing a word - only some of
the words in the list are test words that the researcher is interested in, the rest are fillers - is
useful in testing brand names - variants include chain word association and controlled word
association
sentence completion - respondents are given incomplete sentences and asked to complete
them
story completion - respondents are given part of a story and are asked to complete it
cartoon tests - pictures of cartoon characters are shown in a specific situation and with
dialogue balloons - one of the dialogue balloons is empty and the respondent is asked to fill it
in
thematic apperception tests - respondents are shown a picture (or series of pictures) and
asked to make up a story about the picture(s)
role playing - respondents are asked to play the role of someone else - researchers assume
that subjects will project their own feelings or behaviours into the role
third-person technique - a verbal or visual representation of an individual and his/her situation
is presented to the respondent - the respondent is asked to relate the attitudes or feelings of
that person - researchers assume that talking in the third person will minimize the social
pressure to give standard or politically correct responses
Random Probability Sampling
This type of qualitative research conducts random interviews within a defined universe, e.g. a
city, to understand consumer behavior beyond basic age and gender variables.
Examples of random sample interviewing include telephone interviewing, mailed
questionnaires/booklets, and personal interviewing.
Consumer responses gathered this way can cover product usage, personal opinions, and the
events and activities consumers participate in.
One key benefit of the random probability sampling technique is the ability to project your results
as representative of your universe: for example, how many consumers in a city are Republican,
Democrat, independent, or indifferent.
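Because the sample is random, the projection carries a quantifiable margin of error. A sketch with invented counts, using the standard normal approximation for a proportion:

```python
import math

# Hypothetical random-probability sample of 400 city residents, 96 of
# whom identify as independent.
n, hits = 400, 96
p = hits / n  # sample proportion

# 95% margin of error under the normal approximation: the sense in which
# the sample result "projects" to the whole universe.
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} independent, +/-{moe * 100:.1f} points")
```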
Newer Methods
Observational & Ethnographic Research
One of the more fundamental uses of qualitative research is understanding consumer behaviour
through observational research. Its roots lie in anthropological studies in which trained researchers
went to observe tribes, cultures, and societies, for periods as long as a couple of years.
Nowadays, this kind of research is being supplemented by more cutting-edge fields like neuroscience,
where observation is accompanied by measuring brain activity. This rests on the assumption that the
brain very often reacts without our knowing it, so that asking questions or pure observation by
themselves are not enough to really pinpoint what goes on.
Another application is the longitudinal study, a correlational research design that involves repeated
observations of the same items over long periods of time.
Psychological Research
Qualitative marketing research comes in many different guises, but qualitative psychological
research has crystallized as one of the most effective ways of gathering insight into the behaviours,
attitudes and decision-making processes of consumers and customers. Most qualitative research
companies in the world will claim that they employ psychologists and base their findings on
psychological theories. The psychology-backed methodologies applied in qualitative marketing
research are continuously changing and being further developed. One example of a psychological
theory developed specifically for use in marketing research is morphological psychology.
Ethics in qualitative marketing research
Like all research involving human participants, qualitative marketing research raises ethical
considerations. Some research designs take a very direct approach: they clearly disclose the
objectives of the study and the organization that commissioned it, and they ask transparent questions. Other
designs conceal the study's objectives and/or the commissioning organization, or use questions
designed to keep participants from discerning the study design.
Some researchers have ethical misgivings about the deceit involved in some approaches. They argue
that if disguised methods are used, all respondents should, on completion, attend a debriefing session in
which the true purpose of the research is given and the reason for the deception explained.
In commercial qualitative marketing research, ethical questions center on protecting the privacy of the
participant and the privacy of the research sponsor. For this reason, qualitative marketing research firms
are often employed to execute the research and guard privacy throughout the process. Firms protect the
privacy of participants by promising that the data collected will be presented to the sponsor either in
aggregate or in a format stripped of any personally identifiable information. Likewise, firms protect the
privacy of sponsors by serving as a liaison between the sponsor and the research participant, which
eliminates a situation that would otherwise invite much deceit. Further, most research firms join
associations where membership is subject to compliance with industry standards.
Grounded theory (GT) is a systematic qualitative research methodology in the social sciences emphasizing generation of theory
from data in the process of conducting research.[1]
It is a research method that operates almost in a reverse fashion from traditional research and at first may appear to be in
contradiction of the scientific method. Rather than beginning by researching and developing a hypothesis, the first step is data
collection, through a variety of methods. From the data collected, the key points are marked with a series of codes, which are
extracted from the text. The codes are grouped into similar concepts in order to make them more workable. From these
concepts, categories are formed, which are the basis for the creation of a theory, or a reverse engineered hypothesis. This
contradicts the traditional model of research, where the researcher chooses a theoretical framework, and only then applies this
model to the studied phenomenon.[2]
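The data-to-theory flow described above (excerpts are coded, codes are grouped into concepts, concepts into categories) can be sketched as a small pipeline. All of the labels and groupings below are hypothetical stand-ins for an analyst's judgment, not output of any algorithm:

```python
# Hypothetical sketch of the grounded-theory flow: raw excerpts are labeled
# with codes, codes are grouped into concepts, and concepts into categories.
excerpts = {
    "It hurts so bad, you don't want to get out of bed.": ["pain intensity", "activity loss"],
    "Any relief you get from drugs is only temporary.": ["drug relief", "relief duration"],
}

# Similar codes grouped under broader concepts (an analyst's call, hard-coded here).
concepts = {
    "pain experience": ["pain intensity", "activity loss"],
    "pain management": ["drug relief", "relief duration"],
}

# Categories collect concepts and become the building blocks of the emerging theory.
categories = {"coping with chronic pain": ["pain experience", "pain management"]}

for category, members in categories.items():
    codes = [c for concept in members for c in concepts[concept]]
    print(f"{category}: {codes}")
```

Note the direction of travel: the category at the top emerges from the data at the bottom, the reverse of starting from a theoretical framework.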
Introduction to Grounded Theory
By Steve Borgatti
Discussion drawn from:
Glaser and Strauss. 1967. The Discovery of Grounded Theory. Strauss and Corbin. 1990. Basics of Qualitative Research.
Goals and Perspective
The phrase "grounded theory" refers to theory that is developed inductively from a corpus of data. If done well, this means that the resulting theory at least fits one dataset perfectly. This contrasts with theory derived deductively from grand theory, without the help of data, and which could therefore turn out to fit no data at all.
Grounded theory takes a case rather than variable perspective, although the distinction is nearly impossible to draw. This means in part that the researcher takes different cases to be wholes, in which the variables interact as a unit to produce certain outcomes. A case-oriented perspective tends to assume that variables interact in complex ways, and is suspicious of simple additive models, such as ANOVA with main effects only.
Part and parcel of the case-orientation is a comparative orientation. Cases similar on many variables but with different outcomes are compared to see where the key causal differences may lie. This is based on John Stuart Mill's (1843, A System of Logic, Ratiocinative and Inductive) method of difference -- essentially the use of (natural) experimental design. Similarly, cases that have the same outcome are examined to see which conditions they all have in common, thereby revealing necessary causes.
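The comparative logic just described can be made concrete. The sketch below treats cases as dictionaries of conditions; the two "village" cases are invented for illustration. Comparing cases with different outcomes isolates candidate causes (method of difference), while intersecting cases with the same outcome reveals conditions they all share (method of agreement):

```python
def differing_conditions(case_a, case_b):
    """Method of difference: conditions that differ between two otherwise similar cases."""
    return {k for k in case_a if case_a.get(k) != case_b.get(k)}

def common_conditions(cases):
    """Method of agreement: conditions shared by every case with the same outcome."""
    shared = dict(cases[0])
    for case in cases[1:]:
        shared = {k: v for k, v in shared.items() if case.get(k) == v}
    return shared

# Hypothetical cases: two villages, one with a successful cooperative and one without.
a = {"literacy": "high", "market_access": True, "cooperative": True}
b = {"literacy": "high", "market_access": False, "cooperative": False}
print(differing_conditions(a, b))   # candidate causal difference(s)
print(common_conditions([a, b]))    # conditions both cases share
```

In this toy example, market access emerges as the condition that distinguishes the differing outcomes, while high literacy is common to both and so cannot explain the difference.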
The grounded theory approach, particularly the way Strauss develops it, consists of a set of steps whose careful execution is thought to "guarantee" a good theory as the outcome. Strauss would say that the quality of a theory can be evaluated by the process by which a theory is constructed. (This contrasts with the scientific perspective that how you generate a theory, whether through dreams, analogies or dumb luck, is irrelevant: the quality of a theory is determined by its ability to explain new data.)
Although not part of the grounded theory rhetoric, it is apparent that grounded theorists are concerned with or largely influenced by emic understandings of the world: they use categories drawn from respondents themselves and tend to focus on making implicit belief systems explicit.
Methods
The basic idea of the grounded theory approach is to read (and re-read) a textual database (such as a corpus of field notes) and "discover" or label variables (called categories, concepts and properties) and their interrelationships. The ability to perceive variables and relationships is
termed "theoretical sensitivity" and is affected by a number of things including one's reading of the literature and one's use of techniques designed to enhance sensitivity.
Of course, the data do not have to be literally textual -- they could be observations of behavior, such as interactions and events in a restaurant. Often they are in the form of field notes, which are like diary entries.
Open Coding
Open coding is the part of the analysis concerned with identifying, naming, categorizing and describing phenomena found in the text. Essentially, each line, sentence, paragraph etc. is read in search of the answer to the repeated question "what is this about? What is being referenced here?"
These labels refer to things like hospitals, information gathering, friendship, social loss, etc. They are the nouns and verbs of a conceptual world. Part of the analytic process is to identify the more general categories that these things are instances of, such as institutions, work activities, social relations, social outcomes, etc.
We also seek out the adjectives and adverbs --- the properties of these categories. For example, about a friendship we might ask about its duration, and its closeness, and its importance to each party. Whether these properties or dimensions come from the data itself, from respondents, or from the mind of the researcher depends on the goals of the research.
It is important to have fairly abstract categories in addition to very concrete ones, as the abstract ones help to generate general theory.
Consider what is implied in the following passage of text (Strauss and Corbin pg. 78):
Text Fragment 1
Pain relief is a major problem when you have arthritis. Sometimes, the pain is worse than other times, but when it gets really bad, whew! It hurts so bad, you don't want to get out of bed. You don't feel like doing anything. Any relief you get from drugs that you take is only temporary or partial.
One thing that is being discussed here is PAIN. Implied in the text is that the speaker views pain as having certain properties, one of which is INTENSITY: it varies from a little to a lot. (When is it a lot and when is it little?) When it hurts a lot, there are consequences: don't want to get out of bed, don't feel like doing things (what are other things you don't do when in pain?). In order to solve this problem, you need PAIN RELIEF. One AGENT OF PAIN RELIEF is drugs (what are other members of this category?). Pain relief has a certain DURATION (could be temporary), and EFFECTIVENESS (could be partial).
One can see that this sort of analysis has a very emic cast to it, even though I think that most grounded theorists believe they are theorizing about how the world *is* rather than how respondents see it.
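One way to make the open coding of Text Fragment 1 tangible is to record it as a small data structure. The labels below come from the passage analysis above (PAIN, PAIN RELIEF, and their properties); the structure itself is a hypothetical sketch, not a prescribed format:

```python
# A sketch of how the open coding of Text Fragment 1 might be recorded:
# each category carries its properties (dimensions) and related elements.
open_codes = {
    "PAIN": {
        "properties": {"intensity": "varies from a little to a lot"},
        "consequences": ["don't want to get out of bed", "don't feel like doing things"],
    },
    "PAIN RELIEF": {
        "properties": {"duration": "temporary", "effectiveness": "partial"},
        "agents": ["drugs"],
    },
}

for category, detail in open_codes.items():
    props = ", ".join(detail["properties"])
    print(f"{category}: properties = {props}")
```

Writing the codes down this way also surfaces the open questions the analysis raised, such as what other agents of pain relief the category might contain.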
The process of naming or labeling things, categories, and properties is known as coding. Coding can be done very formally and systematically or quite informally. In grounded theory, it is normally done quite informally. For example, if after coding much text, some new categories are invented, grounded theorists do not normally go back to the earlier text to code for that category. However, maintaining an inventory of codes with their descriptions (i.e., creating a codebook) is useful, along with pointers to text that contain them. In addition, as codes are developed, it is useful to write memos known as code notes that discuss the codes. These memos become fodder for later development into reports.
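The codebook described above, an inventory of codes with descriptions, pointers to text, and attached code notes, can be sketched as follows. The code names, documents, and line numbers are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CodeEntry:
    """One codebook entry: a description, pointers into the text, and code notes."""
    description: str
    locations: list = field(default_factory=list)  # e.g. (document, line) pointers
    notes: list = field(default_factory=list)      # short memos discussing the code

codebook = {}

def code(name, description, location):
    """Record an occurrence of a code, creating the codebook entry if needed."""
    entry = codebook.setdefault(name, CodeEntry(description))
    entry.locations.append(location)

# Hypothetical coding session over two interview transcripts.
code("pain relief", "strategies for reducing pain", ("interview_01", 12))
code("pain relief", "strategies for reducing pain", ("interview_03", 4))
codebook["pain relief"].notes.append("Respondents distinguish partial vs. full relief.")
print(len(codebook["pain relief"].locations))  # number of text pointers
```

The code notes attached to each entry are exactly the memos the text describes: fodder for later development into reports.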
Axial Coding
Axial coding is the process of relating codes (categories and properties) to each other, via a combination of inductive and deductive thinking. To simplify this process, rather than look for any and all kind of relations, grounded theorists emphasize causal relationships, and fit things into a basic frame of generic relationships. The frame consists of the following elements:
Phenomenon: This is what in schema theory might be called the name of the schema or frame. It is the concept that holds the bits together. In grounded theory it is sometimes the outcome of interest, or it can be the subject.
Causal conditions: These are the events or variables that lead to the occurrence or development of the phenomenon. It is a set of causes and their properties.
Context: Hard to distinguish from the causal conditions. It is the specific locations (values) of background variables. A set of conditions influencing the action/strategy. Researchers often make a quaint distinction between active variables (causes) and background variables (context). It has more to do with what the researcher finds interesting (causes) and less interesting (context) than with distinctions out in nature.
Intervening conditions: Similar to context. If we like, we can identify context with moderating variables and intervening conditions with mediating variables. But it is not clear that grounded theorists cleanly distinguish between these two.
Action strategies: The purposeful, goal-oriented activities that agents perform in response to the phenomenon and intervening conditions.
Consequences: These are the consequences of the action strategies, intended and unintended.
In the text segment above, it seems obvious that the phenomenon of interest is pain, the causal conditions are arthritis, the action strategy is taking drugs, and the consequence is pain relief. Note that grounded theorists don't show much interest in the consequences of the phenomenon itself.
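The axial-coding frame applied to the pain example can be captured as a record whose fields follow the elements listed above. The "context" value below is illustrative, since the fragment itself does not state one:

```python
from dataclasses import dataclass

@dataclass
class AxialFrame:
    """The generic axial-coding frame of elements described above."""
    phenomenon: str
    causal_conditions: list
    context: list
    intervening_conditions: list
    action_strategies: list
    consequences: list

# The pain example from the text, slotted into the frame.
pain_frame = AxialFrame(
    phenomenon="pain",
    causal_conditions=["arthritis"],
    context=["chronic illness"],   # illustrative; not stated in the fragment
    intervening_conditions=[],
    action_strategies=["taking drugs"],
    consequences=["pain relief (temporary, partial)"],
)
print(pain_frame.phenomenon, "->", pain_frame.consequences)
```

Forcing each coded phenomenon through a fixed frame like this is what lets axial coding emphasize causal relationships rather than any and all relations.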
It should be noted again that a fallacy of some grounded theory work is that they take the respondent's understanding of what causes what as truth. That is, they see the informant as an insider expert, and the model they create is really the informant's folk model.
Selective Coding
Selective coding is the process of choosing one category to be the core category, and relating all other categories to that category. The essential idea is to develop a single storyline around which everything else is draped. There is a belief that such a core concept always exists.
I believe grounded theory draws from literary analysis, and one can see it here. The advice for building theory parallels advice for writing a story. Selective coding is about finding the driver that impels the story forward.
Memos
Memos are short documents that one writes to oneself as one proceeds through the analysis of a corpus of data. We have already been introduced to two kinds of memos, the field note and the code note (see above). Equally important is the theoretical note. A theoretical note is anything from a post-it that notes how something in the text or codes relates to the literature, to a 5-page paper developing the theoretical implications of something. The final theory and report is typically the integration of several theoretical memos. Writing theoretical memos allows you to think theoretically without the pressure of working on "the" paper.
Process
Strauss and Corbin consider that paying attention to processes is vital. It is important to note that their usage of "process" is not quite the same as Lave and March, who use process as a synonym for "explanatory mechanism". Strauss and Corbin are really just concerned with describing and coding everything that is dynamic -- changing, moving, or occurring over time -- in the research setting.
Participant observation
Participant observation is a type of research strategy. It is a widely used methodology in many
disciplines, particularly, cultural anthropology, but also sociology, communication studies,
and social psychology. Its aim is to gain a close and intimate familiarity with a given group of
individuals (such as a religious, occupational, or subcultural group, or a particular community)
and their practices through an intensive involvement with people in their natural environment,
usually over an extended period of time. The method originated in field work of social
anthropologists, especially the students of Franz Boas in the United States, and in the urban
research of the Chicago School of sociology.
In anthropology, participant-observation is organized so as to produce a kind of writing
called ethnography. It can be applied or academic in nature. A key principle of the method is that
one may not merely observe, but must find a role within the group observed from which to
participate in some manner, even if only as "outside observer." Overt participant-observation,
therefore, is limited to contexts where the community under study understands and permits it.
Critics of overt participant observation argue that study is subsequently restricted to the public
fronts socially constructed by actors. Gate-keepers ensure that known research never goes
backstage, making covert strategies necessary especially when conducting studies on
government entities or criminal organisations.[1]
Method and practice
Such research usually involves a range of methods: informal interviews,
direct observation, participation in the life of the group, collective discussions, analyses
of personal documents produced within the group, self-analysis, and life-histories. Although the
method is generally characterized as qualitative research, it can (and often does)
include quantitative dimensions. Participant observation is usually undertaken over an extended
period of time, ranging from several months to many years. An extended research time period
means that the researcher will be able to obtain more detailed and accurate information about the
people he/she is studying. Observable details (like daily time allotment) and more hidden details
(like taboo behavior) are more easily observed and understood over a longer period of time.
A strength of observation and interaction over long periods of time is that researchers can
discover discrepancies between what participants say—and often believe—should happen
(the formal system) and what actually does happen, or between different aspects of the formal
system; in contrast, a one-time survey of people's answers to a set of questions might be quite
consistent, but is less likely to show conflicts between different aspects of the social system or
between conscious representations and behavior.[2]