

Source: msu.edu

PRR 844 Research Methods in Parks, Recreation & Tourism
Daniel J. Stynes

TOPIC HANDOUTS

TOPIC 1. INTRODUCTION TO PHILOSOPHY OF SCIENCE
TOPIC 2. RECREATION, TOURISM & LEISURE AS RESEARCH AREAS
TOPIC 3. RESEARCH PROPOSAL
TOPIC 4. RESEARCH DESIGN OVERVIEW / RESEARCH PROCESS
TOPIC 5. DEFINITION & MEASUREMENT
TOPIC 6. SAMPLING
TOPIC 7A. SURVEY METHODS
TOPIC 7B. EXPERIMENTAL DESIGN
TOPIC 7C. SECONDARY DATA DESIGNS
EXAMPLES: USING SECONDARY DATA TO ESTIMATE
TOPIC 7D. OBSERVATION & OTHER METHODS
TOPIC 8. DATA GATHERING, FIELD PROCEDURES AND DATA ENTRY (INCOMPLETE)
TOPIC 9. DATA ANALYSIS AND STATISTICS
TOPIC 10. RESEARCH ETHICS: ACCEPTABLE METHODS & PRACTICES
TOPIC 11. RESEARCH WRITING & PRESENTATIONS
TOPIC 12. SUPPLEMENTAL MATERIAL
APPLYING RESEARCH
EVALUATING RECREATION SURVEYS - A CHECKLIST


PRR 844 Topic Outlines Page 2

TOPIC 1. Introduction to Philosophy of Science

A. Science can be viewed as:

A body of knowledge that is:
- systematic: propositions related within a body of theory
- abstract: doesn't explain everything; simplifications, assumptions
- general: aims for general laws, not explanation of isolated events
- parsimonious: prefers simpler explanations

OR a method of inquiry that is:
- logical: hypothetico-deductive system, deduction, induction
- self-corrective: iterative search for knowledge
- empirical: ultimate test of truth requires testing against real-world observations

There is no single scientific method: there are many different approaches -- the method must fit the purpose and characteristics of the inquiry.

B. Alternatives to the scientific method -- C.S. Peirce's methods of fixing belief:
- Tenacity: repeat a belief until it is accepted
- Authority: rely on an accepted or noted authority
- Intuition: rely on common sense or intuition
- Scientific method: systematic and objective gathering of empirical data to support or refute ideas

Example: beliefs about the benefits of recreation or leisure. We can base these on repetition to convince people of the benefits; on reliance on a noted authority (NRPA, the Academy of Leisure Sciences, or endorsement by a sports figure or scholar) or tradition ("we've known this for many years"); on our intuition that recreation is good for you; or on studies to measure what the benefits actually are.

What is the basis for the following beliefs?
- Recreation reduces juvenile delinquency.
- Tourism is beneficial for a community.
- Recreation facilities should be accessible to those with disabilities.

Notice that the last statement is a value judgment rather than a statement of fact that can be empirically tested. Beliefs, including those about management practices, rest on a mix of tradition, intuition, authority, and science. A good scientist asks for empirical support for propositions - this means making objective observations in the real world and letting the facts determine "truth".

C. Epistemology is a branch of philosophy dealing with the theory of knowledge. Philosophy of science is a part of epistemology. Major contributors include Plato, Aristotle, Descartes, Bacon, Locke, Hume, Berkeley, Carnap, Russell, Peirce, Dewey, and James. See any introductory philosophy text or set of readings.

Ontology: what can be known about the world
Epistemology: philosophy of how we come to know things; relations between knower and known
Methodology: practical aspects of how we know; the science of methods

Current philosophy directly relevant to research methods divides into two camps that roughly parallel quantitative vs qualitative methods. Quantitative methods come from traditional science and the positivist philosophy. Positivism was a reaction to metaphysical explanations of the natural world. Positivists subscribe to Lord Kelvin's statement that "measurement is the beginning of science": if we can't measure something, we really can't study it scientifically. The stereotypical scientist in a lab coat conducting experiments and making measurements is a positivist. Post-positivism is a term that refers to modern views of science. These range from modest updates of positivism that more fully recognize the fallibility of scientific methods to interpretive paradigms that completely reject the notion of absolute truth. Qualitative methods are grounded in phenomenology, hermeneutics, and related epistemologies.


D. Qualitative vs Quantitative Methods

We will primarily cover what are known as quantitative research methods in this course. These are based in logical positivism and encompass "traditional science". A somewhat distinct set of methods is based in phenomenology and other non-positivist theories of knowledge. These are loosely termed "qualitative research methods". With roots in sociology and anthropology, qualitative methods are now used in almost all sciences. They have become increasingly popular in leisure science within the past ten years.

Qualitative methods include: ethnography, focus groups, in-depth interviews, case studies, historical analysis, participant observation and related techniques.

There are three key differences between qualitative (QL) and quantitative (QN) methods:

1. Purposes: QN seeks general laws, testing & verification, and tends to focus on behavior. QL studies particular instances, seeks understanding (verstehen), and focuses more on intentions and meanings.

2. Perspective: QN takes an outsider's (scientist's) perspective; the researcher is an objective observer. QL tries to measure and study phenomena from an insider perspective - to observe from the subject's or actor's frame of reference.

3. Procedures: QN uses standardized and structured procedures, operational definitions, and probability sampling. QN tends to be reductionist (reduces things to their most important aspects); interpretation is separate from analysis. QL uses unstandardized procedures, actor-defined concepts, and non-probability samples. QL tends to be holistic; interpretation tends to be interlinked with procedures and analysis and hence somewhat inseparable from them.
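The QN reliance on standardized procedures and probability sampling can be made concrete with a small sketch. The visitor list, seed, and sample size below are all hypothetical; the point is simply that a probability sample gives every member of a defined population a known chance of selection:

```python
import random

# Hypothetical sampling frame: a numbered list of 500 park visitors.
population = [f"visitor_{i}" for i in range(1, 501)]

random.seed(42)  # fix the seed so the draw is reproducible

# Simple random sample without replacement: each visitor has a
# known selection probability of 50/500 = 0.10.
sample = random.sample(population, k=50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 -- no visitor drawn twice
```

Contrast this with a QL non-probability sample, where informants are chosen purposively rather than by a known chance mechanism.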

Both sets of methods are useful, although usually for quite different purposes. The key is matching the methods with the purpose. Qualitative methods tend to be used more for exploratory research, although this is not their only use. Ideally, QL and QN methods should be integrated to study a particular problem. One may start with a qualitative approach: talking with key informants, observing as a participant, etc. From this, hypotheses are generated that can be tested formally within a QN approach, perhaps a survey. The QN results may suggest additional study for a more in-depth understanding of the phenomenon; this may call for further QL research, maybe followed again by QN. While QN and QL are often presented as competing scientific paradigms, they are complementary more than competitive.

QL research, being less standardized, is more difficult for the novice researcher. QL data is more difficult to analyze and to write up. It is harder to publish. Some QL work is more properly labeled philosophy than research. Good QL research requires first a good understanding of QN methods. There is a tendency for those researchers who are uncomfortable with statistics and mathematics to gravitate toward QL methods, while people who are good at mathematics and analytical thinking favor QN.

Examples of recreation research problems QL is well-suited to:

1) Evaluating effects of a unique recreation program on a specific group, e.g., has pet therapy been successful in my nursing home? There may be no need here to generalize beyond a given home or group of patients.

2) Focus groups to understand people's motivations for recreation, the benefits they obtain from particular products or experiences, etc. Results might yield a list of motivations, attributes, or benefits for a structured questionnaire. Focus groups are widely used to evaluate advertising.

3) Understanding small group dynamics in recreation settings.
4) Case studies of particular programs or events.
5) Participant observation by a researcher prior to a QN investigation, e.g., spend a day in a wheelchair before designing a study about accessibility for people with disabilities.
6) Studying the meaning of leisure to people.

Remember that most of the basic information commonly used to support management, planning, marketing, and evaluation is inherently quantitative and requires careful, objective measurements, i.e., numbers of visitors, days of recreation, costs, spatial distribution of supply and demand, market characteristics, spending, etc. Qualitative methods add to but do not substitute for quantitative analyses.
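As a minimal illustration of that kind of quantitative bookkeeping, here is a sketch using invented gate counts and an assumed per-day spending figure (not data from this course):

```python
# Hypothetical daily gate counts for a park over one week.
daily_visitors = [220, 180, 305, 290, 410, 520, 480]

# Visitor-days: one person present on one day counts once.
visitor_days = sum(daily_visitors)
avg_per_day = visitor_days / len(daily_visitors)

# Hypothetical average spending per visitor-day (assumed figure).
spending_per_day = 23.50
total_spending = visitor_days * spending_per_day

print(visitor_days)              # 2405
print(round(avg_per_day, 1))     # 343.6
print(round(total_spending, 2))  # 56517.5
```

Simple as it is, each number rests on an operational definition (what counts as a "visitor", a "day") and on objective counting procedures.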


SELECTED REFERENCES ON QUALITATIVE METHODS [note: need to update, particularly applications; move applications later and substitute general science/philosophy references here]

Bogdan, R.C. & Biklen, S.K. 1992. Qualitative research for education: An introduction to theory and methods. Boston: Allyn & Bacon.

Creswell, John W. 1994. Research Design: Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage.
Denzin, N.K. and Lincoln, Y.S. (eds). 1994. Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Glaser, B. and Strauss, A. 1967. The Discovery of Grounded Theory. Chicago: Aldine.
Henderson, Karla. 1991. Dimensions of Choice: A Qualitative Approach to Recreation, Parks and Leisure. State College, PA: Venture.
Howe, C.Z. 1985. Possibilities of using a qualitative approach in the sociological study of leisure. Journal of Leisure Research 17(3): 212-224. (a review paper on qualitative methods)
Lincoln, Y.S. and Guba, E.G. 1985. Naturalistic Inquiry. Beverly Hills, CA: Sage.
Van Maanen, J. (ed). 1983. Special issue of Administrative Science Quarterly on qualitative methods.
Marshall, C. and Rossman, G.B. Designing Qualitative Research. Newbury Park, CA: Sage.
Miles, M. and Huberman, A. 1984. Qualitative Data Analysis. Beverly Hills, CA: Sage.
Patton, M.Q. 1987. How to Use Qualitative Methods in Evaluation. Newbury Park, CA: Sage.
Rosenau, P.M. 1992. Post-modernism and the Social Sciences. Princeton, NJ: Princeton Univ. Press. (philosophical treatment of post-modern theories of knowledge)
Schwartz, H. and Jacobs, J. 1979. Qualitative Sociology. New York: Free Press.
Strauss, A. and Corbin, J. 1990. Basics of Qualitative Research. Newbury Park, CA: Sage.
Chirban, J.T. 1996. Interviewing in Depth. Thousand Oaks, CA: Sage.
Krueger, R.A. 1994. Focus Groups: A Practical Guide for Applied Research. 2nd edition. Thousand Oaks, CA: Sage.
Rubin, H.J. & Rubin, I.S. 1995. Qualitative Interviewing. Thousand Oaks, CA: Sage.
Yin, R.K. 1989. Case Study Research: Design and Methods. Revised edition. Newbury Park, CA: Sage.

SELECTED RECREATION AND TOURISM APPLICATIONS

Brandmeyer. JLR 1986 18(1): 26-40. (baseball fantasy camp)
Glancy. JLR 1986 18(2): 59-80. (participant observation in a recreation setting)
Glancy, M. 1988. JLR 20(2): 135-153. (play world setting of the auction)
Hartmann, R. 1988. Combining field methods in tourism research. Annals of Tourism Research 15: 88-105.
Henderson, K.A. 1987. Journal of Experiential Education 10(2): 25-28. (woman's work experience)
Henderson, K.A. 1988. Leisure Sciences 10(1): 41-50. (farm women & leisure)
Hummell. JLR 1986 18(1): 40-52.
Shaw, S. 1985. Leisure Sciences 7(1): 1-24. (meaning of leisure)

Also see works of Denzin and Filstead, the Sage series on qualitative methods, methods texts in sociology and anthropology, and books on ethnography, participant observation, focus group interviewing, etc.

Exercise: Find and read a research article using a qualitative approach. Write up a brief summary (one page or less) including:

a) Complete citation in APA format
b) Brief summary of topic and study objectives
c) Description of methods used
d) Principal conclusions
e) Your observations on the study and the pros and cons of the qualitative approach
f) Also hand in a copy of the article


E. Sciences may be divided into the natural sciences (physics, chemistry, biology), social sciences (sociology, psychology, economics, geography, political science, communication, anthropology) and a variety of applied and management sciences (e.g., management, marketing, social work, forestry, engineering, urban planning, recreation and tourism). Over time, the basic disciplines have subdivided into specialty areas and formed hybrids such as economic geography and social psychology. Many of the applied sciences are interdisciplinary, borrowing from the more basic disciplines while also developing their own unique bodies of theory and methods. Recreation, leisure and tourism are applied sciences that borrow both from basic disciplines as well as other applied sciences.

F. Research is the application of scientific methods, the search for knowledge, controlled inquiry directed toward the establishment of truth. Purposes of research (Stoltenberg):

1. answer questions of fact arising during management (problem solving)
2. develop new alternatives for management (developmental research)
3. answer questions of fact arising during research (methods)
4. answer questions for the sake of knowing (pure research, theory)

G. Pure vs applied research - as a continuum:
- pure or basic research: for its own sake, more general, longer time to use
- applied research: to solve a problem, for a client, immediate application

Most recreation and tourism research is applied research although some basic research is conducted, for example on the nature and meaning of leisure. Think of basic and applied research as endpoints of a continuum. An individual study may be placed along this continuum based on the degree to which it is done for its own sake or to solve a problem, the generality of the research and the client and timeframe for application. The use of complex models or statistical tools does not make a study basic or applied, as both simple and complex analysis tools may be used in either type of study. Theories and models are equally useful in applied and basic research, although theory development itself tends toward the basic end of the continuum.

H. Stages in the development of a science

exploratory to descriptive to explanatory to predictive research
definition, measurement, quantification, theory building

Thomas Kuhn - Structure of Scientific Revolutions examines how sciences evolve and change over time. He stresses the importance of paradigms to help structure and guide research. A paradigm is an accepted set of assumptions, theories, and methods within a given science. In the pre-paradigm phase of a science investigation is somewhat random and disorganized. Once a set of methods, assumptions and theories are accepted, research proceeds in a more organized and cumulative fashion as the paradigm identifies the most important variables to measure and suggests the appropriate methods, while also providing the structure into which results may be integrated. Scientific revolutions occur when a competing paradigm is adopted and “overthrows” the old ideas. In astronomy the switch from an earth centered solar system to the present model was a paradigm shift. What paradigms guide recreation and tourism research?

I. Research communication chain. As applied fields of study, recreation and tourism research involve considerable interaction between scientists and practitioners. MSU, as a land grant school, is grounded in a philosophy of using research to help solve problems. This requires a chain of communication between research centers and the field. In the simple model, problems flow up from the field to the university, where they are addressed by researchers. Solutions are then extended back to the field, where the results are applied. Today one more frequently finds researchers working directly with clients in the field to help identify and solve problems.

pure research --- applied research --- extension --- practitioner


J. Some mistaken notions

1. "Theoretical" means "not useful". Theories play very important roles in both science and management. Theories help to organize knowledge and direct further investigation. Management decisions, like research, are often based on "theories" of people's behavior, how markets work, etc. Almost all actions and decisions are based on some theory or set of assumptions. The question is whether the theories/assumptions are well-founded and useful.

2. Some things are measurable and others are not. Measurement is as much a function of our creativity and success in developing measurement instruments as of characteristics inherent in the things being measured. Individual measures almost always capture only a narrow aspect of the thing being measured, and there are always many different potential measures of any characteristic. For example, the usual measure of age captures only the time that has passed since birth. While this measure may be correlated with physical health, attitudes, etc., it doesn't measure those traits. Don't criticize a measure for not measuring something it doesn't purport to measure; suggest a different measure. Many things we measure today were deemed "unmeasurable" in the past until suitable instruments and procedures were developed, e.g., the thermometer to measure temperature.

Classification is a form of measurement. When we place a person into a religious, racial, or ethnic group, or classify people based on hair color, we are making a measurement of the underlying attribute. We regularly measure "qualities" with quantitative measurements, e.g., personality, intelligence, beauty. You can argue how good these measures are or which dimensions of the underlying attributes they capture, but arguing that they can't be measured suggests either an anti-scientific bias or "giving up". Remember that no measure is perfect or complete. Science and application progress by improving our measurements and using these to better understand relationships between variables.

3. Can we measure the beauty of a sunset? Yes, if we have a clear understanding of what the concept "beauty" means and have devised suitable instruments/procedures to measure it. If the concept rests on people's perceptions, then attitude surveys and rating techniques can be used -- just as we "rate" the performance of a gymnast or ice skater, we could rate sunsets. Perhaps one can identify the characteristics that define the beauty of a sunset and develop rating guidelines that "experts" can apply, as they do in evaluating skaters. Or, if beauty can be defined by physical features, perhaps an instrument/formula can be devised based on the wavelengths of light emitted. Remember that prior to thermometers, temperature was measured by people's perceptions of hot and cold. Only when we understood the relationship between temperature and the expansion of mercury could a more precise and standard quantitative measure be developed. Is the notion of warmth inherently more measurable than beauty?
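The judging analogy above can be sketched numerically. Assuming a hypothetical panel of five judges rating three sunsets on a 1-10 scale (all figures invented), the mean rating serves as the "beauty" measure and the spread indicates how consistently the judges applied the rating guidelines:

```python
from statistics import mean, stdev

# Hypothetical ratings (1-10) of three sunsets by five judges,
# analogous to judges scoring a gymnast or ice skater.
ratings = {
    "sunset_A": [8, 9, 7, 8, 9],
    "sunset_B": [5, 6, 5, 4, 6],
    "sunset_C": [9, 9, 10, 8, 9],
}

for name, scores in ratings.items():
    # Mean = the measured "beauty"; standard deviation = how much
    # the judges disagree (lower means the guidelines were applied
    # more consistently).
    print(name, round(mean(scores), 2), round(stdev(scores), 2))
```

A refined instrument would reduce the disagreement among judges, just as the mercury thermometer replaced disagreeing perceptions of hot and cold.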


TOPIC 2. Recreation, Tourism & Leisure as Research Areas.

In this section, we explore the nature of recreation, parks, leisure, and tourism as research areas and also as sciences. Are these sciences and if so, what kind of science(s)? Before we can discuss these questions, we will need to become familiar with the research literature, some of the history of recreation and tourism research, and some of the important research topics, theories and methods.

A. Characteristics of the recreation field
1. Relatively new field for research; started about 1960 with the ORRRC studies in the US.
2. Interdisciplinary field.
3. Applied field.
4. Divisions between resource-oriented and program-oriented; urban and rural; recreation, parks, and tourism.
5. Status of recreation as a research area is low.
6. Few highly trained researchers.
7. Little theory of its own.
8. Heavy reliance on surveys and descriptive studies.

B. Characteristics of tourism as a "science"
1. Lacks a clear academic home; cuts across business, planning, recreation, geography, ...
2. Divisions between international and domestic; marketing, social, and environmental; hospitality and resource-based.
3. More & better research outside the U.S.
4. Heavy dose of consulting and commercial studies relative to academic ones.
5. Strong political influence on research.
6. Marketing studies and advertising evaluation dominate.
7. Convergence of tourism and recreation research since 1985.

C. WHO is doing recreation & tourism research:

1. UNIVERSITIES (North America): About a dozen recreation/park/leisure studies/tourism curricula making major contributions: Michigan State, Texas A&M, Illinois, Waterloo, Oregon State, Penn State, Clemson, Indiana, Colorado State. One or two individuals at many other institutions. Scattered but significant contributions from individuals in disciplines (geography, sociology, psychology, economics, ...) or related applied fields (HRI, forestry, fish & wildlife, business, marketing, agricultural economics, health and medicine, ...) within universities.

2. GOVERNMENT:

a. Federal: Over 30 agencies involved in recreation-related research, e.g., USDA Forest Service, National Park Service, Sport Fisheries & Wildlife, US Army Corps of Engineers, Bureau of Land Mgmt, Bureau of Reclamation, NOAA Sea Grant Program, Dept. of Commerce; Health, Education, and Welfare; NIH, NSF, Transportation. The federal role in tourism research is mainly periodic surveys in Transportation (BTS) and in-flight surveys of international travelers.

b. State: Considerable planning-related research as part of SCORP (recreation) and other planning, and also as part of forestry, fish & wildlife, water resource, land use, tourism, and economic development work. State travel offices are a primary funder of travel research, with some funding from CVBs and industry associations.

c. Local & municipal: mostly planning studies with a small research component. Larger efforts in major metropolitan park systems like Hennepin County, MN, and the Huron-Clinton and Cleveland metroparks. Metro CVBs.

3. FOUNDATIONS and Non-Profit Organizations: Ford, Rockefeller, Resources for the Future, Wilderness Society, Sierra Club, Appalachian Mt. Club, NOLS etc. Tourism - TIA, WTO, WTTC.

4. NRPA, National Recreation & Park Assoc: A little & growing. TTRA = Travel & Tourism Research Assoc.


5. COMMERCIAL/PVT SECTOR: Large market surveys, industry sponsored research, small scale in-house research. Use of consultants. Examples from campground, recreation vehicle, ski, fishing equipment, boating, lodging, and other recreation/tourism industries. Numerous consulting companies, particularly in tourism (D.K. Shifflet, Smith Travel, …) conduct large scale travel surveys and custom research.

6. INTERNATIONAL: Canada, Australia, United Kingdom, Netherlands have research similar to US programs; UK, Netherlands focus on land use & planning. Europe: sociology of sport, cultural studies. USSR, Eastern Europe: time budget studies. Tourism research more common globally.

Reference: Stanley Parker, A review of leisure research around the world, WLRA, and the International Handbook of Leisure Studies and Research. See Graefe and Parker's Recreation & Leisure: An Introductory Handbook, or Parker's chapter in Barnett (1988), Research About Leisure. Ritchie & Goeldner, Travel, Tourism & Hospitality Research.

D. WHAT - Selected Recreation Research Themes by Problem (with key contributors prior to 1990)

1. Landscape aesthetics, visual quality: Daniel, Zube, Schroeder, Anderson, Brown, Buhyoff, Westphal, Vining. See the Tahoe conference, Our National Landscape.

2. Use estimation, demand modeling, valuation. Economists: Clawson, Knetsch, VK Smith, GL Peterson, Randall, Brookshire, Walsh, Dwyer, Cicchetti, Loomis, Sorg, Hoehn, Stoll, Talhelm, Brown; see PCAO. Geographers: Ewing, Baxter, Cesario, Fesenmaier, Timmermans.

3. Costs/supply: Reiling, Gibbs, Echelberger, S. Daniels, Cordell, Harrington. See the 1985 and 1990 Trends Symposia, Harrington RFF book.

4. Forecasting: Van Doren, Stynes, Bevins, Moeller, Shafer, BarOn, Guerts, Witt, Calantone, Sheldon. See Stynes in PCAO.

5. Marketing: Crompton, Mahoney, Snepenger, Woodside, Etzel, Goodrich, Perdue. See JTR, Perdue chapter in Barnett (1988).

6. Carrying capacity, crowding & satisfaction: Stankey, Lime, Lucas, Graefe, Heberlein, Schreyer, Shelby, Manning book, Graefe review in LS, Stankey in PCAO.

7. Environmental psych: Knopf, Hammitt, Fridgen, Williams, Schreyer
8. Recreation motivations: Driver, Knopf, Brown et al.
9. Leisure theory: Tinsley, Pierce, Iso-Ahola, Kleiber, Ragheb, Mannell, Kelly, Godbey, Goodale
10. Play & human development: Barnett, Kleiber, M. Ellis
11. Therapeutic recreation: C. Peterson, Austin, Dickason, Witt, G. Ellis. See TRJ.
12. Methodology/statistics: J. Christensen, Tinsley, Stynes, Samdahl, Fesenmaier
13. Tourism: Broad category encompassing most of the others in the context of travel-related use of leisure. Perdue, Crompton, Gitelson, Fridgen, Hunt, Burke, Gartner, Uysal, Goeldner, Ritchie, Archer, Rovelstad, Jafari, O'Leary, Becker, Var, Sheldon. See Ritchie & Goeldner, PCAO.
14. Evaluation of recreation programs: Howe, Theobald, van der Smissen, McKinney, Russell
15. General forest recreation mgmt: Brown, Driver, Lucas, Lime, Stankey, Harris
16. Depreciative behavior: Clark, Christiansen, Westover
17. Interpretation/communication: Roggenbuck, McDonough, Machlis, Ham
18. Social psychology of leisure: Iso-Ahola, Mannell, Csikszentmihalyi, Weissinger, Fridgen, ...
19. Modeling: VK Smith, J. Ellis, Stynes, Levine, GL Peterson, Cesario, Fesenmaier, Timmermans, Louviere, Ewing
20. Economic impact: Archer, Propst, Alward, Schaffer, Maki, Stevens. See Propst (1984).
21. Recreation choice: Dwyer, Stynes, Peterson, Louviere, Timmermans, Vaske; see Choice Symposium.
22. Environmental impacts: Cole, van Wagtendonk; see PCAO and Wilderness Mgmt Proceedings.
23. Leisure time use: Robinson in PCAO
24. Pricing: Driver in 1985 Trends, Manning
25. Others: policy processes, sports, arts & culture, social impacts, public involvement
26. Benefits of leisure: See book edited by Driver, Brown & Peterson.
27. Constraints to leisure: see book and articles by Edgar Jackson


Major Tourism Research Themes: marketing, conversion studies, advertising evaluation, social, environmental and economic impacts, segmentation, destination images, information search & processing, destination choice, community attitudes. - See Ritchie and Goeldner, JTR, Annals, Tourism Management.

E. Selected Theories from disciplines:

Economics: Utility theory, choice, exchange, market behavior, decision making, risk, uncertainty, and value.
Geography: Spatial behavior, location theories, environmental perception, central place theory.
Psychology: Choice, perception and cognition, learning, cognitive dissonance, attitude & behavior, personality, locus of control, flow.
Sociology: Social structure and change, group behavior, institutions, status, norms, conflict.
Anthropology: Norms, institutions, cultural change.
Political Science: Collective action, gaming, small group decision making, power and influence.
Communications: Verbal and nonverbal behavior, information processing, mass media and interpersonal communication, diffusion theories.
Biological: Territoriality, predator-prey, adaptation, migration, population growth.
Marketing: Theories that combine economic, psychological, communications, and geographic theories to explain market behavior.

F. IMPORTANT DATES AND EVENTS IN RECREATION RESEARCH (prior to 1990)

1. Samuel Dana, Problem Analysis in Forest Recreation Research, 1957
2. ORRRC Commission Report, 1962
3. Conference on research in Ann Arbor, 1963
4. Economics of Outdoor Recreation, Clawson & Knetsch, 1966; RFF program on outdoor recreation research during the 1960s
5. National Academy of Sciences, Program for Outdoor Recreation Research, 1969
6. Journal of Leisure Research founded, 1969
7. First Nationwide Outdoor Recreation Plan, 1973
8. Harper's Ferry Research Needs Workshop, 1974
9. NAS report, Assessing Demand for Outdoor Recreation, 1975
10. CORDS reports, 1976
11. Leisure Sciences founded, 1977
12. First NRPA/SPRE Leisure Research Symposium, 1977
13. First National Outdoor Recreation Trends Symposium, 1980
14. NRPA/BOR research agenda project, 1982
15. PCAO reports, 1986
16. Wilderness Research Conference, 1985
17. Anniversary Leisure Research Symposium; Barnett (1988) book
18. Benchmark conference, RPA planning, 1988
19. Recreation Benefits Workshop, 1989
20. 1990 Trends Symposium

Regular symposia with published proceedings/abstracts:
- NRPA/SPRE Leisure Research Symposia: since 1977; abstracts published since 1981
- Canadian Leisure Research Symposia: 1975 (Laval); 1979 (Toronto); 1983 (Ottawa); 1987 (Halifax); 1990 (Waterloo); 1994 (Winnipeg)
- Southeast region symposia (SERR): since 1979
- Northeast region symposia: since 1989
- Travel and Tourism Research Assoc.: since about 1973; CenStates TTRA: last few years
- World Leisure and Recreation Assoc. (WLRA)
- Research in the National Parks: since 1979
- IUFRO, SAF, Assoc. of American Geographers recreation working groups
- Trends symposia every five years since 1980; Social Science & Resource Mgmt every other year since 1986(?)


TOPIC 3. RESEARCH PROPOSAL

A. Purpose of a research proposal
1. Focus your research efforts; organize them into blocks.
2. Structure the final paper or report.
3. Communicate to the client exactly what will be done. View the proposal as a contract to be carried out.
4. Allow the client or reviewers to make suggestions, review the design, point out potential problems, oversights, etc., and decide whether the study can and should be done.

B. Suggested Proposal Format

1. Problem Statement
a. Put the project in context; work from the broad problem area to the specific part you will tackle. Identify the client and the key dependent and independent variables.
b. Establish the importance of the problem; justify the study, its potential importance, and the uses of results.
c. Delimit the problem: narrow it down and define your terms.
d. Give a brief sketch of the proposed approach.
e. Identify anticipated products and the uses/users of results.

2. Objectives
a. Be specific; a one, two, three listing is preferred, itemizing objectives in some logical order.
b. State objectives as testable hypotheses or answerable questions.
c. Objectives should flow clearly from the problem statement.

3. Review of Literature
a. Provide background theory and evidence of your knowledge of the subject area. Show you have done your homework and are capable in the given area.
b. Show the relationship of your study to other studies (completed and ongoing). Link other research specifically to your study in a logical way. Show how you will fill a gap and advance knowledge.
c. The review helps in defining the problem, justifying its importance, and identifying the best approach.
d. This isn't an annotated bibliography. Review only the most relevant studies and pinpoint linkages to this one. A handful of the best and most closely related studies is usually sufficient.

4. Procedures: What are you going to do, when, how, to whom, and why?

a. Design: independent and dependent variables; how you will control for threats to validity and reliability.
b. Define variables, measures, and instrumentation.
c. Define the population, sampling plan, and sample size.
d. Data collection and field procedures.
e. Analysis: data processing, planned analyses, and statistical tests.

5. Attachments: time sequence of activities, budget, vitae of personnel, instruments.

6. Overall Considerations
a. All sections should fit together like a puzzle and be presented in a logical fashion.
b. Be precise, concise, and to the point.
c. Write for the intended audience.

Identifying Research Questions/Problems
To be added.


TOPIC 4. RESEARCH DESIGN OVERVIEW/ RESEARCH PROCESS

Five basic steps in a research study.

1. Define the research problem and objectives of the study.
2. Choose an appropriate research design for the problem.
3. Collect data (secondary or primary).
4. Process and analyze data to meet each objective.
5. Communicate results to clients/publics.

1. Defining the problem is the most difficult and most important step in any study. Good research results from good questions/problems that are clearly understood. This is especially true of applied research, which must begin from clear problems that have been translated into one or more specific research questions or hypotheses. Given that research resources are limited, it is important that priorities be established and resources are directed at questions where the expected benefit-cost ratio is high. The most difficult task for both students and managers is often that of defining clear, do-able research problems.

2. Once a problem has been defined, the research design task may begin. Most problems may be addressed in a variety of ways, each offering advantages and disadvantages. The research design task is to choose the most promising design, based upon a clear understanding of different approaches, the particular characteristics of the problem at hand, and resource constraints. Almost all research design questions involve tradeoffs between costs and accuracy. Choose the design to fit the problem and resources at hand, not necessarily what has been done in the past or your personal preferences.

3. Data collection tends to be a tedious task, but it requires strict adherence to the procedures established in the design stage. Your analysis will be no better than the data you gather, so careful attention to details here will pay off later. In primary data collection, interviewers must be trained and supervised. Procedures and data should be checked as you proceed to identify and solve problems as they occur. When gathering secondary data, sources must be carefully documented, and the validity and reliability of all data must be assessed and, where possible, cross-checked.

4. Data processing includes coding, data entry, cleaning and editing, file creation & documentation, and analysis. In the first stage, data is transferred from surveys or records into the computer. Begin with the identification and naming of variables, develop a codebook (including missing codes) and a data entry form/format. Clean and edit data as it is entered. Eventually, you will create a special "system file" suitable for use by whatever statistical package you are using. It is a good practice to write out the first 10 and last 10 cases prior to starting your formal analysis to verify that data have been correctly entered or converted. Check that you have the correct number of cases and variables.

Next run some exploratory data analysis procedures to get a feel for the data and further check for possible problems. Run frequencies on nominal variables and other variables assuming a limited set of values. Run descriptive statistics on interval scale variables. Check these results carefully before beginning any bivariate or multivariate analysis. Analysis should proceed from simple univariate description to bivariate descriptive tables (means by subgroup or two way tables) to explanation, hypothesis testing and prediction.
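The exploratory checks just described can be sketched in a few lines of plain Python (variable names and values below are hypothetical, not from any actual study):

```python
# Illustrative sketch of the exploratory checks above, in plain Python
# (variable names and values are hypothetical).
from collections import Counter
from statistics import mean, stdev

cases = [
    {"gender": "M", "age": 23},
    {"gender": "F", "age": 45},
    {"gender": "F", "age": 31},
    {"gender": "M", "age": 52},
    {"gender": "F", "age": 38},
]

# Verify the expected number of cases before any analysis
assert len(cases) == 5

# Frequencies on a nominal variable
freq = Counter(c["gender"] for c in cases)

# Descriptive statistics on an interval-scale variable
ages = [c["age"] for c in cases]
print(freq, "mean age:", round(mean(ages), 1), "sd:", round(stdev(ages), 1))
```

In a real study the same checks would be run on the entered data file with a statistical package, but the logic (counts first, then frequencies, then descriptives) is the same.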

5. Identify the audience and their needs before preparing oral or written reports of results. In most studies you should plan out a series of reports which might include

- a comprehensive technical report or series of such reports
- an executive summary
- journal articles for research audiences
- reports and recommendations for management
- articles and information for the general public/lay audiences

Large studies often contain many pieces that must be packaged for different purposes and audiences.


Research Approaches

1. Primary data gathering:

a. Surveys (self-administered, telephone, personal interview): description, exploring relationships among variables.

b. Experiments (field, lab): test cause-effect relationships; evaluate interventions, impacts.
c. Observation: restricted to data that are accessible to observation; describing observable phenomena.
d. Qualitative (focus groups, in-depth interviews, participant observation): explore, interpret, and discover; study meanings and intentions; describe unique phenomena.

2. Secondary data: use of data for some purpose other than that for which they were originally gathered. The original source may be any of the above. Most commonly, data are:
Surveys: re-analysis of survey data.
Government statistics: analysis of various population and economic censuses, or other regularly gathered social, economic, and environmental indicators.
Agency data: use and registration data, budgets, personnel, client/patient records, etc.
Documents and written material: content analysis, literature reviews.

3. Measurement instruments:

Questionnaires -- elicit verbal or written reports from verbal or written questions or statements.
Observation -- one or more observers (people) record information.
Physical instruments -- a variety of measuring devices, from yardsticks to thermometers to electronic counting devices, cameras, tape recorders, barometers, various sensors, and instruments to measure components of water, air, and environmental quality, ...

4. Other dimensions of research designs
a. Cross-sectional or longitudinal (time): study phenomena at one point in time or more than one. Trend, cohort, and panel designs are examples of longitudinal approaches.
b. Structured vs. unstructured approaches (formal vs. informal)
c. Direct vs. indirect approaches
d. Exploratory vs. confirmatory: explore vs. test
e. Descriptive, explanatory, predictive
f. Correlational vs. cause-effect relationships


Alternative Research Designs
(rows: how data are gathered; columns: where data are collected)

How Data are Gathered           | Household                   | On-Site                     | Laboratory                 | Other*
--------------------------------|-----------------------------|-----------------------------|----------------------------|------------------------------------------
Personal Interview              | Surveys                     | Surveys & Field Experiments | Focus Groups               | Surveys & Field Experiments
Telephone or Computer Interview | Surveys                     | Computer Interviews         | Computer Interviews        |
Self-Administered Questionnaire | Surveys & Field Experiments | Experiments                 | Surveys                    |
Observation & Traces            | NA                          | Observable Characteristics  | Observable Characteristics | Observable Characteristics
Secondary Sources               | NA                          | Internal Records            | NA                         | Gov't, Industry & Other External Sources

*Other locations include highway cordon studies, mall intercept surveys, surveys at outdoor shows & other special events, etc.

A few simple guidelines on when to use different methods:

1. Describing a population - surveys
2. Describing users/visitors - on-site surveys
3. Describing potential users or the general population - household surveys
4. Describing observable characteristics of visitors - on-site observation
5. Measuring impacts, cause-effect relationships - experiments
6. Anytime suitable secondary data exist - secondary data
7. Short, simple household studies - telephone surveys
8. Captive audience or very interested population - self-administered surveys
9. Testing new ideas - experimentation or focus groups
10. In-depth study - in-depth personal interviews, focus groups


STEPS IN RESEARCH PROCESS

1. Define the Problem - Problem Analysis
a. Identify the problem area
b. Immerse yourself in it
- Literature: relevant concepts, theory, methods, previous research
- Talk with the people involved
- Observe
c. Isolate more specific research problem(s)
- Identify key variables (dependent, independent) and hypothesized relationships
- Identify relevant populations
- Identify key decisionmakers/clients/users of research
d. Specify research objectives and/or hypotheses

2. Select Research Design - Develop Research Proposal
a. Define key elements of the research problem
- Information needed, variables
- Population(s) to be studied
- Resources available (time, expertise, money)
b. Evaluate alternative designs
- Cross-sectional, longitudinal, panel study, ...
- Qualitative or quantitative approach
- Survey, experiment, secondary data, ...
- Mail, telephone, or personal interview; on-site or household
- Instruments: questionnaires, observation, traces, ...
- Census or sample; probability? stratify? cluster?
c. Specify measurement procedures
- Define concepts, variables, and the measurement of each variable
- Measurement scales & instrumentation
- Assess reliability & validity of measures
d. Specify population and sampling design
- Define the population, identify the sampling frame
- Choose the sampling approach
- Choose the sample size
e. Specify the analysis approach
- Data processing
- Statistical analysis: descriptive; inferential - hypotheses to test
- Intended tables, figures, and format of anticipated results
f. Assess threats to reliability & validity, possible errors
g. Assess feasibility of the design - time, costs, expertise
h. Assess ethical issues, human subjects review

3. Implement the Research Design - Data Gathering/Field Procedures
a. Pre-test instruments and procedures
b. Choose the sample and administer measurement procedures
c. Monitor the process, solve problems as they occur

4. Analysis and Reporting - Results
a. Data entry - coding, cleaning, ...
b. Preliminary analysis
c. Descriptive analysis
d. Hypothesis testing
e. Preparation of tables and figures
f. Writing/presenting results

5. Put It All Together
a. Final report(s) and articles
b. Draw conclusions, assess limitations
c. Apply the results; implications for intended users


POTENTIAL SOURCES OF ERROR IN A RESEARCH STUDY

Whether you are designing research or reading and evaluating it, it is useful to approach the task as one of controlling for or looking for errors. The following is a list of the types of errors to watch for when designing research or reading research reports. The principles and methods of research design exist largely to control for error.

1. Problem Definition: Conceptualization of research problem may not adequately or accurately reflect the real situation.

- use of a theory or assumptions that are faulty or do not apply
- research problem doesn't address the management questions
- reductionism - omission of key variables

2. Surrogate information error: variation between the information required to solve the problem and the information sought by researcher.

3. Measurement error: variation between information sought and information produced by the measurement process. (reliability and validity)

4. Population specification error: variation between the population required to provide needed information and the population sought by the researcher. (rule for clearly defining the study population)

5. Frame error: variation between the population as defined by the researcher and list of population elements used by the researcher.

6. Sampling error: variation between a representative sample and the sample generated by a probability sampling method (sampling error estimates, checking for representativeness).
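For probability samples, sampling error is commonly summarized by a standard error. A minimal sketch under the assumption of simple random sampling of a proportion (the function name and numbers are illustrative, not from the handout):

```python
from math import sqrt

def se_proportion(p: float, n: int) -> float:
    """Estimated standard error of a sample proportion under
    simple random sampling: sqrt(p * (1 - p) / n)."""
    return sqrt(p * (1 - p) / n)

# e.g., a 95% margin of error for p = 0.5 with n = 400 respondents
moe = 1.96 * se_proportion(0.5, 400)   # about +/- 0.049, i.e., +/- 4.9 points
```

Reporting such a margin of error alongside survey percentages is one way of "checking for representativeness" within the limits of sampling error.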

7. Selection error: variation between a representative sample and the sample obtained by a nonprobability sampling method. (check for representativeness)

8. Nonresponse error: variation between the sample that was selected and the one that actually participated in the study. (evaluating non-response bias).

9. Experimental error: variation between the actual impact of treatment and the impact attributed to it based on an experimental design (pre-measurement, interaction, selection, history, maturation, instrumentation, mortality, reactive error, timing, surrogate situation - will define later when we cover experimental design). (experimental design)

10. Data processing errors: errors in coding and handling of data. (cleaning)

11. Analysis errors: Covers a variety of errors including violation of assumptions of statistical procedures, use of inappropriate or incorrect procedures, mis-handling of missing values, calculation errors, and faulty interpretation of results.

12. Reporting and Communication errors: Errors made in preparing oral or written reports including both typographic and logical errors. Faulty interpretation of results made by users. (editing)

13. Application errors: Inappropriate or faulty application of the research results to a management problem. Over-generalizing the results to situations where they may not apply is a common error in applying research results.


TOPIC 5. DEFINITION & MEASUREMENT

1. TYPES OF DEFINITIONS

Nominal or conceptual definitions define concepts in terms of other concepts.
Operational definitions define concepts in terms of a set of measurement procedures for generating the concept.

EXAMPLE 1. Conceptual definition - Length of table is the distance from one end to the other. Operational definition - Length of table is the number that results from the following procedure : Place a yardstick along one edge of the table, place additional yardsticks end to end until one extends over the other edge. Read the numeric marking for inches on the final yardstick at the point where it is exactly over the other edge of the table (call this X inches). Count the number of yardsticks you have used (call this N). Compute (N-1) * 36 + X. Repeat this process on the edge perpendicular to this one. The length of the table in inches is the larger of these two numbers.
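The yardstick procedure above reduces to simple arithmetic. A sketch in Python (function name and measured values are illustrative):

```python
def table_length_inches(n_yardsticks: int, x_inches: float) -> float:
    """Length per the yardstick procedure: N yardsticks laid end to end,
    with X inches read on the final stick at the far edge."""
    return (n_yardsticks - 1) * 36 + x_inches

# Measure along both perpendicular edges and take the larger number,
# as the operational definition specifies (values are hypothetical).
length = max(table_length_inches(3, 12), table_length_inches(2, 30))  # 84 vs. 66
```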

EXAMPLE 2. Age. Conceptual definition: the number of years that have passed since a person's date of birth. Operational definitions: A. Ask an individual for his name, place of birth (county and state), and names of both parents. Go to the county records office for the given county and find the record for the person with the given name. If there is more than one such record, check the names of the parents. Identify the date of birth from this record. Subtract the year of birth from 1991. If the month and day of birth have not yet been reached in the current year, subtract 1. B. Give the subject a slip of paper with the following question: ENTER YOUR AGE IN YEARS AT YOUR LAST BIRTHDAY ______. The number written in the blank is the person's age.
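The year arithmetic in operational definition A can be sketched directly (the dates are hypothetical, and a general "today" parameter is used in place of the handout's fixed 1991):

```python
from datetime import date

def age_operational(birth: date, today: date) -> int:
    """Definition A's arithmetic: current year minus birth year,
    minus 1 if this year's birthday has not yet been reached."""
    years = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

# Hypothetical record: born July 4, 1960, computed as of March 1, 1991
age = age_operational(date(1960, 7, 4), date(1991, 3, 1))  # birthday not yet reached, so 30
```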

2. LEVELS OF MEASUREMENT

NOMINAL: classify objects into categories with no ordering; the special case of two categories is termed a dichotomous variable. Examples: religion, state of birth.

ORDINAL: classify objects into categories with an ordering (less than, equal to, or greater than) but not necessarily equal distances between levels. Examples: high, medium, low; small, medium, large; hardness scale for minerals.

INTERVAL: an ordered scale where distances between categories are meaningful. For example, there is a 10-year difference in age between someone age 10 and age 20, or someone age 70 and age 80. Examples: anything measured using the real number system is an interval scale, e.g., income in dollars, age in years, temperature in degrees Celsius.

RATIO: an interval scale with a "natural zero". A natural zero is the total absence of the attribute being measured. E.g., the Kelvin temperature scale is a ratio scale (0 K = absence of any molecular motion), while the Fahrenheit and Celsius scales are interval, but not ratio, scales. A ratio scale is required to make statements like "X is twice as warm as Y". Otherwise, ratio and interval scales have similar properties.

Examples: Measures of income at each level of measurement.

NOMINAL - "Middle income" if income is between $30,000 and $50,000; "not middle" if less than $30K or more than $50K.

ORDINAL - LOW if less than $30K; MID if between $30K and $50K; HIGH if greater than $50K.

RATIO - Income in dollars from Line 17 of 1995 tax return.

INTERVAL but not RATIO scale - Income in dollars + $10,000. This contrived measure doesn't have a natural zero, but it is an interval scale. Note how the shift changes interpretations like "twice or half as large".
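The three income measures above can be written as small coding functions (a sketch; function names are ours, incomes in thousands of dollars, cutoffs per the $30K/$50K example):

```python
# Hypothetical coding functions; incomes in thousands of dollars,
# cutoffs follow the $30K/$50K example above.
def income_nominal(income: float) -> str:
    return "middle" if 30 <= income <= 50 else "not middle"

def income_ordinal(income: float) -> str:
    if income < 30:
        return "LOW"
    return "MID" if income <= 50 else "HIGH"

def income_interval(income: float) -> float:
    return income + 10  # shifted scale: still interval, but no natural zero

income_nominal(42), income_ordinal(65), income_interval(42)
# -> ('middle', 'HIGH', 52)
```

Note how the shifted interval measure breaks ratio statements: $42K becomes 52 and $21K becomes 31, so "twice as large" no longer holds on the shifted scale.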


3. STATISTICS APPROPRIATE TO THE LEVEL OF MEASUREMENT

Level    | Descriptive                                        | Inferential
---------|----------------------------------------------------|----------------------
NOMINAL  | mode, frequency tables, percentages                | chi-square
ORDINAL  | median, percentile, range, interquartile deviation | rank-order statistics
INTERVAL | mean, standard deviation                           | t-test, ANOVA, etc.

A dichotomous variable measured as 0 or 1 can be considered to be any of these scales and all of the above statistics can be applied.
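The descriptive statistics in the table map directly onto Python's statistics module (the data below are illustrative):

```python
from statistics import mode, median, mean, stdev

religion = ["A", "B", "A", "C", "A"]   # nominal: mode is the appropriate center
rank = [1, 2, 2, 3, 5]                 # ordinal: median and range
income = [20, 30, 30, 40, 80]          # interval: mean and standard deviation

center_nominal = mode(religion)                 # most frequent category
center_ordinal = median(rank)                   # middle value
center_interval, spread_interval = mean(income), stdev(income)
```

Using a statistic from a higher level on lower-level data (e.g., a mean of nominal codes) is meaningless, which is why the table pairs each level with its own statistics.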

4. RELIABILITY AND VALIDITY OF MEASURES

RELIABILITY is the absence of random error in a measure, the degree to which a measure is repeatable - yields the same answer each time we measure it. We assess reliability by test-retest, split half method for indexes & scales (Cronbach's alpha), and alternative forms.

VALIDITY is the absence of systematic error (bias) in a measure, the degree to which we are measuring what we purport to measure. Types of validity are content (or face), criterion-related, and construct validity.

ACCURACY of a measure typically implies the absence of both systematic and random error, i.e., a measure that is both reliable and valid is called accurate. Also note the distinction between precision (fineness of distinctions, number of decimal places) and accuracy (how close the measure is to the "true" value). The "true" measure is something we can never know with certainty. Scientists therefore refer to reliability and validity of measures rather than accuracy.

The differences between reliability and validity are illustrated by looking at the targets below. Think of taking 100 independent measures (shots at a target here) and displaying the results graphically. Note that validity and reliability are two independent concepts. Reliability is indicated by the repeatability of the measure (a tight shot group), while validity refers to whether the measures center on the true measure (or target).
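The split-half/Cronbach's alpha approach mentioned above can be sketched as follows, using the standard alpha formula (the data are hypothetical Likert scores, not from the handout):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list of scores per scale item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - sum(variance(s) for s in items) / variance(totals))

# Three hypothetical 5-point Likert items answered by four respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]])
```

Values near 1 indicate high internal consistency (items moving together); values near 0 indicate mostly random error among items.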


5. Assessing Reliability and Validity of a Measure

a. Reliability
1. test-retest
2. alternative forms
3. split-half / Cronbach's alpha

b. Validity
1. content or face validity
2. criterion-related validity
a. concurrent
b. predictive
3. construct validity

6. Sources of measurement error

a. Sources of bias
1. Due to the researcher or measurement instrument: expectations, training, deception
2. Due to subjects: e.g., Hawthorne effect, reactivity
3. Due to research design: biased samples, nonresponse

b. Sources of noise (random error)
1. Differences among people
2. Fuzzy definitions, criteria, or procedures yielding inconsistent interpretations
3. Mixing processes


QUESTIONNAIRE DESIGN

A. Kinds of Information
1. Demographics, socioeconomics, and other personal attributes: age, race, education, gender, income, family size/structure, location of residence, years living in the area, place where the person was born or grew up, personality, ...
2. Cognitive - what do they know; beliefs (what do they think is true or false)
3. Affective - attitudes; how do they feel about something; preferences, likes and dislikes, evaluations, ...
4. Behavioral - what have they done in the past, are doing in the present, or expect to do in the future (behavioral intentions)

B. Question structures
1. Open-ended
2. Closed-ended with ordered choices, e.g., Likert scale
3. Closed-ended with unordered choices
4. Partially closed-ended ("other" option)

C. Question content
1. Is this question necessary? Useful?
2. Are several questions needed on this subject? Watch for double-barreled questions.
3. Do respondents have the information to answer the question? Should a filter question be included? How precisely can subjects answer? Is the question too demanding?
4. Does the question need to be more concrete, specific, and related to the subject's personal experience? Is a time referent provided?
5. Is the question sufficiently general? Do you want recent behavior or "typical" behavior?
6. Do replies express general attitudes or specific ones?
7. Is the content loaded or biased?
8. Are subjects willing to answer?
9. Can responses be compared with existing information?

D. Question wording
1. Will the words be uniformly understood? Use simple language; beware of technical phrases, jargon, and abbreviations.
2. Does the question adequately express the alternatives?
3. Is the question misleading due to unstated assumptions or unseen implications?
4. Is the wording biased, emotional, or slanted?
5. Will the wording be objectionable to respondents?
6. Should you use more or less personalized wording?
7. Should you ask in a more direct or more indirect way?

E. Form of Response
1. Open or closed
2. If closed: ordered or unordered; number of categories; type of queue; forced or unforced choice
3. Be sure categories are mutually exclusive.

F. Sequencing of questions

1. Will this question influence responses to others?
2. Is the question led up to in a natural way?
3. Consider placement to create interest and improve the response rate.
4. Branching, skipping, and transitions on questionnaires.


G. Most Common Problems in Questionnaire Design

1. Lack of specificity.

a. Not indicating a timeframe for questions about behavior (within the past day, week, month, etc.)
b. Asking what people do in general or on average vs. what they did on a particular trip or during a particular week. The concern is usually that the last trip or a particular day or trip may not be a "typical" one. Proper sampling usually covers this potential problem. Remember that we are usually not interested in a particular subject's response, but will likely report averages or percentages across population subgroups. If we randomly sample trips, chances are that if for one person the chosen trip happens to be longer than their usual trip, for others the random choices will yield the opposite. With a large enough sample and proper random sampling, no single observation is representative, but collectively the sample will be.

c. Too much aggregation: to measure complex phenomena we usually must break them into component parts. E.g., "How many days of recreation participation, hours of leisure last week?" These concepts are too vague and aggregate to yield reliable information. Ask about individual activities and then add up the responses.

2. Not using filter/branch/contingency questions for questions that may not apply to all subjects. Don't assume everyone is aware of everything or fulfills the requirements for answering a particular question.

a. Question doesn't apply: e.g., if the subject did not stay overnight, don't ask the number of nights or lodging type.
b. Subject not adequately informed about a topic: e.g., don't ask attitudes or opinions about things without first assessing awareness.

3. Use of technical terms or complex language. Don't use technical terms like carrying capacity, sustained yield, ADA, etc. with non-technical audiences. What you put in the research proposal must be translated and phrased for the population being surveyed.

4. Asking questions subjects likely cannot answer.

a. Recall of details from the distant or even recent past. "How many fish did you catch in 1985?" "How many people were in the pool this morning?"

b. Looking for deep motivations and attitudes on topics subjects really haven't thought about very much or don't know much about. To visitors: "Should we use more volunteers or regular park staff at the visitor center?"

5. Trying to handle complex matters in a single question vs. breaking them down into smaller pieces. Often termed "double-barreled" questions.

6. Using questionnaires to gather data that are better collected via observation, physical instruments, or available records. E.g., asking what the temperature or weather conditions were today; asking "Are you male or female?" in a personal interview; asking a park manager how many visitors they had last week (without the opportunity to look it up).

7. Implied alternatives: state them. E.g., "Do you feel Lansing should charge $2 for entrance to the zoo, OR should the zoo be free of charge?"

8. Unstated assumptions: How far did you drive to reach the park? Assumes subject arrived by car.

9. Frame of reference: particularly important when asking for attitudes or evaluations. E.g., "How would you evaluate the performance of the Lansing Parks Department?"

- As a user, taxpayer, parent?
- Am I satisfied? Do I think most people are satisfied? Are they doing the best job possible with the resources they have?


Sample Question Formats

Kinds of variables typically measured in surveys:
1. Demographic and socioeconomic characteristics
2. Behavior
3. Attitudes, interests, opinions, preferences, perceptions
a. Cognitive: beliefs and knowledge
b. Affective: feelings, emotional responses
c. Behavioral intentions

1. Simple fill in the blank. Obtaining a straightforward number or other easily understood response.

How old are you? _______ years    In what county is your permanent residence? ___________ county

How much money did you spend on this trip? $ __________________.

2. Open ended: To avoid leading subject, to obtain wide range of responses in subject’s own words, or when you don’t know kinds of responses to expect.

What is your primary reason for visiting the park today? _______________________________________.

3. Partially closed-ended. List major response categories while leaving room for others. If you have an exhaustive set of categories, the question can be completely "closed-ended". Usually subjects check a single response, but you can also allow multiple responses (see checklist below).

Which of the following community recreation facilities do you most frequently use? (check one)
___ neighborhood parks/playgrounds
___ swimming pools
___ community centers
___ natural areas
___ tennis courts
___ other (please specify) ___________________

4. Checklists: allow subjects to check multiple responses. Categories should be exhaustive and mutually exclusive.

Which of the following winter recreation activities have you participated in during the past month? (check all that apply)
___ Cross-country skiing
___ Downhill skiing
___ Snowmobiling
___ Ice skating
___ Sledding or tobogganing
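At the data-processing stage, "check all that apply" responses are typically coded as one 0/1 dummy variable per activity. A minimal sketch (variable names are ours):

```python
# Illustrative sketch: coding checklist responses as 0/1 dummy variables,
# one per activity (names are hypothetical).
ACTIVITIES = ["xc_ski", "downhill_ski", "snowmobile", "skate", "sled"]

def code_checklist(checked):
    """Return a dict with 1 for each checked activity, 0 otherwise."""
    return {a: int(a in checked) for a in ACTIVITIES}

# One hypothetical respondent who checked two boxes
respondent = code_checklist({"downhill_ski", "skate"})
```

Coding each box as its own variable (rather than one multi-valued variable) lets you run frequencies and cross-tabulations per activity.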

5. Likert scales: a versatile format for measuring attitudes. You can replace "agree" with "importance", "satisfaction", "interest", "preference", and other descriptors to fit the attitude you wish to measure.

Please check the box that best represents your level of agreement or disagreement with each of the following statements about downhill skiing:

                               Strongly agree   Agree   Neutral   Disagree   Strongly disagree
Downhill skiing is exciting         [ ]          [ ]      [ ]       [ ]            [ ]
Downhill skiing is dangerous        [ ]          [ ]      [ ]       [ ]            [ ]
Downhill skiing is expensive        [ ]          [ ]      [ ]       [ ]            [ ]


6. Rank ordering: to measure preferences/priorities. Limit to short lists.

Rank the following states in terms of your interest as possible travel destinations for a summer vacation trip. Place a 1 beside the state you would most like to visit, a 2 beside your second choice, and a 3 beside your third choice.

______ Michigan______ Wisconsin______ Minnesota

7. Filter Question. To screen for eligibility or knowledge prior to asking other questions. Make sure each question applies to all subjects or use filters and skips to direct respondents around questions that don’t apply.

Did you stay overnight on your most recent trip?  ___ NO  ___ YES
IF YES: How many nights did you spend away from home? ______

8. Semantic Differential scale. Measure perception or image of something using a set of polar adjectives.

For each of the characteristics listed below, mark an X on the line where you feel downhill skiing falls with respect to that characteristic. (Could repeat with cross-country skiing or snowmobiling and compare perceptions; or Coke and Pepsi.)

exciting  _____ _____ _____ _____ _____ _____  dull
expensive _____ _____ _____ _____ _____ _____  inexpensive
safe      _____ _____ _____ _____ _____ _____  dangerous

SOME EXAMPLES OF BAD QUESTIONS (THINGS TO AVOID)

1. Loaded questions

Most people are switching to brand X. Have you switched yet?
Do you agree with (famous person or authority) that our parks are in terrible shape?
How satisfied were you with our service:  Very satisfied   Quite satisfied   Satisfied

2. Double-barrelled (two questions in one)
Do you use the city parks or recreation centers?
Should Lansing build a new baseball stadium by raising taxes?

3. Not specific enough
Do you downhill ski? - Have you downhill skied within the past 12 months?
Have you made a trip within the past three months? - Have you taken any overnight trips of 50 miles or more (one-way)...
How much did you spend on this trip? - Please estimate the amount of money spent by you or any member of your travel party within 30 miles of this park.
How many hours of leisure did you have last week? (How to define leisure?)

4. Subject able to answer?
Do you think the Americans with Disabilities Act (ADA) has been effective?
How would you rate the job done by our new Parks & Recreation Director?
How much time do your teenagers spend on homework?
How many fish did you catch last year, by species, location, month, ...?
How many trips would you make to MI State Parks if entrance fees were eliminated?
What is the zip code of your travel destination?

5. Sensitive questions
Have you committed any crimes within the past week?  Yes  No
When did you stop beating your wife?

REFERENCES:

Sudman, S. and Bradburn, N.M. (1982). Asking Questions: A Practical Guide to Questionnaire Design. San Francisco: Jossey-Bass.

Hogarth, R.M. (ed). (1982). Question Framing and Response Consistency. San Francisco: Jossey-Bass.

Indices and Scales

Indices and scales are defined and used in a variety of ways. Generally they are ordinal measures of some construct that are based on a composite of items. They are most often used to measure a construct or latent (vs manifest) variable that is not directly observable.

Babbie distinguishes an index from a scale:
   index = simple sum of items
   scale = based on patterns of response

DeVellis distinguishes them based on causal relationships:
   index = cause indicator - the items determine the level of the construct, e.g. SES
   scale = effect indicator - all items result from a common cause, e.g. alienation

Indicators are often measured via an index (e.g. economic index of leading indicators, quality of life indicators, ...)

Indices and scales may be uni-dimensional (one underlying construct) or multi-dimensional (two or more distinct facets or dimensions of the construct, e.g. the verbal, quantitative & analytical parts of the GRE score).

Off-the-shelf index/scale vs. "home-built": There are thousands of scales/indices measuring almost any concept you are interested in. (See Miller or Bearden et al. for numerous examples.) Existing scales have the advantage of having their reliability & validity tested (most of them, anyway). They may, however, be too long, or not exactly what you need. In that case you develop your own scale or index or modify an existing one, but then you must do your own assessment of validity & reliability.

STEPS FOR DEVELOPING AN INDEX

1. Define the concept/construct - theory, specificity, what to include in the measure

2. Identify items that measure it - generate an item pool to choose from. Items should reflect the scale's purpose:
   Can include some redundancy of items
   Tips on good & bad items:
      avoid lengthy items
      aim at the reading level of the audience/population
      avoid multiple negatives; avoid double-barrelled items
      watch for ambiguous pronouns, misplaced modifiers, adjective forms
      reversing the polarity of some items avoids agreement bias but may confuse
   Determine format for measures:
      Thurstone - items to represent each level of the attribute
      Guttman - progressively more difficult to pass
      Likert - standard formats/issues: number of cues, balanced?, weights
   Decide whether a unidimensional or multidimensional scale is appropriate

3. Evaluate face validity of items
   Have the item pool reviewed by experts - face validity & item clarity/wording
   Consider including some validation items in the scale (predictive or criterion-related)

4. Administer the scale/index to a test sample and evaluate the scale
   a. Item analysis: means, variances, correlations, coefficient alpha
   b. Check bivariate relationships between items - correlations
   c. Check multivariate relationships
   d. Split samples to check stability across samples

5. Finalize the scale - assign scores to items, optimize scale length (alpha depends on co-variation among the items and the number of items), handle missing data if necessary

INDEX EVALUATION - RELIABILITY AND VALIDITY OF THE SCALE

1. Reliability (SPSS Reliability analysis computes alpha and inter-item correlations): Internal consistency or homogeneity, item analysis

a. Coefficient alpha measures the proportion of total variance in a set of items that is due to the latent variable (in common across the scale items):

   α = [k/(k-1)] × [1 - (Σσi² / σy²)]

where k = number of items in the scale, Σσi² = sum of the variances of each item, and σy² = total variance of the scale.

b. Spearman-Brown formula: reliability = k r / [1 + (k-1) r], where r = the average inter-item correlation and k = number of items.

Note that longer scales are automatically more reliable. Example 1: suppose r for a set of items is .5. Then a scale with 3 items has reliability = .75; for k = 10 and r = .5, reliability = .91. Example 2: suppose r = .25; then for k = 3, reliability = .5 and for k = 10, reliability = .77.

A scale is good if reliability is above .7 (.8 is better); if above .9, consider shortening the scale.
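The two reliability formulas above can be sketched in a few lines of code. This is a minimal illustration with our own function and variable names (not from the handout); it reproduces the worked Spearman-Brown examples.

```python
def spearman_brown(r, k):
    """Reliability of a k-item scale with average inter-item correlation r."""
    return k * r / (1 + (k - 1) * r)

def cronbach_alpha(k, item_variances, total_variance):
    """alpha = [k/(k-1)] * [1 - (sum of item variances / total scale variance)]."""
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Reproduce the handout's Spearman-Brown examples:
print(round(spearman_brown(0.5, 3), 2))    # 0.75
print(round(spearman_brown(0.5, 10), 2))   # 0.91
print(round(spearman_brown(0.25, 3), 2))   # 0.5
print(round(spearman_brown(0.25, 10), 2))  # 0.77
```

Note how reliability rises with k for a fixed r, which is why longer scales are automatically more reliable.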

c. Other types of reliability
   Alternative forms reliability
   Temporal stability - test-retest reliability
   Generalizability Theory

2. Validity
   Content or face validity
   Criterion-related validity
   Construct validity
   Multi-trait, multi-method validation - MTMM

3. Factor Analysis is a multivariate procedure often used in developing scales. It can assess how many latent variables (dimensions) underlie a set of items, condense information, and help define the content or meaning of each dimension.

SELECTED REFERENCES

Ajzen, I. & Fishbein, M. 1980. Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice-Hall.

Bearden, W.O., Netemeyer, R.G., & Mobley, M.F. 1993. Handbook of Marketing Scales. Newbury Park, CA.: Sage. - whole book of marketing-related scales

DeVellis, R.F. 1991. Scale Development: Theory & Applications. Newbury Park, CA: Sage.
Eagly, A.H. & Chaiken, S. 1993. The Psychology of Attitudes. Fort Worth, TX: Harcourt Brace Jovanovich College Publishers.
Henerson, M., Morris, L.L. and Fitz-Gibbon, C.T. 1987. How to Measure Attitudes. Newbury Park, CA: Sage.
Miller, D.C. 1991. Handbook of Research Design & Social Measurement, 5th ed. Newbury Park, CA: Sage. - Chapter 6 is a guidebook to social scales
Nunnally, J.C. 1978. Psychometric Theory, 2nd edition. New York: McGraw-Hill.
Robinson, J.P., Shaver, P.R., & Wrightsman, L.S. 1991. Criteria for scale selection and evaluation. In Measures of Personality and Social Psychological Attitudes. Ann Arbor: Univ. of Michigan Survey Research Center, Institute for Social Research.


TOPIC 6. SAMPLING

1. Census vs. sample

2. Steps in the sampling process
   a. Define study population
   b. Specify sampling frame
   c. Specify sampling unit
   d. Specify sampling method
   e. Determine sample size
   f. Specify sampling plan
   g. Choose sample

3. Study population: Define the population in terms of element, sampling unit, extent, and time.
   Element        Adults 12 years of age and older
   Sampling unit  In vehicles
   Extent         Entering Yogi Bear Park
   Time           Between July 1 and August 31, 1993

4. Sampling frame: a perfect one lists every element of the population once.

5. Sampling unit: the element or set of elements considered for selection at some stage in sampling.

6. Sampling method
   Probability - each element has a known chance of selection; sampling error can be estimated when probability sampling is used.
   Non-probability - probabilities are unknown & sampling errors can't be estimated. Examples: judgement, convenience, quota, purposive, snowball.
   Probability sampling methods:
      Simple random sample (SRS)
      Systematic sample
      Stratified vs. cluster sample
      Proportionate vs. disproportionate
      Single vs. multistage

7. Stratified samples: Stratify to a) ensure enough cases in designated population subgroups and b) increase the efficiency of the sample by taking advantage of smaller subgroup variances. In a stratified sample you divide the population into subgroups (strata) and sample some elements from each subgroup. Subgroups should be formed to be homogeneous - people in the same group are similar to each other on the variables you intend to measure, and people in different groups differ on these variables.

8. Cluster or area sample: Cluster to reduce the costs of gathering data. Cluster samples are less efficient than simple random samples in terms of the sampling error for a given sample size. In cluster sampling, you divide the population into subgroups (clusters) and sample elements only from some of the clusters. When cluster sampling, form heterogeneous subgroups so that you will not miss any particular type of person/element because you didn't select a particular cluster. Groups are generally formed on geographic considerations in cluster sampling, so it is also called area sampling.

9. Disproportionate sampling: Note that all elements need NOT have an EQUAL chance of selection to be a probability sample, only a KNOWN probability. However, to have a representative sample, each population subgroup should be represented proportionately to its occurrence in the population. The best way to assure this is for each element to have an equal chance of selection. If sampling is disproportionate, you must weight the resulting sample to adjust for the different probabilities of selection. See attached example.

10. Determine sample size: Sample size is based on a) budget, b) desired accuracy (confidence level/interval), and c) the amount of variance in the population on the variable(s) being measured. It is also affected by whether or not you wish to do subgroup analyses. Tables provide the sampling errors associated with different sample sizes for sampling from a binomial distribution. More on this later.
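The sampling-error tables for a binomial distribution come from a simple formula. A rough sketch (standard formula; the 95% confidence level and sample sizes here are our own illustration, not from the handout):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the confidence interval for a sample proportion p
    with sample size n, assuming simple random sampling (z = 1.96 for 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case is p = .5; n = 400 gives roughly a +/- 5% margin of error.
print(round(margin_of_error(0.5, 400), 3))  # 0.049
```

Quadrupling the sample size only halves the margin of error, which is why accuracy requirements drive budgets quickly.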


TOPIC 6A. Sampling populations of visitors, trips and nights - Weighting

Recreation and travel studies often sample populations of visitors using household and on-site sampling designs. Care must be exercised in such studies to avoid a number of potential biases in the sample that can result from the unequal sampling probabilities caused by variations in the length of stay or frequency of visitation across population subgroups.

Length of stay bias is a common problem in on-site studies. For example, if motel visitors are sampled by randomly choosing occupied rooms each night, visitors staying short periods will be less likely to be chosen than visitors staying for a longer time. Someone staying two nights would have twice the chance of being chosen as someone staying only one night. This bias in the sample can be corrected by weighting cases in inverse proportion to their probability of being chosen. For example, two night stay visitors would be weighted 1/2 that of single night visitors to adjust for the unequal selection probabilities.

The existence of such sampling biases depends on how the population has been defined and the sampling frame chosen. Three common populations may be defined for these kinds of recreation and travel surveys:

(1) Population of visitors = any individual visiting a site for one or more days during a given time period.
(2) Population of trips or visits = defined as either person or party visits; the population in this case is the visit or trip, which may consist of several days/nights.
(3) Population of nights (person or party) = an individual or party staying one night. Someone staying 3 nights on a trip would be treated as three distinct members of the population of nights.

Biases enter when a sampling frame appropriate for one definition of the population is used to generate a sample for a different population definition. For example, if the population of interest is visitors (definition 1) and we sample nights or trips, repeat and longer-stay visitors will be overrepresented in the sample. Similar biases can result when household surveys are used to generate a sample of trips or nights. For example, travel surveys that ask households to report their most recent trip and then analyze trip characteristics will overrepresent the kinds of trips that less frequent travelers take.

If the researcher is aware of the problem, it can usually be corrected easily by asking length of stay and frequency of visit questions and then applying appropriate weights based on these measures. A simple example helps illustrate. Note that in the example we KNOW the full population(s), and therefore any bias in the sample is readily evident by simply comparing a sample estimate with the population value.

The general rule for weighting cases is:
1. Identify unequal probabilities of selection for population subgroups.
2. Weight cases in inverse proportion to their selection probabilities, e.g. if one type of visitor is twice as likely to be chosen as another, assign the former a weight of 1/2 relative to the latter.
3. It is sometimes also desirable to normalize the weights, e.g. by making them sum to one (expressing each case as a share of the sample) or by scaling them to average one, which keeps the weighted sample the same size as the original.

EXAMPLE: A population of 1,000 visitors divided equally between two types (say non-resident = Type A and resident = Type B). Assume Type A visitors take 1 trip per year, averaging 4 nights per trip. Type B visitors take 4 trips per year, averaging 2 nights per trip. We can completely specify the populations of visitors, trips and nights as follows:

          Popln of Visitors   Popln of Trips   Popln of Nights
          N       Pct         N       Pct      N       Pct
TYPE A    500     50%         500     20%      2000    33%
TYPE B    500     50%         2000    80%      4000    67%
TOTAL     1000                2500             6000

Suppose we want to draw a sample to estimate the following:
1. Percent of the population who are Type A
2. Percent of trips by Type A visitors
3. Percent of nights by Type A visitors

OPTION 1. CHOOSE A RANDOM SAMPLE OF 200 PEOPLE USING A HOUSEHOLD SURVEY; respondents report their last trip. This gives us a sample of 200 visitors and 200 trips. If random sampling works reasonably, we will get a sample of roughly 100 Type A's and 100 Type B's, with one trip reported for each visitor. Note that the sampling frame is for the population of visitors. Be careful if you start doing analyses with trips (or nights) as the unit of analysis. You won't necessarily have a representative sample of trips or nights. Let's look:

From this sample, the percent of trips by Type A = 1/2, a biased (wrong) answer, since we know that in the population of trips only 20% are by Type A's. The problem is we do not have a representative sample of trips - a given trip by a Type A is four times more likely to be chosen than a given trip by a Type B person.

We can choose trips by first choosing persons (these are equal), but gathering one trip per visitor introduces a bias, since it gives different kinds of trips unequal probabilities of being sampled. Correct by weighting the sample to adjust for the unequal probability of selection.

Weight cases inversely proportional to their probability of selection.
Prob of choosing a given trip = prob of choosing person × prob trip is selected given person

Type A trip prob = 100/500 × 1/1 = 1/5
Type B trip prob = 100/500 × 1/4 = 1/20

Weight inversely proportional to these probabilities; since the probability ratio is 4:1, make the weights 1:4. This reflects the fact that Type A trips are four times more likely to be chosen.

Type A Weight = 1 Type B Weight =4

NOW WEIGHT CASES IN SAMPLE IN ORDER TO GET A REPRESENTATIVE SAMPLE OF TRIPS

SAMPLE    TRIPS  ×  Weight  =  Corrected sample   Pct of trips
TYPE A    100    ×  1       =  100                20%
TYPE B    100    ×  4       =  400                80%
TOTAL     200                  500

Note the percents are the correct values as observed in the population above. If we had not weighted, we would erroneously estimate that half of the trips are by Type A. Normalized weights would be 1/5 and 4/5 - two weights in proportions of 1:4 that add to one.
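The Option 1 weighting above can be worked out in a few lines of code (numbers taken from the example; the variable names are our own):

```python
# One trip reported per sampled visitor: roughly 100 of each type.
sample_trips = {"A": 100, "B": 100}
trips_per_year = {"A": 1, "B": 4}

# A given trip's selection probability is inversely proportional to the
# number of trips the visitor takes, so weight each case by trips_per_year
# (the inverse of its relative selection probability).
weighted = {t: n * trips_per_year[t] for t, n in sample_trips.items()}
total = sum(weighted.values())
print({t: round(100 * w / total) for t, w in weighted.items()})  # {'A': 20, 'B': 80}
```

The weighted sample reproduces the population percentages of trips (20% Type A, 80% Type B) rather than the biased 50/50 split.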

OPTION 2. RANDOM SAMPLE OF NIGHTS. Suppose instead of sampling households (visitors) we take our sample on-site by randomly selecting every 10th occupied site/room in motels or campgrounds. This sampling frame is for the population of nights. Unweighted estimates of Type A nights would be correct, but estimates of Type A visitors or trips would be biased. Use the same procedures as above to calculate appropriate weights. Check your result to see if you obtain the correct percentages - in this case you know these from the population figures above.
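One way to check your Option 2 answer in code (a sketch using the population figures above; the assumed sample of 600 nights is 1/10 of the 6,000 population nights):

```python
# Sampling every 10th occupied night draws nights in proportion to the
# population of nights: roughly 200 Type A and 400 Type B nights.
nights_sample = {"A": 200, "B": 400}
nights_per_trip = {"A": 4, "B": 2}
nights_per_year = {"A": 4, "B": 8}   # trips/yr x nights/trip

# Weight by 1/nights_per_trip to recover trips; by 1/nights_per_year for visitors.
trips = {t: n / nights_per_trip[t] for t, n in nights_sample.items()}
visitors = {t: n / nights_per_year[t] for t, n in nights_sample.items()}
print(round(100 * trips["A"] / sum(trips.values())))        # 20 (% of trips)
print(round(100 * visitors["A"] / sum(visitors.values())))  # 50 (% of visitors)
```

Both weighted estimates match the population values (20% of trips and 50% of visitors are Type A), confirming the weights are correct.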

TOPIC 7A. SURVEY METHODS

1. Survey research attempts to measure things as they are. The researcher wishes to measure phenomena without intruding upon or changing the things being measured. This contrasts with experiments, which intentionally manipulate at least one variable (the treatment) to study its effects on another (the dependent variable or effect).

2. Survey research generally means gathering data from people by asking questions - self-administered, telephone, or in-person interviews. See Babbie, Trochim or any methods text for the strengths and weaknesses of these three approaches. The key is choosing the most appropriate approach in a given situation. This will depend on a) budget, b) turnaround time, c) content and number of questions, and d) the population being measured.

3. Survey designs are best suited to describing a population - demographic & socioeconomic characteristics, knowledge & beliefs, attitudes, feelings & preferences, and behaviors and behavioral intentions.

4. Types of survey designs

a. Cross-sectional: study a phenomenon at a single point in time (snapshot of the study population).
b. Longitudinal: study phenomena at more than one point in time.
   i. Trend study: measure the same general population at two or more times.
   ii. Cohort study: measure the same specific population at two or more times.
   iii. Panel study: measure the same individuals (same sample) at two or more times.
c. Approximating a longitudinal study with cross-sectional designs:
   i. replication of previous studies
   ii. using recall or expectation questions
   iii. using cohorts as surrogates for a temporal dimension

5. Key Survey Research Issues
a. Study population: identifiable, reachable, literacy, language
b. Sampling, obtaining representative samples, sample size, sampling unit
c. Choice of mail, interview, phone; household, on-site, internet, etc.
d. Questionnaire development & testing
e. Follow-up procedures & non-response bias
f. Interviewer/instrument effects and other biases
g. Field procedures, interviewer training, costs, facilities, time, personnel
h. Data handling & analysis

6. Steps in a Survey
1. Assess feasibility
2. Set objectives, identify variables & population
3. Choose approach/design
4. Develop instruments (write questions, format, sequencing, cover letters…)
5. Pretest instruments and procedures, train interviewers
6. Sampling plan and sample size
7. Gather data/execute survey
8. Data entry, processing and analysis
9. Reporting


TOPIC 7B. EXPERIMENTAL DESIGN

A. Introduction

Experiments are designed to test for relationships between one or more dependent variables and one or more independent variables. They are uniquely qualified to determine cause-effect relationships.

Consider experimental or quasi-experimental approaches whenever you are interested in studying the effects of one variable on another as contrasted with studying the relationship between the two variables.

Experiments are distinguished from surveys in that the researcher consciously manipulates the situation in order to study a cause-effect relationship. Survey procedures measure things as they are and try to avoid any researcher caused changes in the objects under study.

FOR EXAMPLE: A pollster asking whom people intend to vote for in the next election is conducting a survey - trying to measure what the result would be if the election were held today. The questioning is designed to measure voter preference without influencing the choice in any way. This research could become an experiment if the study is investigating how pre-election polling influences voting patterns. In this situation, the act of polling is an intervention or treatment, consciously introduced for the purpose of manipulating the situation and then measuring the effects on voters.

Typical examples of experiments in recreation & tourism:

1. Effects of information or promotion programs on knowledge, attitudes, or behavior. E.g., is a promotional program successful in increasing brand awareness, image, or sales? Which media, messages, etc. are more effective with which subgroups of potential customers? What is the effect of interpretation, environmental education, or outdoor adventure programs on environmental knowledge, attitudes, or behaviors? Does an outdoor adventure program increase a person's self-concept, improve family bonding, increase social cohesion, etc.?

2. Consumer reaction to price changes. Bamford, Manning et al. (JLR 20(4), 324-342) is a good example. Response of consumers to product changes or location/distribution approaches. With promotion above, this covers the 4 P's of marketing.

3. Effectiveness of various TR interventions.
4. Impacts of tourism on community attitudes; social, economic, and environmental impacts.
5. Positive and negative consequences of recreation and tourism - physical health, mental health, family bonding, economic impacts, learning, etc. Consequences for individuals, families, social groups, communities, societies; global consequences.

6. Experiments have been used in studying people's preferences for landscapes and, more generally, to measure the relative importance of different product attributes in consumer choices, e.g. conjoint analysis (see Tull & Hawkins, pp. 359-370).

B. There are many alternative experimental designs, but the basic principles behind an experiment can be illustrated with the classic pre-test, post-test with control (also called before-after with control).

This experimental design involves two groups (experimental and control) and measurements before and after the "treatment". It can be diagrammed as follows:

Before After

R MB1 X MA1 Experimental group

R MB2 MA2 Control Group

R denotes random assignment to experimental and control groupsX denotes administration of the treatmentOther entries denote measures made before (B) or after (A) the administration of the treatment.

Measure of the effect = (MA1-MB1) - (MA2-MB2)

with - without; NOT simply after (MA1) - before (MB1)

Note the effect is the change in experimental group adjusted for any change in the control group. Without the adjustment for the control group, you have a before-after measure rather than a with vs without measure. The experimental group change is the change with the treatment, while control group change is the change without the treatment. The difference in the two is a "with minus without" measure. This design controls for all major internal validity errors except interaction (see below).
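The "with minus without" effect measure above amounts to a difference of differences. A sketch with hypothetical numbers (all values here are made up for illustration):

```python
# Experimental group, measured before and after the treatment:
MB1, MA1 = 40, 70
# Control group, measured at the same times (change without the treatment):
MB2, MA2 = 40, 50

# Effect = change with the treatment minus change without it.
effect = (MA1 - MB1) - (MA2 - MB2)
print(effect)  # 20
```

Here the experimental group gained 30 points, but 10 of those would have occurred anyway (the control group's change), so the treatment effect is 20, not the naive before-after difference of 30.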

In program evaluation, the program is the treatment or "cause" and program outcomes or impacts are the "effects" to be measured. Critical to understanding experiments is understanding of the role of the control group and the potential sources of error that experiments control for (covered in D below).

C. Characteristics of a true experiment

1. Sample equivalent experimental and control groups2. Isolate and control the treatment3. Measure the effect

In the classic pre-test, post-test with control design above, note how these characteristics are met:
1. random assignment to groups to assure "equivalence"
2. treatment administered to the experimental group and withheld from the control group
3. effect measured by the change in the experimental group minus the change in the control group

Quasi-experimental designs fail on one or more of these characteristics:
A simple before-after design lacks a control group.
Ex post facto designs form groups "after the fact". These usually do not involve "equivalent" groups and may not isolate or control the intended "treatment". People who volunteer for a program may be different than those who do not in many respects.

D. Sources of Experimental Error - these are the errors that experiments try to control for in evaluating a cause-effect relationship.

Internal validity: errors internal to a specific experiment; designs control for these
*1. Premeasurement (testing): effect of pre-measurement on the dependent variable (post-test)
*2. Selection: nonequivalent experimental & control groups (statistical regression is a special case)
*3. History: impact of any other events between pre- and post-measures on the dependent variable
*4. Interaction: alteration of the "effect" due to interaction between treatment & pre-test
5. Maturation: aging of subjects or measurement procedures
6. Instrumentation: changes in instruments between pre and post
7. Mortality: loss of some subjects

External validity: errors in generalizing beyond the specific experiment
8. Reactive error - Hawthorne effect - artificiality of the experimental situation
9. Measurement timing - measuring the dependent variable at the wrong time
10. Surrogate situation: using a population, treatment or situation different from the "real" one

E. OTHER EXPERIMENTAL DESIGNS

1. After Only with Control: This design is used widely in evaluating changes in knowledge or attitudes due to an information, education, or advertising program. It is used instead of pre-test post-test with control due to the likelihood of interaction effects when testing knowledge or attitudes. It omits the pre-test and relies on large samples or careful assignment to groups to achieve "equivalent" experimental and control groups. No pre-measure avoids interaction effects. The sacrifice is weaker control over selection errors.

R  X  MA1
R     MA2

2. Other Designs

Quasi-Experimental

a. Ex Post Facto
b. After Only (no control)
c. Before-After (no control)

Experimental
a. Simulated before-after with control

R      MB
R  X      MA

Controls for pre-measurement/interaction; an alternative to after-only with control - simply varies the timing of measurement on the control group (possible history & selection errors). This design avoids contaminating the control group with the treatment, say in evaluating a large mass-media campaign.

b. Solomon 4-group - controls for everything (internal), but expensive; two control & two experimental groups.

3. Advanced designs
Randomized Block Designs (RBD) - one major intervening variable; block (stratify) on this variable.
Latin Squares - two non-interacting extraneous variables.
Factorial Design - effects of two or more independent variables (multiple treatments), including interactions.

F. LABORATORY vs. Field (Natural) experiments: a tradeoff between external and internal validity.
LAB - high internal validity, but may not generalize to real-world settings.
FIELD - high external validity, but there may be problems obtaining adequate controls (internal validity).

G. Steps in Conducting an Experiment

1. Identify the relevant study population, the primary "treatment(s)" or independent variable(s), and the measures of the effect or dependent variables.
2. Choose a suitable design based on an analysis of the potential threats to validity in this situation.
3. Form groups and assign subjects to groups - remember you want to form "equivalent groups":
   Random assignment
   Matching characteristics
4. Develop measurement procedures/instruments, measures of the effect.
5. Run the experiment - make pre-measurements, administer the treatment to the experimental group, make post-measurements.
6. Compute the measure of the effect.

H. Quasi-Experimental Designs - Examples (find the potential flaws here):
1. A travel bureau compares travel inquiries in 1991 and 1994 to evaluate 1992 promotion efforts.
2. To assess the effectiveness of an interpretive exhibit, visitors leaving the park are asked whether or not they saw the exhibit. The two groups are compared relative to knowledge, attitudes, etc.

TOPIC 7C. Secondary data designs

Secondary data analysis : use of data gathered by someone else for a different purpose – reanalysis of existing data. See methods links page for links to secondary sources of data about recreation & tourism

Sources:
Government agencies - e.g. population, housing & economic censuses, tax collections, traffic counts, employment, environmental quality measures, park use, …
Internal records of your organization - sales, customers, employees, budgets…
Private sector - industry associations often have data on the size and characteristics of an industry
Previous surveys - as printed reports or raw data; survey research firms sell data
Library & electronic sources - the WWW, on-line & CD-ROM literature searches, …
Previously published research - reports have data in summary form; original data are often available from the author

Issues in using secondary data:
1) data availability - know what is available & where to find it

2) relevance - data must be relevant to your problem & situation
3) accuracy - need to understand the accuracy & meaning of the data
4) sufficiency - often must supplement secondary data with primary data or judgement to completely address the problem

Since you did not collect the secondary data, it is imperative that you fully understand the meaning and accuracy of the data before you can intelligently use them. This usually requires you to know how the data were collected and by whom. Find complete documentation of the data or ask the source for details. At the Gov. Info site, choose the INFO links to see documentation of each data source.

EXAMPLES: USING SECONDARY DATA TO ESTIMATE
a) Trends – compare surveys from different years or plot time series data; many tables in Spotts' Travel & Tourism Statistical Abstract, Michigan county tourism profiles, economic time series at the BLS site, REIS data at the Gov Info Clearing House.
b) Spatial variations – gather data across spatial units, map the result.
c) Recreation participation – apply rates from national, state and local surveys to local population data from the Census; rates at NSGA and ARC web pages (Roper-Starch study), NSRE 1994-95 survey.
d) Tourism spending – Stynes estimates tourism spending by county in Michigan using a variety of secondary data and some judgement: lodging room use taxes; motel, campground and seasonal home inventories; occupancy rates by region; average spending by segment; and statewide travel counts. See my economic impact web site. Also see the Leones paper on measuring tourism activity.
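Item c) above can be sketched as simple arithmetic; the rate and population below are hypothetical stand-ins for actual survey and Census figures:

```python
# Apply a statewide participation rate to a local adult population
state_rate = 0.23        # share of adults who participate (hypothetical survey rate)
local_adults = 48_000    # county adult population (hypothetical Census figure)

est_participants = state_rate * local_adults   # estimated local participants
```

Local rates may differ from statewide ones, so such estimates carry the full uncertainty of both inputs.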

TOPIC 7D. OBSERVATION & OTHER METHODS

Unobtrusive measurement – content analysis, physical instruments, and observation. (See Webb et al., Unobtrusive Measures.)

Observation: gathering data using human observers. Observational methods range from highly structured quantitative counts to qualitative participant observation. Babbie distinguishes a continuum of approaches:

complete participant -- participant as observer -- observer as participant-- complete observer

Quantitative versions of observation employ probability sampling techniques (time or event sampling) and quantitative measures of the behaviors being observed (generally by means of a highly structured observation form). Studies may employ multiple observers and evaluate the reliability of observations by comparing independent observers. E.g. measuring use of a park or trail by counting visitors and observing their characteristics during randomly selected periods.

Qualitative forms of observation are more interpretive with the observer making field notes and interpreting what they observed.

Content analysis is a special set of techniques for analyzing documentary evidence (letters, articles, books, legal opinions, brochures, comments, TV programs, …). It is used quite widely in communication and media studies to study media and messages in human communication. Content analysis applies scientific methods to documentary evidence, making inferences via objective and systematic identification of specified characteristics of messages.

REFS: Babbie, Chapter 12. Weber, Robert Philip. 1990. Basic content analysis. Newbury Park, CA: Sage. Holsti, Ole R. 1969. Content analysis for the social sciences and humanities. Reading, Mass.: Addison-Wesley.

Applications in recreation include coding open-ended survey questions, analyzing public involvement, and analysis of societal (or individual) values, opinions and attitudes as reflected in written documents.


TOPIC 8. Data Gathering, Field Procedures and Data Entry (incomplete)

There are many tedious but important procedures involved in gathering data. These should be clearly thought out in advance, tested, and included in a study proposal.

For surveys: mailing options, follow-ups, interviewer training, …
Data entry options: CATI systems, coding and cleaning.


TOPIC 9. DATA ANALYSIS AND STATISTICS

1. Functions of statistics
a. description: summarize a set of data
b. inference: make generalizations from a sample to a population; parameter estimates, hypothesis tests

2. Types of statistics
i. descriptive statistics: describe a set of data

a. central tendency: mean, median (order statistics), mode
b. dispersion: range, variance & standard deviation
c. others: shape – skewness, kurtosis
d. EDA procedures (exploratory data analysis)

Stem & leaf display: ordered array, frequency distribution & histogram all in one.
Box and whisker plot: five-number summary – minimum, Q1, median, Q3, and maximum.
Resistant statistics: trimmed and winsorized means, midhinge, interquartile deviation.

ii. inferential statistics: make inferences from samples to populations
iii. parametric vs non-parametric statistics

a. parametric: generally assume interval scale measurements and normally distributed variables
b. nonparametric (distribution-free statistics): generally weaker assumptions – ordinal or nominal measurements, don't specify the exact form of the distribution
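The descriptive measures and the five-number summary above can be computed with Python's standard library; the data values here are made up for illustration:

```python
import statistics

data = [2, 3, 5, 7, 7, 8, 10, 12, 15, 21]

mean = statistics.mean(data)            # arithmetic mean
median = statistics.median(data)        # middle value (an order statistic)
mode = statistics.mode(data)            # most frequent value
stdev = statistics.stdev(data)          # sample standard deviation
variance = statistics.variance(data)    # sample variance

# Five-number summary (the basis of a box-and-whisker plot)
q1, q2, q3 = statistics.quantiles(data, n=4)
five_number = (min(data), q1, q2, q3, max(data))
```

A trimmed mean (a resistant statistic) can be had the same way by sorting the data and slicing a fixed share off each tail before averaging.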

3. Steps in hypothesis testing
1. Make assumptions & choose the appropriate statistic. Check the measurement scale of the variables.
2. State the null hypothesis and the alternative.
3. Select a confidence level for the test. Determine the critical region – values of the statistic for which you will reject the null hypothesis.
4. Calculate the statistic.
5. Reject or fail to reject the null hypothesis.
6. Interpret the results.

Type I error: rejecting the null hypothesis when it is true. Prob of Type I error = 1 − confidence level.

Type II error: failing to reject the null hypothesis when it is false. Power of a test = 1 − prob of a Type II error.

4. Null hypotheses for simple bivariate tests
a. Pearson correlation: rxy = 0
b. t-test: mx = my
c. One-way ANOVA: M1 = M2 = M3 = … = Mn
d. Chi-square: no relationship between X and Y. Formally, this is captured by the "expected table", which assumes cells in the X-Y table can be generated completely from row and column totals.

5. EXAMPLES OF T-TEST AND CHI SQUARE

(1) T-TEST. Tests for differences in means (or percentages) across two subgroups. Null hypothesis is mean of Group 1 = mean of group 2. This test assumes interval scale measure of dependent variable (the one you compute means for) and that the distribution in the population is normal. The generalization to more than two groups is called a one way analysis of variance and the null hypothesis is that all the subgroup means are identical. These are parametric statistics since they assume interval scale and normality.
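The t statistic behind the test in (1) can be computed by hand; this is a sketch of the pooled (equal-variance) version, with hypothetical group data:

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled two-sample t statistic and degrees of freedom.

    Assumes interval-scale data, normal populations, equal variances.
    """
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    se = math.sqrt(pooled * (1 / na + 1 / nb))                 # SE of mean difference
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return t, na + nb - 2

group1 = [12, 15, 11, 14, 13]   # e.g. trips per season, group 1 (hypothetical)
group2 = [10, 9, 12, 8, 11]     # group 2 (hypothetical)
t, df = two_sample_t(group1, group2)
```

Compare t against the critical value from a t table with df degrees of freedom, or let a statistics package report the p-value.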

(2) Chi square is a nonparametric statistic to test if there is a relationship in a contingency table, i.e. Is the row variable related to the column variable? Is there any discernible pattern in the table? Can we predict the column variable Y if we know the row variable X?

The Chi square statistic is calculated by comparing the observed table from the sample with an "expected" table derived under the null hypothesis of no relationship. If Fo denotes a cell in the observed table and Fe the corresponding cell in the expected table, then


Chi square (χ²) = Σ over all cells of (Fo − Fe)²/Fe

The cells in the expected table are computed from the row (nr ) and column (nc ) totals for the sample as follows:

Fe = nr · nc / n

CHI SQUARE TEST EXAMPLE: Suppose a sample (n=100) from a student population yields the following observed table of frequencies:

            GENDER
         Male  Female  Total
IM-USE
  Yes      20      40     60
  No       30      10     40
  Total    50      50    100

EXPECTED TABLE UNDER NULL HYPOTHESIS (NO RELATIONSHIP)

            GENDER
         Male  Female  Total
IM-USE
  Yes      30      30     60
  No       20      20     40
  Total    50      50    100

χ² = (20−30)²/30 + (40−30)²/30 + (30−20)²/20 + (10−20)²/20

= 100/30 + 100/30 + 100/20 + 100/20 = 16.67

Chi square tables report the probability of getting a Chi square value this high for a particular random sample, given that there is no relationship in the population. If doing the test by hand, you would look up the probability in a table. There are different Chi square tables depending on the number of cells in the table. Determine the number of degrees of freedom for the table as (rows − 1) × (columns − 1). In this case it is (2−1)*(2−1) = 1. The probability of obtaining a Chi square of 16.67 given no relationship is less than .001. (The last entry in my table gives 10.83 as the chi square value corresponding to a probability of .001, so 16.67 would have a smaller probability.)

If using a computer package, it will normally report both the Chi square and the probability or significance level corresponding to this value. In testing your null hypothesis, REJECT if the reported probability is less than .05 (or whatever significance level you have chosen). FAIL TO REJECT if the probability is greater than .05.

For the above example – REVIEW OF STEPS IN HYPOTHESIS TESTING:
(1) Nominal level variables, so we used Chi square.
(2) State the null hypothesis: no relationship between GENDER and IM-USE.
(3) Choose a confidence level: 95%, so alpha = .05; the critical region is χ² > 3.84 (see App E, p. A-28).
(4) Draw the sample and calculate the statistic: χ² = 16.67.
(5) 16.67 > 3.84, so inside the critical region: REJECT the null hypothesis. Alternatively, SIG = .001 on the computer printout; .001 < .05, so REJECT the null hypothesis.

Note we could have rejected null hypothesis at .001 level here.
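The chi-square computation for the IM-USE table can be reproduced with a short function (pure Python, no stats package):

```python
def chi_square(observed):
    """Chi-square statistic and df for a contingency table (list of rows).

    Expected cells are computed under the null hypothesis of no
    relationship: Fe = (row total * column total) / n.
    """
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    chi2 = sum((fo - row_totals[i] * col_totals[j] / n) ** 2
               / (row_totals[i] * col_totals[j] / n)
               for i, row in enumerate(observed)
               for j, fo in enumerate(row))
    df = (len(observed) - 1) * (len(col_totals) - 1)
    return chi2, df

observed = [[20, 40],    # IM-USE Yes: Male, Female
            [30, 10]]    # IM-USE No:  Male, Female
chi2, df = chi_square(observed)
```

With df = 1 and a .001 critical value of 10.83, the null hypothesis of no relationship is rejected. A stats library (e.g. scipy.stats.chi2_contingency) would also return the p-value directly.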


WHAT HAVE WE DONE? We have used probability theory to determine the likelihood of obtaining a contingency table with a Chi square of 16.67 or greater given that there is no relationship between gender and IM-USE. If there is no relationship (the null hypothesis is true), obtaining a table that deviates as much as the observed table does from the expected table would be very rare – a chance of less than one in 1,000. We therefore assume we didn't happen to get this rare sample, but instead that our null hypothesis must be false. Thus we conclude there is a relationship between gender and IM-USE.

The test doesn't tell us what the relationship is, but we can inspect the observed table to find out. Calculate row or column percents and inspect these. Percents below are row percents obtained by dividing each entry on a row by the row total.

Row percents:
            GENDER
         Male  Female  Total
IM-USE
  Yes     .33     .67   1.00
  No      .75     .25   1.00
  Total   .50     .50   1.00

To find the "pattern" in the table, compare the row percents for each row with the "Total" row at the bottom. Half of the sample are men, whereas only a third of IM users are male and three quarters of nonusers are male. Conclusion: men are less likely to use IM.

Column percents: divide entries in each column by the column total.
            GENDER
         Male  Female  Total
IM-USE
  Yes     .40     .80    .60
  No      .60     .20    .40
  Total  1.00    1.00   1.00

PATTERN: 40% of males use IM, compared to 80% of females. Conclude women are more likely to use IM. Note that in this case the column percents provide a clearer description of the pattern than the row percents.
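The row and column percents can be generated directly from the observed table:

```python
observed = [[20, 40],    # IM-USE Yes: Male, Female
            [30, 10]]    # IM-USE No:  Male, Female

# Row percents: divide each cell by its row total
row_pcts = [[cell / sum(row) for cell in row] for row in observed]

# Column percents: divide each cell by its column total
col_totals = [sum(col) for col in zip(*observed)]
col_pcts = [[cell / col_totals[j] for j, cell in enumerate(row)]
            for row in observed]
```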

6. BRIEF NOTES AND SAMPLE PROBLEMS

a. Measures of strength of a relationship vs a statistical test of a hypothesis. There are a number of statistics that measure how strong a relationship is, say between variable X and variable Y. These include parametric statistics like the Pearson Correlation coefficient, rank order correlation measures for ordinal data (Spearman's rho and Kendall's tau), and a host of non-parametric measures including Cramer's V, phi, Yule's Q, lambda, gamma, and others. DO NOT confuse a measure of association with a test of a hypothesis. The Chi square statistic tests a particular hypothesis. It tells you little about how strong the relationship is, only whether you can reject a hypothesis of no relationship based upon the evidence in your sample. The problem is that the size of Chi square depends on strength of relationships as well as sample size and number of cells. There are measures of association based on chi square that control for the number of cells in table and sample size. Correlation coefficients from a sample tell how strong the relationship is in the sample, not whether you can generalize this to the population. There is a test of whether a correlation coefficient is significantly different from zero that evaluates generalizability from the sample correlation to the population correlation. This tests the null hypothesis that the correlation in the population is zero.

b. Statistical significance versus practical significance. Hypothesis tests merely test how confidently we can generalize from what was found in the sample to the population we have sampled from. They assume random sampling; thus, you cannot do statistical hypothesis tests from a non-probability sample or a census. The larger the sample, the easier it is to generalize to the population. For very large samples, virtually ALL hypothesized relationships are statistically significant; for very small samples, only very strong relationships will be. What is practically significant is a quite different matter from what is statistically significant. Check how large the differences really are to judge practical significance, i.e. does the difference make a difference?


c. SOME SAMPLE/SIMPLE STATISTICAL PROBLEMS:

1. Calculate the mean, median, standard deviation, and variance from a set of data.
2. Compute Z scores to find areas under the normal distribution.

3. Find a confidence interval for the mean (confidence level given, e.g. 95%). Must estimate the standard error of the mean; a 95% CI is roughly 2 S.E.'s either side of the mean.

4. Given a confidence level, desired accuracy, and an estimate of the variance, determine the required sample size for a survey: n = Z²σ²/e², where Z is the number of standard errors associated with the confidence level, σ is an estimate of the standard deviation of the variable in the population, and e is the size of the error you can tolerate.

5. Similar problems for proportions rather than means. Simply replace the standard deviation by sqrt(p(1−p)), the standard deviation of a binomial distribution with probability p: n = Z²p(1−p)/e².

6. Chi-square test of a relationship between two nominal-scaled variables.
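Problems 3-5 above can be sketched in a few lines; the data values are hypothetical, and 1.96 is the z value for 95% confidence:

```python
import math
import statistics

# Problem 3: ~95% confidence interval for a mean (large-sample z interval)
data = [22, 25, 19, 30, 27, 24, 21, 26, 28, 23]
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(len(data))   # standard error of the mean
ci = (mean - 1.96 * se, mean + 1.96 * se)

# Problem 4: required sample size to estimate a mean, n = (Z*sigma/e)^2
def n_for_mean(z, sigma, e):
    return math.ceil((z * sigma / e) ** 2)

# Problem 5: required sample size for a proportion, n = Z^2 p(1-p) / e^2
def n_for_proportion(z, p, e):
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

n_prop = n_for_proportion(1.96, 0.5, 0.05)   # worst case p = .5, error +/-5 points
```

The familiar "n = 385" for a ±5% margin at 95% confidence drops out of the proportion formula with p = .5.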

7. Brief Summary of Multivariate Analysis Methods. SPSS procedures in capitals.

1. Linear regression: Estimate a linear relationship between a dependent variable and a set of independent variables. All must be interval scale or dichotomous (dummy variables). (See Babbie p. 437; T&H p. 619; also JLR 15(4).) Examples: estimating participation in recreation activities, cost functions, spending. REGRESSION.

2. Non-linear models: Similar to the above except for the functional form of the relationship. Gravity models, logit models, and some time series models are examples. (See Stynes & Peterson JLR 16(4) for logit; Ewing Leisure Sciences 3(1) for gravity.) Examples: similar to the above when relationships are non-linear. Gravity models are widely used in trip generation and distribution models; logit models in predicting choices. NLR.

3. Cluster analysis: A host of different methods for grouping objects based upon their similarity across several variables. (See Romesburg JLR 11(2) & the book review in the same issue.) Examples: used frequently to form market segments or otherwise group cases; see Michigan Ski Market Segmentation Bulletin #391 for a good example. CLUSTER, QUICK CLUSTER.

4. Factor analysis. Method for reducing a large number of variables into a smaller number of independent (orthogonal) dimensions or factors. (See Kass & Tinsley JLR 11(2); Babbie p 444, T&H pp 627). Examples: Used in theory development (e.g. What are the underlying dimensions of leisure attitudes?) and data reduction (reduce number of independent variables to smaller set). FACTOR.

5. Discriminant analysis: Predicts group membership using linear "discriminant" functions. This is a variant of linear regression suited to predicting a nominal dependent variable. (See JLR 15(4) ; T&H pp 625). Examples: Predict whether an individual will buy a sail, power, or pontoon boat based upon demographics and socioeconomics. DISCRIMINANT

6. Analysis of Variance (ANOVA): To identify sources of variation in a dependent variable across one or more independent variables. Tests null hypothesis of no difference in means of dependent variable for three or more subgroups (levels or categories of independent variable). The basic statistical analysis technique for experimental designs. (See T&H pp 573, 598). Multivariate analysis of variance (MANOVA) is the extension to more complex designs. (See JLR 15(4)). ANOVA, MANOVA.

7. Multidimensional scaling (MDS): Refers to a number of methods for forming scales and identifying the structure (dimensions) of attitudes. Differs from factor analysis in employing non-linear methods. MDS can be based on measures of similarities between objects. Applications in recreation & tourism: mapping images of parks or travel destinations; identifying dimensions of leisure attitudes. RELIABILITY. (See T&H p. 376.)

8. Others: path analysis (LISREL) (Babbie p. 441), canonical correlation, conjoint analysis (see T&H p. 359, App C), multiple classification analysis, time series analysis (Babbie p. 443), log-linear analysis (LOGLINEAR, HILOGLINEAR), linear programming, simulation.
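For the simplest case of method 1 (one predictor), the least-squares fit behind REGRESSION can be written out by hand; the income/trips data below are hypothetical:

```python
import statistics

def simple_ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # cross-products
    sxx = sum((xi - mx) ** 2 for xi in x)                     # sum of squares
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

income = [20, 30, 40, 50, 60]   # household income, $000s (hypothetical)
trips = [2, 3, 5, 6, 9]         # trips per year (hypothetical)
a, b = simple_ols(income, trips)
predicted = a + b * 45          # predicted trips at income = 45
```

With several predictors the same idea is solved in matrix form, which is what a package's REGRESSION procedure does.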


Topic 10. Research Ethics: acceptable methods & practices

1. Voluntary participation: informed consent
2. No harm to subjects; deception; right to privacy
3. Anonymity & confidentiality
4. Open & honest reporting
5. Client confidentiality
6. Don't use research as a guise for selling something or for other purposes

Examples in Babbie: Trouble in Tearoom & Human Obedience Study

Political and social issues: substance & use of research

1. Objectivity of the scientist
2. Client-researcher relationships
3. Understanding the range of stakeholders and the likely uses/misuses of research results
4. Scientific truth & knowledge as goal vs other practical matters
5. Social & political factors particularly important in evaluation research
6. Keeping clients & stakeholders informed
7. Political correctness & science
8. Researcher & organizational reputations
9. Relationships within and outside the organization

Topic 11. Research Writing & Presentations

1. Research style is more impersonal, objective, concise and to the point; avoid embellishments.
2. A complex subject requires clear organization, careful definition of terms, and effective use of tables, graphs, charts.
3. Use a standard style guide, e.g. APA or the Chicago Manual of Style. Follow the style of the outlet the paper or talk will appear in.
4. Literature review, citing others' work, plagiarism.
5. Major headings of a research article or report: ABSTRACT or EXEC SUMMARY, INTRO, PROBLEM, OBJECTIVES, LITERATURE REVIEW, METHODS, RESULTS, DISCUSSION or IMPLICATIONS, LIMITATIONS & SUGGESTIONS FOR FURTHER RESEARCH.

6. Typical sub-headings in the Methods section: study population; sampling procedures; definition of concepts; measurement; special sections for questionnaire design and experimental design where appropriate; field procedures including pretesting, data gathering, follow-ups, etc.; plan of analysis; hypotheses and hypothesis testing procedures if appropriate.

7. General guidelines are the same as for any communication:
a. Know your subject
b. Know your audience
c. Choose the most effective way of reaching the audience
d. Multiple outlets and formats for most research
e. Rewrite or practice; have the paper/talk reviewed; rewrite again
f. Learn to edit/criticize your own work
g. Use the tools you have available – e.g. spell checkers, enlargement for overheads
h. Make effective use of tables, diagrams, charts, bulleted lists, etc.


Topic 12. Supplemental Material

APPLYING RESEARCH

In some cases one has a problem in search of a solution; in other cases, a research result in search of an application; in others, some of both.

PROBLEM TO SOLUTION:

1. Know what to look for. First must define what the problems/questions are. Identify key parts of problem. General and specific information needs.

* If you have a specific information need, consider how you might generalize the question. You are more likely to find research relevant to a more generally stated question. Questions that are very specific, particularly to time and place, seldom have answers in existing research. The information would either exist locally (in your organization or a closely related place) or you would need to begin your own study.

Example: Need to know where my visitors come from. Possible answers: a) your organization's records (maybe registration forms or a visitor log book); b) design a study to find out; or c) research at nearby or similar facilities that measures visitor origins or how far visitors travel for this type of facility or activity.

2. Where to find it? Know sources of research information/data, internal and external. Be familiar with recent and on-going studies in the organization/area, related organizations, or nearby areas. Know who to contact about research relevant to the problem – people in the organization, in the area, elsewhere.

3. How to evaluate it? Is it good research? Evaluate the research report/article. See research review/evaluation checklists.

4. Does it apply to my situation? This involves comparing the study situation with your own. Generalizability over:

a) Time: Have things changed substantially since the study was done, or is this a reasonably stable phenomenon? Can I adjust for time using simple indices, e.g. a price or cost index?

b) Space: Is the site/area similar in respects that would alter the results?
c) Population: Is the study population similar to the one to which I intend to apply the results?
d) Situation/setting: Is the situation/setting similar? Identify any other variables that might change the results.

Are these variables similar enough between the study and the application to validly apply the results? Does the research identify relationships or models that let me adjust the results to my situation? What assumptions underlie these models and do they hold in this situation?

5. How to apply it.
a. Understand the research and its limits.
b. Understand the situation.
c. Consider factors not addressed in the research that might yield unintended consequences.
d. Will changes/decisions be politically acceptable and accepted by all relevant stakeholders?
e. Know how to best implement different types of decisions within your organization/situation/market.
f. Establish an implementation plan.
g. Consider alternatives. Research seldom "makes" a decision; it may suggest several alternatives.


EVALUATING RECREATION SURVEYS - A CHECKLIST

1. CLEAR PROBLEM AND OBJECTIVES: The study should have a clearly defined problem that leads to specific study objectives. The objectives should identify the questions to be answered by the survey. There should be methods and results for each stated objective.

a. What is the general topic? Motivation or reason for the study?
b. Who conducted the study?
c. Is the purpose primarily exploratory, descriptive, explanatory, or predictive? If this is an evaluation study, what is being evaluated using what criteria?

2. APPROPRIATE METHODS: The methods should be appropriate for the study's purposes in light of cost and time constraints.

a. STUDY POPULATION should be defined in terms of content, units, extent, and time. Does the sampling frame represent/cover this population?

b. SAMPLE. If a sample is drawn, probability samples are needed to make inferences from the sample to the study population. If inference is not needed, non-probability samples may be appropriate.

1. Evaluate if the sample is representative of the population
a. by comparisons with known characteristics of the population
b. by evaluating sampling methods
c. by checking for non-response bias

2. Is the sample size adequate? Estimate sampling error by using tables of sampling error for given sample sizes. Are confidence intervals or estimates of sampling error reported? Watch for small samples if the study reports results for population subgroups.

3. What is the response rate? Does the author address possible non-response bias? Is it likely to be large or small? Did some groups have a higher response than others?

4. Were sampling procedures carried out properly? What time and place do the data represent?

c. MEASUREMENT
1. VARIABLES: Are the variables appropriate to the questions/objectives and have they been operationally defined? Has the study ignored any important variables?
2. Evaluate the reliability of measures.
3. Evaluate the validity of measures. Start with content or face validity, then look for consistency among different measures or with similar studies.
4. How were specific questions worded? Any possible question sequencing effects?
5. Is a telephone, personal interview, or self-administered approach used? Is this appropriate? What possible errors might occur from data gathering procedures?

d. ANALYSIS. Are appropriate analysis procedures used and are results reported clearly?
1. Are statistics appropriate to the measurement scales?
2. Are all tables and figures clear and easy to follow? Does the text match the tables?
3. Is the analysis appropriate to the study objectives? Does the report present some basic descriptive results before launching into more complex analyses?
4. If hypothesis testing is done, does the author distinguish between statistical and practical significance (importance)? Watch sample sizes in hypothesis tests.
5. Have any important variables been ignored in drawing conclusions about relationships between variables? How strong are the relationships?
e. REPEATABILITY: Are the methods presented in enough detail that you could repeat this study?

3. CONCLUSIONS AND IMPLICATIONS
a. Does the author draw meaningful implications for managers, researchers, or whomever the intended audience may be?
b. Do the conclusions stay within the empirical results or does the author go beyond the findings?
c. How generalizable are the results and to what kinds of situations?
d. Are study limitations, problems, and errors clearly noted and discussed?
e. Is the report well written, objective, and at the level of the intended audience?


Steps in a Research or Evaluation Study (Alternative Handout)

1. Identify Purpose and Constraints
a. define purpose
b. clarify purposes and information needs with decision-makers/clients
c. determine time and cost constraints and error tolerance
d. obtain agreement on desired information and study purposes

2. Develop Research Plan (includes steps 3-6)
a. identify & evaluate alternatives
b. identify available/existing information (literature & data)
c. clarify research objectives
d. write research proposal
e. have proposal reviewed and evaluated, including human subjects review
f. final plan/proposal

3. Choose Overall Approach
a. review alternatives & select approach(es)
b. obtain agreement and plan steps 4-6

4. Develop Sampling Plan
a. define study population
b. identify sampling frame (if applicable)
c. determine sample size, taking into account response rates
d. evaluate alternative sampling designs & choose the best
e. draw the sample

5. Develop Measurement Instruments and Procedures
a. identify information needed
b. develop operational measures for each variable
c. design questions or choose measurement instruments
d. assemble questionnaire (if applicable)
e. have instruments reviewed & evaluated, including pre-testing
f. revise and repeat evaluation of instruments as needed

6. Plan the Analysis
a. identify planned analyses, prepare "dummy" tables
b. choose statistical procedures/tests
c. evaluate if the intended analysis will meet the study objectives
d. review sample and measurement instruments for compatibility with the analysis (e.g. measurement scales suited to chosen statistics, adequate samples for subgroup analyses & desired accuracy)

7. Administer the Study

a. assemble personnel and materials, assign responsibilities
b. carry out data collection procedures including follow-ups
c. manage personnel and data
d. solve problems & make adjustments to procedures as needed

8. Conduct Data Processing and Analysis
a. develop codebook
b. data entry and cleaning, selected data-checking analyses
c. carry out planned analysis
d. prepare tables and figures
e. double-check results for consistency
f. data file documentation

9. Prepare Reports
a. identify audience(s) and plan report(s)
b. develop outlines for each report
c. assemble tables and figures
d. prepare text
e. reviews, editing & rewriting
f. final proofing
g. printing and distribution
h. oral presentations

10. Document the Study and File Key Reports and Materials
a. document & file final reports, codebooks, and copies of data files
b. assure confidentiality/anonymity of individual subjects' records
