Value and Applicability of Re-Sampling Techniques
by
Edgardo Donovan
RES 601 – Dr. Roger Rensvold
Module 5 – Case Analysis
Monday, September 15, 2008
Despite the technological advances that have made re-sampling tools more accessible to the general public, formal statistical methodology remains the prevailing technique throughout the majority of research projects, given re-sampling's limited applicability to small, well-defined, unambiguous sets of data.
Whereas a typical research project draws a single random sample from a large body of data in an attempt to infer characteristics that apply across its spectrum, re-sampling draws numerous repeated samples from the same, usually small, body of data in an attempt to characterize it universally. Only recently has this become easy to do: re-sampling can involve hundreds of thousands of calculations, and it was far less prevalent when personal computing technology was in its infancy and still rather expensive.
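To make the mechanics concrete, the following is a minimal sketch in Python (the twelve data values are invented for illustration) of the percentile bootstrap, one common re-sampling procedure: the same small sample is re-drawn with replacement thousands of times, and the spread of the re-computed statistic yields a confidence interval.

```python
import random

# Hypothetical small sample (n = 12) -- the kind of data set
# re-sampling proponents target.
data = [14.1, 15.3, 12.8, 16.0, 13.5, 14.9,
        15.7, 13.2, 14.4, 16.3, 12.9, 15.1]

def bootstrap_ci(sample, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = []
    for _ in range(n_resamples):
        # Re-draw, with replacement, a sample of the same size.
        resample = [random.choice(sample) for _ in sample]
        means.append(sum(resample) / len(resample))
    means.sort()
    return (means[int((alpha / 2) * n_resamples)],
            means[int((1 - alpha / 2) * n_resamples)])

low, high = bootstrap_ci(data)
print(f"95% bootstrap CI for the mean: ({low:.2f}, {high:.2f})")
```

Even this toy example performs 120,000 random draws (10,000 resamples of 12 values each), which illustrates why the technique only became practical once computing power was cheap.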
Organizations promoting re-sampling, such as those represented at Resample.com, believe that re-sampling eliminates much of the complexity inherent in traditional research methods:

"Complexity is the disease. Resampling (drawing repeated samples from the given data, or population suggested by the data) is a proven cure. Bootstrap, permutation, and other computer-intensive procedures have revolutionized statistics. Resampling is now the method of choice for confidence limits, hypothesis tests, and other everyday inferential problems." (Resample.com, 2008)

They argue that extending a series of parametric and non-parametric tests from a small sample in order to understand greater phenomena is inferior to re-sampling, which enables analysis of the totality of most kinds of data. What is troubling is that one cannot find any analysis on their web site examining the perceived advantages and disadvantages of the two methodologies. Rather than explain why re-sampling is superior beyond the claim quoted above, re-sampling proponents go as far as stating that the growing stream of scientific articles using re-sampling techniques, both as a basic tool and for difficult applications, testifies to re-sampling's value (Resample.com).
Re-sampling has become increasingly popular as a tool for testing mediation because it does not require the normality assumption to be met, and because it can be used effectively with small sample sizes of under 20 units (Wikipedia). One of the challenges of traditional research, which emphasizes formal hypotheses and significance testing of null hypotheses, is that extreme data variances are in most cases undesirable and can detract from the applicability of the overall research model. Re-sampling smooths out data variance because it resamples the same groups of data hundreds or even thousands of times; the end result is a more streamlined representation of results. By eliminating the need to ensure that prospective data sets confine themselves to an acceptable range of results, re-sampling mediation renders research less complex. This can be an attractive approach for those seeking to define a full range of possible results accurately and universally.
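As an illustration of the mediation use specifically, the following Python sketch (with invented variables X, M, and Y, and the indirect effect defined as the product of the two regression slopes, one common formulation) bootstraps a confidence interval for the indirect effect without assuming its distribution is normal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small data set (n = 18): X -> M -> Y.
n = 18
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)
Y = 0.5 * M + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    """Product-of-paths (a*b) indirect effect from two least-squares fits."""
    a = np.polyfit(x, m, 1)[0]                        # path a: slope of M on X
    design = np.column_stack([m, x, np.ones_like(x)])  # Y regressed on M and X
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]   # path b: slope of Y on M
    return a * b

# Percentile bootstrap: resample whole cases with replacement.
estimates = []
for _ in range(5_000):
    idx = rng.integers(0, n, size=n)
    estimates.append(indirect_effect(X[idx], M[idx], Y[idx]))

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: ({lo:.3f}, {hi:.3f})")
# If the interval excludes zero, mediation is supported -- with no
# normality assumption imposed on the indirect effect.
```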
Mankind has always longed to make sense of the surrounding world and attempted
to categorize social and natural phenomena within a series of artificial constructs based on
an array of logical formulae. The beauty of formal hypotheses and significance testing of
null hypotheses is that it does not attempt to define the totality of an environment but
attempts to derive behavior patterns and predispositions through the thorough analysis of
mostly random samples. Some research confines itself to better understanding certain phenomena within very specific contexts and retains its validity for many years. Other research, which attempts to universally define predictable dynamics at both a micro and a macro level with little to no context, is usually less successful.
Unfortunately, despite its practical applications in a few areas, re-sampling is usually employed in pursuit of the latter objective. The main problem with re-sampling is that it is practical only in a few mono-dimensional areas where data set behavior patterns can be universally defined within a handful of parameters. Rather than shed further light on the infinite complexity of the world around us, re-sampling proponents hold that complexity is the problem and must be circumvented (Resample.com).
Chong Ho Yu, in his 2003 paper “Resampling methods: concepts, applications, and justification,” states that the obstacles of computing resources and mathematical logic have been removed, and that perhaps researchers will now pay more attention to the philosophical justification of re-sampling. In making his case he brings up Monte Carlo simulation, in which researchers make up data and draw conclusions based on many possible scenarios. The name “Monte Carlo” comes from an analogy to the gambling houses on the French Riviera. Years ago, gamblers studied how they could maximize their chances of winning by using simulations to check the probability of occurrence of each possible case in games of chance. The forerunner of gaming statistical analysis geared towards improving players' success was pioneered by Ed Thorp in his acclaimed 1962 book “Beat the Dealer.”
He devised a somewhat successful statistical methodology based on re-sampling designed towards that end. The contextual basis of his method was the game of blackjack, which provided a small, contained statistical data set in the form of one or two un-shuffled decks of cards. His methodology provided “hit” or “stay” indicators based on which cards had already been dealt and the probability of desirable cards appearing. This method, also known as card counting, was heralded as a breakthrough, but it ceased to work once casinos caught on and introduced three or more continuously shuffled decks into the game. The added level of complexity eliminated the card counter's previous 1% advantage and turned the odds back overwhelmingly in favor of the house. Other experts added to the critique of re-sampling vis-à-vis card counting by pointing to the chance of a three-of-a-kind hand: they recognized that the event does not happen very often, and that it would take many hands from an un-shuffled deck of cards to estimate its probability (Simon).
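The three-of-a-kind point can be made concrete with a small Monte Carlo simulation (a sketch in Python; the five-card hand is an assumption, since the essay does not specify the game). Because the event is rare, a large number of simulated deals is needed before the estimate stabilizes:

```python
import random
from collections import Counter

# A standard 52-card deck as (rank, suit) pairs.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]

def is_three_of_a_kind(hand):
    """Exactly three cards of one rank (excludes full houses and quads)."""
    rank_counts = sorted(Counter(rank for rank, _ in hand).values())
    return rank_counts == [1, 1, 3]

trials = 200_000
hits = sum(is_three_of_a_kind(random.sample(deck, 5)) for _ in range(trials))
print(f"Estimated P(three of a kind): {hits / trials:.4f}")
```

With 200,000 simulated deals the estimate settles near the exact value of roughly 0.0211; with only a few hundred deals it wanders badly, which is precisely the objection attributed to Simon.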
Once again we see that complexity is the chief enemy of the re-sampling technique. Re-sampling may work well in small, mono-dimensional, controlled data-set environments, but it loses its efficacy once multidimensional or “complex” variables enter the equation. The attempt to define multidimensional, complex phenomena is the basis of most scientific research, and it is hard to imagine succeeding in that endeavor while choosing to ignore complexity.
Despite the many weaknesses of the re-sampling methodology, one of the reasons for its continued, if limited, popularity is that it appeals to the facet of the human psyche that longs to render the surrounding world less mysterious, more discernible, and less unpredictable, so that it can be managed more effectively (Levin). However, one can pursue the same end by operationalizing concepts into qualitative variables, extending that process into quantitative data-gathering, and conducting null-hypothesis analysis, which conveys a sense of order to what might otherwise seem to be abstract ideas or theories.
There may be a more promising future for re-sampling in the area of game theory, an accepted technique for measuring the likelihood of outcomes in mono-dimensional environments. There are potential extensions of game-theory techniques based on re-sampling in the areas of corporate risk management and military war gaming. Although both of these still involve complex environments, re-sampling can be used to better define gain/loss propositions as long as the analysis is confined to a highly contextualized micro level. For example, a military planner might war-game a specific number of similarly modeled aircraft without taking into account other impacting factors such as air superiority, anti-aircraft resources, weather variances, proximity to support bases, pilot ability, and so on. In the investment world, one could resample scenarios based on the past performance of stocks in relation to mono-dimensional variations in inflation, interest rates, and the like, as sketched below.
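A minimal sketch of that investment scenario in Python (with invented monthly returns; a real application would draw on historical data) resamples past returns with replacement to generate a distribution of possible one-year outcomes, while deliberately holding every other factor fixed:

```python
import random

# Hypothetical monthly returns for one stock (0.02 means +2%).
monthly_returns = [0.021, -0.013, 0.034, 0.008, -0.027, 0.015,
                   0.042, -0.005, 0.011, -0.019, 0.026, 0.009]

def simulate_year(returns):
    """Compound twelve months drawn at random, with replacement."""
    growth = 1.0
    for _ in range(12):
        growth *= 1.0 + random.choice(returns)
    return growth - 1.0  # simulated annual return

outcomes = sorted(simulate_year(monthly_returns) for _ in range(10_000))
print(f"Median simulated annual return: {outcomes[len(outcomes) // 2]:+.1%}")
print(f"5th-percentile downside: {outcomes[int(0.05 * len(outcomes))]:+.1%}")
```

As the paragraph cautions, such a sketch is only defensible at this highly contextualized micro level: it treats the return history as the whole universe of possibilities and ignores every multidimensional factor that a real market embodies.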
Despite the technological advances that have made re-sampling tools more accessible to the general public, formal statistical methodology remains the prevailing technique throughout the majority of research projects, given re-sampling's limited applicability to small, well-defined, unambiguous sets of data.
Bibliography
Anonymous. (2008). Bootstrapping (statistics). Retrieved 11 August 2008 from http://en.wikipedia.org/wiki/Bootstrapping_(statistics)

Anonymous. (2008). Resampling stats. Retrieved 11 August 2008 from http://www.resample.com/

Howell, David. (2008). Resampling statistics: Randomization and the bootstrap. University of Vermont.

Levin, Joel. (1998). What if there were no more bickering about statistical significance tests? Research in the Schools, 5(2), 43-53.

Simon, Julian. (2008). Why the formal method in statistics is usually theoretically inferior. Retrieved 11 August 2008 from http://www.graduate.tuiu.com/

Yu, Chong Ho. (2003). Resampling methods: Concepts, applications, and justification. Practical Assessment, Research & Evaluation, 8(19). Retrieved 10 September 2008 from http://PAREonline.net/getvn.asp?v=8&n=19