Management Programme

ASSIGNMENT JULY-DECEMBER 2014

Course Code: MS-08
Course Title: Quantitative Analysis for Managerial Applications
Assignment Code: MS-08/TMA/SEM-II/2014
Assignment Coverage: All Blocks

MBA Help Material Provided by Unique Tech Publication

Unauthorized copying, selling, and redistribution of this content is prohibited. This material is provided for your reference only; please do not share it with others.

To know the price of this assignment and for more inquiries, visit:
http://ignousolvedassignmentsmba.blogspot.in/

Dharmendra Kumar Singh
[email protected]

School of Management Studies
INDIRA GANDHI NATIONAL OPEN UNIVERSITY
MAIDAN GARHI, NEW DELHI – 110 068



  • Q1.What are quartiles, deciles, and percentiles? State the general equation of computing the ith

    quartile, jth decile, and kth percentile.

    Given a data set that has been ordered in increasing magnitude, the median, first quartile, and third quartile can be used to split the data into four pieces. The first quartile is the point below which one fourth of the data lies. The median is located exactly in the middle of the data set, with half of the data below it. The third quartile is the point below which three fourths of the data lie.

    We can generalize the idea of a quartile to that of a percentile. The nth percentile of a set of data is the point below which n% of the data lies.

    An Example

    A class of 20 students had the following scores on their most recent test: 75, 77, 78, 78, 80, 81, 81, 82, 83, 84, 84, 84, 85, 87, 87, 88, 88, 88, 89, 90. The score of 80 has four scores below it. Since 4/20 = 20%, 80 is the 20th percentile of the class. The score of 90 has 19 scores below it. Since 19/20 = 95%, 90 corresponds to the 95th percentile of the class.
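The two percentile ranks above can be verified with a short Python helper (percentile_rank is our own name, not a library function):

```python
# Percentile rank: the percentage of scores strictly below a given score.
scores = [75, 77, 78, 78, 80, 81, 81, 82, 83,
          84, 84, 84, 85, 87, 87, 88, 88, 88, 89, 90]

def percentile_rank(data, value):
    """Return the percentage of observations strictly below `value`."""
    below = sum(1 for x in data if x < value)
    return 100 * below / len(data)

print(percentile_rank(scores, 80))  # 20.0 -> 80 is the 20th percentile
print(percentile_rank(scores, 90))  # 95.0 -> 90 is the 95th percentile
```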

    Percentile vs. Percentage

    Be careful with the words percentile and percentage. A percentage score indicates the proportion of a test

    that someone has completed correctly. A percentile score tells us what percent of other scores are less than

    the data point we are investigating. As seen in the above example these numbers are rarely the same.

    Quartiles and Percentiles

    The median, first quartile and third quartile can all be stated in terms of percentiles. Since half of the data

    is less than the median, and one half is equal to 50%, we could call the median the 50th percentile. One

    fourth is equal to 25%, and so the first quartile is the 25th percentile. Similarly, the third quartile is the same

    as the 75th percentile.

    Deciles and Percentiles

    Besides quartiles, a fairly common way to arrange a set of data is by deciles. A decile has the same root

    word as decimal and so it makes sense that each decile serves as a demarcation of 10% of a set of data.

    This means that the first decile is the 10th percentile. The second decile is the 20th percentile. Deciles

    provide a way to split a data set into more pieces than quartiles without splitting it into 100 pieces as with

    percentiles.

    Applications of Percentiles

    Percentile scores have a variety of uses. Anytime that a set of data needs to be broken into digestible

    chunks, percentiles are helpful. One common application of percentiles is for use with tests, such as the

    SAT, to serve as a basis of comparison for those who took the test. In the above example, a score of 80 initially sounds good. However, it does not sound as impressive once we find out that it is only the 20th percentile: just 20% of the class scored less than 80.

    Another example of percentiles being used is in children's growth charts. In addition to a physical height

    or weight measurement, pediatricians typically state this in terms of a percentile score to compare the

    height or weight of a given child to all children of that age.

    General equation for computing the ith quartile, jth decile, and kth percentile.

    Place your n numbers in order (rank order, smallest to largest).

    The median is the (n+1)/2-th value. If n is odd, the median is the middle value of the ordered data; if n is even, the median is usually taken as the arithmetic mean of the two middle values of the ordered data.

    The ith quartile, Q(i), i = 1, 2, or 3, is given by the i(n+1)/4-th value. It may be necessary to interpolate between successive values.

    The jth decile, D(j), j = 1, 2, ..., 9, is given by the j(n+1)/10-th value. It may be necessary to interpolate between successive values.

    The kth percentile, P(k), k = 1, 2, ..., 99, is given by the k(n+1)/100-th value. It may be necessary to interpolate between successive values.

    The interpolation is usually taken to be a simple linear interpolation between the two points.
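The positional rules above, including the linear interpolation, can be sketched in Python (all helper names are ours):

```python
# Quantiles by the (n+1) positional rule, with simple linear interpolation.
def quantile_value(data, position):
    """Return the value at a (1-based) fractional position in sorted data."""
    data = sorted(data)
    if position <= 1:
        return data[0]
    if position >= len(data):
        return data[-1]
    lower = int(position)           # integer part, 1-based index
    frac = position - lower        # fractional part drives the interpolation
    return data[lower - 1] + frac * (data[lower] - data[lower - 1])

def quartile(data, i):    # i = 1, 2, or 3: position i(n+1)/4
    return quantile_value(data, i * (len(data) + 1) / 4)

def decile(data, j):      # j = 1..9: position j(n+1)/10
    return quantile_value(data, j * (len(data) + 1) / 10)

def percentile(data, k):  # k = 1..99: position k(n+1)/100
    return quantile_value(data, k * (len(data) + 1) / 100)

data = [2, 4, 4, 5, 6, 7, 8]   # n = 7
print(quartile(data, 2))       # position (7+1)/2 = 4 -> median = 5
print(quartile(data, 1))       # position 8/4 = 2 -> 4
print(quartile(data, 3))       # position 24/4 = 6 -> 7
```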

    Q2. In a railway reservation office, two clerks are engaged in checking reservation forms. On an

    average, the first clerk (A1) checks 55 per cent of the forms, while the second (A2) checks the

    remaining. While A1 has an error rate of 0.03 that of A2 is 0.02. A reservation form is selected at

    random from the total number of forms checked during a day and is discovered to have an error.

    Find the probabilities that it was checked by A1, and A2, respectively.

    Ans: Let E denote the event that the selected form contains an error. By Bayes' theorem,

    P(A1 | E) = P(A1) P(E | A1) / [P(A1) P(E | A1) + P(A2) P(E | A2)]
              = (0.55)(0.03) / [(0.55)(0.03) + (0.45)(0.02)]
              = 0.0165 / 0.0255 ≈ 0.647

    P(A2 | E) = (0.45)(0.02) / 0.0255 = 0.009 / 0.0255 ≈ 0.353
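A Bayes' theorem computation for this problem can be sketched in Python (variable names are ours):

```python
# Bayes' theorem for the reservation-form problem.
p_a1, p_a2 = 0.55, 0.45          # prior: share of forms checked by each clerk
p_err_a1, p_err_a2 = 0.03, 0.02  # error rates of A1 and A2

p_error = p_a1 * p_err_a1 + p_a2 * p_err_a2  # total probability of an error
post_a1 = p_a1 * p_err_a1 / p_error          # P(A1 | error)
post_a2 = p_a2 * p_err_a2 / p_error          # P(A2 | error)

print(round(post_a1, 3))  # 0.647
print(round(post_a2, 3))  # 0.353
```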

  • Q3.The weekly wages of 2000 workmen are normally distributed with mean wage of Rs 70 and wage

    standard deviation of Rs 5. Estimate the number of workers whose weekly wages are

    a.between Rs 70 and Rs 71

    b.between Rs 69 and Rs 73

    c.more than Rs 72, and

    d.less than Rs 65

    Ans: Standardizing with z = (x − 70)/5 and using the standard normal table:

    a. P(70 < X < 71) = P(0 < z < 0.2) ≈ 0.0793, so about 0.0793 × 2000 ≈ 159 workers
    b. P(69 < X < 73) = P(−0.2 < z < 0.6) ≈ 0.0793 + 0.2257 = 0.3050, so about 610 workers
    c. P(X > 72) = P(z > 0.4) ≈ 0.5 − 0.1554 = 0.3446, so about 689 workers
    d. P(X < 65) = P(z < −1) ≈ 0.5 − 0.3413 = 0.1587, so about 317 workers
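The four counts can be computed with the standard normal CDF in Python (using math.erf; the function and variable names are ours):

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n, mu, sigma = 2000, 70, 5
prob = lambda lo, hi: phi((hi - mu) / sigma) - phi((lo - mu) / sigma)

print(round(n * prob(70, 71)))                   # (a) about 159 workers
print(round(n * prob(69, 73)))                   # (b) about 610 workers
print(round(n * (1 - phi((72 - mu) / sigma))))   # (c) about 689 workers
print(round(n * phi((65 - mu) / sigma)))         # (d) about 317 workers
```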

  • Q4.A research organization claims that the monthly wages of industrial workers in district X

    exceeds that of those in district Y by more than Rs 150. Two different samples drawn independently

    from the two districts yielded the following results:

    District X: x̄1 = 648, s1² = 120, and n1 = 100

    District Y: x̄2 = 495, s2² = 140, and n2 = 90

    Verify at 0.05 level of significance whether the sample results support the claim of the organization.

    Ans: Test H0: μ1 − μ2 = 150 against H1: μ1 − μ2 > 150 (one-tailed). With large samples,

    z = [(x̄1 − x̄2) − 150] / √(s1²/n1 + s2²/n2) = (153 − 150) / √(120/100 + 140/90) = 3 / 1.66 ≈ 1.81

    Since 1.81 > 1.645, the critical value at the 0.05 level (one-tailed), H0 is rejected; the sample results support the organization's claim.
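A sketch of the test statistic in Python (we read s12 and s22 in the problem as the sample variances s1² and s2²; variable names are ours):

```python
import math

# One-tailed large-sample z-test of H0: mu_x - mu_y = 150 vs H1: mu_x - mu_y > 150.
x1, var1, n1 = 648, 120, 100   # District X: mean, variance, sample size
x2, var2, n2 = 495, 140, 90    # District Y

se = math.sqrt(var1 / n1 + var2 / n2)   # standard error of the mean difference
z = ((x1 - x2) - 150) / se

print(round(z, 2))   # 1.81
print(z > 1.645)     # True -> reject H0 at the 0.05 level (one-tailed)
```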

  • Q5. What do you mean by decomposition of a time series? State the essential characteristics of the

    additive and multiplicative models of time series analysis.

    Ans:

    A time series is a collection of observations of well-defined data items obtained through repeated

    measurements over time. For example, measuring the value of retail sales each month of the year would

    comprise a time series. This is because sales revenue is well defined, and consistently measured at equally

    spaced intervals. Data collected irregularly or only once are not time series.

    An observed time series can be decomposed into three components: the trend (long term direction), the

    seasonal (systematic, calendar related movements) and the irregular (unsystematic, short term

    fluctuations).

    The decomposition of time series is a statistical method that deconstructs a time series into notional

    components. There are two principal types of decomposition which are outlined below.

    Decomposition based on rates of change

    This is an important technique for all types of time series analysis, especially for seasonal adjustment. It

    seeks to construct, from an observed time series, a number of component series (that could be used to

    reconstruct the original by additions or multiplications) where each of these has a certain characteristic or

    type of behaviour. For example, time series are usually decomposed into:

    the Trend Component that reflects the long term progression of the series (secular variation)

    the Cyclical Component that describes repeated but non-periodic fluctuations

    the Seasonal Component reflecting seasonality (seasonal variation)

    the Irregular Component (or "noise") that describes random, irregular influences. It represents

    the residuals of the time series after the other components have been removed.
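As an illustration of these components, a minimal additive decomposition can be sketched in Python. This mirrors the classical moving-average method rather than any particular software package; the toy quarterly data and all names are our own:

```python
# Minimal additive decomposition: trend via centred moving average,
# seasonal component via per-quarter averages of the detrended series.
series = [10, 14, 8, 12, 11, 15, 9, 13, 12, 16, 10, 14]  # 3 years, quarterly
p = 4  # period length (quarters per year)

# Centred moving average of even order p: average of two adjacent p-term means.
trend = [None] * len(series)
for t in range(p // 2, len(series) - p // 2):
    first = sum(series[t - p // 2:t + p // 2]) / p
    second = sum(series[t - p // 2 + 1:t + p // 2 + 1]) / p
    trend[t] = (first + second) / 2

# Seasonal component: mean of (observation - trend) for each quarter.
seasonal = []
for q in range(p):
    devs = [series[t] - trend[t]
            for t in range(q, len(series), p) if trend[t] is not None]
    seasonal.append(sum(devs) / len(devs))

print([round(s, 2) for s in seasonal])  # [-0.62, 3.12, -3.12, 0.62]
```

The irregular component would then be what remains after subtracting both the trend and the seasonal profile from the observations.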

    Decomposition based on predictability

    The theory of time series analysis makes use of the idea of decomposing a time series into deterministic and non-deterministic components (or predictable and unpredictable components).

    Examples

    Kendall shows an example of a decomposition into smooth, seasonal and irregular factors for a set of data

    containing values of the monthly aircraft miles flown by UK airlines.[2]

    Software

    An example of statistical software for this type of decomposition is the program BV4.1 that is based on the

    so-called Berlin procedure.

    Additive and multiplicative seasonality. Many time series data follow recurring seasonal patterns. For

    example, annual sales of toys will probably peak in the months of November and December, and perhaps

    during the summer (with a much smaller peak) when children are on their summer break. This pattern will likely repeat every year; however, the relative amount of increase in sales during December may slowly change from year to year. Thus, it may be useful to smooth the seasonal component independently with an extra parameter, usually denoted δ (delta). Seasonal components can be additive in nature or

    multiplicative. For example, during the month of December the sales for a particular toy may increase by 1

    million dollars every year. Thus, we could add to our forecasts for every December the amount of 1

    million dollars (over the respective annual average) to account for this seasonal fluctuation. In this case,

    the seasonality is additive. Alternatively, during the month of December the sales for a particular toy may

    increase by 40%, that is, increase by a factor of 1.4. Thus, when the sales for the toy are generally weak, the absolute (dollar) increase in sales during December will be relatively weak (but the percentage will be constant); if the sales of the toy are strong, then the absolute (dollar) increase in sales will be

    proportionately greater. Again, in this case the sales increase by a certain factor, and the seasonal

    component is thus multiplicative in nature (i.e., the multiplicative seasonal component in this case would

    be 1.4). In plots of the series, the distinguishing characteristic between these two types of seasonal

    components is that in the additive case, the series shows steady seasonal fluctuations, regardless of the

    overall level of the series; in the multiplicative case, the size of the seasonal fluctuations vary, depending

    on the overall level of the series.
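The contrast can be made concrete in a few lines of Python (the numbers are illustrative, taken from the toy example above):

```python
# Additive vs multiplicative seasonality for a December sales uplift.
baselines = [2.0, 5.0, 10.0]   # deseasonalized level (annual average, $M)

add_component = 1.0            # additive: December adds $1M at any level
mult_component = 1.4           # multiplicative: December multiplies sales by 1.4

for s in baselines:
    # The additive bump is constant; the multiplicative bump grows with the level.
    print(s + add_component, s * mult_component)
```

The printed pairs show the distinguishing characteristic directly: the additive fluctuation is the same size at every level, while the multiplicative fluctuation scales with the overall level of the series.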

    The seasonal smoothing parameter δ. In general, the one-step-ahead forecasts are computed as (for no-trend models; for linear and exponential trend models a trend component is added to the model, see below):

    Additive model:

    Forecast_t = S_t + I_(t-p)

    Multiplicative model:

    Forecast_t = S_t * I_(t-p)

    In these formulas, S_t stands for the (simple) exponentially smoothed value of the series at time t, and I_(t-p) stands for the smoothed seasonal factor at time t minus p (the length of the season). Thus, compared to simple exponential smoothing, the forecast is "enhanced" by adding or multiplying the simple smoothed value by the predicted seasonal component. This seasonal component is derived analogously to the S_t value from simple exponential smoothing as:

    Additive model:

    I_t = I_(t-p) + δ*(1-α)*e_t

    Multiplicative model:

    I_t = I_(t-p) + δ*(1-α)*e_t/S_t

    Put in words, the predicted seasonal component at time t is computed as the respective seasonal component in the last seasonal cycle plus a portion of the error (e_t, the observed minus the forecast value at time t). Considering the formulas above, it is clear that the parameter δ can assume values between 0 and 1. If it is zero, then the seasonal component for a particular point in time is predicted to be identical to the predicted seasonal component for the respective time during the previous seasonal cycle, which in turn is predicted to be identical to that from the previous cycle, and so on. Thus, if δ is zero, a constant, unchanging seasonal component is used to generate the one-step-ahead forecasts. If the parameter δ is equal to 1, then the seasonal component is modified "maximally" at every step by the respective forecast error (times (1-α), which we will ignore for the purpose of this brief introduction). In most cases, when seasonality is present in the time series, the optimum δ parameter will fall somewhere between 0 (zero) and 1 (one).
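The update rules above can be sketched in Python for the additive case. This is a simplified sketch under our own assumptions (alpha smooths the level, delta the seasonal factors; the flat starting profile and all names are ours):

```python
# Additive seasonal exponential smoothing, following the update rules above:
#   forecast_t = S_t + I_{t-p}
#   I_t = I_{t-p} + delta * (1 - alpha) * e_t,  where e_t = y_t - forecast_t
def seasonal_smooth(series, p, alpha=0.3, delta=0.5):
    level = series[0]
    seasonal = [0.0] * p               # start from a flat seasonal profile
    forecasts = []
    for t, y in enumerate(series):
        f = level + seasonal[t % p]    # one-step-ahead forecast
        forecasts.append(f)
        e = y - f                      # forecast error at time t
        level = level + alpha * e      # exponential smoothing of the level
        seasonal[t % p] += delta * (1 - alpha) * e  # seasonal factor update
    return forecasts

series = [10, 14, 8, 12, 11, 15, 9, 13]
print([round(f, 2) for f in seasonal_smooth(series, p=4)])
```

With delta = 0 the seasonal list would never change, reproducing the "constant unchanging seasonal component" case described in the text.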

    Linear, exponential, and damped trend. To stay with the toy example above, the sales for a toy can show a linear upward trend (e.g., each year, sales increase by 1 million dollars), exponential growth (e.g., each year, sales increase by a factor of 1.3), or a damped trend (during the first year sales increase by 1 million dollars; during the second year the increase is only 80% of that, i.e., $800,000; during the next year it is again 80% of the previous year's increase, i.e., $800,000 * 0.8 = $640,000; etc.). Each type of trend leaves a clear "signature" that can usually be identified in the series. In general, the trend factor may change slowly over time, and, again, it may make sense to smooth the trend component with a separate parameter (denoted γ [gamma] for linear and exponential trend models, and φ [phi] for damped trend models).

    The trend smoothing parameters γ (linear and exponential trend) and φ (damped trend). Analogous to the seasonal component, when a trend component is included in the exponential smoothing process, an independent trend component is computed for each time point and modified as a function of the forecast error and the respective parameter. If the γ parameter is 0 (zero), then the trend component is constant across all values of the time series (and for all forecasts). If the parameter is 1, then the trend component is modified "maximally" from observation to observation by the respective forecast error. Parameter values that fall in between represent mixtures of those two extremes. Parameter φ is a trend modification parameter that affects how strongly changes in the trend will affect estimates of the trend for subsequent forecasts, that is, how quickly the trend will be "damped" or increased.
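A trend component with its own smoothing constant can be sketched as in Holt's linear-trend method, a standard formulation of the idea described here (not necessarily the exact model the text has in mind; names and constants are ours):

```python
# Holt's linear-trend exponential smoothing: the level and the trend each
# get their own smoothing constant (alpha for the level, gamma for the trend).
def holt(series, alpha=0.4, gamma=0.2):
    level = series[0]
    trend = series[1] - series[0]          # initial trend from the first step
    forecasts = []
    for y in series:
        f = level + trend                  # one-step-ahead forecast
        forecasts.append(f)
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = gamma * (level - prev_level) + (1 - gamma) * trend
    return forecasts

series = [10, 12, 13, 15, 16, 18]
print([round(f, 2) for f in holt(series)])
```

With gamma = 0 the trend stays at its initial value for every forecast, matching the constant-trend case described above; a damped variant would additionally multiply the trend by a factor φ < 1 at each step.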

    Essential characteristics of the additive and multiplicative models of time series analysis.

    1. In the additive model the observed series is reconstructed as the sum of its components, while in the multiplicative model it is reconstructed as their product.

    2. In plots of the series, the distinguishing characteristic between the two types of seasonal components is that in the additive case the series shows steady seasonal fluctuations, regardless of the overall level of the series; in the multiplicative case the size of the seasonal fluctuations varies, depending on the overall level of the series.

    3. A practical limitation of the standard FFT algorithm (used when analyzing a series spectrally) is that the number of cases in the series must equal a power of 2 (i.e., 16, 64, 128, 256, ...). Usually this necessitates padding the series, which in most cases will not change the characteristic peaks of the periodogram or the spectral density estimates. In cases where the time units are meaningful, however, such padding may make the interpretation of results more cumbersome.