
Standardised Deviations Test: intervals

0 votes
asked Apr 2, 2018 in BUS 3018F - Models by anonymous
After deciding on the range of the number line, how does one decide on the proportions or percentages to use within each interval? Is it safe to assume that in most cases we will use the intervals and proportions on page 33 (chapter 12) of the Edx notes? How were the percentages decided on (other than the assumption that there should be an equal number of positive and negative deviations)?

1 Answer

0 votes
answered Apr 5, 2018 by Kelly (970 points)
Best answer

Tests like the standardized deviations test are used to check the distributional assumptions underlying the hypothesis that the observed deaths have the underlying mortality rates defined by the graduation. This is EQUIVALENT to testing the hypothesis that the zx are iid N(0,1): that is, zx ~ N(0,1) for all x, with the zx's mutually independent across ages x.

The standardized deviations test is designed to check whether all of the standardized residuals are identically normal(0,1) distributed. If we find many large deviations, then the normal distribution is not a good approximation for all ages.

Therefore, to test this we choose a reasonable split of the number line and, to answer your question, we calculate the expected proportion in each interval from the normal(0,1) density. We expect approximately 34% of observations to lie between -1 and 0, about 14% between -2 and -1, and so on; you can check these using the normal tables.
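As a sketch of where those percentages come from (assuming the common split of the number line at -3, -2, -1, 0, 1, 2, 3; your notes may use a different split), you can compute each interval's probability from the standard normal CDF rather than reading it off the tables:

```python
from math import erf, sqrt

def std_normal_cdf(x):
    # Standard normal CDF, written via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Assumed split of the number line into intervals
breaks = [float("-inf"), -3, -2, -1, 0, 1, 2, 3, float("inf")]

# Expected proportion of standardized deviations in each interval
probs = [std_normal_cdf(b) - std_normal_cdf(a)
         for a, b in zip(breaks, breaks[1:])]

# e.g. P(-1 < Z < 0) ≈ 0.3413 (the "34%") and P(-2 < Z < -1) ≈ 0.1359 (the "14%")
```

The proportions are symmetric about 0, which is why we expect equal numbers of positive and negative deviations.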

The expected number of observations in each interval equals m multiplied by the proportion of observations expected in that interval, where m is the total number of age groups (i.e. the total number of standardized deviations).
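For a quick worked example (the value m = 20 here is hypothetical, just to illustrate the arithmetic):

```python
from math import erf, sqrt

def std_normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

m = 20  # hypothetical total number of age groups / standardized deviations
p = std_normal_cdf(0.0) - std_normal_cdf(-1.0)  # P(-1 < Z < 0) ≈ 0.3413
expected = m * p  # ≈ 6.8 deviations expected to fall in (-1, 0)
```

The observed counts in each interval are then compared against these expected counts.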