Tests such as the standardized deviations test are used to check the underlying distributional assumption of the hypothesis: that the observed deaths have underlying mortality rates as defined by the graduation. This is equivalent to testing the hypothesis that z_x ~ N(0,1) for all x, with the z_x mutually independent.
The standardized deviations test is designed to check whether the standardized deviations are all identically N(0,1) distributed. If we find many large deviations, then the normal distribution is not a good approximation at all ages.
Therefore, to test this we choose a reasonable partition of the number line and, to answer your question, we calculate the expected proportion in each interval from the N(0,1) density. We expect approximately 34% of observations to lie between -1 and 0, about 14% between -2 and -1, and so on; you can check these values using the normal tables.
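As a quick check of those percentages, here is a short sketch that computes the interval probabilities directly from the standard normal CDF (written via the error function, so only the standard library is needed). The choice of cut points at -2, -1, 0, 1, 2 is just one reasonable partition:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Cut points for a reasonable partition of the number line
edges = [float("-inf"), -2, -1, 0, 1, 2, float("inf")]

# Probability that a N(0,1) variable falls in each interval
probs = [phi(b) - phi(a) for a, b in zip(edges, edges[1:])]

for (a, b), p in zip(zip(edges, edges[1:]), probs):
    print(f"P({a} < Z < {b}) = {p:.4f}")
```

Running this confirms P(-1 < Z < 0) ≈ 0.3413 and P(-2 < Z < -1) ≈ 0.1359, matching the normal tables.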
The expected number of observations in each interval is then m multiplied by the proportion expected in that interval, where m is the total number of age groups (equivalently, the number of standardized deviations z_x).