

OCTOBER 30 
Schmidt, Ch. 10: "Introduction to Hypothesis Testing: Tests with a Single Mean," only pp. 272-286. So far in this course, we've not even mentioned the technical phrase "statistically significant." The free ride is over, as you learn in this chapter. Suppose you want to determine whether one sample (e.g., students at Northwestern) differs "significantly" from a given population (e.g., all college-level students in the U.S.). Assuming you have data on all college students nationwide, that's a job for the one-sample z test. Be sure you understand these terms: test statistic, alpha, level of significance, critical region, critical value, one-tailed, and two-tailed. Yesterday's reading assignment stopped just short of introducing Type I and Type II errors. These are hard to keep straight, and even most researchers have to think hard before explaining the difference. An analogy with diagnosing appendicitis may help. Doctors have difficulty distinguishing various forms of severe stomach pain from appendicitis. Operating on a patient whose appendix turns out to be healthy is like a Type I error (rejecting a true null hypothesis); sending home a patient whose appendix really is inflamed is like a Type II error (failing to reject a false one).
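To make the z-test machinery concrete, here is a minimal sketch in Python using only the standard library. All the numbers (national mean, population standard deviation, sample size, sample mean) are hypothetical, chosen just to illustrate the mechanics:

```python
from statistics import NormalDist

# Hypothetical numbers: suppose the national mean score is 500 with a
# KNOWN population standard deviation of 100, and a sample of n = 64
# students has a sample mean of 530.
mu0, sigma = 500, 100
n, xbar = 64, 530

# Test statistic: how many standard errors the sample mean lies from mu0.
z = (xbar - mu0) / (sigma / n ** 0.5)   # (530 - 500) / (100 / 8) = 2.4

# Two-tailed p-value from the standard normal distribution.
p = 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05                # level of significance, set by the researcher
print(z)                    # 2.4
print(p < alpha)            # True: z falls in the critical region, reject H0
```

The critical region here is everything beyond the critical values at the chosen alpha; equivalently, we reject when the p-value falls below alpha.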
The term alpha refers to the probability of a Type I error and beta to the probability of a Type II error. The researcher is free to set alpha in advance and thus to control the probability of a Type I error. But the situation is more complicated with the probability of a Type II error, which depends on the sample size (among other factors). On pages 321-323, Schmidt explains how sample size figures into the calculation, but you need not know the calculation.
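You won't be asked to compute beta, but a short sketch can at least show the dependence on sample size that Schmidt describes. The function below, with purely hypothetical means and standard deviation, computes beta for an upper-tailed one-sample z test:

```python
from statistics import NormalDist

def beta_one_tailed(mu0, mu1, sigma, n, alpha=0.05):
    """Probability of a Type II error for an upper-tailed one-sample
    z test when the true population mean is mu1 > mu0 (illustrative)."""
    se = sigma / n ** 0.5
    # Smallest sample mean that would lead us to reject H0 at this alpha.
    crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se
    # Type II error: the sample mean falls short of the critical value
    # even though the true mean is mu1, so we fail to reject H0.
    return NormalDist(mu1, se).cdf(crit)

# Beta shrinks (and power grows) as the sample gets larger.
for n in (25, 100, 400):
    print(n, round(beta_one_tailed(500, 520, 100, n), 3))
```

The point of the loop is the direction of the trend, not the particular values: holding everything else fixed, a larger n narrows the standard error and makes a Type II error less likely.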


OCTOBER 31 
Optional session on hypothesis testing


NOVEMBER 1 
Schmidt, Ch. 10: "Introduction to Hypothesis Testing: Tests with a Single Mean," only pp. 286-293. In the test statistics up to now, we've assumed knowledge of the standard deviation of the variable for the population. Because we usually don't know it, we must estimate it from the sample, which introduces some additional error. So instead of the z-test, we use a procedure called the t-test. Another complication arises when we are dealing with data expressed as percentages or proportions (e.g., % of respondents with a college education) rather than as interval scales (e.g., mean years of education). Fortunately, this issue is easy to handle, as Schmidt explains.
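The only mechanical change from the z-test is that the sample standard deviation replaces the population one, and the critical value comes from the t distribution rather than the normal. A minimal sketch, with an entirely made-up sample of ten scores:

```python
import math
from statistics import mean, stdev

# Hypothetical sample of n = 10 scores; sigma is no longer assumed known,
# so we estimate it with the sample standard deviation (n - 1 denominator).
sample = [512, 498, 530, 547, 505, 521, 489, 516, 538, 500]
mu0 = 500                              # hypothesized population mean

n = len(sample)
xbar = mean(sample)
s = stdev(sample)                      # estimated standard deviation
t = (xbar - mu0) / (s / math.sqrt(n))  # one-sample t statistic, df = n - 1

# The critical value now comes from the t distribution: for df = 9 and a
# two-tailed test at alpha = .05, it is about 2.262 (vs 1.96 for z).
print(round(t, 2))
print(abs(t) > 2.262)
```

Notice that the t critical value (2.262) is larger than the corresponding z value (1.96): the extra error from estimating the standard deviation makes the test more conservative, especially in small samples.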


Kirk, "One-Sample t and z tests and confidence interval for a correlation," portion of Chapter 11 in Statistics: An Introduction (Harcourt Brace, 1999), pp. 367-369. Methods of statistical analysis differ across the social science disciplines. Sociology and political science, for example, make more use of correlation than does psychology, which relies more heavily on t-tests (treated this week) and analysis of variance (treated next week). Schmidt, the psychologist who furnished most of our readings, has a chapter on correlational analysis, but he does not deal at all with the significance of a correlation coefficient. Kirk, who is also a psychologist, devotes about three pages to the topic, which fortunately is enough for us.
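The standard test Kirk covers converts the correlation coefficient into a t statistic. A quick sketch with hypothetical values of r and n:

```python
import math

# Significance test for a correlation coefficient: under H0: rho = 0,
# t = r * sqrt(n - 2) / sqrt(1 - r^2) follows a t distribution with
# n - 2 degrees of freedom. The r and n below are hypothetical.
r, n = 0.45, 30
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Critical value for df = 28, two-tailed test at alpha = .05: about 2.048.
print(round(t, 2))
print(abs(t) > 2.048)
```

So even a moderate correlation can be statistically significant once the sample is reasonably large, which is exactly why the significance question matters for the correlational disciplines.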