This quarter I have been part of the teaching team for Research Design and Quantitative Methods, a core class in Evergreen's Master of Environmental Studies program. Naturally, I had to include a discussion of the debate that has been swirling around the use of p-values as a "significance" filter and around the role of null hypothesis statistical testing in general. Because the students have very limited backgrounds in statistics and the course ventures only a little beyond the introductory level, I have to simplify the material as much as possible, but that may make it useful for readers who aren't very statsy, or who have to teach others who aren't.
As background reading for this topic, students were assigned the recent statement by the American Statistical Association, along with "P Values and Statistical Practice" by Andrew Gelman, whose blog ought to be on your regular itinerary if you care about these questions. Here are the slides that accompanied my lecture.
UPDATE: I've had a couple of late-breaking thoughts that I've incorporated into the slides. One is that the metaphor of bioaccumulation works nicely for the tendency for chance results to concentrate in peer-reviewed journals under p-value filtration (slide 22). The other is a more precise statement of why p-values for different results shouldn't be compared (slide 25).
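The bioaccumulation point can be made concrete with a toy simulation (a sketch of my own, not something from the slides): if every study estimates an effect that is truly zero, but only results with p < 0.05 get "published," then the published record consists entirely of false positives, and every published effect size clears the significance cutoff and so is exaggerated.

```python
import math
import random

random.seed(1)

def experiment(n=25, sigma=1.0, true_effect=0.0):
    """One study: estimate a mean effect from n noisy observations
    and return the estimate with its two-sided z-test p-value
    (known sigma, so a plain normal test suffices)."""
    xs = [random.gauss(true_effect, sigma) for _ in range(n)]
    est = sum(xs) / n
    se = sigma / math.sqrt(n)
    z = est / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return est, p

# Run many null studies (true effect = 0) and "publish" only p < 0.05.
results = [experiment() for _ in range(10_000)]
published = [est for est, p in results if p < 0.05]

print(f"published: {len(published)} of {len(results)}")  # roughly 5%
print(f"smallest |effect| among published: "
      f"{min(abs(e) for e in published):.3f}")
```

With these (arbitrary) settings, roughly 5% of the null studies pass the filter, and every one of them reports an absolute effect of at least about 0.39, the significance cutoff at this sample size, even though the true effect is exactly zero. That concentration of spurious effects in the "published" pool is the bioaccumulation.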