How I Spent My Summer “Vacation”
I am a chemist, so by profession I am an experimenter. When I had the opportunity to teach a class this summer, it was the perfect chance to pilot some experimental teaching techniques on a class of 20 instead of 200! Some of those experiments involved Student Response Devices, a.k.a. clickers.
As a scientist, one always tries to learn something from the experiment, whether it went according to plan or not. And it is always useful to learn from the mistakes/successes of others, too! An old mentor of mine once said that "an afternoon in the library can save you three weeks in the lab!" So, I offer you a few things I picked up in my six-week baptism by fire. I hope it is useful to you.
For those with other things to do, the take home message:
1) I love clickers. They are pedagogically valuable.
2) Better questions make everything easier.
3) Make the students use the clickers and buy in to them.
4) Figure out how you want to use them.
5) Demographics (and data in general) are wonderful.
6) Have a backup plan.
7) Don't forget to pick a participant list!
A few caveats before I get started. First, I know I am preaching to the choir here. Everyone believes in clickers enough to try them. Second, every class and course is different, so you may use this totally differently than I do.
When ISU finally adopted one clicker system (I had bet on Mallard and I was unwilling to adopt early again), I jumped on board. With a clicker receiver set up in Julian Hall, I was ready for summer. I had used peer instruction, ConcepTesting, and interactive lectures before, but now it was electronic and I could collect real data. Data make chemists and scientists happy. Having done all this interactive business before, I thought I would finally be able to score the results. I was expecting to see that students understood most concepts, as I had seen when I did polling by hand-raising. The first time I saw a statistical distribution (the 2nd day, maybe?), I was shocked, but elated. The anonymous polling gave me what I desired: a true indication of the class and the ability to assess whether I could skip a topic, just skim it, or revisit something when I thought we were "done." This reaffirmed my decision to use clickers, and I was excited.
As with most lectures, the better prepared you are, the better the clicker experience will go. Better multiple-choice questions make it easier to learn something. Lousy questions afford lousy, less illustrative results. I used qualitative and concept-based questions more than quantitative questions, and when I did use quantitative questions, the answer choices were order-of-magnitude estimates to save time.
Even though I had over 120 clicker questions in a 6-week class (16 two-hour lectures), some students still resented having to purchase a clicker. Those students will likely never be happy, but I can understand the aversion if one only uses clickers once a week or so. As the system gets adopted across campus this will matter less, but I spent a little time on day 1 getting the students excited about clickers—we played "Countdown" (a bar trivia game) with some ISU-based questions to show them that this can be fun and that they (probably) have seen or done something like this before. Once clicking was fun and they were involved with it, the first chemistry questions weren't a matter of learning the material and the technology at the same time.
If you don't know what you want out of the clickers, you won't get much. I was looking for a way to instantly gauge student understanding and to keep up the interactive nature I am used to in lecturing. For the summer, I was also looking for a way to replace the points I used to have from Mallard without having to design an entire WebCT site! Clickers did this for me. I chose to have 60% of the old "online quiz" grade come from just clicking at all and the remaining 40% from getting the correct answer. I will not grade my seniors that way next semester, but that is what I wanted for an intro-level class. I could just as easily have not "graded" any of the questions. Share your expectations/plan with the students, and all is well. As a caveat, I mentioned that I am an experimenter. I told my students as much, and they were very willing guinea pigs with the clickers!

Figuring out how you will use the clickers will also make your life easier with the reports. I played a lot until I found the five reports I wanted. I could always go back and generate any report I wanted from session data, but the five I wanted ran in a few minutes even on our slow machine. Some of the reports take a lot longer--I ran those in my office until I realized that I didn't need or want them. In the end, I used graphical results by question, graphical demographics, graded participant results, results by participant (answer detail), and individual scoring.
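To make the 60/40 split concrete, here is a hypothetical sketch of that grading scheme in Python. The function name and the sample numbers are my own illustration, not the author's actual gradebook; the only facts taken from the text are the 60% participation / 40% correctness weights.

```python
# Hypothetical sketch of the 60/40 clicker grading split described above:
# 60% of the clicker grade comes from responding at all, 40% from being correct.
def clicker_grade(responses_given, total_questions, correct_answers):
    """Return a 0-100 clicker grade: 60% participation, 40% correctness."""
    participation = responses_given / total_questions  # fraction of questions answered
    correctness = correct_answers / total_questions    # fraction answered correctly
    return 100 * (0.60 * participation + 0.40 * correctness)

# A student who answered all 120 questions and got 90 of them right:
print(round(clicker_grade(120, 120, 90), 1))  # 90.0
```

Note that a student who clicks on every question earns 60 of the 100 points even with no correct answers, which matches the low-stakes, participation-first intent described above.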
One of the neatest things I found in the current software (TurningPoint) is the demographic slides and features. I polled the class broadly on major (science, humanities, etc.), year, reason for taking the course, and math background. All summer I was able to watch the correlations (or lack thereof) between these factors and the answers to questions. It was very nice. One can surmise whether a question was answered correctly only by those who had learned it in a previous class, only by those with a certain level of math skill, etc. For a data junkie, this was paradise.
I recommend a backup plan until the clicker system is better established on campus. With most of my students buying used clickers to save a little money, batteries died during the short semester. With a 50-minute class during the regular semester, I would say "tough break" and tell the student to get a battery before the next period, but summer lectures run two hours and we meet again the next day. One option is a loaner clicker that students could use for one period. I also had them write answers on a slip of paper and turn them in—much easier with 20 than with 200! From now on, my syllabus will tell students to replace their batteries at the start of the semester (two button batteries run ~$4) or to carry a small screwdriver with them.
This is so important it should be first, but be sure to select a participant list!!! Whoever decided that "None" is the default instead of "Auto" did not think about the possible chaos before a lecture. If you don't care about being able to identify who answered what, "None" is fine. However, that is not me. "Auto" at least collects enough information that one can figure it out later.
That is some of my advice. Maybe it is worth what you paid for it, but I learned a lot from this experience, and I would hate for someone to be forced to learn it all again on their own—unless they want to.
I used clickers for three semesters in our Physics 102 course. I encourage instructors to use them for reading quizzes at the beginning of lecture. I assign around three sections of the textbook to be read before each lecture, then I ask three to five relatively simple questions to check that they have done the reading. Two-thirds of students do not use their textbooks without this requirement.
The second way I use them is to do group exercises. I do some lecture on a topic, then give the students worksheets to fill out as a group or I simply display a question for them to work on and discuss as a group. Then, after 5 minutes or so, I collect the answers.
The third way I use them is to determine whether students have comprehended new lecture material, or to check on previously learned topics. From these questions, which are mostly conceptual, and the responses I get, I can determine whether more discussion is needed or whether we can move on.
I do not recommend using clickers for quizzes.
As for the portion of the course grade...I want to encourage full participation in this interactive learning experience, so I make it a sizeable portion of the grade. I want students to read their books, and read them for comprehension, so reading quizzes are worth 15%. I also want full participation in class exercises, so that is another 15%. We also have lab at 10%, quizzes at 35%, and the final at 25%. I also found that I had to scale the reading quizzes, because most incoming freshmen really do not know how to read a textbook for comprehension. They do improve as the course progresses through the semester, though. Therefore, I created a grading scale for these quizzes, which you can see at
This scale worked great in my classes. I think George does something similar.
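The weighting above (15% reading quizzes, 15% class exercises, 10% lab, 35% quizzes, 25% final) can be sketched as a simple weighted average. The component names and the sample scores below are my own illustration, not anything from the actual course records; only the five weights come from the text.

```python
# Hypothetical sketch of the course-grade weighting described above.
# Component scores are on a 0-100 scale; the weights sum to 1.0.
WEIGHTS = {
    "reading_quizzes": 0.15,
    "class_exercises": 0.15,
    "lab": 0.10,
    "quizzes": 0.35,
    "final": 0.25,
}

def course_grade(scores):
    """Weighted average of the component scores (each 0-100)."""
    return sum(WEIGHTS[name] * score for name, score in scores.items())

# Example: a student with strong participation scores and a weaker final.
print(round(course_grade({
    "reading_quizzes": 90,
    "class_exercises": 95,
    "lab": 88,
    "quizzes": 80,
    "final": 75,
}), 1))  # 83.3
```

With 30% of the grade riding on the two participation components, skipping the clicker work costs up to 30 points, which is the incentive for full participation described above.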
We use our clickers in four ways:
1) Good Faith Effort quizzes based on readings from the textbook - This is a pass/fail assignment with 3-5 questions worth a total of 4 points, and right now students only need 40% to receive the points. We expect that the cut-off will be higher in the future as we gain experience with both the clickers and our questions.
2) Video review questions - We show about 10 videos, and at the end of each video, we ask the student to answer 3 questions, each worth 2 points.
3) Discussion questions - These questions deal with controversial topics with no clear right or wrong answers. Students get 3 points simply for participating.
4) On-the-fly polling - This is an on-the-spot opinion poll on a particular issue. We also used the clickers to run a satisfaction survey about the clickers themselves, and the results were extremely positive. No points are awarded for these polls.
Currently, clicker assignments count for slightly over 25% of the grade.