A calling ...

"We are called to be architects of the future, not its victims."

"Make the world work for 100% of humanity in the shortest possible time through spontaneous cooperation without ecological offense or the disadvantage of anyone."

- Buckminster Fuller

Tuesday, January 25, 2011

National science test scores disappoint

Here's what I posted today in response to the above article:

We don't teach scientific inquiry. We teach facts to be memorized for the purpose of performing on tests. A hands-on approach, in which students ask questions, make observations, analyze data, offer possible explanations, and then question those explanations, takes more time than our pacing guides allow us to do well. Higher-level thinking skills, which are exactly what these tests assess, are sadly not the focus of instruction, for a number of reasons.

Foremost, there is little consensus about the goals of science instruction, i.e., what do we as a nation want students to be able to do?

Also, instructional methods are wildly inconsistent. Where are the agreed upon best practices? What are the best kits for conducting experiments? Are our science kits outdated? How are teams setting up experiments, sharing resources, and celebrating the scientific process?

Anybody can see we need a better system.

A Few Observations:

Recently, I was in a 6th grade class subbing for a special education teacher where there was a mix of inclusion and pull-out. The demographics in the class were highly skewed toward the economically disadvantaged end of the spectrum, so there was unusually great pressure on teachers to teach to the SOL. In the two days I spent with the class, virtually the entire time was spent on paper-and-pencil examinations. I found myself growing angry as I watched students initially struggle, then quickly quit trying to analyze reading questions that many adults would have difficulty answering. When the testing data is collected, after the entire class bombed (and not just the special education students), how will the data drive instruction? How will the teachers be evaluated?

The lead teacher was highly experienced, highly organized, highly energetic, and highly respected by her students. The shared reading of Hatchet, by Gary Paulsen, that I observed, and the process of asking and answering discussion questions about important passages in the text, was masterful. Yet, as I gathered other observations, I found myself wondering: what is the percentage of teacher talk versus student talk in that classroom?

In math, students were being tested on multi-step problems involving estimation and problem solving. I only saw how the special education students did, but what I saw initially was a classroom of students who lacked the strategies and stamina to work through these kinds of problems. Many seemed to have mastered traditional addition and multiplication algorithms, but none were good at breaking these problems into parts and choosing math operations appropriately to solve them, i.e., algebraic thinking. Furthermore, none were skilled estimators. The frustration was palpable as I watched students begin marking answers randomly. Afterwards, an IA modeled each problem step-by-step while I sat among the students, making sure all were taking notes and holding each of them accountable for every step. I was happy to see struggling learners light up as patterns and logic were modeled for them. Between the IA and me, we managed to keep all the students engaged and keep the process somewhat enjoyable.

On the second day, I worked alone for a full day with a student named O, testing. On the math assessment, O watched me for cues: studying my facial expressions, watching my eyes, listening to the tone of my voice, wanting help I couldn't give him. What amazed me was the way O kept working the problems, writing down numbers and erasing them multiple times, testing various operations, asking me to re-read the questions over and over, and eventually finding an answer close to one of the answer choices. One problem that stumped O involved estimating the sum of three numbers written to the thousandths place. When he eventually used a back-end estimation strategy, he solved it. I kept asking him, "Do you want to tap out?" Eventually, he did on a few of the questions where he had no idea where to start. Since O had been absent on the previous day of testing and it was near the end of the grading period, he had spent an entire day testing. I was proud of how O kept refusing to "tap out."

O was stumped by one problem that involved simple calculations, because he could not figure out how to set it up and, perhaps more importantly, because he wasn't a skilled estimator: 10 items were purchased at $16.95 each; if the tax rate was 4.5%, about how much total tax was charged? While O figured out that he could use an estimate of $17.00, and eventually realized that 10 items would cost about $170.00, he didn't think of estimating 5% as half of 10%, which would have let him solve the problem in his head. He eventually guessed correctly. I wish I had had the time afterwards to show him the mental-math strategy, but time expired. He would have gotten it.
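For readers who want to trace the arithmetic, here is a short sketch of that mental-math strategy; the numbers come straight from the problem, and the variable names are mine, purely for illustration:

```python
# The tax problem: 10 items at $16.95 each, 4.5% tax rate.
price_each = 16.95
items = 10
tax_rate = 0.045

# Step 1: round the price to a friendly number.
est_price = 17.00
est_total = est_price * items        # about $170

# Step 2: 10% of $170 is $17; 5% is half of that, $8.50.
ten_percent = est_total / 10         # 17.0
five_percent = ten_percent / 2       # 8.5 -- the mental estimate

# The exact tax, for comparison (about $7.63):
exact_tax = price_each * items * tax_rate

print(f"estimate: ${five_percent:.2f}, exact: ${exact_tax:.2f}")
```

The estimate of $8.50 lands close enough to the exact $7.63 to pick out the right answer choice, which is all the problem asked for.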