How do you assess #programming during lessons?

Given my background, I know a vast amount about automated assessment of code correctness and have a great deal of experience with it. In programming lessons, I never use it. Hmm. Why? I suspect it’s because it’s easy to do and could be done outside the classroom at least as effectively as inside – whereas there are many things that can only be done inside the classroom.

My students have an expert programmer with them in the room; that’s a resource not to be squandered.

So, what do I do instead to assess programming during a lesson? Here’s a copy-and-paste of my answer from a closed forum:

In concrete terms … specifically focussing on the “during lesson” part (ignoring homework, marking final programs, etc) …

What method(s) do you use for assessing students’ programming skills during lessons?

Multiple start points

Each task has variable levels of scaffolding, usually differentiated by dataset. Merely by glancing at the code I can tell how “hard” a version of the task they attempted.

e.g. doing shape drawing via for-loops and co-ords, where the worksheet prompted students to choose between shapes that needed one loop or a pair of nested loops, and between squares (with prompts and sample code) and stars (no prompts, no code, they were on their own).
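Purely as an illustration (not the actual worksheet, and assuming a turtle-graphics flavour of the task rather than raw co-ords), here is a minimal Python sketch of the one-loop vs. nested-loops split:

```python
import turtle

t = turtle.Turtle()

# One loop: a single square (the scaffolded end of the task)
for _ in range(4):
    t.forward(100)
    t.left(90)

# A pair of nested loops: eight squares rotated around one corner
# (the harder end of the task)
for _ in range(8):
    for _ in range(4):
        t.forward(60)
        t.left(90)
    t.left(45)

turtle.done()
```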

Have you actually run it yet?

Has the student run the program successfully yet? How long did that take them?

e.g. low-tech: walking around the classroom, looking at the filename on each screen. The default name in IDLE is “Untitled”, which means they’ve never saved and, since IDLE forces a save before running, guarantees they’ve never run the program yet.

Especially obvious is a student who’s copied from somewhere (a neighbour, the internet, etc.) and has had a full program on screen for a few minutes, but it’s still never been saved.

Looking at the log / console output helps enormously here too. Depending on the IDE, you may have easy access to complete console logs (and can get the students to copy/paste them into a Google doc: faking this is actually harder than simply doing the work, so …)
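Where the IDE doesn’t keep a usable log, one workaround is to have the script tee its own output to a file the student can paste in unedited. This is a hypothetical sketch in plain Python (Tee and console_log.txt are made-up names), not a feature of any particular IDE:

```python
import sys

class Tee:
    """Send everything written to the console to a log file as well."""
    def __init__(self, path):
        self.log = open(path, "a")

    def write(self, text):
        sys.__stdout__.write(text)   # still show it in the console
        self.log.write(text)         # and keep a permanent copy

    def flush(self):
        sys.__stdout__.flush()
        self.log.flush()

tee = Tee("console_log.txt")
sys.stdout = tee
sys.stderr = tee   # so tracebacks end up in the log as well

print("Anything printed now also ends up in console_log.txt")
```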

NB: I’ve had to put a lot of emphasis on the importance of NOT sanitising the logs before pasting them in. Many students at KS3/4 want to submit “perfect” work – I have to keep reiterating that an empty console suggests they did less work, gives them “0” marks for debugging, etc.

Open-ended, easy-to-think-of mini-questions

Small “tweak” questions posed to students mid-programming: easy to ask, and open or closed depending on how they interpret them.

e.g. on a worksheet, lots of “what happens if ‘x’ on line 3 were ‘y’ instead? Why is it x and not x/2?” (a concrete sketch follows this list)

e.g. when walking around, posing these ad hoc while glancing at the code they’re writing.

e.g. sending popup messages to some/all computers with these questions mid-task.

…etc.
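To make that concrete, here is a hypothetical instance of the pattern (the snippet and the numbers are invented, not from an actual worksheet):

```python
import turtle

t = turtle.Turtle()
sides = 4
for _ in range(sides):
    t.forward(80)
    t.left(360 / sides)

turtle.done()

# Worksheet-style tweak questions alongside it:
#   "What happens if sides were 8 instead of 4?"
#   "Why is the turn 360 / sides and not 360 / (sides * 2)?"
```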

These questions are often more about watching the student’s reaction than getting the answer (from Y11 up I’d expect assessable answers; at Y8 or Y9 I’m often more interested in the reaction). Are they instantly confused? Do they suddenly realise they have no idea what x means? Can they start phrasing an answer, or is the vocabulary beyond them?
