Author: Sarah Van Loo

Second Iteration of a Formative Assessment for Fifth Grade

In my fifth grade classes, students actively work in small groups to build, modify, and program a robot to move autonomously (with minimal human intervention). They use technology and navigate social learning situations to solve a problem that is anchored in the real world.
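As illustration only, here is a minimal sketch, in Python, of the kind of sense-and-act loop behind autonomous movement; my students work in their own programming environment, and the Robot class and its method names here are hypothetical stand-ins for a real robot's motor and sensor calls. The distance sensor is simulated so the sketch runs on its own.

    import random


    class Robot:
        """A toy stand-in for a classroom robot's motor and sensor API."""

        def distance_to_obstacle(self):
            # Simulate a distance-sensor reading, in centimeters.
            return random.uniform(0.0, 100.0)

        def drive_forward(self):
            print("driving forward")

        def turn_right(self):
            print("obstacle ahead: turning right")


    # The sense-and-act loop: check the sensor, then choose an action.
    robot = Robot()
    for _ in range(10):  # ten sensor checks stand in for a continuous loop
        if robot.distance_to_obstacle() > 20:  # 20 cm safety threshold
            robot.drive_forward()
        else:
            robot.turn_right()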

This module is challenging, but using “focused questions, feedback, and diagnostic assessment” (Wiggins & McTighe, 2005, p. 46) helps to uncover the misunderstandings, questions, and assumptions my students have. In turn, this informs my instruction and helps students learn more, retain what they learn, and transfer what they know to other situations.

To plan for and reflect on one of the formative assessments within this fifth-grade robotics module, I have developed Formative Assessment Design Version 2.0. My prior iteration is Formative Assessment Design Version 1.0, which I wrote about in an earlier blog post.

References

Wiggins, G. P., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.

 

A Critical Review of Course Management Systems for Assessments

Online learning is becoming increasingly common in our schools. It ranges from fully online courses to hybrid courses that pair an online component with face-to-face instruction.

Course Management Systems

To help facilitate online instruction, teachers may benefit from using a Course Management System (CMS), which allows them to organize and manage course content, assessments, students, and records. These systems range from simple websites to all-inclusive commercial platforms.

As a teacher of science, engineering, and technology, I often include a digital component in my elementary school classes. In the last year, I have regularly used Canvas and Seesaw. When I was a middle school art teacher, I used Google Classroom to facilitate assignments and record keeping.

Critical Review of CMSs

When I was tasked with critically reviewing the assessment features of three CMSs for my current K-5 technology teacher position, I decided to review Canvas, Google Classroom, and Edmodo. Although I have not used Edmodo as a teacher in the past, I have used it as a parent.

I also included Seesaw as a fourth option. While it may not technically be considered a CMS, Seesaw has been a great help to me in delivering online content to my young students and collecting assignments back from them, and I was curious how it would compare to the full CMSs I was reviewing.

In comparing these CMSs, I used criteria that were provided to me, as well as four additional criteria that I consider important in a CMS. I scored each criterion with a 1 for yes or a 0 for no, so that I could easily compare the features of the CMSs.
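As a minimal sketch of that comparison, the Python snippet below tallies the 1-or-0 scores per criterion and ranks the systems by total; the number of criteria and the scores shown are placeholders for illustration, not my actual results.

    # Each system gets a 1 (yes) or 0 (no) per criterion; the placeholder
    # lists below stand in for the real criteria and scores.
    scores = {
        "Canvas": [1, 1, 1, 1, 1],
        "Google Classroom": [1, 1, 1, 0, 0],
        "Edmodo": [1, 1, 0, 1, 0],
        "Seesaw": [1, 1, 0, 0, 1],
    }

    # Rank the systems by total points, highest first.
    for cms, marks in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
        print(f"{cms}: {sum(marks)} of {len(marks)} criteria met")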

My Results

Based on the results of my critical review, Canvas is the most robust system and can be tailored to provide easy access, even for young students. In the free version of Canvas, teachers who build courses from scratch gain unlimited access to the system (in terms of time and number of classes). I have both built courses from scratch and customized existing courses in Canvas, so this works for me, but it may be a limiting factor for teachers who prefer an out-of-the-box CMS package.

For the purposes of creating assessments in CEP813, I will use Canvas, which scored the most points in my critical review. Although I previously created a hybrid art course in Canvas for CEP820, I still want to further explore the full-featured assessment and tracking capabilities that Canvas provides.

Images

“Wordle Cloud for Drexler (2010)” by Chris Jobling is licensed under CC BY-SA 2.0 (header image).

Assessment Rubric 4.0: Including Technology and Universal Design for Learning

Over the past six weeks, I have been developing and revising a rubric by which to assess other assessments. I wrote about each of the previous iterations in earlier blog posts.

This week’s final iteration is Rubric 4.0, which I have updated to include criteria addressing the importance of a technology component in assessment and of Universal Design for Learning (UDL). Universal Design for Learning stresses the importance of providing multiple means of engagement, representation, and action and expression to make education accessible for all students (Meyer, Rose, & Gordon, 2014).

References

Meyer, A., Rose, D.H., & Gordon, D. (2014). Universal design for learning: Theory and practice. Wakefield, MA: CAST.

Creating a Formative Assessment for Fifth Grade

Formative assessment, assessment for learning that occurs during a unit of instruction, is dynamic: it gives teachers the opportunity to find out what students are able to do on their own or with adult help and guidance (Shepard, 2000).

By making students’ thinking visible and open to examination, formative assessment can reveal what a student understands and what misconceptions they hold (Trumbull & Lash, 2013). It also provides opportunities to scaffold the steps between one activity and the next for each individual student (Shepard, 2000).

Guided by Rubric 3.0, my third iteration of a rubric to assess other assessments, I have created the first draft of a formative assessment. Formative Assessment Design Version 1.0 is meant to be used during a fifth-grade robotics module that I teach. During a typical school year, I teach this module four or five times, so I look forward to revising this formative assessment over time to make it the best it can be.

References

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.

Trumbull, E. & Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco: WestEd. Retrieved from www.wested.org/online_pubs/resource1307.pdf

 

Assessment Rubric 3.0: Carefully Considering Feedback

According to Hattie and Timperley (2007), feedback, information a learner receives about aspects of their performance, can come from a variety of sources, such as a teacher, parent, classmate, book, or an experience. Feedback makes an impact when it helps to close “the gap between where students are and where they are aiming to be” (p. 90). Its impact on student learning can be significant, although some types of feedback have a greater positive effect than others.

Among other additions, I gave special consideration to the impact of feedback as I created Rubric 3.0, the third iteration of my rubric by which to assess other assessments.

Earlier iterations of this assessment rubric are Rubric 1.0 and Rubric 2.0, each of which I wrote about in an earlier blog post.

In the coming weeks, Rubric 3.0 will become Rubric 4.0. By then, it will include ten carefully selected criteria for judging assessment instruments. I look forward to learning more and sharing that rubric soon.

Images

“Feedback” by geralt is licensed under CC0.

References

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

A Critical Review of Art Projects as an Assessment Genre

When I was a visual arts educator, a staple of art class was the art project: the kind that gets hung in the hallways and displayed at district-wide art shows. Typically, each unit included a culminating art project. These projects might be two-dimensional or three-dimensional, digital or physical, and made from any of a variety of materials.

I used this assessment genre for all my classes, kindergarten through high school. Even when I was a student in visual arts education classes, I made artwork for the same reason: so that my teachers could assess my understanding of the concepts we had been learning in class.

Because of the importance of this assessment genre to art education, I decided to do a critical review of the art project genre for my latest assessment assignment.

Iterations of an Assessment Rubric

“To understand is to be able to wisely and effectively use—transfer—what we know, in context; to apply knowledge and skill effectively, in realistic tasks and settings” (Wiggins & McTighe, 2005, p. 7). An authentic performance task is one way to assess students’ ability to transfer what they know (Wiggins & McTighe, 2005). It only makes sense, then, that one assignment in my Assessment class is to create a rubric by which to judge other assessment instruments.

I began this assignment two weeks ago with Rubric 1.0, which you can read about in an earlier blog post. In that first iteration, I identified three criteria for quality assessments: direct and specific feedback, transparent learning targets, and a self-assessment component.

In this second iteration, Rubric 2.0, I added two criteria for quality assessments: the assessment requires only target knowledge, skills, and abilities (KSAs) to complete; and the assessment requires a transfer of knowledge to demonstrate understanding.

Requires Only Target Knowledge, Skills, and Abilities (KSAs) to Complete

One approach to creating valid and fair assessments is to require only target knowledge, skills, and abilities (KSAs) to complete the assessment. Assessment designers first identify what evidence is needed to judge whether students have demonstrated specified aspects of learning. After determining which KSAs are required, designers then examine the assessment tasks to determine whether unwanted, non-target KSAs are also required to complete them. If they are, the assessment will yield results that conflate the target KSAs with non-target KSAs, such as language skills or math skills (Trumbull & Lash, 2013). Therefore, non-target KSAs should be eliminated.
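As a concrete illustration of that check, the sketch below, in Python, treats the KSAs each task requires as a set and flags anything outside the target set. The task names and KSA labels are invented for illustration.

    # Hypothetical target KSAs for a unit, and the KSAs each assessment
    # task actually requires (both invented for illustration).
    target_ksas = {"identify simple machines", "explain mechanical advantage"}

    tasks = {
        "label the diagram": {"identify simple machines"},
        "word problem": {
            "explain mechanical advantage",
            "multi-digit multiplication",  # non-target math skill
            "reading fluency",  # non-target language skill
        },
    }

    # A set difference reveals the unwanted, non-target KSAs per task.
    for task, required in tasks.items():
        unwanted = required - target_ksas
        if unwanted:
            print(f"revise '{task}': also requires {sorted(unwanted)}")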

Assessment Requires Transfer of Knowledge to Demonstrate Understanding

As stated in the introduction, a well-crafted assessment of students’ ability to transfer what they know should include an authentic performance task (Wiggins & McTighe, 2005). The assessment tool should clearly describe criteria for degrees of understanding, and understanding should be assessed separately from other traits, such as mechanics, organization, and craftsmanship. According to Wiggins and McTighe (2005), those other traits should be assessed in a separate rubric, or all of the traits should be assessed together in a grid-style rubric.

Conclusion

Eventually, my Rubric 2.0 will become Rubric 3.0, and finally, Rubric 4.0. By then, it will include ten carefully selected criteria for judging assessment instruments. I look forward to learning more and sharing those rubrics in future posts.

References

Trumbull, E. & Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco: WestEd. Retrieved from www.wested.org/online_pubs/resource1307.pdf

Wiggins, G. P., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.

Assessing My Own Assessment

Assessment is an important topic in education, with teachers, administrators, parents, students, and policymakers all staking a claim to the results of various types of assessments (NWEA, 2015).

Assessment can be used to inform teaching and provide feedback to students. When used effectively, it can “support and enhance learning” (Shepard, 2000, p. 4).

Testing is just one form of assessment. Drawing by Sarah Van Loo, 2017.

In an effort to improve my assessment practices, I critically examined one of my own assessments. First, I chose three elements that “make it possible for assessment to be used as part of the learning process” (Shepard, 2000, p. 10). Then I began drafting Rubric 1.0, a rubric with which to assess other assessments. As the name implies, this rubric is a work in progress.

Rubric for an Art Project

The word assessment can refer to both the instrument and the process (MAEA, n.d.). The assessment tool that I chose to examine is a rubric for a comic strip. The last time I used this tool was a few years ago; nevertheless, I created it using a format that I often use for middle school art rubrics, so I think it is useful to examine.

The assessment process was a project: the creation of a comic strip by each student in my middle school art class. The purpose of this assessment was to evaluate students’ understanding of craft, character development, story, and the basic elements of a comic strip through their creation of one.

When I created this assessment tool, I assumed that my students were able to read and interpret each of the criteria and descriptions. I also assumed that they understood the vocabulary used in the tool.

Examination of My Comic Strip Rubric

Assessment doesn’t have to be a monster. Drawing by Sarah Van Loo, 2017.

In examining my rubric, I assessed whether it met the three criteria I used to create Rubric 1.0: feedback to students is direct and specific, learning targets are transparent, and it includes a component of self-assessment by the student.

Feedback to Students is Direct and Specific

According to Black and Wiliam (1998), feedback to students should be direct and specific, giving advice to students so they know what can be improved. This helps students recognize their own ability to improve.

In my experience, students sometimes view themselves as “talented” or “not talented.” With specific feedback about their own performance, they can develop a growth mindset and learn that they can improve regardless of where they started.

The comments section of my assessment tool offers a space to give specific feedback to students. If the teacher does not use the comments section but only circles the pre-written descriptions, students may find the feedback vague.

Learning Targets are Transparent

Students should have access to the criteria by which they will be graded, providing them with the opportunity to strive for excellence and the ability to see the “overarching rationale” of classroom teaching (Black & Wiliam, 1998, p. 143).

I have noticed that when students have clear expectations laid out for them, they ask far fewer questions. Students do not need to ask or guess what quality work looks like because clear guidelines have already been established.

The comic strip rubric sets forth clear expectations for quality of work, quantity of work, and use of time in class. More elements of a good comic strip could perhaps be added, but the rubric establishes standards for excellent work as well as for work that could be improved.

Includes a Component of Self-Assessment by the Student

When students assess their own work, the criteria of the assignment and feedback from others becomes more important than the grade alone. Students who assess their own work usually become very honest about it and are prepared to defend their work with evidence (Shepard, 2000).

When students assess their own work, they use what they discover to improve their own work. I have noticed that they iterate on their projects and make improvements, without prompting.

The comic strip rubric allows for student self-assessment, offering one bonus point for completing it. In my experience, this provides an incentive for some students; others do not see the inherent value and therefore pass on assessing themselves. Rather than making self-assessment an optional bonus, it could be a required element of the rubric.

Conclusion

At this point, the comic strip rubric does include the elements of Rubric 1.0. As I revise Rubric 1.0, though, I expect to discover ways to improve my comic strip rubric.

References

Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-144.

MAEA. (n.d.). CEP 813 module 1.1 introduction. Retrieved from https://d2l.msu.edu/d2l/le/content/585048/viewContent/5241064/View

NWEA. (2015). Assessment purpose. Retrieved from https://assessmentliteracy.org/understand/assessment-purpose/

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.

Beautifully Questioning My Teaching Practice

As I prepared to write this, the final project for my three summer classes, I was stuck. These classes have been exhilarating, challenging, and rewarding. Sometimes there were tears, both frustrated and proud.

Today I created something wonderful and hoped that the excitement from that would fuel me through this post. It didn’t. After so many weeks of pushing myself so hard, my brain was stuck.

So I looked at Twitter, read some news, looked at the ceiling. Nothing. One of my objectives for the assignment was to apply Warren Berger’s Why, What If, and How questioning methodology from A More Beautiful Question (Berger, 2016) to my own practice. So I eventually, begrudgingly, started with that.

First-graders love coding using ScratchJr.

I began by creating a list of Why questions related to my teaching. I teach Project Lead The Way (PLTW) to students in grades K-5 in two schools. I teach engineering concepts to all of my students and coding to everyone except my kindergartners.

Unsticking the Lid

As soon as I started asking questions, my imagination took flight. As Frances Peavey once said, a good question is like “a lever used to pry open the stuck lid on a paint can” (Berger, 2016, p. 15). That was it! I simply needed to start asking questions, and I was unstuck, just like the lid of the paint can.

As Berger suggests, I started by asking Why questions. If we’re paying attention, we ask Why when we encounter a suboptimal situation (Berger, 2016). Although I love my job and most of my students enjoy my classes, there are some who just don’t. Those students led me to ask:

Why?
  • Why do some students keep asking if they’re doing the problem “right”?
  • Why do some of my students think they can’t code?
  • Why do some of my students refuse to participate?

What If?

As I considered my Why questions, I focused on the fact that “Integrating coding into classes is being perceived by many as a way to stimulate computational and creative thinking” (Johnson, Becker, Estrada, & Freeman, 2015, p. 21). Therefore, I decided to address the question: Why do some of my students think they can’t code?

The ScratchJr coding environment is user-friendly for young students, but still offers the opportunity to learn computational thinking.

Pondering this question, I realized that my first- and second-grade students have great confidence when it comes to coding. It is my third- through fifth-grade students who are more likely to struggle.

With my Why question in mind, I began to ask What If. During this time of creative, wide-open questioning, I asked What If questions to help me consider possibilities for change (Berger, 2016).

  • What if I let my older students start with ScratchJr (typically only first and second graders use ScratchJr)?
  • What if I made time for Hour of Code or other warm-up activities before starting on our unit together?
  • What if I ran an after-school coding club?
  • What if I worked more closely with the media specialist to coordinate coding lessons?

How?

Asking How is about focusing on making progress toward a solution, about deciding which ideas to pursue (Berger, 2016). One of the great conundrums of my schedule is that I never seem to have enough time.

I co-teach, pushing materials into each classroom, typically for a couple of weeks at a time. When I’m in a class, I have a lot to do to complete a module, and I don’t want to waste any of the classroom teacher’s time, so I carefully avoid straying from my lesson plans. The problem is that some of my students simply need more. So I asked How…

  • How can I find time to let some students practice coding more outside of class?
    • After school
    • During lunch
    • On my prep hour
    • During other periods in the school day with a PLTW iPad
    • During other periods in the school day using another device

This practice could be with ScratchJr or with the app they’ll use during PLTW. It could even be a different app, as long as they get the opportunity to practice the computational thinking they need to improve their coding skills and gain confidence.

Next Steps

Prior to completing this assignment, I had vaguely considered this issue but hadn’t gotten much further. By taking the time to work through this questioning process, I feel like I’ve taken my first steps toward solving a complex problem in my practice. My next step will be to talk to my classroom teachers to figure out how we can work together on behalf of our students.

References

Berger, W. (2016). A more beautiful question: The power of inquiry to spark breakthrough ideas. New York: Bloomsbury.

Johnson, L., Becker, S. A., Estrada, V., & Freeman, A. (2015). NMC horizon report: 2015 K-12 edition. Austin, TX: The New Media Consortium.

Images

All images in this blog post were created by Sarah Van Loo.

Curiosity Never Grows Old

Since I was little, I have loved to draw. I enjoyed everything about it. I wanted to learn how to make animated movies but never did. Now as an art teacher and technology teacher, I have access to great technologies that can help me. In fact, I spent last year teaching K-5 students coding in ScratchJr, Hopscotch, and Tynker.

This summer I decided to take what I already know about coding from those applications and do what I’ve always wanted to do: make an animated movie. I created this animation using Scratch.

I drew all the sprites and customized the background myself. The movie is only one minute long, but I am so proud of myself and thrilled with the result. I am delighted to share it here:

Images

All images and videos in this blog post were created by Sarah Van Loo.