Category: Education

Applying Learning Theories to My Teaching Practice

This summer in CEP 800, Learning in School and Other Settings, I had the opportunity to reflect on several prominent learning theories and how they impact my own teaching practice. Based on that reflection, I created a Prezi.

For additional details about all that you see in the Prezi, please check out this document.

To interact with the Prezi, click Present in the window above.  Use the double arrows to make it full screen, or interact with it in the small window. You can use the left and right arrows on the screen or on the keyboard to move through the animations, or you may click around and look at aspects of the presentation that most interest you.

Assessment Rubric 4.0: Including Technology and Universal Design for Learning

Over the past six weeks, I have been developing and revising a rubric by which to assess other assessments. Here are links to previous iterations and the blog posts that I wrote about them:

This week’s final iteration is Rubric 4.0, which I have updated to include criteria specifying the importance of a technology component in assessment and Universal Design for Learning (UDL). Universal Design for Learning stresses the importance of providing multiple means of engagement, representation, and action and expression to make education accessible for all students (Meyer, Rose, & Gordon, 2014).

References

Meyer, A., Rose, D.H., & Gordon, D. (2014). Universal design for learning: Theory and practice. Wakefield, MA: CAST.

Iterations of an Assessment Rubric

“To understand is to be able to wisely and effectively use—transfer— what we know, in context; to apply knowledge and skill effectively, in realistic tasks and settings” (Wiggins & McTighe, 2005, p. 7). An authentic performance task is one way to assess students’ ability to transfer what they know (Wiggins & McTighe, 2005). It only makes sense, then, that one assignment in my Assessment class is to create a rubric by which to judge other assessment instruments.

I began this assignment two weeks ago with Rubric 1.0, which you can read about in an earlier blog post. In that first iteration, I identified three criteria for quality assessments: direct and specific feedback, transparent learning targets, and a self-assessment component.

In this second iteration, Rubric 2.0, I added two criteria for quality assessments: the assessment requires only target knowledge, skills, and abilities (KSAs) to complete; and the assessment requires a transfer of knowledge to demonstrate understanding.

Requires Only Target Knowledge, Skills, and Abilities (KSAs) to Complete

One approach to creating valid and fair assessments is to require only target knowledge, skills, and abilities (KSAs) to complete the assessment. Assessment designers first identify what evidence is needed to judge whether students have demonstrated specified aspects of learning. After determining which KSAs are required, designers then examine the assessment tasks to determine whether any unwanted, non-target KSAs, such as language or math skills, are also required to complete the assessment. If non-target KSAs are included, the assessment will yield results about both the target and non-target KSAs (Trumbull & Lash, 2013). Therefore, non-target KSAs should be eliminated.

Assessment Requires Transfer of Knowledge to Demonstrate Understanding

As stated in the introduction, a well-crafted assessment should include an authentic performance task that assesses students’ ability to transfer what they know (Wiggins & McTighe, 2005). The assessment tool should clearly describe criteria for degrees of understanding. Understanding should be assessed separately from other traits, like mechanics, organization, and craftsmanship. According to Wiggins and McTighe (2005), those other traits should be assessed in a separate rubric, or all of the traits should be assessed in a grid-style rubric.

Conclusion

Eventually, my Rubric 2.0 will become Rubric 3.0, and finally, Rubric 4.0. By then, it will include ten carefully selected criteria for judging assessment instruments. I look forward to learning more and sharing those rubrics in future posts.

References

Trumbull, E., & Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco, CA: WestEd. Retrieved from www.wested.org/online_pubs/resource1307.pdf

Wiggins, G. P., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.

Assessing My Own Assessment

Assessment is an important topic in education, with teachers, administrators, parents, students, and policymakers all staking a claim to the results of various types of assessments (NWEA, 2015).

Assessment can be used to inform teaching and provide feedback to students. When used effectively, it can “support and enhance learning” (Shepard, 2000, p. 4).

Testing is just one form of assessment. Drawing by Sarah Van Loo, 2017.

In an effort to improve my assessment practices, I critically examined one of my own assessments. First, I chose three elements that “make it possible for assessment to be used as part of the learning process” (Shepard, 2000, p. 10).  Then I began drafting a rubric with which to assess other assessments, Rubric 1.0. As the name implies, this rubric is a work-in-progress.

Rubric for an Art Project

The word assessment can refer to both the instrument and the process (MAEA, n.d.). The assessment tool that I chose to examine is a rubric for a comic strip. Although the last time I used this tool was a few years ago, I created it in a format that I often use for middle school art rubrics, so I think it is still useful to examine.

The assessment process was a project: each student in my middle school art class created a comic strip. The purpose of this assessment was to evaluate students’ understanding of craft, character development, story, and the basic elements of a comic strip.

When I created this assessment tool, I made the assumption that my students were able to read and interpret each of the criteria and descriptions. I also made the assumption that my students understood the vocabulary used in the assessment tool.

Examination of My Comic Strip Rubric

Assessment doesn’t have to be a monster. Drawing by Sarah Van Loo, 2017.

In examining my rubric, I assessed whether it met the three criteria I used to create Rubric 1.0: feedback to students is direct and specific, learning targets are transparent, and it includes a component of self-assessment by the student.

Feedback to Students is Direct and Specific

According to Black and Wiliam (1998), feedback to students should be direct and specific, giving advice to students so they know what can be improved. This helps students recognize their own ability to improve.

In my experience, students sometimes view themselves as “talented” or “not talented.” Specific feedback about their own performance helps them develop a growth mindset and learn that they can improve regardless of where they started.

The comments section of my assessment tool offers a space to give specific feedback to students. If the teacher only circles the pre-written descriptions and does not use the comments section, students may view the feedback as vague.

Learning Targets are Transparent

Students should have access to the criteria by which they will be graded, providing them with the opportunity to strive for excellence and the ability to see the “overarching rationale” of classroom teaching (Black & Wiliam, 1998, p. 143).

I have noticed that when students have clear expectations laid out for them, they ask far fewer questions. They do not need to ask or guess what quality work looks like because clear guidelines have already been established.

The comic strip rubric sets forth clear expectations for quality of work, quantity of work, and use of time in class. It is possible that more elements of a good comic strip could be added, but this rubric sets forth standards for excellent work, as well as work that could be improved.

Includes a Component of Self-Assessment by the Student

When students assess their own work, the criteria of the assignment and feedback from others become more important than the grade alone. Students who assess their own work usually become very honest about it and are prepared to defend their work with evidence (Shepard, 2000).

When students assess their own work, they use what they discover to improve it. I have noticed that they iterate on their projects and make improvements without prompting.

The comic strip rubric allows for student self-assessment, providing one bonus point for doing it. In my experience, this provides an incentive for some students. Other students do not see the inherent value and therefore pass on assessing themselves. Rather than making it an optional bonus point, it could be a required element of the rubric.

Conclusion

At this point, the comic strip rubric does include the elements of Rubric 1.0. As I revise Rubric 1.0, though, I expect to discover ways to improve my comic strip rubric.

References

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-144.

MAEA. (n.d.). CEP 813 module 1.1 introduction. Retrieved from https://d2l.msu.edu/d2l/le/content/585048/viewContent/5241064/View

NWEA. (2015). Assessment purpose. Retrieved from https://assessmentliteracy.org/understand/assessment-purpose/

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.

Beautifully Questioning My Teaching Practice

As I prepared to do this final project for my three summer classes, I was stuck. These classes have been exhilarating, challenging, and rewarding. Sometimes there were tears, both frustrated and proud.

Today I created something wonderful and hoped that the excitement from it would fuel me through this post. It didn’t. After so many weeks of pushing myself so hard, my brain was simply stuck.

So I looked at Twitter, read some news, looked at the ceiling. Nothing. One of my objectives for the assignment was to apply Warren Berger’s Why, What If, and How questioning methodology from A More Beautiful Question (Berger, 2016) to my own practice. So I eventually, begrudgingly started with that.

First-graders love coding using ScratchJr.

I began by creating a list of Why questions related to my teaching. I teach Project Lead The Way (PLTW) to K-5 students in two schools: engineering concepts to all of them, and coding to everyone except my kindergarteners.

Unsticking the Lid

As soon as I started asking questions, my imagination took flight. As Frances Peavey once said, a good question is like “a lever used to pry open the stuck lid on a paint can” (Berger, 2016, p. 15). That was it! I simply needed to start asking questions, and I was unstuck, just like the lid of the paint can.

As Berger suggests, I started by asking Why questions. If we’re paying attention, we ask Why when we encounter a suboptimal situation (Berger, 2016). Although I love my job and most of my students enjoy my classes, there are some who just don’t. Those students led me to ask:

Why?
  • Why do some students keep asking if they’re doing the problem “right”?
  • Why do some of my students think they can’t code?
  • Why do some of my students refuse to participate?
What If?

As I considered my Why questions, I focused on the fact that “Integrating coding into classes is being perceived by many as a way to stimulate computational and creative thinking” (Johnson, Becker, Estrada, & Freeman, 2015, p. 21). Therefore, I decided to address the question: Why do some of my students think they can’t code?

The ScratchJr coding environment is user-friendly for young students, but still offers the opportunity to learn computational thinking.

Pondering this question, I realized that my first- and second-grade students have great confidence when it comes to coding. It is my third- through fifth-grade students who are more likely to struggle.

With my Why question in mind, I began to ask What If. During this time of creative, wide-open questioning, I asked What If questions to help me consider possibilities for change (Berger, 2016).

  • What if I let my older students start with ScratchJr (typically only first and second graders use ScratchJr)?
  • What if I made time for Hour of Code or other warm-up activities before starting on our unit together?
  • What if I ran an after-school coding club?
  • What if I worked more closely with the media specialist to coordinate coding lessons?
How?

Asking How is about focusing on making progress toward a solution, about deciding which ideas to pursue (Berger, 2016). One of the great conundrums of my schedule is that I never seem to have enough time.

I co-teach, pushing materials into each classroom, typically for a couple of weeks at a time. When I’m in a class, I have so much to do to complete a module, and I don’t want to waste any of the classroom teacher’s time. Therefore, I carefully avoid straying from my lesson plans. The problem is that some of my students simply need more. So I asked How…

  • How can I find time to let some students practice coding more outside of class?
    • After school
    • During lunch
    • On my prep hour
    • During other periods in the school day with a PLTW iPad
    • During other periods in the school day using another device

This practice could be with ScratchJr or with the app they’ll use during PLTW. It could even be a different app, as long as they get the opportunity to practice the computational thinking they need to improve their coding skills and gain confidence.

Next Steps

Prior to completing this assignment, I had vaguely considered this issue but had not gotten much further than that. By taking the time to work through this questioning process, I feel like I’ve taken my first steps toward solving a complex problem in my practice. My next step will be to talk to my classroom teachers to figure out how we can work together on behalf of our students.

References

Berger, W. (2016). A more beautiful question: The power of inquiry to spark breakthrough ideas. New York: Bloomsbury.

Johnson, L., Becker, S. A., Estrada, V., & Freeman, A. (2015). NMC horizon report: 2015 K-12 edition. Austin, TX: The New Media Consortium.

Images

All images in this blog post were created by Sarah Van Loo.

Curiosity Never Grows Old

Since I was little, I have loved to draw. I enjoyed everything about it. I wanted to learn how to make animated movies but never did. Now, as an art and technology teacher, I have access to great technologies that can help me. In fact, I spent last year teaching K-5 students coding in ScratchJr, Hopscotch, and Tynker.

This summer I decided to take what I already know about coding from those applications and do what I’ve always wanted to do: make an animated movie. I created this animation using Scratch.

I drew all the sprites, customized the background, and put it all together myself. It’s only one minute long, but I am so proud of myself and thrilled with the result. I am delighted to share that movie here:

Images

All images and videos in this blog post were created by Sarah Van Loo.

Questioning the Wicked Problem of Teaching Complex Thinking

Each year The New Media Consortium reports on key trends, significant challenges, and important developments in the field of educational technology. Among the significant challenges of 2015 was teaching complex thinking (Johnson, Becker, Estrada, & Freeman, 2015). The problem itself is complex enough that we could call it a “wicked problem”: Koehler and Mishra describe wicked problems as those that “have incomplete, changing and contradictory requirements” (as cited in Week 4 – Learn, 2017).

“Rodin’s The Thinker” by Andrew Horne, retrieved from https://commons.wikimedia.org/w/index.php?curid=15582363, is licensed under Public Domain.

Because of the changing nature of wicked problems, it is impossible to come up with a perfect solution. Instead, my team (Laura Allen, Guadalupe Bryan, Alex Gorton, and I) worked to investigate and offer a “best bad idea” in response to the problem of teaching complex thinking (as cited in Week 4 – Learn, 2017).

We approached this wicked problem from the perspective of A More Beautiful Question. We hoped to ask “an ambitious yet actionable question that can begin to shift the way we perceive or think about something – and that might serve as a catalyst for change” (Berger, 2016, p. 8).  Although our problem is unsolvable, we can still be a catalyst for change – if we know what to do.

Using the method presented in A More Beautiful Question, we asked Why, What If, and How. The most challenging aspect of this approach was giving time and thoughtful consideration to each phase in order to ask good questions. Berger points out that we’re deluged with answers, but “to get to our answers, we must formulate and work through the questions ourselves” (Berger, 2016, p. 3).

In our shared planning document, we brainstormed and took notes. Together, we asked 55 Why questions. When we ask Why, it helps to approach the problem from an inquisitive, almost childlike perspective. This led to our beautiful, driving question:

How are teachers addressing the complex thinking skills necessary for students to become productive and innovative 21st-century learners?

I needed to give our complex problem the consideration it deserved. Before moving on to the What If phase, I crafted this infographic about the complexity of our problem:

After arriving at our driving question, we responded by asking What If. When we ask What If, we use creative, divergent thinking to expand the possibilities to explore.

Around this time, we surveyed other educators in our professional learning networks (PLNs) about our wicked problem. Based on the results of the survey and on the What If questions we asked, our team singled out one What If question:

What if students had more freedom/choice in developing their complex thinking skills?

With our survey results in and our What If question chosen, we investigated current research around the question of How. We researched four current educational trends related to student choice: project-based learning, genius hour, authentic inquiry, and student choice in assessments.

Check out our ThingLink below to see our group’s presentation of this entire process. We describe our methods, survey and results, and practical ways to introduce student choice in a 21st-century classroom. Don’t miss our references in the lower left if you want to learn even more. (If your browser doesn’t allow you to click on all the links, go directly to the ThingLink site.)

 



Reflections

Collaborative teamwork is a 21st-century skill that our group used to great effect. Even though we were never all in the same room, tools like Zoom, text messaging, Google Docs, and email helped us undertake this complex project.

The process was challenging at times, but the results were worth it. I am excited to try out some of our suggestions in my own class this fall.

References

Berger, W. (2016). A more beautiful question: The power of inquiry to spark breakthrough ideas. New York: Bloomsbury.

Johnson, L., Becker, S. A., Estrada, V., & Freeman, A. (2015). NMC horizon report: 2015 K-12 edition. Austin, TX: The New Media Consortium.

Week 4 – Learn. (2017, July 22). Retrieved from http://www.msuedtechsandbox.com/MAETely1-2017/week-4-wicked-problems/week-4-learn/

Images

Unless otherwise captioned, all images and videos in this blog post were created by Sarah Van Loo or the students of MAET Year 1.