This article presents the results of a 2016 classroom research study assessing the impact of open pedagogy on student skills mastery in English 101, a first-year undergraduate composition course at a two-year community college in North America. Ninety-two students in five sections used the same free OER course materials, but two sections were given traditional assignments (i.e. formal essays and grammar exercises) and the other three sections were given “open” assignments that involved designing and remixing open resources. Assignment results and other course metrics used to investigate the impact on student skills mastery yielded no statistically significant differences in performance between the student groups, which suggests that there may be no harm in shifting away from the traditional “disposable” assignment.
‘Open educational resources’ (OER) are digital teaching materials that are either in the public domain or explicitly licensed for certain kinds of reuse, remixing, and redistribution. ‘Open pedagogy’, by contrast, refers to the broader practice of leveraging the permissions of open content to redesign the educational experience, involving students in more engaged learning via assignments that include curation and remixing. In recent years, while scholarly research into the impact of OER adoption has deepened, the open content movement has begun a shift towards examining the ways that open pedagogy may or may not impact student success.
One of the core principles of ‘open’ pedagogy is the desire to transition away from what David Wiley (2013) calls the ‘disposable assignment’, which “students complain about doing and faculty complain about grading” and ultimately “add no value to the world”. A ‘disposable’ (or what some are now calling ‘throwaway’) assignment can be understood as anything a student is asked to do in an educational context that has no lasting value to anyone beyond a given grade in the limited context of a single course (or even a single module in a course). ‘Disposable’ assignments are graded and then nobody ever looks at them again. Conversely, an ‘open’ assignment provides renewable value outside of the individual educational context, either to other students or to the public. For example, if students in a history class need to be assessed on their knowledge of the factors leading up to the Second World War, an instructor might assign any number of projects to demonstrate this knowledge, including (to name only a very few) a timeline of events, an oral presentation, or even a simple written report. In each case, the project may very well serve as a delivery mechanism for the student’s mastery of the subject and, if desired, the project could be shared with other students. However, without 5R permissions (to retain, reuse, revise, remix, and redistribute) associated with the content of these projects, it would not necessarily be possible to share the work digitally and publicly in a manner consistent with existing copyright law. Student projects usually include many types of copyright-restricted content, from quoted passages to images, and such use is easily defensible under ‘fair use’ guidelines only as long as the work remains in a limited educational context and is not published. That’s where the value of the ‘disposable assignment’ ends. If, on the other hand, the assignment is designed with 5R permissions in mind, the work can exist publicly beyond the individual educational context and be built upon by future learners within and beyond the institution.
It is important to note that recently, Wiley and Hilton (2018) connected the use of the term ‘open pedagogy’ back to exploratory and collaborative learning practices dating back decades, and argued for the use of the term ‘OER-enabled pedagogy’ to describe the practices “only possible or practical in the context of the 5R permissions which are characteristic of OER,” such as those outlined above.
Since the beginning of the relatively young open content movement in education, scholarly studies about OER have focused primarily on models of institutional adoption and/or the efficacy and perception of open resources as replacements for traditional textbooks. There is presently very little research on ‘open pedagogy’ in practice, possibly because the term itself has only been around for a few years, but most likely because the OER movement has been primarily focused on establishing that open content is at least as good as traditional content. In the last few years, however, the discussion of what possibilities emerge for teaching and learning when students are able to interact with remixable open content has begun to occupy the core of the OER movement; notably, though, few studies examine the efficacy of these pedagogical practices at the form and scale of much of the extant OER research.
In recent years, pedagogical practice has become central to the open education movement. Knox (2013) pushed back against dominant discussions about the value and efficacy of OER by shifting focus to the capacity of learners, arguing that “the mere removal of perceived barriers to access” does little to disrupt the structures of power, in education and beyond, from which the movement claims to free students (p. 827). Among other things, Knox is saying that the movement has ignored the role of pedagogy in student learning, focusing instead on freedom from restricted access and burdensome cost while simultaneously claiming that OER empowers students to self-direct their learning experiences, without any evidence that such self-direction is actually happening or is even possible. Coupling this criticism with the focus of the OER-related studies outlined above, ‘open education’ does not mean much in terms of a transformation of pedagogy if studies only examine the impact an open textbook might have versus that of a traditional publisher’s textbook. Later that year, Wiley (2013) articulated the idea of ‘open pedagogy’ in a post on his blog at opencontent.org, offering an analogy to describe the use of OER in the way that traditional textbooks are used:
“It is like driving an airplane down the road. Yes, the airplane has wheels and is capable of driving down the road (provided the road is wide enough). But the point of an airplane is to fly at hundreds of miles per hour – not to drive. Driving an airplane around, simply because driving is how we always traveled in the past, squanders the huge potential of the airplane.”
The point is that, around 2013, interest in the pedagogical potential of open content used in education had begun to congeal—if only theoretically. The ability to retain, reuse, revise, remix, and redistribute content, the argument goes, allows for a more interactive and meaningful learning experience because students can contribute to the very creation of classroom learning tools that may be shared with peers and even the world.
Still, a broader pedagogical groundwork for understanding the potential of open pedagogy was needed. Hegarty (2015) proposed a theoretical model for the use of OER in open pedagogy, in which four of the “eight attributes” of effective OER use relate primarily to student activity at the center of the educational experience: “Learner Generated,” “Participatory Technology,” “Innovation and Creativity,” and “Sharing Ideas and Resources” (p. 4). The idea is that students engage in high-impact learning strategies when they become active participants in the generation of course content rather than passive consumers of it.
Recent scholarly research suggests that open pedagogical practices can provide additional resources for students, whether as optional or extra-credit work. Scott, Moxham, and Rutherford (2014) described several case studies of what they term ‘shadow modules’, which exist alongside a course’s traditional content but contain student-generated, openly licensed materials that are made available to subsequent student groups. The case studies involved upper-division anatomy courses, and the model employs a volunteer student module leader to help “arrange group meetings and tutorials” in shadow module sessions (p. 288). While only 20% (on average) of students actually attended these optional sessions, the materials they developed were shared with and used by all the other students (p. 291). Similarly, Wiley, Webb, Westin and Tonks (2017) demonstrated that student-created OER may be correlated with student skills mastery and that extra credit serves as some incentive for students to make their work available under an open license. Additionally, Grewe and Davis (2017) concluded that enrollment in an OER course correlates with greater student learning outcomes when prior academic performance is taken into account.
Whether or not successful applications of ‘open’ pedagogy in upper division and graduate courses translate to the first-year composition classroom at the community college remains to be seen. Drawing from my own experience advocating for OER across my district and facilitating OER faculty workshops, I can report that many instructors of first-year composition (if not all 100-level courses) are reluctant to leverage 5R permissions to transform their classrooms for fear that the introduction of a vastly-different pedagogy would risk further destabilizing the already precarious place in which first-year community college students find themselves. In other words, faculty may suspect that giving first-year students too much ‘freedom’ (to choose how they will complete an assignment and with what materials) is a bad idea because students at that level are not yet ready for the responsibility. Regular attendance, submission of traditional assignments, and in-class participation are constant struggles in the first-year composition classroom at the community college; introducing students to an ‘open’ pedagogy that is largely foreign to them (as it is not practiced in most elementary and high schools) can be seen as too much of a risk.
Furthermore, English faculty may be reluctant to eliminate the ‘throwaway’ assignment. Part of the purpose (with stress on part) of first-year composition is to prepare students for the kinds of closed-form writing they will likely be expected to do in other courses and possibly (depending on their field of study) in the professional world. Shifting away from ‘throwaway’ essay assignments to ‘open’ pedagogy, the skepticism goes, is likely to leave the students underprepared for other college courses (and maybe beyond).
The primary research question for this study was: Does switching to ‘open’ assignments from ‘throwaway’ assignments have a significant impact on student skills mastery?
In pursuit of an answer to this question, the students in five sections of English 101, all taught by the same instructor in the same semester and with access to the same openly-licensed course materials, were split into two groups. The control group was given the traditional assignments and the treatment group was given some ‘open’ assignments. Because of the experimental nature of the research, the number of student participants impacted, and the introductory level of the course, only a few relatively small adjustments to course content seemed appropriate for the treatment group. Changes were made only to two of five major course assignments, one at the end of a module on rhetorical analysis and the other involving individual writing improvement plans (Fig. 1). Course materials and instructor-student interaction were consistent between the control and treatment groups with the exception of these two assignments.
The rhetorical analysis module began in the fourth week of the semester and its primary learning objectives were to identify and evaluate the rhetorical components and appeals in a given text. The null hypothesis was that changing the major assignment at the end of the module from ‘throwaway’ to ‘open’ would have no effect on the students’ mastery of the learning objectives—students would simply be communicating their analyses in a different form.
The traditional summative assessment, which was assigned to the control group, asked students to demonstrate skills mastery by composing a unified, long-form essay containing a rhetorical analysis of a political speech of their choosing. This formal rhetorical analysis essay would easily qualify as a ‘throwaway’ assignment because it has no value to anyone beyond the demonstration of skills and subsequent receipt of a grade (not to mention that it is not particularly interesting to write or evaluate).
On the other hand, students in the treatment group were asked to identify a rhetorical situation common in their daily experiences (whether academic, professional, or personal) and then design a “learning tool” that could be shared to help others in that discipline or interest group understand the functions of rhetorical components and appeals in that specific situation. The actual form of their “learning tool” was not prescribed; students were encouraged to consider what form would be most appropriate for the context and audience they had identified. The tool was expected to contain all original content or appropriately used open content, so that in either case the final product could itself be licensed and shared publicly. Students were encouraged, but not required, to openly license their work. Some suggestions included designing an informational flyer, recording a brief video, and creating a slideshow. The course also contained an “Applied Rhetoric” wiki page to which students could choose to contribute if they were having trouble coming up with a general design. Any students contributing to the wiki did so with the understanding that the content would be licensed CC BY-SA.
A total of 92 students spread over five course sections participated in the study: 32 (in two sections) in the control group and 60 (in three sections) in the treatment group, though not all students completed the measured assessments. Because the researcher was teaching five sections of the course at the same time, it was not possible to split the students into equal groups. The differences between the groups were measured using the following metrics: (1) the number and accuracy of illustrative examples provided in submissions for the rhetorical analysis assignment; (2) scores on the quiz given at the end of the rhetorical analysis module; and (3) scores on the course’s final argumentative essay.
The last of these metrics was aimed not at performance on the rhetorical assignment itself but at performance on the course’s final critical assignment, a long-form argumentative essay. While the measures of examples and quiz performance sought to gauge student mastery of rhetorical analysis skills, the third measure was intended to determine whether or not there would be a difference in performance on a major essay assignment later in the semester, after the control group had gone through the experience of composing an additional essay and the treatment group had not. This was in response to the concern outlined above that transitioning from ‘throwaway’ to ‘open’ might debilitate students in a course where one of the objectives is to compose just such a closed-form essay (as disposable as it may be).
In the course of reviewing student submissions, the researcher tallied the number of examples used to illustrate claims about rhetorical components or appeals, as well as how many of those examples were (in the researcher’s estimation) accurate in their use. In the control group, examples took the form of paraphrases of, quotations from, or observations about the political speech chosen by the student. In the treatment group, paraphrases or quotations of hypothetical or suggested speech were common, but there tended to be more abstract descriptions of how a certain rhetorical component would work in a given situation; a vast majority of these students created the rhetorical examples themselves, since they were asked to invent a relevant rhetorical situation to analyze.
The number and accuracy of provided examples were used to calculate mean accuracy. In other words, instead of focusing on how many examples were given, the focus was on the proportion of given examples determined to be accurately illustrative of a particular rhetorical component or appeal (Fig. 2).
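As a concrete illustration of this metric, the following minimal sketch shows one way such per-student accuracy ratios and a group mean might be computed. It is not the study’s actual analysis script, and the tallies in it are hypothetical.

```python
# Minimal illustrative sketch (not the study's analysis script): computing
# per-student example accuracy and a group mean. All tallies are hypothetical.
given_examples    = [6, 4, 5, 7]   # examples each student provided
accurate_examples = [5, 3, 5, 5]   # of those, how many were judged accurate

per_student_accuracy = [a / g for a, g in zip(accurate_examples, given_examples)]
mean_accuracy = sum(per_student_accuracy) / len(per_student_accuracy)
print(f"mean accuracy: {mean_accuracy:.1%}")
```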
While students tasked with writing a ‘throwaway’ essay did average a greater number of given and accurate examples overall, and averaged a full five percentage points higher on accuracy, statistical testing identified this difference as not significant.
At the end of the rhetorical analysis module, students were given a thirteen-question quiz worth a total of twenty-five points. Quiz questions were all multiple choice or matching, and all focused on the basics of definition and identification of rhetorical components and appeals. As with the rhetorical analysis assignment itself, not all participating students completed the quiz. In the control group, 22/32 participated (i.e. took the quiz) and the mean score was 19.77. In the treatment group, 43/60 participated with a mean score of 20.42.
The boxplots in Figs. 3 and 4 illustrate the range and distribution of student scores in the treatment and control groups.
While the treatment group, given the ‘open’ assignment, demonstrated a tighter distribution of scores and a slightly better mean quiz score, a Wilcoxon Rank Sum test determined that the difference was not significant.
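For readers unfamiliar with the procedure, the sketch below shows how a Wilcoxon rank-sum comparison of two groups’ quiz scores might be run with SciPy. It is illustrative only; the score arrays are hypothetical placeholders, not the study’s data.

```python
# Minimal illustrative sketch (not the study's analysis script): a Wilcoxon
# rank-sum test comparing two groups' quiz scores. Scores are hypothetical.
from scipy.stats import ranksums

control_scores   = [21, 18, 23, 17, 20, 22, 19, 16]
treatment_scores = [20, 22, 21, 19, 23, 20, 21, 22]

statistic, p_value = ranksums(treatment_scores, control_scores)
print(f"statistic = {statistic:.3f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g. 0.05) indicates that the group
# difference is not statistically significant.
```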
In the course’s final module, students were assigned a closed-form argumentative essay in which they were expected to argue a position about language usage in response to an issue raised in one of the module’s three primary texts. Nothing was particularly ‘open’ about this assignment, though the assigned texts were freely accessible to enrolled students: the first pages of Italo Calvino’s novel If on a Winter’s Night a Traveler (available for review via Google Books), sections of David Foster Wallace’s essay “Tense Present” (available through the institution’s subscription to Academic Search Premier), and Toni Morrison’s speech at the 2008 PEN Literary Gala (video available via YouTube and also published in Burn This Book). All students were provided with the same links and assignment details.
Again, not all participants submitted the assignment. From the control group, 22 of 32 participated, with a mean score of 165.7 (83% / B). From the treatment group, 38 of 60 participated, with a mean score of 167.3 (84% / B). The difference was marginal, and a Wilcoxon Rank Sum test indicated that it was not statistically significant, as illustrated in Figs. 5 and 6.
Qualitatively speaking, a major confounding factor for those in the treatment group was that they were expected not only to demonstrate skill at rhetorical analysis but also to apply that skill in their choice of form and design for the tool. Put briefly, some students in the treatment group found the task exceedingly difficult. After discussion with several of these students, the researcher concluded that the assignment was more difficult not only because most had never been asked to complete an assignment like it, but because it required higher-order cognitive tasks. Rather than “simply” explaining their rhetorical analysis in the (relatively) familiar closed form of an essay, students in the treatment group were making their own decisions about form and content. The response to this additional challenge was, unsurprisingly, varied.
Some students met the task with great interest, in part because they did not have to write an essay and in part because the learning tool assignment permitted the creative use of students’ individual skills. By far, the most common choice of form for the learning tool was a set of slides in MS PowerPoint, often including detailed explanation in the slide notes. Some students recorded and edited videos, while others simply shot videos using mobile devices. One student used his knowledge of Wix web design software to create a well-structured website. For many students, the opportunity to demonstrate their analysis in a form with which they were already familiar seemed a pleasant alternative to a full-length formal essay. Considering that the two groups fared the same on the semester’s final essay, it seems there was no reason to burden them with yet another essay assignment when the same skills mastery could be displayed in a more familiar form.
Of course, many students found this confounding factor frustrating, either out of outright resistance to the idea of doing something different or (more often) because of the extra considerations required by the very freedoms the open assignment afforded. To clarify, ‘outright resistance’ was not common; most students accepted the task within the typical range of first-year composition students’ reactions to assignments, from tired resignation to energetic motivation. However, a small few were vocally opposed to the very concept of the ‘open assignment’ during lab-time consultations. One student literally threw up her hands and asked why she couldn’t just type out her analysis in MS Word as she does with homework in every other class. A couple of other students in the treatment group insisted on writing essays. Fortunately, this kind of insurgency was rare. When students found the task difficult, the researcher was often able to help brainstorm ideas and provide feedback during lab time. Not unlike the process of drafting an essay, several student projects went through multiple phases of transformation before they were anywhere near finalized. In some cases, students were unable to meet one of the assignment’s fundamental challenges: to create something that might have actual value outside the classroom. It was a tricky challenge that sometimes puzzled the researcher, too, and it did not always work out. One interesting example was a student intending to pursue medicine who had elected to create a tool informing ER employees about how the principles of rhetoric may be applied when communicating with tense, scared, and impatient patients. He insisted on making a set of informative slides, but when pressed about what real-world value a set of slides would have to ER employees, neither he nor the researcher had any idea. After some discussion, it was determined that a medium-sized informative poster would be appropriate, the kind of thing one might find in a break room. He liked the idea but did not, in the end, submit the assignment.
It may be that some of this difficulty stemmed from a dearth of models by which students could have seen examples of various successful projects. A version of the assignment had been piloted in two sections of English 102 the summer preceding the study, so many issues of prompt and instructional clarity had already been worked out (these sections did not participate in the study). However, students in the treatment group had only two models left over from this pilot, and both were handouts designed by students for students with different learning objectives in mind (one addressed plagiarism and the other the “red herring” logical fallacy). As this learning tool assignment is used in subsequent semesters, the number and variety of quality models will likely grow.
In the first week of the semester, students were given a grammar and mechanics diagnostic consisting of ten 10-question quizzes focused on some of the more common types of errors that student writers make (e.g. comma splices, misuse of semicolons, subject-verb disagreement). Additional feedback related to grammar and mechanics was given to students in the form of marginal comments on the first and third writing assignments when necessary. Approximately three-quarters of the way through the semester, students were assigned different versions of a “Writing Improvement Plan”. Both the control and treatment groups were asked to look back at their diagnostic results and also consider any additional feedback they’d been given on their essays. If a student scored 6/10 or lower on any of the ten quizzes in the diagnostic, they were expected to work on that specific error. The control group was provided access to free, online (but copyrighted) exercises at chompchomp.com and owl.english.purdue.edu and was expected to complete and submit exercises corresponding to the errors they needed to work on. The treatment group, on the other hand, was given four openly licensed resources to explore: two full courses (Saylor’s ENGL001 and Lumen Learning’s English Composition 1), an open composition textbook (McLean’s Writing for Success), and a mechanics primer and workbook (Aragona’s Sentence-Level Essentials). Students in the treatment group were told to find content in those resources that would help them improve in the areas indicated by the diagnostic and were encouraged to remix the content to personalize it to their needs. They were then asked to submit a description and explanation of this “personalized toolkit,” including what content from which sources they might use and why they chose that content. They were not required to actually assemble the toolkit but were told that, if they did, they would receive feedback. In other words, the treatment group did no exercises; rather than grammar drills, they reviewed a variety of open content and made evaluative decisions about how they might remix that content to best facilitate their growth.
The diagnostic module students completed at the beginning of the semester was exactly replicated at the end of the semester: students took the same test again. The only difference was that students in the control group had been assigned prescribed drill exercises, whereas students in the treatment group had been assigned to review and evaluate the open content and explain how they would use it. Students in both groups had access to the open content, but only the treatment group was required to actually view it (and the researcher suspects that very few, if any, students in the control group utilized the linked open content when not required to do so). Only 8/32 (25%) students in the control group took both the pre- and post-tests, compared to 20/60 (33%) in the treatment group. Both groups saw statistically significant (p < .001) gains in skills mastery over the semester (using a nonparametric Wilcoxon Rank Sum test), as illustrated in Fig. 7.
Interestingly, while the treatment group began almost a full ten points below the control, they improved much more than the control and ended at almost the same average. However, multivariate tests comparing the effects of time and treatment between the groups determined that this difference was only “approaching” significance (p=.071).
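As a hedged illustration of how such a time-by-treatment comparison might be modeled (the study’s own multivariate procedure is not specified beyond the reported p-value), the sketch below fits a mixed-effects model with a random intercept per student, using statsmodels. All data are hypothetical placeholders, not the study’s scores.

```python
# Illustrative sketch (an assumption, not the study's actual procedure):
# estimating a time-by-treatment interaction on pre/post diagnostic scores
# with a mixed-effects model and a random intercept per student.
# All data below are hypothetical toy values; real analyses need larger samples.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group": ["control"] * 6 + ["treatment"] * 6,
    "time": ["pre", "post"] * 6,
    "score": [72, 80, 70, 78, 75, 82, 62, 79, 60, 77, 64, 80],
})

# The time:group coefficient estimates whether the treatment group's
# pre-to-post gain differs from the control group's gain.
model = smf.mixedlm("score ~ time * group", df, groups=df["student"]).fit()
print(model.summary())
```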
Future sections of these courses given the same assignment will have more models to serve as examples, which may help to mitigate the confounding factor of designing the tool from the ground up. Furthermore, future sections will also have the option of improving on the content in the “Applied Rhetoric” course wiki, which may have unforeseen consequences. In any case, this study’s results indicate that the open assignment may be a viable option in the pursuit of student skills mastery.
While the shift to a renewable assignment in this study yielded no significant difference with respect to student skills mastery, it is possible that this reflects a limitation of the study’s design itself. As noted above, the renewable assignment in the rhetorical analysis module was considered more difficult than a traditional essay, perhaps because rather than being presented with a familiar form (i.e. an essay) and told to fill it with a demonstration of their knowledge, students were expected to identify a real-world situation and design a tool that could be used in that situation to explain the concepts of rhetoric in context. These tasks are arguably much more complicated, while the summative assessments used to compare the impact of the differing pedagogies focused on simpler skills, such as the ability to identify and describe a concept rather than apply it directly to the real world. Future iterations of this research should consider ways to design the assessments so that more complex skills are measured.
While more challenging, the renewable rhetoric assignment nonetheless provided students the opportunity to use prior knowledge and extracurricular skills in the demonstration of their rhetorical prowess, which some students found exciting and others found frustrating.
One arguably significant finding in this study was the statistically insignificant difference in grammar quiz performance between students in the treatment group, who were simply given a few open resources and told to explore, and students in the control group, who were assigned practice drills on the error types they had been unable to identify in the pretest.
Despite the experiment’s many flaws, the results show that, in this semester with these students sharing the same OER and the same instructor, moderate shifts toward open pedagogy had no statistically significant impact on skills mastery. In other words, in this case it may be true that I did no harm by disposing of the ‘disposable assignment’.
Grewe, K.E., & Davis, W.P. (2017). The impact of enrollment in an OER course on student learning outcomes. The International Review of Research in Open and Distributed Learning, 18(4). https://doi.org/10.19173/irrodl.v18i4.2986
Hegarty, B. (2015). Attributes of open pedagogy: A model for using open educational resources. Educational Technology. Retrieved from https://commons.wikimedia.org/wiki/File:Ed_Tech_Hegarty_2015_article_attributes_of_open_pedagogy.pdf
Knox, J. (2013). Five critiques of the open educational resources movement. Teaching in Higher Education, 18(8), 821–832. https://doi.org/10.1080/13562517.2013.774354
Scott, J.L., Moxham, B.J., & Rutherford, S.M. (2014). Building an open academic environment – a new approach to empowering students in their learning of anatomy through ‘Shadow Modules’. Journal of Anatomy, 224(3), 286–295. https://doi.org/10.1111/joa.12112
Wiley, D. (2013, October 21). What is open pedagogy? [Blog post]. iterating toward openness. Retrieved from https://opencontent.org/blog/archives/2975
Wiley, D., & Hilton III, J.L. (2018). Defining OER-enabled pedagogy. The International Review of Research in Open and Distributed Learning, 19(4). https://doi.org/10.19173/irrodl.v19i4.3601
Wiley, D., Webb, A., Westin, S., & Tonks, D. (2017). A preliminary exploration of the relationships between student-created OER, sustainability, and student success. The International Review of Research in Open and Distributed Learning, 18(4). https://doi.org/10.19173/irrodl.v18i4.3022