Establishing a MOOC quality assurance framework – a case study
University of Roma Tre (Italy)
antonella.poce@uniroma3.it, francesca.amenduni@uniroma3.it,
mariarosaria.re@uniroma3.it & carlo.demedio@uniroma3.it
Abstract
The rapidly growing number of learning materials and repositories makes it increasingly difficult to find the most relevant and highest-quality resources to be integrated into teaching and learning offers. Thus, effective quality assessment tools are more and more needed. In the present paper, a case study focusing on quality assurance in a Virtual Mobility (VM) international project is presented. VM stands for ICT-supported activities, organised at higher education institutional level, that make possible or facilitate international, collaborative experiences in a context of teaching and/or learning. Different approaches were combined to ensure the quality of a specific MOOC and of the OERs created to promote VM. Three main macro-indicators were identified for OER evaluation: 1. Quality, 2. Appropriateness, and 3. Technical aspects. Each project partner was invited to search, select and peer-assess OERs related to the skills necessary to be engaged in VM. First results of the peer-review activity and future directions to ensure the quality of the OpenVM OERs and MOOC are presented.
Keywords: Virtual mobility, OER, MOOCs, quality assurance framework.
Reception date: 25 July 2019 • Acceptance date: 7 October 2019
Introduction
Open Education is understood as a mode of carrying out education using digital technologies to provide alternative and less restrictive access routes to formal and non-formal education (Brown, 2008). This perspective is broad enough to enable a comprehensive view, thus encompassing, for instance, Open Educational Resources (OERs), Massive Open Online Courses (MOOCs), and the recognition of open learning (Stracke & Tan, 2018). According to the OECD definition (2007), Open Educational Resources (OERs) are “digital learning resources offered online freely (without cost) and openly (without licensing barriers) to teachers, educators, students, and independent learners in order to be used, shared, combined, adapted, and expanded in teaching, learning and research”. OERs are not only course components: they can be entire courses, a museum collection, an open access journal or a reference work. Over time, the term has come to cover also content management software and content development tools. Finally, OERs include implementation resources such as standards and licensing tools for publishing digital resources, which allow users to adapt resources in accordance with their cultural, curricular and pedagogical requirements. Having said that, the term ‘OER’ is not synonymous with online learning, eLearning or mobile learning. Many OERs are also printable. What makes an Educational Resource “Open” is its
“free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself…” (Chan et al., 2002).
OERs are free and can be adapted and remixed; thus they can enhance collaboration and networking, fostering a sharing culture and respect for various cultures and beliefs (Tappeiner, DiSanto, & Lyons, 2019). According to Tuomi (2006), openness includes social and technical features: the social domain concerns the freedom to use, contribute to and share the resources. Constraints on the social domain can include copyright, the price of access, or accessibility. Regarding the copyright challenge, the Creative Commons licence is the best-known and most often used open licence at present and offers a number of sharing options. Openness also means accessibility, and it can depend on individual capabilities; for example, course contents may be freely available in a language the user does not understand, or the user may have a disability that precludes them from using the content.
A systematic approach to OER quality assessment is particularly important for making decisions about which existing resources to include in a learning path. The rapidly growing number of learning materials and repositories makes it increasingly difficult to find the most relevant and highest-quality resources. In addition, overlapping and competing standards, the size of the search pool and the quality of metadata are issues that different initiatives in the field of Open Education have tried to solve (Dietze et al., 2013; St. Lifer, 2018). Thus, there is an urgent need for effective search, discovery, and quality assessment tools. Quality can be defined as “[…] appropriately meeting the stakeholders’ objectives and needs which is the result of a transparent, participatory negotiation process within an organization” (Pawlowski, 2007). In the context of OERs, quality can, for example, mean that a teacher finds a suitable resource for his/her teaching. There are several alternative ways of approaching quality management in Open Education. Quality assurance can be a centralized or decentralized process, and the process may be open or closed (OECD, 2007; Jansen, Rosewell, & Kear, 2017). A common tool for the evaluation of OERs is social ranking, which can be described as a form of crowd-sourced peer-review (Camilleri, Ehlers, & Pawlowski, 2014). The present paper describes and critically discusses the OER and MOOC quality approach adopted in the Erasmus+ project “OpenVM: Opening Education for Developing, Assessing and Recognising Virtual Mobility Skills in Higher Education”1.
Ensuring OER quality in the Open Virtual Mobility Project
Virtual mobility (VM) stands for ICT-supported activities, organised at higher education institutional level, that support or facilitate international, collaborative experiences in a context of teaching and/or learning (Tur, Urbina, Firssova, Rajagopal, & Buchem, 2018). Virtual Mobility has a great potential to contribute to internationalization, innovation and inclusion in higher education. The barriers to physical mobility of educators and students, such as high costs and socio-economic, political and health-related issues, can be dramatically reduced by adding the virtual component to mobility, making mobility accessible to everyone (EuroPACE, 2010). The OpenVM project is an Erasmus+ strategic partnership dedicated to creating accessible opportunities for the achievement of virtual mobility skills and to ensuring a higher uptake of virtual mobility in higher education in Europe. Despite numerous virtual mobility initiatives and projects in the past years, the uptake of virtual mobility in higher education is still low and its possibilities remain unknown to a large number of educators and students in Europe. Higher education teachers and students, but also internationalization officers and other institutional stakeholders, need the skills, confidence and readiness to start, implement and develop virtual mobility actions.
The project lasts three years (2017-2020) and aims at supporting higher education teachers and students in developing, assessing and recognising the skills needed to design, implement and participate in virtual mobility activities in line with Open Education principles.
The key outcome of the OpenVM project is the Virtual Mobility Learning Hub2, a central reference point for the achievement, assessment and recognition of virtual mobility skills. The VM Learning Hub will apply innovative tools and methods (such as open credentials, evidence-based assessment and matching algorithms for learning groups) and provide a set of open educational resources (OERs), a massive open online course (MOOC) and guidelines to support designing, implementing and participating in virtual mobility in higher education.
There is widespread skepticism about the quality of MOOCs and the learning methodologies used, and there is evidence that supports this skeptical view (Margaryan, Bianco, & Littlejohn, 2015; Lowenthal & Hodges, 2015). Thus, the Quality Assurance Framework (QAF) is one of the pivotal aspects for the success of Open Virtual Mobility (Atenas & Havemann, 2014). Levels of quality can be defined from the most general aspects to the more specific ones (Fig. 1). Although the quality levels interact with one another, in this paper we focus on two of the five levels presented in Figure 1: the quality of the courses (in our case, MOOCs) and the quality of individual OERs.
Figure 1: Levels of quality in Open Education – Adapted from Camilleri, Ehlers, & Pawlowski (2014).
In the context of the OpenVM project, a Massive Open Online Course named the OpenVM MOOC has been developed in order to promote the students’ and teachers’ skills necessary to be involved in VM. The OpenVM MOOC is structured into eight miniMOOCs, corresponding to the eight key skills and related content necessary to be engaged in Virtual Mobility (Firssova & Rajagopal, 2018): 1. Intercultural Skills; 2. Collaborative learning; 3. Autonomy-driven learning; 4. Networked Learning; 5. Media and digital literacy; 6. Active self-regulated learning; 7. Open mindedness; 8. Knowledge of Virtual Mobility and Open Education. This knowledge and these skills were identified by applying a group concept mapping methodology involving 49 experts in the field of virtual mobility and/or open education, with experience in higher education as university professors or in education management and support (Firssova & Rajagopal, 2018).
Three levels are then proposed for each miniMOOC:
- foundation level: focused on knowledge acquisition;
- intermediate level: focused on knowledge application in a collaborative learning environment;
- advanced level: focused on self-reflection and meta-reflection.
Each miniMOOC has a pre-assessment activity: participants are required to fill in a quiz and, according to the score they obtain, they are directed to the foundation, intermediate or advanced level. Each combination of a miniMOOC and a level is defined as a subMOOC. Thus, the OpenVM MOOC is composed of 24 subMOOCs, i.e. 8 miniMOOCs times 3 levels (Figure 2). Each subMOOC has different forms of assessment. The foundation and intermediate levels mainly use quizzes (e.g. multiple choice, true or false and drag-and-drop exercises), whilst the advanced level also includes e-portfolio and peer-assessment activities. At the end of each subMOOC, participants obtain a badge that certifies the skills acquired in that specific subMOOC.
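As a purely illustrative sketch (not part of the project deliverables), the following Python snippet enumerates the 24 subMOOCs and shows how a pre-assessment score might be mapped to an entry level; the score range and cut-off thresholds are assumptions, as the actual quiz scoring rules are not reported here.

```python
from itertools import product

SKILLS = [
    "Intercultural Skills", "Collaborative learning", "Autonomy-driven learning",
    "Networked Learning", "Media and digital literacy", "Active self-regulated learning",
    "Open mindedness", "Knowledge of Virtual Mobility and Open Education",
]
LEVELS = ["foundation", "intermediate", "advanced"]

# 8 miniMOOCs x 3 levels = 24 subMOOCs
SUBMOOCS = [f"{skill} ({level})" for skill, level in product(SKILLS, LEVELS)]
assert len(SUBMOOCS) == 24

def route_participant(score: float) -> str:
    """Map a pre-assessment quiz score (assumed 0-100) to an entry level.

    The thresholds below are hypothetical placeholders, not the project's
    actual cut-off scores.
    """
    if score < 50:
        return "foundation"
    if score < 80:
        return "intermediate"
    return "advanced"

print(route_participant(65))  # -> intermediate
```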
Each miniMOOC will contain approximately 9 Open Educational Resources (3 for the foundation level, 3 for the intermediate level and 3 for the advanced level). In the OpenVM MOOC, OERs are the study materials that participants can read, listen to, download and re-use for their personal purposes. OERs include slides, supplementary audio files, URLs to other resources, online articles and video lectures.
Figure 2: The OpenVM MOOC structure.
Different approaches were combined to ensure the quality of the OpenVM MOOC and the OpenVM OERs. At a more general level, quality assurance of the OpenVM MOOC is addressed through an iterative cycle of design, creation, implementation, and assessment, following the Design Based Research (DBR) model (Barab & Squire, 2004). Salinas (2012) remarks that the DBR model has had an important uptake in Technology Enhanced Learning research, as it aims at creating knowledge on the design, implementation and evaluation of educational experiences. It aspires to explore problems in real contexts that require a context-specific solution (de Benito & Salinas, 2016). Moreover, the DBR model has been argued to be suitable for the study of innovation, in which contrasting the theoretical background with the observation of action over successive iterations is the strategy for knowledge creation (Brown, 1992; de Benito & Salinas, 2016; Shavelson, Phillips, Towne & Feuer, 2003). In the quality assurance of the OpenVM Erasmus+ project, diverse tasks and instruments have been included following the DBR model. For each component (e-assessment, OERs, and MOOCs), the following phases (Piedra, Chicaiza, López, & Caro, 2015) are included:
- Assessment by partner (internal);
- Assessment by external experts;
- Assessment by pilot users;
- User testing assessment;
- Learning analytics.
Within this paper, we present the results of the first phase, the assessment of OERs carried out by partners, and future perspectives on the other phases.
In the quality assurance of the OpenVM OERs, elements of traditional peer-review were combined with social rating (Camilleri, Ehlers, & Pawlowski, 2014). In the OpenVM project, project partners can switch roles between reviewer and producer depending on the project phase, and this makes the quality review process closer to a social rating practice. Partners were also provided with a rubric (Table 1) to assess the OERs selected and produced by peers, as in traditional peer-review. Three macro-indicators (Poce, Agrusti & Re, 2015) have been identified to assess the OERs to be included in the OpenVM MOOC:
- Quality;
- Appropriateness;
- Technical aspects.
Each macro-indicator was operationalised through sub-indicators (Table 1). By combining the answers on the different sub-indicators, it is possible to provide an overall evaluation of the OER (0 = not usable; 1 = limited; 2 = good; 3 = superior). For example, a resource can be considered weak if it is neither recent nor peer-reviewed and/or not accessible to people with disabilities. On the other hand, a resource is considered superior if it covers one of the MOOC’s topics, is up to date, and its contents are clearly organized and accessible to different kinds of target users. The table was mainly inspired by a separate rubric for the evaluation of OERs created by ACHIEVE.org, a nonprofit education organization created in 1996 by a bipartisan group of governors and business leaders, fully recognized by international companies and institutions.3
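As an illustration only, the snippet below sketches how sub-indicator scores could be aggregated into the overall 0-3 rating; the sub-indicator names and the averaging rule are assumptions and do not reproduce the actual rubric reported in Table 1.

```python
from statistics import mean

# Illustrative aggregation of sub-indicator scores (0-3 each) into the
# overall OER rating: 0 = not usable, 1 = limited, 2 = good, 3 = superior.
# Sub-indicator names and the averaging rule are assumptions, not Table 1.

def overall_rating(sub_scores: dict) -> int:
    """Combine 0-3 sub-indicator scores into a single 0-3 overall rating."""
    if any(score == 0 for score in sub_scores.values()):
        return 0  # any "not usable" criterion makes the whole OER not usable
    return round(mean(sub_scores.values()))

example = {
    "quality": 3,          # e.g. recency, peer review
    "appropriateness": 2,  # fit with the miniMOOC topic and level
    "technical": 3,        # e.g. accessibility, clarity of organisation
}
print(overall_rating(example))  # -> 3 ("superior")
```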
OER assessment and selection in the OpenVM Project
The University of Roma Tre is responsible for Intellectual Output 6 of the OpenVM project, which includes organizing the process of OER design, assessment and selection. Once the OER assessment rubric presented in Table 1 was created, project partners were required to provide OERs in different formats (mainly texts and videos) and in the partners’ languages, following the quality guidelines of the OER assessment rubric. The OER contents had to be connected to the eight skills necessary to engage effectively in virtual mobility. Each skill was assigned according to each partner’s specific background and expertise. In order to support OER identification, the Roma Tre team proposed different types of OER repositories on the web. Not only repositories created by formal educational institutions, such as universities, but also databases of informal and non-formal institutions (e.g. the TEDx video repository) were suggested for this purpose.
The process was organized as follows:
- Each partner had to identify at least 9 OERs (3 for the foundation level, 3 for the intermediate level and 3 for the advanced level) related to one of the eight skills of the OpenVM MOOC. Each partner was responsible for identifying OERs within a certain area in order to cover all the MOOC’s contents. Partners had to record the selected OERs in a spreadsheet created on Google Sheets. The use of Google Sheets allowed partners to comment, insert feedback, and propose alternative contents.
- The selected OERs were peer-assessed by another project partner. Peer assessors could add comments and feedback and propose alternative OERs. This way, partners had the opportunity to discuss the suitability or non-suitability of the selected OERs for inclusion in the OpenVM MOOC.
- In the last phase, during a face-to-face workshop organised in February 2019 in Heerlen, partners worked in small groups of two or three people. Each group was invited to organise the selected and assessed OERs into a miniMOOC design template provided by the Roma Tre team.
The process was designed to guarantee each partner’s participation in the selection and assessment of the OERs and, eventually, in the design of the OpenVM MOOC.
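Purely as an illustration of this workflow, the sketch below models each row of the shared sheet as a record combining OER metadata with the peer assessor’s overall rating (0-3, as in the rubric above) and comment, and keeps only the resources rated at least “good”; the field names and example entries are hypothetical and do not reproduce the actual spreadsheet columns.

```python
from dataclasses import dataclass

# Hypothetical model of one row of the shared Google Sheet used for the
# peer-assessment of OERs. Field names are illustrative assumptions.

@dataclass
class OERRecord:
    title: str
    url: str
    skill: str                # one of the eight virtual mobility skills
    level: str                # foundation / intermediate / advanced
    overall_rating: int       # 0-3, assigned by the peer assessor
    reviewer_comment: str = ""

def select_for_minimooc(records):
    """Keep only OERs rated 'good' (2) or 'superior' (3)."""
    return [r for r in records if r.overall_rating >= 2]

candidates = [
    OERRecord("Intro to intercultural communication", "https://example.org/oer1",
              "Intercultural Skills", "foundation", 3,
              "A good way to start a discussion on intercultural processes"),
    OERRecord("Outdated slide deck", "https://example.org/oer2",
              "Intercultural Skills", "foundation", 1,
              "Not recent, not accessible to people with disabilities"),
]
print([r.title for r in select_for_minimooc(candidates)])  # keeps only the first OER
```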
First results of the OER selection and assessment
Since the peer-assessment process is still in progress, only the first results, related to six skill areas, are presented in Table 2.
This process was useful for excluding resources of poor overall quality. Only resources that obtained a “good” or “superior” overall assessment were included in the miniMOOCs. In the case of a positive assessment, partners also had the opportunity to include scoring descriptions and brief comments, as in the following extract (E1):
E1 “This resource is a good way to start a discussion about similar processes and the implications in the educational situation!” (Peer assessor of the Intercultural skills OERs)
On the other hand, when resources received a negative evaluation, the use of the Google spreadsheet allowed partners to discuss further and reach a shared final decision (Figure 3).
Figure 3: Screenshot – Google Spreadsheet used to discuss OERs assessment.
Partners had approximately three months to complete this work, from November 2018 to February 2019. Once partners reached a final agreement on the course contents, learning objectives and assessment methods, the OERs, e-assessment activities and instructions were uploaded to the Learning Hub.
Conclusion and future steps
In the context of Open Education and Virtual Mobility, quality assessment needs to be taken into account. The present work describes the Quality Assurance framework for OERs within a specific MOOC created in the Erasmus+ Open Virtual Mobility Project. The University of Roma Tre research group first developed a rubric for OER quality assessment. Then, each project partner was invited to search for and assess OERs related to the eight skills, identified by Firssova and Rajagopal (2018), necessary to be engaged in Virtual Mobility. Partners recorded the OERs in a spreadsheet created on Google Sheets. The selected OERs were peer-assessed by other partners in a joint project meeting. In this way, all the partners contributed to building the OpenVM MOOC (Camilleri, Ehlers, & Pawlowski, 2014), combining elements of traditional peer-review with social rating activities.
Having said that, further work is needed to ensure OER and MOOC quality. As quality is not a generic concept, user behavior and comments can indicate the quality of MOOCs and OERs in relation to the learner context. We will carry out a pilot phase from 2019 to 2020 and collect different kinds of data regarding user interaction with the miniMOOCs that are part of the MOOC under investigation and with the OERs used in each miniMOOC. As a strategic management decision, the OpenVM project will have to consider the role of learning analytics in gathering and assessing data from the MOOC and all its single elements. In addition, different forms of data collection will be combined: user comments, recommendations and ratings. The insights will be used to improve the quality and design of the OERs and the OpenVM MOOC, following an iterative process, as indicated by the Design Based Research approach.
Initial results regarding users’ assessment of the OERs have been collected (Poce, Re, Amenduni, & Valente, 2019). In future phases, OpenVM MOOC users will be asked to provide their evaluation of the selected OERs.
Acknowledgements
A. Poce coordinated the research presented in this paper. The research group is composed of the authors of this contribution, which was edited in the following order: A. Poce (Introduction; Conclusion and future steps), F. Amenduni (Ensuring OER quality in the Open Virtual Mobility Project), M. R. Re (OER assessment and selection in the OpenVM Project), C. De Medio (First results of the OER selection and assessment).
References
Footnotes
1https://www.openvirtualmobility.eu/es_ES/
2https://hub.openvirtualmobility.eu
3https://www.achieve.org/contributors