Learners in MOOCs often experience challenges that can be identified as barriers to learning. These barriers may be MOOC-related or non-MOOC-related. Knowing about potential barriers in advance would leave learners better prepared and more likely to handle and overcome them. Therefore, the aim of this study was to advance insight and knowledge about barriers to learning in MOOCs. Assessment and reassessment of the data using exploratory factor analysis provided a good model fit for a 6-factor structure, which was confirmed by a confirmatory factor analysis. Further classification of the factors revealed that the barriers experienced by learners were predominantly non-MOOC-related. To gain insight into the barriers learners experience, it was suggested to convert the identified factor structure into a diagnostic instrument (dashboard) powered by learner self-report. This dashboard provides information about the barriers learners experience and can be valuable for making (re)design decisions and for developing learner-supporting tools and interventions.
Massive Open Online Courses (MOOCs) are online courses of varying duration (typically 5–8 weeks), covering a wide range of topics and designed to be accessible to anyone, anywhere, at any time (Barnes, 2013). They provide a fairly novel non-formal learning opportunity to gain knowledge on a wide variety of topics (Greene et al., 2015; Misopoulos et al., 2018). Although there are similarities with distance education, there are also some important differences: MOOCs are often free of charge (though nowadays certificates are usually charged for), there are no educational entry-level requirements, in-course support is not always available, and most MOOCs provide only limited acknowledged credentials or academic credits (Reich & Ruipérez-Valiente, 2019). Also, learners can individually form their own goal intentions about what they want to learn and achieve in the MOOC. For example, learners may choose to study the content of several weeks, or just a specific topic that is addressed in the MOOC. Alternatively, they may aim to finish the complete MOOC to earn a certificate. As a result of the open and non-committing nature of MOOC-learning, learners have the opportunity to tailor learning to their needs and determine their personal goal intentions (Henderikx et al., 2017).
However, due to this open, accessible and less supported form of learning, learners face quite a number of challenges (Gamage et al., 2015; Misopoulos et al., 2018). Some of these challenges can be identified as barriers to learning and may negatively influence learner retention (Adamopoulos, 2013; Belanger & Thornton, 2013; Hone & El Said, 2016), with the possible consequence that personal goal intentions are not realized. Therefore, in this study, barriers are described as obstacles that hinder or prevent learners from reaching their personal goals, following the definition of Henderikx et al. (2018a). Barriers can be either MOOC-related or non-MOOC-related and may cause learners to change their individual intentions or to quit (Henderikx et al., 2018b). MOOC-related barriers typically refer to the lack of interaction, lack of instructor presence or course content (e.g. Hone & El Said, 2016; Onah et al., 2014), whereas non-MOOC-related barriers typically refer to insufficient academic knowledge, lack of time or lack of digital skills (e.g. Conole, 2016; Khalil & Ebner, 2014).
These barriers can potentially obstruct the completion of personal goal intentions. According to Fishbein and Ajzen (2010), the translation of goal intentions into actual behavior (with the outcome that personal goals are realized) is moderated by actual behavioral control. Actual behavioral control relates to taking measures to cope with barriers at the moment they arise. The higher the degree of actual behavioral control of learners, the better their chances of achieving their personal learning goals, i.e. of translating their individual goal intentions into actual behavior. If MOOC-learners knew in advance about potential barriers, they would be better prepared to take measures to cope with them. In other words, the more one knows about potential barriers, the better prepared one is when they actually arise and, therefore, the more likely one is to complete individual goal intentions. Yet not only MOOC-learners may profit from insights into barriers to learning; MOOC-providers and designers may profit as well, as they can use this knowledge to make informed decisions about redesigning MOOCs and to take measures to support and inform learners about potential barriers they may face.
The purpose of the current study, therefore, is to identify potential barriers experienced by the population of MOOC-learners and to categorize them. The categorization serves to distinguish barriers that are related to the MOOC (barriers that can only be addressed by the MOOC provider) from barriers that are related to the MOOC-learners themselves (about which the MOOC provider can inform learners so that they are aware of these potential barriers and can take precautions to avoid them).
This study is based on a chapter of a PhD dissertation (Henderikx, 2019; chapter 7, https://research.ou.nl/ws/files/11849471/DissertationMaartjeHenderikx.pdf), which described the development and validation of an instrument to assess barriers to learning in MOOCs. The current study builds on this chapter and further extends a previous study, which reported on a preliminary classification of barriers to learning in MOOCs (Henderikx et al., 2018a), with the aim of refining the classification and improving its generalisability and therefore the usability of the barrier classification. The article is structured as follows: First, a literature review provides an overview of the most relevant literature on barriers to online learning and to MOOCs in particular. Second, the methodology of the study is presented, followed by the results of the analyses. Lastly, the results are discussed, as well as the limitations, recommendations for future research and implications for practice.
Online learning with MOOCs gives learners the freedom to cater for their own learning and make individual choices about the learning activities of the MOOC they intend to undertake (Henderikx et al., 2017; Koller et al., 2013; Reich, 2014). However, learners are not always able to translate their individual intentions into actual behaviour, i.e. they do not always succeed in achieving their individual goal intentions (Fishbein & Ajzen, 2010). A consistent reason for this discrepancy is that learners struggle with issues that hinder or impede their learning in MOOCs (e.g. Henderikx et al., 2018b; Conole, 2016; Misopoulos et al., 2018).
Research about issues that can potentially impede successful learning in MOOCs is growing and shows similarities with research findings from online learning and distance education contexts. The body of studies about barriers to learning in MOOCs is not yet as extensive as in distance learning and other online learning contexts. Nevertheless, as MOOCs developed over time, more studies focussed on issues that stand in the way of learner achievement. One of the main barriers experienced by MOOC-learners is ‘lack of time’ (Belanger & Thornton, 2013; Boyatt et al., 2013; Conole, 2016; Khalil & Ebner, 2014; Onah et al., 2014; Shapiro et al., 2017). MOOC-learners indicated that they struggle to find a balance between learning online with MOOCs and everyday life, without revealing specific reasons for these time constraints.
Other salient barriers mentioned by MOOC-learners are ‘lack of interaction with peers’ (Hone & El Said, 2016; Khalil & Ebner, 2013; Mcauley et al., 2010) and, related to that, ‘lack of instructor presence’ and ‘in-MOOC support’ (Mackness et al., 2010; Onah et al., 2014). These studies showed that interaction with other learners and interaction with and support from instructors are regarded as important by learners and were generally found to be statistically significant predictors of learner achievement. These findings correspond with findings in distance education studies, where the lack of interaction with peers (Carr, 2000) and with instructors (Shin, 2003) caused students to drop out. Also, ‘lack of decent and instant feedback’ (Balfour, 2013; Grover et al., 2013), ‘insufficient academic background’ (Belanger & Thornton, 2013; Khalil & Ebner, 2014; Shapiro et al., 2017), the content of the MOOC (Hone & El Said, 2016; Onah et al., 2014) and ‘lack of technical skills’ (Conole, 2016; Onah et al., 2014) are frequently mentioned by learners. All aforementioned studies agreed on the importance of recognizing each of these issues for successful learning. When learners perceived these issues negatively, they had the potential to become barriers and to hinder or impede the translation of individual goal intentions into actual behaviour.
Following on from previous studies, Henderikx et al. (2019) aimed to further the understanding of barriers to learning in MOOCs by exploring whether several MOOC-learner related variables affected the experience of barriers in MOOCs. The study revealed that MOOC-learners between 20 and 50 years old were more likely to experience work- and family-related barriers resulting in time constraints. Furthermore, women faced the barrier ‘lack of time’ more often than men, although more prior online learning experience alleviated the experience of this barrier. Lastly, learners with a limited academic background indicated more often than learners with a more extended academic background that they struggled with the content of the course.
Previous overviews of barrier-related literature in the context of MOOCs illustrated that there are many hurdles learners can encounter which may have a negative impact on achieving their personal goal intentions. In contrast to the current study, these studies explored only one or a few specific issues that can be regarded as barriers to learning. The one exception is a study by Muilenburg and Berge (2005) in a distance education context, which presented an overview of potential barriers distance education students face (or expect to face) while learning online. Their principal component analysis revealed eight categories of barriers: (1) administrative/instructor issues, (2) social interactions, (3) academic skills, (4) technical skills, (5) learner motivation, (6) time and support for studies, (7) cost and access to the internet, (8) technical problems. A composite-score calculation per component identified social interactions as the most important barrier to students’ online learning. Academic skills were identified as the least important barrier.
Building on the study by Muilenburg and Berge (2005), Henderikx et al. (2018a) aimed to expand research into barriers in the MOOC context by composing an overview of potential barriers for MOOC-learners and empirically classifying them. A literature review identified forty-four potential barriers which impeded academic achievement in online learning in general, in distance education and in a MOOC context specifically. Over 300 MOOC-learners indicated to what extent they experienced these forty-four barriers while learning in MOOCs. The data were analysed using principal component analysis, as this statistical method allows for categorizing data. After several iterations, during which nine barriers could not be categorized, four distinct components (i.e. categories) summarized the barriers, namely 1) Technical and online learning related skills, 2) Social context, 3) Course design and 4) Time, support and motivation (see Figure 1). The Cronbach's alphas, which were used to assess internal consistency, indicated a very high coherence between the items in the same category, meaning that the barriers in each category have a consistent relationship with each other. Further investigation of the categories revealed that most barriers experienced by learners were not related to the MOOC itself and that these non-MOOC-related barriers were generally experienced as more severe than MOOC-related barriers.
While the findings of the previous study were interesting, especially with regard to gaining further insight into MOOC learning, the study was limited in several ways. The sample was very specific, as it included only learners who took part in one or more MOOCs in the Spanish language. In addition, the principal component analysis was merely an exploratory technique, used as a first attempt at identifying potential categories. Confirmation of the results with samples drawn from a more varied MOOC-learner population, and the application of more refined analysis methods, will further improve the classification, which will benefit its generalisability and usability.
The main purpose of this study is to advance research about barriers to learning in MOOCs by strengthening the identification and categorization of barriers as experienced by learners. While experiencing something as a barrier is generally subjective, self-reported measures which rely on individuals' experiences are well suited for identifying barriers to learning (Duffy et al., 2018). Based on the literature review and the preliminary categorization, we expected that the current study would reveal more refined categories that would at least represent potential barriers concerning course content, social interactions, skills, motivation and time-related issues.
In the previously discussed exploratory barrier study (Henderikx et al., 2018a), a principal component analysis (PCA) was used for exploratory dimension reduction. The current research aimed to extend this study by using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA), as these methods are generally used for determining a factor structure (EFA) and for confirming a factor structure (CFA) (Bryant & Yarnold, 1995; Byrne, 2005). We investigated to what extent the exploratory factor analysis resulted in a distinguishable, internally consistent categorization of barriers and whether this categorization could be confirmed by a confirmatory factor analysis. In addition, we compared the preliminary categorization found previously by Henderikx et al. (2018a) with the newly identified categorization.
Six thousand participants were randomly selected in three batches from a list of 50,000+ MOOC-learners to fill in a survey and to be asked about future contact; the first batch addressed 1000 MOOC-learners, the second 3000 and the third 2000. The MOOC-learners had participated in an English-language MOOC from Delft University offered on the EdX platform. Of the 6000 invited, 540 completed the survey, a response rate of 9%, which is not unusual for online administrations (Saleh & Bista, 2017). The majority of the participants were of European (27%), North American (20%) or South American (21%) nationality. A further 15% of the participants had an Asian nationality, while the remaining 17% came from other countries. The majority of the participants held a master's (49%) or bachelor's (33%) degree. Seven percent of the participants had a doctorate, while 11% had an associate degree or secondary or primary education. Most participants rated their English proficiency as good to very good (88%). Ten percent indicated that their command of the English language was average, while the remaining 2% rated their level as fair to poor. Overall, the sample is similar in demographics to samples reported in other research on MOOCs (Ho et al., 2014).
The list of 44 potential barriers from the previous study by Henderikx et al. (2018a) was reused and supplemented with several demographic questions about educational background and employment status. Participants were asked to what extent the presented barriers negatively influenced or hindered their progression while learning in a MOOC. A five-point Likert scale gave them the opportunity to indicate the experienced hindrance, from ‘to a very large extent’ to ‘not at all’. Examples of barriers were ‘lack of motivation’, ‘workplace issues’, ‘technical problems with the site’ and ‘lack of timely feedback’ (see Appendix A for mean scores and standard deviations per barrier).
The Qualtrics e-mail facility was used to recruit participants in three batches over the course of several weeks from December 2017 until January 2018. Participation was voluntary and the minimum age requirement was set at 17 years. Before being able to proceed to the survey, participants had to confirm their age and voluntary participation by giving electronic consent. The survey took about 5–10 minutes to complete. One reminder was sent per batch. Approval for this study was granted by the Ethics Committee of the Open University of the Netherlands (13/11/2017/Kenmerk U2017/08311/MQF).
All analyses were performed in Mplus v.7.3 (Muthén & Muthén, 1998–2014) using Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA; Byrne, 2012). Although the data were ordered-categorical (Likert scales), we treated them as continuous and thus used Maximum Likelihood as the estimation approach, as this is recommended when the number of categories is 5 or more and the distribution and skewness/kurtosis of the data are approximately normal (Rhemtulla et al., 2012).
An EFA approach is recommended in cases where there is sparse theory available, as it will provide the best understanding of the factor structure (Schmitt, 2011; Schmitt et al., 2018). A CFA approach is typically used for theory testing (Bryant & Yarnold, 1995). For determining the number of factors, the best approach is not to rely solely on fit indices, but to combine these with item interpretation and common sense (Schmitt et al., 2018). Preacher and colleagues (2013) cautioned that merely searching for good fit indices, i.e. models that best fit the data, often leads to overly complex and overfitted models (Hayduk, 2014), which does not benefit the explanatory power of the model (Preacher, 2006). Therefore, our main goal was to explain and describe a meaningful factor structure with potential for generalisability (Hastie et al., 2001; Preacher et al., 2013).
In addition, it has also been suggested that when a factor structure is expected to be complex, an EFA approach is preferred (Schmitt et al., 2018). In the case of this study, the literature review produced a high number of potential barriers, which may lead to a complex factor structure; in other words, the barriers may be difficult to classify into completely distinct categories. This indicated that an EFA approach might be preferable to a CFA approach. Nevertheless, we also performed a CFA to test the more restrictive factor structure based on the results of the EFA.
In order to perform the EFA and CFA, the sample was randomly split. Given a sample size of 540 respondents, each sub-sample consisted of 270 respondents. Although splitting a sample may have some statistical drawbacks (Schmitt et al., 2018), it is common practice (e.g., Ng, 2013; Wegener & Fabrigar, 2000).
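The random split into two sub-samples can be sketched as follows. This is a minimal illustration in Python (the study itself used Mplus); the seed and variable names are assumptions for this example only:

```python
import numpy as np

# Illustrative sketch of the random split into EFA and CFA sub-samples
# (540 respondents -> two halves of 270 each). The seed is an assumption
# chosen only to make this example reproducible.
rng = np.random.default_rng(seed=42)

n_respondents = 540
shuffled = rng.permutation(n_respondents)  # respondent indices in random order

efa_idx = shuffled[: n_respondents // 2]   # 270 respondents for the EFA
cfa_idx = shuffled[n_respondents // 2 :]   # 270 respondents for the CFA
```

Shuffling first and then slicing guarantees the two halves are disjoint and together cover the full sample.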
Model goodness of fit was evaluated for the EFA and CFA analyses using commonly applied fit indices. Since the chi-square is known to be highly sensitive to sample size (Marsh et al., 1988; Marsh et al., 2005), a variety of sample-size-independent goodness of fit indices was also examined to assess the fit of the alternative models: the Root Mean Square Error of Approximation (RMSEA), the Tucker-Lewis Index (TLI) and the Comparative Fit Index (CFI; Fan et al., 1999; Hu & Bentler, 1999; Kenny et al., 2015; Marsh et al., 2004; Yu, 2002). The TLI and CFI vary along a 0-to-1 continuum, and values greater than 0.80, 0.90 and 0.95 typically reflect an acceptable, good and excellent fit to the data, respectively. RMSEA values of less than 0.06 and 0.08 indicate a good and acceptable fit to the data, respectively.
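The thresholds above can be condensed into a small helper function. This is a sketch under the stated cut-offs only; the function name and the choice to judge by the weaker of CFI/TLI are our own assumptions, not part of the study's procedure:

```python
def classify_fit(rmsea: float, cfi: float, tli: float) -> str:
    """Label model fit using the thresholds cited in the text:
    CFI/TLI > .80 / .90 / .95 for acceptable / good / excellent fit,
    and RMSEA < .08 / .06 for acceptable / good fit."""
    incremental = min(cfi, tli)  # judge by the weaker of the two indices
    if incremental > 0.95 and rmsea < 0.06:
        return "excellent"
    if incremental > 0.90 and rmsea < 0.06:
        return "good"
    if incremental > 0.80 and rmsea < 0.08:
        return "acceptable"
    return "poor"
```

For example, a model with RMSEA = .05 and CFI/TLI around .91 would be labelled "good" under these rules.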
An EFA with oblimin rotation was performed on the pool of 44 barrier items. Taking the 4-component classification of barriers by Henderikx et al. (2018a) into account, we focused on examining factor structures of EFA models with 4 to 8 factors. Fit indices, in combination with the maximum allocation of items, indicated that the 6- and 7-factor models represented the data adequately to best.
As emphasised by Schmitt et al. (2018), fit indices should be assessed in combination with item interpretation and common sense. In terms of goodness of fit indices, the 7-factor model fitted the data slightly better than the 6-factor model. However, the 7-factor model showed several factors with loadings that did not exceed the .40 cut-off and cross-loadings of the same magnitude, resulting in ‘empty’ factors. Therefore, the 7-factor solution was rejected. Since the 7-factor solution was rejected due to ‘empty’ factors, we did not further investigate an 8- or 9-factor solution.
The 6-factor solution did not suffer from cross-loading items or ‘empty’ factors, produced distinguishable factors and largely matched the expected categories of barriers based on the previous findings by Henderikx et al. (2018a) described in the introduction. Nevertheless, ten items could not be categorized (i.e. had to be removed) because their loadings were <.40 (Comrey & Lee, 2013; Tabachnick & Fidell, 2013) or they cross-loaded on more than one factor. The removed items represented the barriers ‘lack of in-course support’, ‘lack of timely feedback’, ‘lack of decent feedback’, ‘lack of social context’, ‘unfamiliar with online learning tools’, ‘learning environment not motivating’, ‘lack support family friends’, ‘course content too hard’, ‘technological problems with the site’ and ‘insufficient academic knowledge’.
These items were removed and form a separate miscellaneous category that should not be ignored, as learners also have to be (made) aware of these potential barriers. See Table 1 for the final goodness of fit indices of the 6- and 7-factor models. The factor loadings of the 6-factor EFA model are presented in Table 2.
| | 6-factor solution | 7-factor solution |
| --- | --- | --- |
| Chi-square test of model fit | | |
| Degrees of freedom | 372 | 344 |
| Standardized/weighted root mean square residual | | |
| Item | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 | Factor 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Lack of Interaction with Instructor | .753* | .188* | –.098 | –.025 | –.041 | –.008 |
| Lack of Instructor Presence | .645* | .240* | .023 | –.056 | –.005 | .009 |
| Lack of Interaction with Students | .723* | .092 | –.080 | .060 | .062 | –.022 |
| Learning feels Impersonal | .715* | .006 | .036 | .011 | .081 | .036 |
| Lack of Student Collaboration | .603* | –.102 | .260* | –.031 | .015 | –.005 |
| Prefer Face To Face Learning | .659* | –.149* | .050 | .064 | .030 | .126* |
| Feeling Of Isolation | .569* | –.127* | .186 | –.066 | .102 | .168* |
| Lack of Language Skills | –.047 | .883* | .098* | .009 | –.018 | .033 |
| Lack of Writing Skills | .035 | .829* | .023 | .011 | .010 | –.029 |
| Lack of Reading Skills | –.009 | .669* | .235* | .134* | –.038 | .019 |
| Lack of Information Literacy Skills | .132 | .584* | .028 | .286* | .063 | –.029 |
| Lack of Typing Skills | .028 | .738* | –.082 | .046 | –.016 | .031 |
| Lack of Confidence | .203* | .423* | –.147* | –.061 | .135* | .153* |
| Low Quality of the Learning Materials | –.031 | .04 | .855* | .048 | .021 | .039 |
| Instructors Don’t Know How To Teach Online | .124* | .029 | .753* | .041 | –.011 | .045 |
| Course content is bad | –.032 | .109* | .757* | .029 | .038 | .042 |
| Course content is too easy | .117 | .070 | .567* | –.093 | .139* | –.159* |
| Unavailable Course Materials | .035 | .025 | .578* | .200* | –.038 | –.019 |
| Lack of Clear Expectations or Instructions | .211* | .027 | .565* | .038 | .028 | .119* |
| Technical Problems PC | –.106* | .050 | .068 | .766* | .058 | .003 |
| Lack of Software Skills | .078 | –.003 | .125 | .571* | –.043 | .117 |
| Lack of Skills Using the Delivery System | .009 | .004 | .063 | .663* | –.003 | .175* |
| Lack of Adequate Internet | –.090 | .058 | .161* | .532* | .141* | .019 |
| Insufficient Training To Use Delivery System | .087 | .150* | .044 | .492* | .093 | .097 |
| Lack of Technical Assistance | .188* | .188* | .011 | .530* | .120* | –.134* |
| Lack of Support from the Employer | .195* | –.111 | .140 | .088 | .545* | –.249* |
| Too Many Interruptions During Study | .002 | .106 | .027 | .057 | .526* | .153* |
| Lack of Motivation | .053 | .083 | .083 | .179* | –.036 | .665* |
| Own Responsibility Learning | .145* | –.022 | –.053 | .139 | .079 | .610* |
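The item-retention rule applied to these loadings (drop an item whose loadings stay below the .40 cut-off, or that cross-loads on more than one factor) can be sketched as follows. This is a hypothetical illustration: the cross-loading tolerance of .10 is our own assumption, as the study judged cross-loadings by inspection rather than by a fixed numeric rule:

```python
import numpy as np

def retain_item(loadings, cutoff=0.40, tol=0.10):
    """Keep an item only if its highest absolute loading reaches the
    cutoff and no second factor loads at (nearly) the same magnitude.
    The tolerance `tol` is an assumption for illustration only."""
    loadings = np.abs(np.asarray(loadings, dtype=float))
    top, runner_up = np.sort(loadings)[::-1][:2]
    if top < cutoff:
        return False  # no factor reaches the cut-off -> item dropped
    if runner_up >= cutoff and (top - runner_up) < tol:
        return False  # cross-loading of similar magnitude -> item dropped
    return True

# Example: the 'Lack of Language Skills' row from Table 2 is clearly
# retained, since only one loading (.883) exceeds the cut-off.
language_skills = [-0.047, 0.883, 0.098, 0.009, -0.018, 0.033]
```

Under this rule, an item loading .45 on one factor and .44 on another would be dropped as a cross-loader, while an item with no loading above .40 would be dropped as uncategorizable.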
The purpose of this study was to investigate whether the exploratory factor analysis would result in distinguishable, internally consistent categories and to determine the overlap with the preliminary categorisation previously found by Henderikx et al. (2018a). To answer these questions, the items per category were assessed for their descriptiveness and representation of a particular category of MOOC barriers. A review of the items that loaded on each factor resulted in the following factor labels:
| Factor | Description |
| --- | --- |
| Factor 1 | Social interactions: issues to learning in MOOCs that learners perceive as being caused by the online environment, such as a lack of interaction with peers and instructors, a lack of collaboration with other learners, feeling isolated or preferring face-to-face learning. |
| Factor 2 | Academic skills: learners perceive barriers to learning in MOOCs due to a lack of basic academic skills related to writing, reading, typing and information literacy. |
| Factor 3 | Content-related issues: issues related to the content of the MOOC that can be experienced as barriers by learners, such as the unavailability of learning materials, a low quality of the learning materials or the lack of clear instructions in the MOOC. |
| Factor 4 | Technical skills and problems: learners experience barriers to learning in MOOCs due to a lack of technical skills or technical problems, such as problems with the pc or the internet, unfamiliarity with online learning tools, lack of skills using the delivery system or lack of software skills. |
| Factor 5 | Situational issues: the extent to which learners experience barriers relating to a lack of time in general, family or work issues, or interruptions during their study time. |
| Factor 6 | Individual motivation: issues relating to learner motivation while learning in the less supported learning environment of a MOOC, such as procrastination, lack of motivation or own responsibility for learning. |
The internal consistency of these factors was assessed by calculating Cronbach's alpha (see Table 3). Commonly, a Cronbach's alpha of >.70 is considered acceptable, >.80 good and >.90 excellent (Taber, 2018). The Cronbach's alpha coefficients presented in Table 3 indicate that the internal consistency of the majority of the factors is good by these standards.
| Factor | Number of items | Label | Cronbach's alpha |
| --- | --- | --- | --- |
| Factor 1 | 7 | Social interactions | .867 |
| Factor 2 | 6 | Academic skills | .894 |
| Factor 3 | 6 | Content related issues | .881 |
| Factor 4 | 6 | Technical skills and problems | .845 |
| Factor 5 | 6 | Situational issues | .801 |
| Factor 6 | 3 | Individual motivation | .750 |
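For reference, Cronbach's alpha as reported in Table 3 follows the standard formula: alpha = k/(k−1) · (1 − sum of item variances / variance of the sum score), where k is the number of items in a factor. A minimal Python sketch on synthetic data (not the study's actual responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix with rows = respondents and
    columns = the items belonging to one factor."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the factor
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

When all items of a factor are perfectly correlated, the formula yields an alpha of exactly 1; uncorrelated items drive it toward 0, which is why high values indicate coherent categories.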
As the EFA resulted in a well-fitting model with 6 factors, the next step was to test whether this model would hold in the more restrictive form of a factor structure based on CFA. In a CFA, items load only on their target factors (i.e. categories); all cross-loadings are fixed to zero. The CFA was performed by allocating the 34 remaining barriers (i.e. the barrier items that could be categorized) to their respective factors to confirm the 6-factor solution. The absolute fit indices, the chi-square test of model fit (χ2), the RMSEA and the SRMR, indicate how well a proposed model fits the data and generally provide the best indication of model fit (Hooper et al., 2008). Table 4 reveals that, based on these indices, the model achieved an acceptable fit. In addition, the incremental fit indices CFI and TLI also illustrate an acceptable fit, as their values are between .80 and .90 (Fan et al., 1999; Hu & Bentler, 1999; Kenny et al., 2015; Marsh et al., 2004; Yu, 2002).
| | No covariation | With covariation |
| --- | --- | --- |
| Chi-square test of model fit | | |
| Degrees of freedom | 512 | 508 |
| RMSEA 90% confidence interval | .058–.070 | .051–.063 |
| Standardized/weighted root mean square residual | | |
However, the analysis showed covariation of 4 pairs of items, likely due to similarity in wording and meaning: 3 pairs within factor 1 (lack of instructor presence with lack of interaction with the instructor, lack of interaction with students with lack of student collaboration, and lack of interaction with students with lack of interaction with the instructor) and 1 pair within factor 5 (family issues with workplace issues). Even though the fit values of this analysis could be considered acceptable, covariation within factors should not be ignored. A follow-up analysis that allowed the four item pairs to covary within their respective factors resulted in fit index values that can be considered good (RMSEA < .06 and TLI/CFI > .90; Fan et al., 1999; Hooper et al., 2008; Hu & Bentler, 1999; Kenny et al., 2015; Marsh et al., 2004; Yu, 2002). Table 4 displays the fit index values for both models.
Furthermore, the factor loadings were all statistically significant and generally high (see Table 5), which indicates good measurement quality. Good measurement quality refers to ‘the strength of the standardized factor loadings, which is highly related to reliability’ (McNeish et al., 2018, p. 44).
| Item | Standardized factor loading |
| --- | --- |
| Lack of Interaction with Instructor | .677* |
| Lack of Instructor Presence | .691* |
| Lack of Interaction with Students | .604* |
| Learning feels Impersonal | .808* |
| Lack of Student Collaboration | .621* |
| Prefer Face To Face Learning | .691* |
| Feeling Of Isolation | .719* |
| Lack of Language Skills | .909* |
| Lack of Writing Skills | .840* |
| Lack of Reading Skills | .870* |
| Lack of Information Literacy Skills | .855* |
| Lack of Typing Skills | .730* |
| Lack of Confidence | .436* |
| Low Quality of the Learning Materials | .907* |
| Instructors Don’t Know How To Teach Online | .836* |
| Course content is bad | .834* |
| Course content is too easy | .572* |
| Unavailable Course Materials | .679* |
| Lack of Clear Expectations or Instructions | .716* |
| Technical Problems PC | .800* |
| Lack of Software Skills | .681* |
| Lack of Skills Using the Delivery System | .759* |
| Lack of Adequate Internet | .700* |
| Insufficient Training To Use Delivery System | .698* |
| Lack of Technical Assistance | .675* |
| Lack of Support from the Employer | .507* |
| Too Many Interruptions During Study | .676* |
| Lack of Motivation | .775* |
| Own Responsibility Learning | .708* |
The aim of this study was to advance research about barriers to learning in MOOCs by strengthening the identification and categorization of barriers as experienced by learners. First, a barrier overview was created based on findings in the literature about online learning, distance learning and MOOC-learning specifically, and translated into a self-report survey. Assessment and reassessment of the data using EFA provided a good model fit for the 6-factor structure. In addition, the interpretation of the items per factor revealed very high internal coherence, confirmed by the high Cronbach's alpha values for the majority of the factors. Overall, the structure of the 6-factor model is of good quality and corresponds well with the expectations that guided the analyses, which is typically regarded as clear support for the construct validity of a model (Prudon, 2014).
Comparing the 6-factor structure to the previously found 4-component structure (Henderikx et al., 2018a) revealed great content similarity between the two structures. Previous category 1, which contained a combination of technical and online learning related skill barriers, is represented by the new categories 2 (academic skills) and 4 (technical skills and problems). Category 4 of the initial exploratory study, which contained time, support and motivation related barriers, is represented by the new categories 5 (situational issues) and 6 (individual motivation). Previous category 3, which included course-related barriers, is represented by the new category 3 (content related issues). Lastly, previous category 2, which consisted of barriers related to social context, is represented by category 1 (social interactions) of the new model. Furthermore, from each model (4-component and 6-factor), 10 items needed to be removed during the iterative process of the analyses, for different reasons. The removed items differed between the models, except for two items: ‘technological problems with the site’ and ‘course content too hard’. This result suggests that these two items should be removed from the overall barrier list, as they repeatedly failed to load strongly enough on any component or factor.
The comparison of the two models demonstrates that previous categories 1 and 4, which each represented a combination of barriers, were split in the new model, each part now representing its own coherent set of barriers. The 6-factor model therefore presents more finely specified categories and can thus be regarded as an improved model. The considerable similarities between the structures of the two separate studies, which used different methodologies and completely different samples, seemed a promising indication that the model would hold under a more restrictive analysis method like confirmatory factor analysis (CFA). This was indeed the case, as the fit values could be considered acceptable. However, the analysis indicated covariation of four item pairs within a factor (not between factors). Depending on the reason for such covariation, it is generally accepted to improve goodness of fit by allowing it. In this case the covariation of the four item pairs can be explained by the fact that the wording and meaning of the items were very similar (Harrington, 2009). An option would have been to remove the items from the analysis, yet the items showed good loadings on their respective factors (cut-off > .4); therefore removal was not considered an option. Allowing for covariation increased the model fit to the data from acceptable to good. The good model fit, in combination with the good measurement structure that corresponds with the expectation that the current study would reveal more refined categories, is clear support for good construct validity (Prudon, 2014).
Further classification of the 6-factor structure, similar to the classification of the 4-component structure (Henderikx et al., 2018a), resulted in Table 6. From Table 6, it can be inferred that the factors, and thus the barriers experienced by learners, are predominantly non-MOOC related. This insight can be of value for MOOC designers and providers as it may guide them in finding suitable re-design solutions or interventions to support learners in achieving their personal learning goals, even if it concerns non-MOOC related issues.
**Table 6:** Classification of the six factors as MOOC-related or non-MOOC related.

| Factor | Category | Classification |
|---|---|---|
| 1 | Social interactions | Partly MOOC and partly non-MOOC related |
| 2 | Academic skills | Non-MOOC related |
| 3 | Content related issues | MOOC related |
| 4 | Technical skills and problems | Partly MOOC and partly non-MOOC related |
| 5 | Situational issues | Non-MOOC related |
| 6 | Individual motivation | Non-MOOC related |
For example, barriers related to social context that are considered MOOC-related, like lack of interaction and lack of collaboration, could be addressed in the design of the MOOC, for instance by integrating assignments that demand or support interaction and collaboration with fellow learners.
There are some limitations that should be taken into account. Firstly, although the sample size is generally considered good to very good and the item ratio of 1:8 is considered acceptable (Comrey & Lee, 2013), an item ratio of 1:20 generally provides the most stable results (Osborne et al., 2008). Future studies are recommended to increase the sample size, and thus the item ratio, to further confirm the categorization. In addition, we did not take age or gender into consideration when analysing the results. It might be interesting to investigate whether gender and age affect the factor structure, as it is known that both gender and age can influence factor structures (Barnett et al., 2018; Drake & Egan, 2017; Idrees et al., 2017; Urushihata et al., 2010). If this were indeed the case, learner support could be personalised by gender and age group. Also, the moment of targeting the potential respondents, namely at a random point in time as opposed to immediately after finishing a MOOC, might have influenced the reliability of their responses. We had no knowledge of how recently they had participated in a MOOC at the moment of the survey, and thus how far back they had to go in their memory to recollect their experience with barriers. Further studies should attempt to collect barrier data immediately at the end of a MOOC, when barrier experiences are still fresh. Furthermore, the majority of the participants came from a western culture; future cross-cultural studies are therefore needed to address the lack of cultural diversity in the current sample. Lastly, we did not know to what extent the learners who completed the survey were successful in achieving their personal learning goals when participating in their respective MOOCs, nor were we aware of the design of those MOOCs.
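The respondent-to-item ratios mentioned above translate into simple sample-size targets for a replication. A minimal sketch; the 30-item survey used in the example is hypothetical, not the study’s actual item count:

```python
def required_sample(n_items: int, ratio: int = 20) -> int:
    """Respondents needed to reach a given respondent-to-item ratio
    (1:20 is the ratio reported to give the most stable EFA results)."""
    return n_items * ratio

# Hypothetical 30-item barrier survey:
n_acceptable = required_sample(30, ratio=8)   # 1:8 ratio, considered acceptable
n_stable = required_sample(30)                # 1:20 ratio, most stable
```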
It would be very interesting if future research could include learner achievement, as well as several context-specific questions, in the survey; this would support making distinctions based on learner success or MOOC design regarding the experience of barriers to learning in MOOCs.
The findings of this study gave insight into the barriers learners face while learning in MOOCs and showed evidence that these barriers can be empirically categorized into comprehensive and useful categories. MOOC-learners as well as MOOC-providers and -designers could benefit from having insight into barriers. MOOC-learners can use awareness of the barriers they experience to increase their actual behavioural control in the future. MOOC-providers and -designers can use insight into the barriers learners experience to adjust and further develop the course, as different types of barriers have a different impact on improvement. In addition, they can use this knowledge to support learners in achieving their personal learning goals.
A suggestion is to convert the refined categories found in this study into a diagnostic instrument (dashboard) powered by learner self-report of barriers after finishing learning in the MOOC (see Figure 2). Such a dashboard would provide information about the extent to which learners experience certain MOOC-related barriers, which is valuable information for making (re)design decisions about the MOOC, but also for developing learner-supporting tools and interventions, even if it concerns non-MOOC related issues (see also Table 6).
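At its core, such a dashboard would aggregate a learner’s self-reported item scores into one score per factor. The sketch below shows one possible aggregation; the item names and factor assignments are hypothetical placeholders, not the validated survey items.

```python
from statistics import mean

# Hypothetical mapping of self-report items (e.g. on a 1-5 scale) to the six
# factors; a real dashboard would use the validated survey items instead.
FACTOR_ITEMS = {
    "Social interactions": ["lack_of_interaction", "lack_of_collaboration"],
    "Academic skills": ["writing_skills", "study_skills"],
    "Content related issues": ["content_too_abstract", "content_pace"],
    "Technical skills and problems": ["platform_navigation", "software_setup"],
    "Situational issues": ["lack_of_time", "work_commitments"],
    "Individual motivation": ["lost_interest", "procrastination"],
}

def factor_scores(responses: dict) -> dict:
    """Average a learner's answered items into one score per factor,
    skipping factors for which no items were answered."""
    return {
        factor: mean(responses[item] for item in items if item in responses)
        for factor, items in FACTOR_ITEMS.items()
        if any(item in responses for item in items)
    }
```

Aggregated over all respondents of a MOOC, these per-factor scores would show designers which barrier categories (MOOC-related or not, per Table 6) dominate.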
For instance, to support learners regarding technical and online-learning related skills, it would be possible, prior to the start of a MOOC, to specifically draw attention to the minimum technical and online-learning skills needed to learn successfully in the MOOC. Or learners who struggle with barriers related to time or motivation can, even though these are not MOOC-related, be supported by providing information on how to handle and cope with these kinds of barriers, as well as by providing supporting interventions (Antonaci et al., 2019; Jansen et al., 2020). Ultimately, being able to make informed decisions about possible redesign and the development of supporting tools is likely to benefit learner success and the overall quality of the MOOC.
[Appendix table: sample information (N = 540); item number, item description, M, SD — table body not reproduced here.]
This work was financed via a grant by the Dutch National Initiative for Education Research (NRO)/The Netherlands Organization for Scientific Research (NWO) and the Dutch Ministry of Education, Culture and Science under grant no. 405-15-705 (SOONER/http://sooner.nu). The data collection was facilitated by Delft University. The authors would like to especially thank Jan-Paul van Staalduinen for the collaboration and the facilitation of the data collection.
The authors have no competing interests to declare.
Adamopoulos, P. (2013). What Makes a Great MOOC? An Interdisciplinary Analysis of Student Retention in Online Courses. In 34th International Conference on Information Systems: ICIS 2013. United States: Association for Information Systems. http://pages.stern.nyu.edu/~padamopo/What%20makes%20a%20great%20MOOC.pdf
Antonaci, A., Klemke, R., Dirkx, K., & Specht, M. (2019). May the plan be with you! A usability study of the stimulated planning game element embedded in a MOOC platform. International Journal of Serious Games, 6(1), 49–70. DOI: https://doi.org/10.17083/ijsg.v6i1.239
Balfour, S. P. (2013). Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review. Research & Practice in Assessment, 8, 40–48. https://eric.ed.gov/?id=EJ1062843
Barnes, C. (2013). MOOCs: The challenges for academic librarians. Australian Academic & Research Libraries, 44(3), 163–175. DOI: https://doi.org/10.1080/00048623.2013.821048
Barnett, S. D., Hickling, E. J., & Sheppard, S. (2018). The impact of gender on the factor structure of PTSD symptoms among active duty United States military personnel. European Journal of Trauma & Dissociation, 2(3), 117–124. DOI: https://doi.org/10.1016/j.ejtd.2018.01.002
Belanger, Y., & Thornton, J. (2013). Bioelectricity: A quantitative approach. Durham, NC. http://dukespace.lib.duke.edu/dspace/bitstream/handle/10161/6216/Duke_Bioelectricity_MOOC_Fall2012.pdf?sequence=1
Boyatt, R., Joy, M., Rocks, C., & Sinclair, J. (2013). What (Use) is a MOOC? In L. Uden, Y. Tao, H. Yang & I. Ting (Eds.), Springer proceedings in complexity: 2nd International Workshop on Learning Technology for Education in Cloud (pp. 133–145). DOI: https://doi.org/10.1007/978-94-007-7308-0_15
Bryant, F. B., & Yarnold, P. R. (1995). Principal-components analysis and exploratory and confirmatory factor analysis. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 99–136). American Psychological Association. https://psycnet.apa.org/record/1995-97110-004
Byrne, B. M. (2005). Factor analytic models: Viewing the structure of an assessment instrument from three perspectives. Journal of Personality Assessment, 85(1), 17–32. DOI: https://doi.org/10.1207/s15327752jpa8501_02
Byrne, B. M. (2012). Structural equation modeling with Mplus: Basic concepts, applications, and programming. New York: Routledge. DOI: https://doi.org/10.4324/9780203807644
Carr, S. (2000). As distance education comes of age, the challenge is keeping the students. The Chronicle of Higher Education, 4, A39–A41. https://eric.ed.gov/?id=EJ601725
Comrey, A. L., & Lee, H. B. (2013). A first course in factor analysis (2nd ed.). Psychology Press. DOI: https://doi.org/10.4324/9781315827506
Drake, K. E., & Egan, V. (2017). Investigating gender differences in the factor structure of the Gudjonsson Compliance Scale. Legal and Criminological Psychology, 22(1), 88–98. DOI: https://doi.org/10.1111/lcrp.12081
Duffy, M. C., Lajoie, S. P., Pekrun, R., & Lachapelle, K. (2018). Emotions in medical education: Examining the validity of the Medical Emotion Scale (MES) across authentic medical learning environments. Learning and Instruction. DOI: https://doi.org/10.1016/j.learninstruc.2018.07.001
Fan, X., Thompson, B., & Wang, L. (1999). Effects of sample size, estimation methods, and model specification on structural equation modeling fit indexes. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 56–83. DOI: https://doi.org/10.1080/10705519909540119
Fishbein, M., & Ajzen, I. (2010). Predicting and changing behaviour: The reasoned action approach. Psychology Press (Taylor & Francis). DOI: https://doi.org/10.4324/9780203838020
Gamage, D., Fernando, S., & Perera, I. (2015, August). Quality of MOOCs: A review of literature on effectiveness and quality aspects. In Ubi-Media Computing (UMEDIA), 2015 8th International Conference (pp. 224–229). IEEE. DOI: https://doi.org/10.1109/UMEDIA.2015.7297459
Greene, J. A., Oswald, C. A., & Pomerantz, J. (2015). Predictors of Retention and Achievement in a Massive Open Online Course. American Educational Research Journal, 52(5), 925–955. DOI: https://doi.org/10.3102/0002831215584621
Grover, S., Franz, P., Schneider, E., & Pea, R. (2013, June). The MOOC as distributed intelligence: Dimensions of a framework & evaluation of MOOCs. In Proceedings CSCL, 2, 42–5. https://repository.isls.org/bitstream/1/1940/1/42-45.pdf
Harrington, D. (2009). Confirmatory factor analysis. Oxford New York Press. DOI: https://doi.org/10.1093/acprof:oso/9780195339888.001.0001
Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning: Data mining, inference, and prediction. Springer. DOI: https://doi.org/10.1007/BF02985802
Hayduk, L. A. (2014). Shame for disrespecting evidence: The personal consequences of insufficient respect for structural equation model testing. BMC Medical Research Methodology, 14, 124. DOI: https://doi.org/10.1186/1471-2288-14-124
Henderikx, M. A. (2019). Mind The Gap: Unravelling learner success and behaviour in Massive Open Online Courses [doctoral dissertation]. Open Universiteit Netherlands. https://research.ou.nl/ws/files/11849471/DissertationMaartjeHenderikx.pdf
Henderikx, M. A., Kreijns, K., & Kalz, M. (2017). Refining success and dropout in massive open online courses based on the intention–behavior gap. Distance Education, 38(3), 353–368. DOI: https://doi.org/10.1080/01587919.2017.1369006
Henderikx, M., Kreijns, K., & Kalz M. (2018a). A classification of barriers that influence intention achievement in MOOCs. In V. Pammer-Schindler, M. Pérez-Sanagustín, H. Drachsler, R. Elferink & M. Scheffel (Eds.), Lifelong technology-enhanced learning. EC-TEL 2018. LNCS, 11082, 3–15. DOI: https://doi.org/10.1007/978-3-319-98572-5_1
Henderikx, M. A., Kreijns, C., & Kalz, M. (2018b). Intention – behavior dynamics in MOOCs Learning. What happens to good intentions along the way? In 2018 Learning With MOOCS (LWMOOCS): Proceedings of the Fifth Learning with MOOCs Conference (pp. 110–112). IEEE. DOI: https://doi.org/10.1109/LWMOOCS.2018.8534595
Henderikx, M., Kreijns, K., Castaño Muñoz, J., & Kalz, M. (2019). Factors influencing the pursuit of personal learning goals in MOOCs. Distance Education, 40(2), 187–204. DOI: https://doi.org/10.1080/01587919.2019.1600364
Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J., & Chuang, I. (2014). HarvardX and MITx: The first year of open online courses. HarvardX Working Paper No. 1. http://harvardx.harvard.edu/multiple-course-report. DOI: https://doi.org/10.2139/ssrn.2381263
Hone, K. S., & El Said, G. R. (2016). Exploring the factors affecting MOOC retention: A survey study. Computers & Education, 98, 157–168. DOI: https://doi.org/10.1016/j.compedu.2016.03.016
Hooper, D., Coughlan, J., & Mullen, M. (2008). Structural equation modelling: Guidelines for determining model fit. Electronic Journal of Business Research Methods (EJBRM), 6(1), 53–60. https://academic-publishing.org/index.php/ejbrm/article/view/1224
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. DOI: https://doi.org/10.1080/10705519909540118
Idrees, M., Hafeez, M., & Kim, J. Y. (2017). Workers’ age and the impact of psychological factors on the perception of safety at construction sites. Sustainability, 9(5), 745. https://www.mdpi.com/2071-1050/9/5/745/htm. DOI: https://doi.org/10.3390/su9050745
Jansen, R. S., van Leeuwen, A., Janssen, J., Conijn, R., & Kester, L. (2020). Supporting learners’ self-regulated learning in Massive Open Online Courses. Computers & Education, 146, 103771. DOI: https://doi.org/10.1016/j.compedu.2019.103771
Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44(3), 486–507. DOI: https://doi.org/10.1177/0049124114543236
Khalil, H., & Ebner, M. (2013). Interaction Possibilities in MOOCs – How Do They Actually Happen? International Conference on Higher Education Development (pp. 1–24). Egypt: Mansoura University. https://www.scribd.com/document/134249470/Interaction-Possibilities-in-MOOCs-How-Do-They-Actually-Happen
Khalil, H., & Ebner, M. (2014). MOOCs Completion Rates and Possible Methods to Improve Retention – A Literature Review. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 1236–1244). Chesapeak, VA: AACE. https://www.learntechlib.org/p/147656
Koller, D., Ng, A., Do, C., & Chen, Z. (2013). Retention and intention in massive open online courses: In depth. Educause Review Online. http://er.educause.edu/articles/2013/6/retention-and-intention-in-massive-open-online-courses-in-depth
Mackness, J., Mak, S., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In Proceedings of the 7th international conference on networked learning 2010 (pp. 266–275). University of Lancaster. https://researchportal.port.ac.uk/portal/en/publications/the-ideals-and-reality-of-participating-in-a-mooc(067e281e-6637-423f-86a5-ff4d2d687af1).html
Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin, 103(3), 391–410. DOI: https://doi.org/10.1037/0033-2909.103.3.391
Marsh, H. W., Hau, K. T., & Grayson, D. A. (2005). Goodness of fit evaluation in structural equation modeling. In A. Maydeu-Olivares & J. J. McArdle (Eds.), Contemporary psychometrics. A festschrift to Roderick P. McDonald (pp. 225–340). https://psycnet.apa.org/record/2005-04585-010
Marsh, H. W., Hau, K. T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) Findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320–341. DOI: https://doi.org/10.1207/s15328007sem1103_2
McAuley, A., Stewart, B., Siemens, G., & Cormier, D. (2010). The MOOC model for digital practice. https://oerknowledgecloud.org/sites/oerknowledgecloud.org/files/MOOC_Final.pdf
McNeish, D., An, J., & Hancock, G. R. (2018). The thorny relation between measurement quality and fit index cutoffs in latent variable models. Journal of Personality Assessment, 100(1), 43–52. DOI: https://doi.org/10.1080/00223891.2017.1281286
Misopoulos, F., Argyropoulou, M., & Tzavara, D. (2018). Exploring the factors affecting student academic performance in online programs: a literature review. In A. Khare & D. Hurst (Eds.), On the line (pp. 235–249). DOI: https://doi.org/10.1007/978-3-319-62776-2_18
Muilenburg, L. Y., & Berge, Z. L. (2005). Student barriers to online learning: A factor analytic study. Distance Education, 26(1), 29–48. DOI: https://doi.org/10.1080/01587910500081269
Muthén, L. K., & Muthén, B. O. (1998–2014). MPlus user’s guide (8th ed.). Los Angeles, CA: Muthén & Muthén. https://www.statmodel.com/download/usersguide/MplusUserGuideVer_8.pdf
Ng, S.-M. (2013). Validation of the 10-item Chinese perceived stress scale in elderly service workers: one-factor versus two-factor structure. BMC Psychology, 1, 9. https://link.springer.com/article/10.1186/2050-7283-1-9. DOI: https://doi.org/10.1186/2050-7283-1-9
Onah, D. F., Sinclair, J., & Boyatt, R. (2014). Dropout rates of massive open online courses: behavioural patterns. In International conference on education and new learning technologies. EDULEARN14 proceedings, 1, 5825–5834. Barcelona. http://wrap.warwick.ac.uk/65543/
Osborne, J. W., Costello, A. B., & Kellow, J. T. (2008). Best practices in exploratory factor analysis. Best practices in quantitative methods (pp. 86–99). DOI: https://doi.org/10.4135/9781412995627.d8
Preacher, K. J. (2006). Testing complex correlational hypotheses with structural equation models. Structural Equation Modeling, 13(4), 520–543. DOI: https://doi.org/10.1207/s15328007sem1304_2
Preacher, K. J., Zhang, G., Kim, C., & Mels, G. (2013). Choosing the optimal number of factors in exploratory factor analysis: A model selection perspective. Multivariate Behavioral Research, 48, 28–56. DOI: https://doi.org/10.1080/00273171.2012.710386
Prudon, P. (2014). Confirmatory factor analysis: a brief introduction and critique. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.6207&rep=rep1&type=pdf
Reich, J. (2014). MOOC completion and retention in the context of student intent. Educause Review Online. http://er.educause.edu/articles/2014/12/mooc-completion-and-retention-in-the-context-of-student-intent
Reich, J., & Ruipérez-Valiente, J. A. (2019). The MOOC pivot. Science, 363(6423), 130–131. DOI: https://doi.org/10.1126/science.aav7958
Rhemtulla, M., Brosseau-Liard, P. E., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological methods, 17(3), 354–373. DOI: https://doi.org/10.1037/a0029315
Saleh, A., & Bista, K. (2017). Examining factors impacting online survey response rates in educational research: Perceptions of graduate students. Journal of MultiDisciplinary Evaluation, 13(29), 63–74. https://files.eric.ed.gov/fulltext/ED596616.pdf
Shapiro, H. B., Lee, C. H., Roth, N. E. W., Li, K., Çetinkaya-Rundel, M., & Canelas, D. A. (2017). Understanding the massive open online course (MOOC) student experience: An examination of attitudes, motivations, and barriers. Computers & Education, 110, 35–50. DOI: https://doi.org/10.1016/j.compedu.2017.03.003
Schmitt, T. A. (2011). Current methodological considerations in exploratory and confirmatory factor analysis. Journal of Psychoeducational Assessment, 29, 304–321. DOI: https://doi.org/10.1177/0734282911406653
Schmitt, T. A., Sass, D. A., Chappelle, W., & Thompson, W. (2018). Selecting the best factor structure and moving measurement validation forward: An illustration. Journal of Personality Assessment, 100(4), 345–362. DOI: https://doi.org/10.1080/00223891.2018.1449116
Shin, N. (2003). Transactional presence as a critical predictor of success in distance learning. Distance Education, 24(1), 69–86. DOI: https://doi.org/10.1080/01587910303048
Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273–1296. DOI: https://doi.org/10.1007/s11165-016-9602-2
Urushihata, T., Kinugasa, T., Soma, Y., & Miyoshi, H. (2010). Aging effects on the structure underlying balance abilities tests. Journal of the Japanese Physical Therapy Association, 13(1), 1–8. DOI: https://doi.org/10.1298/jjpta.13.1
Wegener, D. T., & Fabrigar, L. R. (2000). Analysis and design for nonexperimental data addressing casual and noncausal hypotheses. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 412–450). Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511996481.024
Yu, C.-Y. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. Los Angeles: University of California. https://pdfs.semanticscholar.org/7a22/ae22553f78582fc61c6cab4567d36998293b.pdf