MOOCs can be considered a powerful alternative in extraordinary situations where people cannot reach formal education. More than 25 million students from all over the world have access to more than 2800 courses on the edX platform (edX, 2020). Due to COVID-19, which affected the whole world in the first months of 2020, most countries declared curfews (Cohen & Kupferschmidt, 2020). In this regard, Udemy CEO Coccari (2020) stated that the company had reached the highest number of students in its history, receiving 22 million new enrollments in March. These figures show that MOOCs are a powerful alternative for learning. MOOCs provide equality in education and efficiency in theoretical education by serving as a large-scale educational resource off campus (Guo, 2017). Since MOOCs provide access to course content from any mobile device or computer, they enable large-scale participation in education (Rudas, 2014). In MOOCs, learners can interact with the instructors (Criollo-C et al., 2018) and with each other through discussion forums.
Highly qualified students in a poorly structured MOOC will likely not be able to complete the course (Abeer & Miri, 2014). Teachers’ lack of information technology competence puts MOOCs at a disadvantage (Gao, 2018). As the number of courses and students increases, the completion rate of MOOCs decreases (Cagiltay et al., 2020). Tang et al. (2015) stated in their study that the completion rate in MOOCs is around 13% and suggested that the big data produced in MOOCs can be used to investigate the reasons for this.
In order to increase the effectiveness of MOOCs and to provide a better learning environment, the need to evaluate MOOCs has arisen. Education offered at this scale is expected to meet certain quality standards (Lowenthal & Hodges, 2015). One of the indicators of quality in online learning is student satisfaction (Kara et al., 2021; Moore, 2005). Accordingly, evaluating MOOCs in terms of learner satisfaction will be beneficial for revealing the quality of the education and improving future practices. This research therefore aims to reveal learner satisfaction in MOOCs.
The history of MOOCs goes back to 2008, when the first MOOC was offered by the University of Manitoba (Mackness et al., 2010). However, the term “MOOC” can be somewhat misleading, as not all MOOCs are massive, and not all of them are open to everyone (Bohnsack & Puhl, 2014). Many MOOCs now charge fees for access or for credit, and some are only available to select groups of learners (Taneja & Goel, 2014).
As MOOCs have gained popularity in the past decade, their reliance on universities for course creation has decreased, and the number of courses created by companies such as Google, Microsoft, Amazon, and Facebook has continued to grow. Coursera, for example, saw a significant increase in the proportion of non-university courses in 2021, with 39% of new courses coming from corporate partners (Shah, 2021).
Learner satisfaction is considered one of the indicators of the quality and success of online learning (Kara et al., 2021; Moore, 2005). Many institutions assess learning outcomes by collecting data on learners’ performance and satisfaction (Kember & Ginns, 2012). In this assessment, satisfaction can be defined as the scores given by learners based on their learning experiences during their training (Li et al., 2016). Therefore, learner satisfaction is typically measured when evaluating the design of online learning (Kara et al., 2021). Learner satisfaction is defined as “perceptions of learning experiences and perceived value of a course” (Kuo et al., 2013, p. 17). One of the indicators of success in MOOCs is individuals’ continuance intention. Although dropout rates alone may not be sufficient to evaluate the adequacy of a MOOC (Wang & Baker, 2018), the literature shows that the level of satisfaction with a MOOC plays a major role in individuals’ intention to continue with that course (Joo et al., 2018).
Learner satisfaction is affected by variables such as interaction, self-regulated learning (Asoodar et al., 2016; Gameel, 2017; Kuo et al., 2013), the instructor, the quality and flexibility of content (Sun et al., 2008), and engagement (Bahati et al., 2019). Badali et al. (2020) argued that a design that makes learners active in MOOCs would increase their engagement and, consequently, their satisfaction levels. Evaluating the design elements in MOOCs, and accordingly learner satisfaction, is important for evaluating the quality of MOOCs. As MOOCs have become widespread and reached large audiences, learner satisfaction has emerged as a variable that must be examined for the success of MOOCs (Hew et al., 2020).
While some studies reveal that similar variables are valid for MOOCs (Gameel, 2017; Rabin et al., 2019), others indicate that factors such as perceived usefulness and perceived ease of use affect learner satisfaction in MOOCs (Joo et al., 2018). However, studies investigating learner satisfaction have been carried out with similar research approaches (Hew et al., 2020), and they use similar data collection processes.
Korableva et al. (2019) investigated the effect of MOOC platform interfaces on student satisfaction; they conducted a qualitative study, collecting data through interviews with 60 participants. Li (2019) investigated the effect of learners’ demographic characteristics on student satisfaction in MOOCs using quantitative data collected through surveys. Badali et al. (2020) investigated the effect of educational content prepared according to Merrill’s first principles of instruction on student satisfaction in their experimental study. Mulik et al. (2020) collected data from 310 MOOC users through an online survey to investigate the relationships among flow experience, satisfaction, and acceptance of MOOCs. Daneji et al. (2019) investigated the effect of perceived usefulness on satisfaction using structural equation modeling in their study on MOOCs.
Qualitative studies, experimental studies, and survey data, especially with small sample groups or a single course, are quite common (Brooker et al., 2018). This makes it difficult to see the big picture of MOOCs. Therefore, examining learner satisfaction in larger sample groups will be beneficial for the quality and success of MOOCs to be designed in the future.
Text mining is a data mining technique that reveals hidden and meaningful structures in data (Gupta & Lehal, 2009). Unlike classical data mining, text mining uses texts as its data set. Many different algorithms can be used depending on the purpose of the text mining (Aggarwal & Wang, 2011). Algorithms used in text mining convert text into numerical values and analyze them (Vijayarani & Janani, 2016), as illustrated below.
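As a minimal sketch of this conversion, the Python example below uses scikit-learn's TfidfVectorizer to turn a handful of short comments into a numeric term matrix. The tooling choice and the sample comments are illustrative assumptions; the present study used Knime and Leximancer, as described below.

```python
# Minimal sketch: converting text into numerical values with TF-IDF.
# Tooling (scikit-learn) and sample comments are illustrative assumptions;
# the study itself used Knime and Leximancer.
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "great course, clear narration",
    "narration too fast, examples unclear",
    "good examples, clear and simple",
]

vectorizer = TfidfVectorizer()             # tokenizes and weights terms
matrix = vectorizer.fit_transform(comments)

print(vectorizer.get_feature_names_out())  # vocabulary (matrix columns)
print(matrix.toarray())                    # one numeric row per comment
```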
In this research, the Leximancer text mining tool was used to analyze user comments and visualize the analysis. The Leximancer tool has been used in many academic studies, such as blog mining (Chen, 2014), content analysis (Fisk et al., 2012; Zawacki-Richter & Naidu, 2016), analysis of data from a feedback system (Travaglia et al., 2009), and content analysis mapping the 40-year history of a cross-cultural psychology journal (Cretchley et al., 2010).
The research method and reporting were carried out in five steps, shown in Figure 1. The process began with data collection; the data were then made ready for analysis by classifying and cleaning them, after which they were analyzed and reported.
Figure 1. Data Analysis Process.
As seen in Figure 1, the text mining process started with the data collection step. At this step, the user comments were downloaded from Udemy on April 16, 2020. The data set consists of 39101 comments made by users in 960 Turkish-language courses in the “Software Development” category on the Udemy platform. The selected 960 courses comprise all active courses available on the Udemy platform in the software development category as of the date the data were collected. Since there are many different categories of courses on the Udemy platform, and the comments can therefore be very scattered in terms of context, the selected courses were limited to the software development category, narrowing the focus of the research to a single field. In addition, an examination of the Turkish courses showed that the software development category contained the most courses; it was therefore chosen in order to obtain more data and increase the validity and reliability of the study. Collecting data only from courses in Turkish avoided the difficulties that may arise from language differences during the analysis and preserved language integrity.
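The collection step could, in principle, be scripted along the following lines. The endpoint URL, response fields, and pagination scheme below are hypothetical placeholders, not a documented Udemy interface; the sketch only makes the step concrete.

```python
# Hypothetical sketch of the collection step. REVIEWS_URL, the response
# shape ("results"), and its fields are placeholders for illustration,
# not a documented Udemy API.
import requests

REVIEWS_URL = "https://example.com/courses/{course_id}/reviews"  # placeholder

def fetch_comments(course_ids):
    rows = []
    for cid in course_ids:
        page = 1
        while True:
            resp = requests.get(REVIEWS_URL.format(course_id=cid),
                                params={"page": page})
            results = resp.json().get("results", [])
            if not results:                      # no more pages
                break
            for review in results:
                rows.append({"course": cid,
                             "comment": review["content"],
                             "rating": review["rating"],
                             "posted": review["created"]})
            page += 1
    return rows
```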
Each comment on a Udemy course requires the user to give a rating (stars) between 0.5 and 5. It can therefore be assumed that users with a positive experience accompany their comments with a high rating score and users with a negative experience with a low rating score. The content of the comments, the posting time, and the rating score were taken for analysis. The comments were then categorized according to their rating scores and grouped. All comments were gathered in three groups (Figure 2). The ‘negative comments’ group consists of 1519 comments with a rating score of 0.5–1.5, the ‘neutral comments’ group consists of 3469 comments with ratings of 2.0–3.5, and the ‘positive comments’ group consists of 34113 comments with ratings of 4.0–5.0.
Figure 2. Rating distribution of comments.
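The grouping by rating score is straightforward to express in code. The sketch below assumes the collected comments sit in a pandas DataFrame with hypothetical comment and rating columns and applies the thresholds described above.

```python
# Sketch of the rating-based grouping; 'comment' and 'rating' are assumed
# column names. Thresholds follow the text: 0.5-1.5 negative,
# 2.0-3.5 neutral, 4.0-5.0 positive.
import pandas as pd

def label_rating(rating):
    if rating <= 1.5:
        return "negative"
    if rating <= 3.5:
        return "neutral"
    return "positive"

df = pd.DataFrame({"comment": ["bad", "okay", "great"],
                   "rating": [1.0, 3.0, 5.0]})
df["group"] = df["rating"].apply(label_rating)
print(df["group"].value_counts())  # on the full data: 1519 / 3469 / 34113
```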
When analyzing the data, it is essential to clean it of noise to improve the quality of the results. Noisy data are non-distinctive words that occur in frequently used clusters of the language structure (Aggarwal & Zhai, 2012). The open-source Knime software and its built-in nodes were used for data cleaning: ‘Punctuation Erasure’ removed punctuation marks, ‘N Chars Filter’ removed character groups shorter than three characters, ‘Number Filter’ removed numerical characters, ‘Case Converter’ converted all characters to lowercase, and ‘Stop Word Filter’ removed various Turkish words that have no distinctive features.
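For readers without Knime, the same cleaning pipeline can be approximated in plain Python, as in the sketch below. The stop-word set is a small illustrative subset of Turkish stop words, not the full list used in the study.

```python
# Approximation of the Knime cleaning nodes in plain Python. STOP_WORDS is
# a small illustrative subset, not the full Turkish list used in the study.
import re

STOP_WORDS = {"ve", "bir", "ama", "için", "gibi", "çok"}

def clean_comment(text):
    text = re.sub(r"[^\w\s]", " ", text)                  # Punctuation Erasure
    tokens = text.split()
    tokens = [t for t in tokens if len(t) >= 3]           # N Chars Filter
    tokens = [t for t in tokens if not t.isdigit()]       # Number Filter
    tokens = [t.lower() for t in tokens]                  # Case Converter
    tokens = [t for t in tokens if t not in STOP_WORDS]   # Stop Word Filter
    return " ".join(tokens)

print(clean_comment("Bu kurs çok iyi!!! 10 üzerinden 10"))  # -> "kurs iyi üzerinden"
```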
The data, freed from noise, were analyzed separately with the Leximancer software in three categories: “negative comments,” “neutral comments,” and “positive comments.” Leximancer finds the main concepts in a text and provides an overview through a conceptual map showing their relationships (Chen, 2014). The software determines the frequency with which words occur together in the text and treats frequently co-occurring, high-frequency words as concepts (Cretchley et al., 2010). The resulting concepts are clustered into themes according to their proximity to one another, which helps interpret the sets of concepts formed (Leximancer, 2018). The generated concept maps were interpreted in the context of MOOCs.
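Leximancer's algorithm is proprietary, but its core idea, counting how often words occur together, can be illustrated in a few lines. The sketch below is a simplified stand-in for the tool's concept extraction, not its actual implementation.

```python
# Simplified illustration of the co-occurrence counting that underlies
# concept extraction; not Leximancer's actual algorithm.
from collections import Counter
from itertools import combinations

cleaned = ["anlatım hızlı örnek",
           "anlatım güzel örnek bol",
           "hızlı anlatım konu"]

pair_counts = Counter()
for doc in cleaned:
    words = sorted(set(doc.split()))
    pair_counts.update(combinations(words, 2))  # word pairs per comment

# Frequently co-occurring, high-frequency pairs suggest candidate concepts.
print(pair_counts.most_common(3))
```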
Users’ comments were divided into three groups according to their ratings, and text mining was performed for each group. A total of 1519 negative comments were made about the MOOCs. As can be seen in Figure 3, the text mining applied to the negative comments shows that they form five clusters.
Figure 3. Concept map of negative comments (n = 1519).
The fast cluster was the smallest in the negative comments group and contained the fewest concepts. The two concepts in the cluster were fast and teaching. This finding may be interpreted to mean that the lessons were taught faster than the learners could follow, which negatively affected their satisfaction levels. The fast concept is also linked to the example concept under the education cluster. Learners who try to reproduce the examples used to reinforce the subject simultaneously with the instruction may not be able to keep up, which may explain these comments.
Another cluster of negative comments was the education cluster. In some of the comments in this cluster, the concepts of inadequate and information appear related. This may be interpreted to mean that some participants found the instructor’s knowledge insufficient. From another point of view, some participants may have been ahead of the course level, so the information given in the course was insufficient for them. Two concepts that support this interpretation were also included in this cluster: subject and from scratch. Participants may not have found what they expected when the course started from the basics. In the same cluster, the concepts really, good, and knowledge also appeared as related concepts. Some participants commented negatively even though they considered the instructor’s knowledge sufficient; the reason may be pedagogical deficiencies despite the instructor being knowledgeable.
Some concepts that support the comments in the education cluster were also included in the teaching cluster. For example, concepts such as unnecessary and simple suggest that the course could not meet the expectations of participants looking for advanced content. In addition, the concepts narration and bad in the narrative cluster appeared in relation to each other. This may be interpreted to mean that the learners did not like the instructor’s narration; from a broader perspective, the learners may not have considered the instructor’s pedagogical competencies sufficient.
Another cluster emerging in the negative comments was the course cluster. Examining the concepts in this cluster, course, fee, money, purchase, and YouTube stand out. This may be because the learners paid for the course and could not find what they expected; some comments may have stated that similar videos are available on YouTube. The emergence of the concepts similar, many, and video within the same cluster may be interpreted as indicating that there are many videos with similar content on YouTube.
The last cluster appearing in the negative comments was the answer cluster. At first glance, this cluster may be considered to consist of comments caused by a lack of interaction in MOOCs. The inclusion of the question and answer concepts in this cluster may result from students not receiving answers to the questions they posed to the instructor in these asynchronous MOOCs. This may be interpreted as inadequate interaction.
A total of 3469 neutral comments were made about the MOOCs. The text mining applied to the neutral comments shows that they form four clusters (Figure 4).
Figure 4. Concept map of neutral comments (n = 3469).
When the neutral comments were examined, one of the clusters was the much cluster, at the center of which were complaints about excessive repetition. This finding matches the simple concept in the negative comments: it appears that the instructor constantly repeats material so that participants can better understand the subject, but this can make the lesson boring for learners above the basic level. Accordingly, comments in the same cluster indicate that some students found the course unnecessarily long. The emergence of the concept too fast in the same cluster complements this finding: some learners, being at a basic level, found the instructor’s delivery fast. Apart from this, technical problems were also reflected in this cluster.
Analysis of the concepts in the teaching cluster shows that, as in other clusters, comments stated that the teaching remained at a basic level and included only basic information; concepts such as basic and inadequate were included. In the video cluster, the most prominent concept was long: the videos were perceived as long by the learners. This may be related to adult learners’ orientation toward their own educational needs; a learner may evaluate a video as long when they cannot reach the information they need.
The course cluster shared the most concepts with the other clusters. The level of the content was also frequently mentioned in this cluster. In addition, the questions asked of the instructor were included in the cluster as a concept, which may be interpreted as indicating that learners value interaction. The concepts I hope and continue were also included in the course cluster. This can be interpreted to mean that students were waiting for the continuation of the course; in other words, the learners may want the course to continue because they were satisfied with it.
In the nice cluster of the positive comments group (Figure 5), there are statements that the narration is fluent and nice. There are also themes in this cluster stating that the lessons are good for beginners. The teaching cluster includes findings indicating that the narration is clear and successful. The education cluster includes comments that the course is successful and should continue. The concepts appearing in this cluster show that learners find the courses useful, which can be interpreted to mean that the learners expect their course to continue.
Figure 5. Concept map of positive comments (n = 34113).
Among the positive comment clusters, most of the concepts were in the good and thanks/advice clusters. Evaluated within the framework of satisfaction, this can be interpreted to mean that the positive comments were made by those with a high level of satisfaction. Individuals with a high level of satisfaction stated that they would recommend the courses to their friends, and the comments in this cluster show that learners thanked the instructor. In the good cluster, there were also comments that the education was at a basic level, as in the negative and neutral comments. This can be interpreted in two ways: (1) learners who needed a basic level welcomed this, or (2) although the course was basic, the instructor’s delivery and the structuring of the course satisfied the learners.
MOOCs are one of the biggest alternatives to traditional education (Chen, 2014), where learners can get the education they want according to their needs (Wang & Baker, 2018). Because learners participate in courses that address their own needs, they tend to be willing and ready to learn. Although many learners enrolled in MOOCs never start the course (Impey et al., 2015), the learners who commented had attended the lessons. The rating accompanying a comment shows the learner’s satisfaction with the course. Satisfaction is also known to affect learner success (Rashidi & Moghadam, 2014). Therefore, increasing the satisfaction of learners in the online environment will support the success of both the learners and the MOOCs.
The literature reveals that online student satisfaction is influenced by factors such as student engagement (Bahati et al., 2019), interaction (Kuo et al., 2013), and content (Hew et al., 2020). The interaction elements of the content created in MOOCs affect learners’ satisfaction levels (Gameel, 2017), and the interaction between the learner and the instructor is particularly influential (Hew et al., 2018; Hone & El Said, 2016). Analyzing the negative comments in this light shows that the problems learners experienced in interacting with the instructor negatively affected their satisfaction. Since there is no dedicated classroom environment in MOOCs, learners ask the instructor, rather than other learners, about the problems they encounter; in this case, they expect an answer from the instructor as quickly as possible (Sun et al., 2008).
Satisfaction levels of learners in MOOCs decrease when they encounter content that does not match their learning pace. This may be because instructors prepare content without considering learners with different learning styles. Analyzing the target audience well before designing the course (Li et al., 2015) will increase student satisfaction. The presence of clusters in all types of comments (positive, neutral, and negative) that may refer to the pedagogical competencies of the instructor draws attention to the role of instructors in MOOCs. In addition, whether learners find what they expect in the content also affects their satisfaction. Since this is related to the characteristics of the learners, course design likewise depends on learner analysis.
The use of MOOCs is becoming more common day by day, which brings with it many points to consider in the design of MOOCs. In this study, participants’ comments on MOOCs were examined within the framework of satisfaction.
Interaction in distance education has a positive effect on learner satisfaction (Alqurashi, 2019; Ekwunife-Orakwue & Teng, 2014; Kuo et al., 2014a, 2014b; Paul et al., 2015; Shea et al., 2016; Swart et al., 2014). The same holds for MOOCs: the interaction between the learner and the instructor especially affects the learner’s satisfaction level (Hew et al., 2018; Hone & El Said, 2016; Li et al., 2015). Learners need interaction in these types of environments and need functions that enable interaction (Liaw, 2008). When the instructor provides the necessary interaction and support, learners’ loyalty (Hew et al., 2018) and, therefore, their satisfaction increase. For this reason, it is essential that instructors answer learners’ questions quickly (Sun et al., 2008). Learners enroll in MOOCs according to their needs (Chametzky, 2014). Since there is no classroom in MOOCs, learner-learner interaction may stay in the background (Hew et al., 2018), which makes learner-teacher interaction important. The instructor’s willingness to interact with students may increase interaction (Hew, 2016). Thus, students’ satisfaction levels can be increased by meeting their need to interact with instructors (Martin & Bolliger, 2018).
Another element to consider in order to increase learners’ satisfaction in MOOCs is content. Well-organized content will increase individuals’ engagement in the online learning environment (Briggs, 2015), and satisfaction levels may increase as engagement increases (Bahati et al., 2019). Preparing MOOC content that is of interest to the learner and increases learner interaction may also increase learner satisfaction (Gameel, 2017). Content can be arranged according to the needs of the learners, who state that they need interaction in such environments (Liaw, 2008). In xMOOC-type courses, learners may disengage from interaction because they are not active in the process. In addition to interaction within the content, learners’ individual differences should be taken into account when designing the content (Rodgers, 2008). Differences between learners may cause the content to seem simple to some students and more complex to others. To prevent this, more flexible content can be created: a flexible content structure, in which learners can navigate the content according to their own learning pace, may increase learner satisfaction (Sun et al., 2008).
One of the factors affecting learner satisfaction in MOOCs is the instructor. According to some researchers, the most significant factor in online learning is the instructor (Martin & Bolliger, 2018). The pedagogical competencies and communicative skills of the instructor support teaching (Denis et al., 2004), and this affects student satisfaction. Instructors need to have in-depth knowledge of the subject, genuine enthusiasm for the lesson, and interest in teaching the course (Hew, 2016). Instructors are also expected to be able to design content that can increase learner engagement. For this reason, the most important role instructors should assume is that of instructional designer (Li et al., 2017).
Instructors are expected to develop course materials that facilitate learning. Beyond that, providing learners with cognitive support after these materials are completed is another role expected of the instructor (Li et al., 2017). In addition, preparing more flexible, customizable content that takes individual differences into account may increase satisfaction; this, too, is related to the instructor’s role as an instructional designer.
There are some limitations to this research. The study was carried out only with data available on the Udemy platform, from courses whose language is Turkish and which fall in the software development category. Therefore, the results cannot be generalized equally to all MOOCs. Repeating the study on similar platforms and in other languages and disciplines would contribute further to the field.
The authors have no competing interests to declare.
Abeer, W., & Miri, B. (2014). Students’ preferences and views about learning in a MOOC. Procedia-Social and Behavioral Sciences, 152, 318–323. DOI: https://doi.org/10.1016/j.sbspro.2014.09.203
Aggarwal, C. C., & Wang, H. (2011). Text mining in social networks. In Social network data analytics (pp. 353–378). Springer. DOI: https://doi.org/10.1007/978-1-4419-8462-3_13
Aggarwal, C. C., & Zhai, C. (2012). Mining text data. Springer Science & Business Media. DOI: https://doi.org/10.1007/978-1-4614-3223-4
Alqurashi, E. (2019). Predicting student satisfaction and perceived learning within online learning environments. Distance Education, 40(1), 133–148. DOI: https://doi.org/10.1080/01587919.2018.1553562
Asoodar, M., Vaezi, S., & Izanloo, B. (2016). Framework to improve e-learner satisfaction and further strengthen e-learning implementation. Computers in Human Behavior, 63, 704–716. DOI: https://doi.org/10.1016/j.chb.2016.05.060
Badali, M., Hatami, J., Farrokhnia, M., & Noroozi, O. (2020). The effects of using Merrill’s first principles of instruction on learning and satisfaction in MOOC. Innovations in Education and Teaching International. Advance online publication. DOI: https://doi.org/10.1080/14703297.2020.1813187
Bahati, B., Fors, U., Hansen, P., Nouri, J., & Mukama, E. (2019). Measuring Learner Satisfaction with Formative e-Assessment Strategies. International Journal of Emerging Technologies in Learning (iJET), 14(07), 61–79. DOI: https://doi.org/10.3991/ijet.v14i07.9120
Bohnsack, M., & Puhl, S. (2014). Accessibility of MOOCs. In International Conference on Computers for Handicapped Persons (pp. 141–144). Cham: Springer. DOI: https://doi.org/10.1007/978-3-319-08596-8_21
Briggs, A. (2015). Ten ways to overcome barriers to student engagement online. Online Learning Consortium.
Brooker, A., Corrin, L., De Barba, P., Lodge, J., & Kennedy, G. (2018). A tale of two MOOCs: How student motivation and participation predict learning outcomes in different MOOCs. Australasian Journal of Educational Technology, 34(1). DOI: https://doi.org/10.14742/ajet.3237
Cagiltay, N. E., Cagiltay, K., & Celik, B. (2020). An Analysis of Course Characteristics, Learner Characteristics, and Certification Rates in MITx MOOCs. International Review of Research in Open and Distributed Learning, 21(3), 121–139. DOI: https://doi.org/10.19173/irrodl.v21i3.4698
Chametzky, B. (2014). Andragogy and engagement in online learning: Tenets and solutions. Creative education, 5(10), 813–821. DOI: https://doi.org/10.4236/ce.2014.510095
Chen, Y. (2014). Investigating MOOCs through blog mining. The International Review of Research in Open and Distributed Learning, 15(2).
Coccari, G. (2020, 21 May). Udemy’den Haberler – 2020 [News from Udemy – 2020] [Video file]. https://teach.udemy.com/tr/state-of-udemy-2020/
Cohen, J., & Kupferschmidt, K. (2020). Countries test tactics in ‘war’ against COVID-19. Science, 367(6484). DOI: https://doi.org/10.1126/science.367.6484.1287
Criollo-C, S., Luján-Mora, S., & Jaramillo-Alcázar, A. (2018). Advantages and disadvantages of M-learning in current education. 2018 IEEE World Engineering Education Conference (EDUNINE). DOI: https://doi.org/10.1109/EDUNINE.2018.8450979
Cretchley, J., Rooney, D., & Gallois, C. (2010). Mapping a 40-year history with Leximancer: Themes and concepts in the Journal of Cross-Cultural Psychology. Journal of Cross-Cultural Psychology, 41(3), 318–328. DOI: https://doi.org/10.1177/0022022110366105
Daneji, A. A., Ayub, A. F. M., & Khambari, M. N. M. (2019). The effects of perceived usefulness, confirmation and satisfaction on continuance intention in using massive open online course (MOOC). Knowledge Management & E-Learning-an International Journal, 11(2), 201–214. DOI: https://doi.org/10.34105/j.kmel.2019.11.010
Denis, B., Watland, P., Pirotte, S., & Verday, N. (2004). Roles and competencies of the e-tutor. Paper presented at the Networked learning 2004: A research based conference on networked learning and lifelong learning: Proceedings of the fourth international conference, Lancaster.
edX. (2020, 21 May). School Partners. https://www.edx.org/schools-partners
Ekwunife-Orakwue, K. C., & Teng, T.-L. (2014). The impact of transactional distance dialogic interactions on student learning outcomes in online and blended environments. Computers & Education, 78, 414–427. DOI: https://doi.org/10.1016/j.compedu.2014.06.011
Fisk, K., Cherney, A., Hornsey, M., & Smith, A. (2012). Using computer-aided content analysis to map a research domain: A case study of institutional legitimacy in postconflict East Timor. Sage Open, 2(4). DOI: https://doi.org/10.1177/2158244012467788
Gameel, B. G. (2017). Learner satisfaction with massive open online courses. American Journal of Distance Education, 31(2), 98–111. DOI: https://doi.org/10.1080/08923647.2017.1300462
Gao, J. (2018). Analysis on the Development Trend and Disadvantages of College English MOOC in the Information Age. International Conference on Contemporary Education, Social Sciences and Ecological Studies (CESSES 2018). DOI: https://doi.org/10.2991/cesses-18.2018.40
Guo, P. (2017). MOOC and SPOC, which one is better? Eurasia Journal of Mathematics, Science and Technology Education, 13(8), 5961–5967. DOI: https://doi.org/10.12973/eurasia.2017.01044a
Gupta, V., & Lehal, G. S. (2009). A survey of text mining techniques and applications. Journal of emerging technologies in web intelligence, 1(1), 60–76. DOI: https://doi.org/10.4304/jetwi.1.1.60-76
Hew, K. F. (2016). Promoting engagement in online courses: What strategies can we learn from three highly rated MOOCS. British Journal of Educational Technology, 47(2), 320–341. DOI: https://doi.org/10.1111/bjet.12235
Hew, K. F., Hu, X., Qiao, C., & Tang, Y. (2020). What predicts student satisfaction with MOOCs: A gradient boosting trees supervised machine learning and sentiment analysis approach. Computers & Education, 145, 103724. DOI: https://doi.org/10.1016/j.compedu.2019.103724
Hew, K. F., Qiao, C., & Tang, Y. (2018). Understanding student engagement in large-scale open online courses: A machine learning facilitated analysis of student’s reflections in 18 highly rated MOOCs. International Review of Research in Open and Distributed Learning, 19(3). DOI: https://doi.org/10.19173/irrodl.v19i3.3596
Hone, K. S., & El Said, G. R. (2016). Exploring the factors affecting MOOC retention: A survey study. Computers & Education, 98, 157–168. DOI: https://doi.org/10.1016/j.compedu.2016.03.016
Impey, C. D., Wenger, M. C., & Austin, C. L. (2015). Astronomy for astronomical numbers: A worldwide massive open online class. International Review of Research in Open and Distributed Learning, 16(1), 57–79. DOI: https://doi.org/10.19173/irrodl.v16i1.1983
Joo, Y. J., So, H.-J., & Kim, N. H. (2018). Examination of relationships among students’ self-determination, technology acceptance, satisfaction, and continuance intention to use K-MOOCs. Computers & Education, 122, 260–272. DOI: https://doi.org/10.1016/j.compedu.2018.01.003
Kara, M., Kukul, V., & Çakır, R. (2021). Self-regulation in three types of online interaction: How does it predict online pre-service teachers’ perceived learning and satisfaction? The Asia-Pacific Education Researcher, 30(1), 1–10. DOI: https://doi.org/10.1007/s40299-020-00509-x
Kember, D., & Ginns, P. (2012). Evaluating teaching and learning: A practical handbook for colleges, universities and the scholarship of teaching. Routledge. DOI: https://doi.org/10.4324/9780203817575
Korableva, O., Durand, T., Kalimullina, O., & Stepanova, I. (2019, March). Studying user satisfaction with the MOOC platform interfaces using the example of coursera and open education platforms. In Proceedings of the 2019 International Conference on Big Data and Education (pp. 26–30). DOI: https://doi.org/10.1145/3322134.3322139
Kuo, Y. C., Belland, B. R., Schroder, K. E., & Walker, A. E. (2014a). K-12 teachers’ perceptions of and their satisfaction with interaction type in blended learning environments. Distance Education, 35(3), 360–381. DOI: https://doi.org/10.1080/01587919.2015.955265
Kuo, Y. C., Walker, A. E., Belland, B. R., & Schroder, K. E. (2013). A predictive study of student satisfaction in online education programs. The International Review of Research in Open and Distributed Learning, 14(1), 16–39. DOI: https://doi.org/10.19173/irrodl.v14i1.1338
Kuo, Y. C., Walker, A. E., Schroder, K. E., & Belland, B. R. (2014b). Interaction, Internet self-efficacy, and self-regulated learning as predictors of student satisfaction in online education courses. The Internet and Higher Education, 20, 35–50. DOI: https://doi.org/10.1016/j.iheduc.2013.10.001
Leximancer. (2018). Leximancer User Guide Release 4.5. https://doc.leximancer.com/doc/LeximancerManual.pdf
Li, K. (2019). MOOC learners’ demographics, self-regulated learning strategy, perceived learning and satisfaction: A structural equation modeling approach. Computers & Education, 132, 16–30. DOI: https://doi.org/10.1016/j.compedu.2019.01.003
Li, N., Marsh, V., & Rienties, B. (2016). Modelling and managing learner satisfaction: Use of learner feedback to enhance blended and online learning experience. Decision Sciences Journal of Innovative Education, 14(2), 216–242. DOI: https://doi.org/10.1111/dsji.12096
Li, S., Zhang, J., Yu, C., & Chen, L. (2017). Rethinking distance tutoring in e-learning environments: A study of the priority of roles and competencies of Open University Tutors in China. International Review of Research in Open and Distributed Learning, 18(2), 189–212. DOI: https://doi.org/10.19173/irrodl.v18i2.2752
Li, Y., Zhang, M., Bonk, C. J., & Guo, Y. (2015). Integrating MOOC and Flipped Classroom Practice in a Traditional Undergraduate Course: Students’ Experience and Perceptions. International Journal of Emerging Technologies in Learning, 10(6). DOI: https://doi.org/10.3991/ijet.v10i6.4708
Liaw, S.-S. (2008). Investigating students’ perceived satisfaction, behavioral intention, and effectiveness of e-learning: A case study of the Blackboard system. Computers & Education, 51(2), 864–873. DOI: https://doi.org/10.1016/j.compedu.2007.09.005
Lowenthal, P. R., & Hodges, C. B. (2015). In search of quality: Using Quality Matters to analyze the quality of massive, open, online courses (MOOCs). International Review of Research in Open and Distributed Learning, 16(5), 83–101. DOI: https://doi.org/10.19173/irrodl.v16i5.2348
Mackness, J., Mak, S., & Williams, R. (2010, May). The ideals and reality of participating in a MOOC. In Proceedings of the 7th International Conference on Networked Learning (pp. 266–274).
Martin, F., & Bolliger, D. U. (2018). Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment. Online Learning, 22(1), 205–222. DOI: https://doi.org/10.24059/olj.v22i1.1092
Moore, J. C. (2005). The Sloan Consortium quality framework and the five pillars. The Sloan Consortium. DOI: https://doi.org/10.4018/978-1-59140-555-9.ch245
Mulik, S., Srivastava, M., & Yajnik, N. (2020). Flow Experience and MOOC Acceptance: Mediating Role of MOOC Satisfaction. NMIMS Management Review, 38(1), 52–68.
Paul, R. C., Swart, W., Zhang, A. M., & MacLeod, K. R. (2015). Revisiting Zhang’s scale of transactional distance: Refinement and validation using structural equation modeling. Distance Education, 36(3), 364–382. DOI: https://doi.org/10.1080/01587919.2015.1081741
Rabin, E., Kalman, Y. M., & Kalz, M. (2019). An empirical investigation of the antecedents of learner-centered outcome measures in MOOCs. International Journal of Educational Technology in Higher Education, 16(1), 14. DOI: https://doi.org/10.1186/s41239-019-0144-3
Rashidi, N., & Moghadam, M. (2014). The Effect of Teachers’ Beliefs and Sense of Self-Efficacy on Iranian EFL Learners’ Satisfaction and Academic Achievement. TESL-EJ, 18(2).
Rodgers, T. (2008). Student engagement in the e-learning process and the impact on their grades. International Journal of Cyber Society and Education, 1(2), 143–156.
Rudas, I. J. (2014). MOOC—The educational revolution of the century. 2014 IEEE 12th International Symposium on Intelligent Systems and Informatics (SISY). DOI: https://doi.org/10.1109/SISY.2014.6923578
Shah, D. (2021, December 28). A decade of MOOCs: A review of stats and trends for large-scale online courses in 2021. EdSurge. https://www.edsurge.com/news/2021-12-28-a-decade-of-moocs-a-review-of-stats-and-trends-for-large-scale-online-courses-in-2021
Shea, J., Joaquin, M. E., & Wang, J. Q. (2016). Pedagogical design factors that enhance learning in hybrid courses: A contribution to design-based instructional theory. Journal of Public Affairs Education, 22(3), 381–397. DOI: https://doi.org/10.1080/15236803.2016.12002254
Sun, P.-C., Tsai, R. J., Finger, G., Chen, Y.-Y., & Yeh, D. (2008). What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183–1202. DOI: https://doi.org/10.1016/j.compedu.2006.11.007
Swart, W., MacLeod, K., Paul, R., Zhang, A., & Gagulic, M. (2014). Relative proximity theory: Measuring the gap between actual and ideal online course delivery. American Journal of Distance Education, 28(4), 222–240. DOI: https://doi.org/10.1080/08923647.2014.924721
Taneja, S., & Goel, A. (2014). MOOC providers and their strategies. International Journal of Computer Science and Mobile Computing, 3(5), 222–228.
Tang, J. K., Xie, H., & Wong, T.-L. (2015). A big data framework for early identification of dropout students in MOOC. International Conference on Technology in Education. DOI: https://doi.org/10.1007/978-3-662-48978-9_12
Travaglia, J. F., Westbrook, M. T., & Braithwaite, J. (2009). Implementation of a patient safety incident management system as viewed by doctors, nurses and allied health professionals. Health, 13(3), 277–296. DOI: https://doi.org/10.1177/1363459308101804
Vijayarani, S., & Janani, R. (2016). Text mining: open source tokenization tools-an analysis. Advanced Computational Intelligence: An International Journal (ACII), 3(1), 37–47. DOI: https://doi.org/10.5121/acii.2016.3104
Wang, Y., & Baker, R. (2018). Grit and intention: Why do learners complete MOOCs? The International Review of Research in Open and Distributed Learning, 19(3). DOI: https://doi.org/10.19173/irrodl.v19i3.3393
Zawacki-Richter, O., & Naidu, S. (2016). Mapping research trends from 35 years of publications in Distance Education. Distance Education, 37(3), 245–269. DOI: https://doi.org/10.1080/01587919.2016.1185079