As technology improves and Internet access becomes more affordable and widespread, more people are becoming technology-literate, and the demand among adults for technology-based learning continues to grow. In recent years, e-learning has increased in popularity and is preferred by a growing number of learners (Paechter & Maier, 2010). To meet this demand, universities are offering more online and distance learning resources for all interested parties (e.g., university students, working professionals, retirees), covering a multitude of topics and keeping pace with the latest online technologies.
Even though these increases in e-learning appear positive, the picture becomes less promising once retention rates are considered. A variety of factors seem to contribute to why individuals are unable to complete online programs (Kara, Erdoğdu, Kokoç & Cagiltay, 2019). For example, some learners are unable to manage their time between work, family, and study (Yasmin, 2013), while others are unable to cope with the instructional content and the process of online learning. In some cases, the inability to continue an e-learning curriculum was not related to content difficulty (Barefoot, 2004; Bunn, 2004; Ivankova & Stick, 2007); instead, the primary reason for the lack of retention among online learners was personal traits (e.g., learning styles and/or difficulties, personality traits) (Harrell & Bower, 2011; Hart, 2012). According to Zawacki-Richter (2009), “learner characteristics”, which appear at the micro level and concern the teaching and learning aspect of distance education, are of utmost importance. Taking a holistic approach, the researchers in this current study investigated the learning characteristics of e-Learners to determine relationships between variables such as individual preferences, e-readiness, and satisfaction from the e-learner’s point of view.
Do individual differences and/or learning preferences affect how learners study? Should these differences and preferences be considered when designing online courses? These questions have been discussed for many years, yet instructional designers have not reached consensus. Some believe that e-learning should be developed with learners’ preferences in mind (Garland & Martin, 2005; Lee, Barker & Kumar, 2016; Siddique, Durrani & Naqvi, 2017; Wang, Wang, Wang & Huang, 2006), while others consider it unnecessary (Butler & Pinto-Zipp, 2005; Gupta & Anson, 2014; Lu, Yu & Liu, 2003; Santally & Senteni, 2013). Many believe that today’s learners are flexible and can adapt their learning preferences to any instructional experience. However, problems can arise when content and learning activities do not match individual expectations and/or preferences (Ali, Uppal & Gulliver, 2018). Thus, the role of learning preferences within e-learning remains unresolved (Lu & Chiou, 2010), and further research is needed to understand the role learning preferences play in e-learning design (Mohr, Holtbrugge & Berg, 2012). Akdemir and Koszalka (2008) make a similar argument: designing content according to individual preferences enhances learner achievement and satisfaction.
Researchers have categorized and labelled learning preferences in different ways, but considering these individual differences while designing instruction seems intuitive. Accounting for learning preferences is difficult because preferences may change with content format and learner expectations, and teaching styles can likewise change, develop, and be altered. Rather than categorizing learners, it is important to understand the overall nature of their learning as part of the design process.
As stated by Gülbahar and Alper (2014), online learning preferences can be utilized to enhance the quality of learning, especially for learners who can adapt to different ways of learning. It is also important that learners understand their own idiosyncratic traits and the consequences that arise from their divergent choices.
To reveal students’ online learning preferences, Gülbahar and Alper (2014) developed a scale consisting of seven factors: independent learning, social learning, audio-visual learning, active learning, verbal learning, logical learning, and intuitive learning. Following reliability and validity analysis, it was determined that the scale was valid and reliable. The scale was utilized as one aspect of the research model for this current study.
The e-Readiness construct is composed of several dimensions that work in unison and directly affect e-learning. The primary factors that determine learners’ readiness are effective use of information technologies, technical competencies, and individual preferences. Learners’ level of access to technology and resources should also be considered (Dada, 2006; Hanafizadeh, Hanafizadeh & Khodabakhshi, 2009; Mutula & Van Brakel, 2006). Watkins, Leigh, and Triner (2004) found that access to technology, technical skills, motivation, online audio and video, and Internet discussions are the factors most affecting success.
Çiğdem and Öztürk (2016) examined the relationship between factors of online learning readiness and learners’ end-of-course achievements, stating, “The inferential results revealed that the students’ end-of-course grades had significantly positive relationships with their computer/Internet self-efficacy and self-directed learning orientations” (p. 98).
Research on e-Readiness thus points to personal characteristics, technical competencies, access to technology, and overall motivation as the key factors determining the readiness that affects success.
Determining learner satisfaction can be difficult. The literature offers a variety of contributing factors, for example, timely feedback (Lee, 2010), social presence (Abdous & Yen, 2010; McGorry, 2003), support services (Lee, 2010), and technical support and course technology (McGorry, 2003).
Beqiri, Chase, and Bishka (2009) investigated factors affecting student satisfaction and found that the highest levels of satisfaction related to particular e-learning technologies: learners with a positive attitude and an adequate level of competency in e-learning technologies reported satisfaction. Pena and Yeung (2010) determined that there is a direct relationship between online learning satisfaction and competency in computer use. Jung-Wan and Mendlinger (2011) investigated the effect of perceived personal competence on learners’ acceptance of and satisfaction with e-learning, revealing two findings: the perception of personal competence affects attitude towards e-learning, and perceived usefulness positively affects satisfaction.
According to Palmer and Holt (2009), the satisfaction level of 70% of e-Learners is related to confidence in learning and technology use, understanding of what is expected for success and the quality of education they received throughout the process.
Similarly, Bray, Aoki, and Dlugosh (2008) found that satisfaction was higher among students who preferred individual learning, could self-manage the difficulties associated with e-learning, found computer use easy, could communicate with instructors, and preferred a lack of social interaction while learning. Palmer and Holt (2010) determined that an instructional management system increased students’ satisfaction when they could locate and utilize lesson information and had sufficient support from instructors and technical services. Reading the online contributions of their classmates was also important for student satisfaction.
To determine learner satisfaction, Ilgaz and Aşkar (2013) developed a satisfaction scale regarding “acceptance of technology in distance education and contribution of community feeling to learning satisfaction”. The dimensions of the six-factor scale were student-student interaction, student-teacher interaction, online lessons, technical support, printed materials, and face-to-face activities.
Bolliger and Martindale (2004) conducted a study of factors influencing student satisfaction in online courses. Their instrument comprised three factors: instructor, technology, and interactivity. The research concluded, “Clearly, student satisfaction is a key variable in determining the success or failure of online learners, courses, and programs” (p. 66).
Kuo, Walker, Belland, and Schroder (2013) investigated how interaction and other predictors contribute to student satisfaction in online settings. The results revealed learner-content interaction accounted for the largest unique variance in students’ levels of satisfaction. It was also highlighted that, “gender, class level, and time spent online per week seemed to have an influence on learner-learner interaction, Internet self-efficacy, and self-regulation” (p. 16).
The aim of the current study was to determine learners’ satisfaction levels regarding the effectiveness of e-learning systems and their level of interaction regarding computer use, teaching processes, teaching content, e-Instructor competence, and e-learning technologies, together with their positive attitude towards learning.
As e-learning continues to grow, further research should be conducted to determine ways of improving the e-learning process. Moreover, reasonable and informed judgments should be made regarding the quality of e-learning provided to learners, instructors, and policymakers. Roach and Lemasters (2006) suggest, “Researchers need to vary designs and methodologies in the study of online programs to not only compare online and on-ground instruction and learning, but also assess the importance of the findings” (p. 330).
Al-Azawei and Lundqvist (2015) drew on the Technology Acceptance Model (TAM) when examining satisfaction among learners, considering deep-level (learning styles), surface-level (gender), and cognitive (online self-efficacy) factors. Their aim was to reveal the pedagogical implications of learning styles for learner satisfaction; their model achieved an acceptable fit and explained 44.8% of the variance. Thus, “Perceived usefulness represented the best predictor; whereas, online self-efficacy and perceived ease of use failed to show a direct impact on perceived satisfaction” (p. 408).
Artino (2008) investigated students’ motivational beliefs, perceptions of the learning environment, and satisfaction with a self-paced online course. The results revealed that task value, self-efficacy, and instructional quality were positive predictors of student satisfaction, with the final regression model accounting for 54% of the variance in the outcome.
The literature review also revealed a modelling study by Toral, Barrero, Martinez-Torres, Gallardo, and Duran (2009), in which content and feedback, the learning community, learner responsibility, and previous learner experience significantly explained learners’ satisfaction. Joo, Lim, and Kim (2011) determined that teaching presence, cognitive presence, perceived usefulness, and ease of use predicted learner satisfaction. Lee and Choi (2013) established a direct relationship between satisfaction, internal academic locus of control, flow, and student retention in online learning environments. Ke and Kwak (2013), on the other hand, determined learner relevance, active learning, authentic learning, learner autonomy, and computer technology competence to be the most significant predictors of learner satisfaction. Finally, in Sahin and Shelley (2008), perceived usefulness and the flexibility of distance education were determined to be the most significant predictors of satisfaction.
By reviewing previous research regarding learner satisfaction, educational quality, and other aspects of online learning, it was determined that there is a need for a holistic approach to gathering data and insight into e-learning. Through such an approach, the researchers in this current study provide an integrative perspective on the e-learning process.
Determining the variables that predict student satisfaction within e-learning was the aim of the current study. The researchers investigated whether relationships exist between learners’ e-learning preferences, their readiness for e-learning, and their satisfaction. The hypotheses and theoretical model for this study are:
H1: There is a significant relationship between individuals’ learning preferences and their satisfaction within an e-learning environment.
H2: There is a significant relationship between individuals’ readiness and their satisfaction within an e-learning environment.
H3: The delivery and usability, teaching process, instructional content, and interaction and evaluation components predict satisfaction within an e-learning environment.
The researchers conducted this study within an e-learning program at Ankara University in Turkey that employed a blended learning model combining synchronous and asynchronous practices. At the start of the school semester, students and instructors were notified of their synchronous lesson schedule, delivered via the Adobe Connect Virtual Classroom. Each lesson was recorded and uploaded so that students could follow up at any time. Each course included a syllabus, a SCORM (Sharable Content Object Reference Model) package, course notes, presentations, and supplementary materials. Students could freely access course resources 24 hours a day, 7 days a week, and could interact with their course peers and/or course instructors through course discussion boards. Each semester, 2 or 3 days of face-to-face sessions were held in which learners could attend courses and meet with course instructors and peers.
In this study the researchers examined participants’ learning preferences and readiness prior to e-learning in relation to their satisfaction with e-learning. To determine whether relationships existed between these variables, Structural Equation Modelling (SEM) was employed. SEM combines statistical procedures including regression, path analysis, and factor analysis; it is a methodology used to determine relationships between latent variables, measured through observed variables, in relation to a theoretical structure (Kline, 2010).
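For reference, the model family used here can be summarized in conventional LISREL notation (this is the standard textbook formulation, cf. Kline, 2010, not an equation reported by the authors): latent constructs are measured through their indicators, and the structural part links the latent variables themselves.

```latex
% Measurement model: observed indicators loading on exogenous (\xi) and endogenous (\eta) latent variables
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \varepsilon
% Structural model with a single endogenous construct
\eta = \Gamma \xi + \zeta
```

In the present study, ξ corresponds to learning preferences and readiness, η to satisfaction, and the λx and δ terms are the loadings and measurement errors reported later in Table 3.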
The participants of this study were 363 individuals enrolled in a distance learning associate degree program or an undergraduate degree completion program at Ankara University. The demographic data of participants are provided in Table 1.
Table 1
Participant demographics
| | | Frequency (f) | Percent (%) |
|---|---|---|---|
| Gender | Female | 229 | 63.1 |
| | Male | 120 | 33.1 |
| Marital Status | Single | 204 | 56.2 |
| | Married | 145 | 39.9 |
| Age | 18–25 | 169 | 46.6 |
| | 26–33 | 95 | 26.2 |
| | 34–41 | 64 | 17.6 |
| | 42–49 | 12 | 3.3 |
| | 50 and over | 3 | 0.8 |
To identify participants’ level of e-learning readiness, an e-Readiness Scale was administered at the start of the semester. The e-Readiness Scale was a 5-point Likert scale comprising 26 items plus one additional open-ended question, with five factors: individual properties, ICT competencies, access to technology, motivation and attitude, and factors that affect success. The scale had previously been determined to be valid and reliable, with a Cronbach’s α of .93 (Gülbahar, 2012).
At the end of the school semester, an e-Satisfaction Scale was used to identify participants’ satisfaction with e-learning. The e-Satisfaction Scale was designed as a 5-point Likert scale comprising 29 items plus one additional open-ended question, with four factors: delivery and usability, teaching process, instructional content, and interaction and evaluation. The scale had been determined to be valid and reliable, with a Cronbach’s α of .97 (Gülbahar, 2012).
An e-Learning Styles Scale developed by Gülbahar and Alper (2014) was used to determine participants’ learning preferences. This instrument was a 5-point Likert scale with 38 items and seven factors: independent learning, social learning, audio-visual learning, active learning, verbal learning, logical learning, and intuitive learning. Reliability coefficients for the seven factors varied, with Cronbach’s α values between .72 and .87.
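For readers unfamiliar with the reliability statistic cited for all three instruments, Cronbach’s alpha for a scale of k items is the standard internal-consistency estimate (general background, not a formula taken from the cited scale-development studies):

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right)
```

where σ²ᵢ is the variance of item i and σ²ₜ is the variance of the total scale score; values of .70 and above are conventionally taken as acceptable.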
To carry out quantitative data analysis and conduct SEM, the LISREL 8.71 and SPSS 17.0 statistical analysis programs were used.
SEM analysis was employed to examine how participants’ learning preferences and readiness prior to online learning related to their e-learning satisfaction.
The latent variable learning preferences was composed of seven observed variables: independent learning, social learning, audio-visual learning, active learning, verbal learning, logical learning, and intuitive learning. The latent variable readiness consisted of five observed variables: individual properties, ICT competencies, access to technology, motivation and attitude, and factors that affect success. The latent dependent variable satisfaction was made up of four sub-dimensions, corresponding to the dimensions of the e-Satisfaction Scale: delivery and usability, teaching process, instructional content, and interaction and evaluation. The fit indices obtained from the primary analysis of the model were [χ² (100, N = 363) = 245.99, p < .001, RMSEA = .064, S-RMR = .050, GFI = .92, AGFI = .89, CFI = .97, NNFI = .96]. The model was determined to be within an acceptable value range. However, to develop the model further, relationships recommended by the modification indices generated in the original analysis were introduced and the model was re-tested. The recommended modifications related the audio-visual and independent, logical and intuitive, and social and active learning variables. In addition, because the observed variable individual properties was related not only to the latent variable readiness but also to the latent variable learning preferences, this relationship was introduced into the model as well. Improved fit indices were obtained when the model was re-estimated: [χ² (95, N = 363) = 178.43, p < .001, RMSEA = .049, S-RMR = .045, GFI = .94, AGFI = .92, CFI = .98, NNFI = .98, IFI = .98]. The resulting model is provided in Figure 2.
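The analysis itself was run in LISREL 8.71. Purely as an illustration of the model structure described above (before the modification-index adjustments), a comparable base model could be specified in lavaan-style syntax using, for example, the Python semopy package; the column names below are hypothetical placeholders for the sub-scale scores, and the snippet is a sketch rather than the authors’ code.

```python
# Sketch of the base structural model: three latent variables measured by
# their sub-scale scores, with preferences and readiness predicting satisfaction.
# Assumes the semopy package (lavaan-style syntax) and a CSV of sub-scale scores.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
# Measurement model
preferences  =~ independent + social + audio_visual + active + verbal + logical + intuitive
readiness    =~ individual_properties + ict_competencies + access_to_technology + motivation_attitude + success_factors
satisfaction =~ delivery_usability + teaching_process + instructional_content + interaction_evaluation

# Structural model
satisfaction ~ preferences + readiness
"""

data = pd.read_csv("subscale_scores.csv")  # hypothetical data file, one row per participant

model = Model(MODEL_DESC)
model.fit(data)                 # maximum-likelihood estimation
print(model.inspect())          # loadings, structural coefficients, standard errors
print(calc_stats(model).T)      # chi-square, RMSEA, CFI, GFI, AGFI and related fit indices
```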
Figure 1. Theoretical model.
Figure 2. Results of the proposed research model (standardized coefficients).
A comparison of the model fit indices generated by LISREL with the criterion values defined in the literature is provided in Table 2.
Table 2
Model fit indices for the measurement model (Schermelleh-Engel, Moosbrugger & Müller, 2003)
| Fit index | Perfect fit | Acceptable fit | Model result |
|---|---|---|---|
| χ²/df | χ²/df < 3 | 3 < χ²/df < 5 | 1.87 |
| RMSEA | 0 < RMSEA < 0.05 | 0.05 < RMSEA < 0.08 | 0.049 |
| S-RMR | 0 ≤ S-RMR ≤ 0.05 | 0.05 < S-RMR < 0.10 | 0.045 |
| NNFI | 0.97 ≤ NNFI ≤ 1 | 0.95 < NNFI < 0.97 | 0.98 |
| CFI | 0.97 ≤ CFI ≤ 1 | 0.95 < CFI < 0.97 | 0.98 |
| GFI | 0.95 ≤ GFI ≤ 1 | 0.90 < GFI < 0.95 | 0.94 |
| AGFI | 0.90 ≤ AGFI ≤ 1 | 0.85 < AGFI < 0.90 | 0.92 |
| IFI | 0.95 ≤ IFI ≤ 1 | 0.90 < IFI < 0.95 | 0.98 |
Through this analysis, the researchers determined that the fit indices indicated a model with very good fit. The factor loadings of the latent variables on their indicator variables (lambda-x, λx), t values, measurement errors of the indicator variables (delta, δ), and the proportion of variance in each indicator explained by its latent variable (R²) are provided in Table 3.
Table 3
λx, δ, t, and R² values for the model

| Latent variable | Observed variable | λx | δ (Measurement error) | t | R² |
|---|---|---|---|---|---|
| Learning Preferences | Active Learning | 0.68 | 0.54 | 12.99 | 0.46 |
| | Independent Learning | 0.43 | 0.82 | 7.51 | 0.18 |
| | Audio-Visual Learning | 0.68 | 0.53 | 13.39 | 0.47 |
| | Logical Learning | 0.41 | 0.83 | 7.28 | 0.17 |
| | Intuitive Learning | 0.60 | 0.64 | 11.35 | 0.36 |
| | Social Learning | 0.66 | 0.57 | 12.51 | 0.43 |
| | Verbal Learning | 0.76 | 0.42 | 15.37 | 0.58 |
| e-Readiness | Individual Properties | 0.60 | 0.56 | 11.71 | 0.44 |
| | ICT Competencies | 0.73 | 0.47 | 14.81 | 0.54 |
| | Access to Technology | 0.73 | 0.46 | 14.65 | 0.53 |
| | Motivation & Attitude | 0.77 | 0.40 | 16.11 | 0.60 |
| | Factors that Affect Success | 0.76 | 0.42 | 15.53 | 0.58 |
| Satisfaction | Delivery & Usability | 0.90 | 0.18 | Constant | 0.82 |
| | Teaching Process | 0.96 | 0.08 | 32.38 | 0.92 |
| | Instructional Content | 0.91 | 0.18 | 27.81 | 0.82 |
| | Interaction & Evaluation | 0.92 | 0.15 | 29.20 | 0.85 |
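In a completely standardized solution the last three columns are linked by a simple identity, which can be checked against the table; taking the verbal learning row as an example:

```latex
R^2 = \lambda_x^2, \qquad \delta = 1 - R^2
\quad\Longrightarrow\quad
R^2_{\text{verbal}} = 0.76^2 \approx 0.58, \qquad \delta_{\text{verbal}} = 1 - 0.58 = 0.42
```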
When the regression equation between the latent variables was examined, the correlation coefficient between the satisfaction variable and learning preferences was found to be .29, and the relationship was significant (t = 4.93). The correlation coefficient between the satisfaction variable and readiness was .18, also a significant relationship (t = 3.11). In the generated model, learning preferences and readiness together explained 15% of the variance in satisfaction. While this rate of explanation appears low, the significance of the relationships means that Hypothesis 1 and Hypothesis 2 were accepted.
The correlation coefficient between the satisfaction variable and the explanatory delivery and usability variable was .90 (p < .05), indicating a significant and positive relationship between satisfaction and delivery and usability; this variable explained 82% of the variance in satisfaction. A significant and positive relationship between teaching process and satisfaction was also determined: the correlation coefficient and t value were .96 and 32.38, respectively (p < .05), and the teaching process variable explained 92% of the variance in satisfaction.
A significant and positive relationship was determined between instructional content and satisfaction. The analysis yielded a correlation coefficient of .91 and a t value of 27.81 (p < .05), and the “instructional content” variable explained 82% of the variance in satisfaction.
A significant and positive relationship was also determined between “Interaction and Evaluation” and satisfaction, with a correlation coefficient of .92 and a t value of 29.20; this variable explained 85% of the variance in satisfaction. In light of these findings, Hypothesis 3 was also accepted.
satisfaction = 0.29 × preferences + 0.18 × readiness, Errorvar. = 0.85, R² = 0.15

| | Coefficient | Standard error | t |
|---|---|---|---|
| preferences | 0.29 | 0.060 | 4.93 |
| readiness | 0.18 | 0.058 | 3.11 |
| Errorvar. | 0.85 | 0.079 | 10.79 |
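The 15% figure reported above follows directly from the error variance of this standardized equation:

```latex
R^2 = 1 - \mathrm{Errorvar.} = 1 - 0.85 = 0.15
```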
In this study, the researchers took a holistic approach to examining the effect of learning preferences and readiness on the satisfaction of e-learning participants. A model was established based on the collected data and then tested to ensure its validity and reliability.
The e-learning programs included in this study offered several participation modalities: studying shared course notes, attending synchronous lessons through virtual classrooms, and/or listening to recordings of the course lessons. The course recordings were valuable because participants could access them at their convenience; materials were recorded and available to attendees at any time in an asynchronous manner. The SEM analysis indicated that verbal and audio-visual learning best represented participants’ learning preferences, with 58% and 47% of their variance explained, respectively. This finding reflects the materials and structure provided through the e-learning environment. Although learning preferences are occasionally referred to as learning styles, the current study did not treat preferences as styles; instead, it focused on individuals’ general preferences and considered them individual differences within the existing environment.
With regard to readiness, “Motivation & Attitude” was the most important structure prior to e-learning. This finding is understandable because motivation and attitude are consistently among the most important variables in the learning process. Hurd (2006) identified motivation as the most important factor in distance education. Other studies revealed that motivation has an important effect on student achievement (Song, Singleton, Hill & Koh, 2004; Yukselturk & Bulut, 2007), dropout rates (Lee & Choi, 2011; Park & Choi, 2009), and engagement (Barak, Watted & Haick, 2016; Richardson & Newby, 2006). Findings from this current study concur with these findings regarding motivation in e-learning.
The importance of another variable, “factors that affect success”, resulted from participants’ expectations of technical support and interaction opportunities in e-learning. Gray (2004) observed that technical support in e-learning facilitated learning. Wiesenmayer, Kupczynski and Ice (2008) reported similar findings, namely that technical support was an important component of online learning. Bunn (2004) revealed that participants’ perception of being deprived of technical support was actually worse than an actual lack of technical support. It was also determined that interaction opportunities in e-learning have positive effects on perceived learning (Gray & DiLoreto, 2016) and satisfaction (Ilgaz & Gülbahar, 2015; Wu, Tennyson & Hsia, 2010).
In this current study it was determined that readiness and learning preferences predicted satisfaction at a rate of 15%. It was observed that “teaching process” most directly predicted satisfaction and did so at the highest rate. The variable of “teaching process” incorporates features that guide students through e-learning; namely, the study guide, syllabus, orientation process, and feedback. In past studies it was suggested that orientation and guidance services for students had an effect on learning and satisfaction (Lee, Srinivasan, Trail, Lewis & Lopez, 2011; Lim, Morris & Kupritz, 2007; Richardson & Swan, 2003).
“Interaction & Evaluation” was another predictive structure. Providing a variety of communication tools and evaluation processes in a flexible manner predicted students’ satisfaction; the flexibility of structures within a system is crucial for meeting needs and ultimately leads to satisfaction (Kuo, Walker, Schroder & Belland, 2014; Sun, Tsai, Finger, Chen & Yeh, 2008; Yukselturk & Yildirim, 2008). The effect of two other structures, “Delivery & Usability” and “Instructional Content”, was observed to be rather low: the LMS in this environment and the nature of the content presentation had less effect on satisfaction. This can be interpreted as students placing importance on being together with and/or interacting with an instructor. Results from this study also supported the “factors that affect success” structure, in that students’ satisfaction increased as their expectations of the program were fulfilled.
In this study, e-learning participant satisfaction was considered with respect to learning preferences and readiness. Further studies should be conducted to better understand aspects of learner satisfaction not captured by this research model. The current study was conducted via a student-oriented approach, and consequently student properties were more prominent. The fact that the research variables considered here had not previously been examined together in the literature sets this study apart from others.
While some structures in this study can be externally manipulated, others cannot because they are specific to the participants involved. Future instructional designers should address the variables that can be altered as a way of ensuring system sustainability. Non-student variables excluded from this current study, such as the system, instructor, institutional operation, and external factors, can be utilized in future modelling studies. It is also recommended that participants be interviewed regarding which system features they believe lead to satisfaction, and that their responses then be analyzed.
Abdous, M., & Yen, C. (2010). A predictive study of learner satisfaction and outcomes in face-to-face, satellite broadcast, and live video-streaming learning environments. The Internet and Higher Education, 13(4), 248–257. https://doi.org/10.1016/j.iheduc.2010.04.005
Akdemir, O., & Koszalka, T. A. (2008). Investigating the relationships among instructional strategies and learning styles in online environments. Computers & Education, 50(4), 1451–1461. https://doi.org/10.1016/j.compedu.2007.01.004
Al-Azawei, A., & Lundqvist, K. (2015). Learner differences in perceived satisfaction of an online learning: An extension to the technology acceptance model in an Arabic sample. The Electronic Journal of e-Learning, 13(5), 408–426.
Ali, S., Uppal, M. A., & Gulliver, S. R. (2018). A conceptual framework highlighting e-learning implementation barriers. Information Technology & People, 31(1), 156–180.
Artino, A. R. (2008). Motivational beliefs and perceptions of instructional quality: predicting satisfaction with online training. Journal of Computer Assisted Learning, 24(3), 260–270.
Barak, M., Watted, A., & Haick, H. (2016). Motivation to learn in massive open online courses: Examining aspects of language and social engagement. Computers & Education, 94, 49–60. https://doi.org/10.1016/j.compedu.2015.11.010
Barefoot, B. O. (2004). Higher education’s revolving door: Confronting the problem of student drop out in US colleges and universities. Open Learning: The Journal of Open and Distance Learning, 19, 9–18. https://doi.org/10.1080/0268051042000177818
Beqiri, M. S., Chase, N. M., & Bishka, A. (2009). Online course delivery: An empirical investigation of factors affecting student satisfaction. Journal Of Education For Business, 85(2), 95–100.
Bolliger, D., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses. International Journal on E-Learning, 3(1), 61–67.
Bray, E., Aoki, K., & Dlugosh, L. (2008). Predictors of learning satisfaction in Japanese online distance learners. International Review Of Research In Open And Distance Learning, 9(3), 1–24. https://doi.org/10.19173/irrodl.v9i3.525
Bunn, J. (2004). Student persistence in a LIS distance education program. Australian Academic Research Libraries, 35(3), 253–270.
Butler, T., & Pinto-Zipp, G. (2005). Students’ learning styles and their preferences for online instructional methods. Journal of Educational Technology Systems, 34(2), 199–221.
Çiğdem, H., & Öztürk, M. (2016). Critical components of online learning readiness and their relationships with learner achievement. Turkish Online Journal of Distance Education, 17(2), 98–109. https://doi.org/10.17718/tojde.09105
Dada, D. (2006). E-Readiness for developing countries: Moving the focus from the environment to the users. The Electronic Journal on Information Systems in Developing Countries, 27(6), 1–14.
Garland, D., & Martin, B. N. (2005). Do gender and learning style play a role in how online courses should be designed? Journal of Online Interactive Learning, 4(2), 67–81.
Gray, B. (2004). Informal learning in an online community of practice. Journal of Distance Education, 19(1), 20–35.
Gray, J.A. & DiLoreto, M. (2016). The effects of student engagement, student satisfaction, and perceived learning in online learning environments. International Journal of Educational Leadership Preparation, 11(1).
Gupta, S., & Anson, R. (2014). Do I matter? The impact of individual differences on a technology-mediated end user training process. Journal of Organizational and End User Computing, 26(2), 60–79.
Gülbahar, Y. & Alper, A. (2014). Development of e-Learning Styles Scale for Electronic Environments. Education and Science, 39(171), 421–435.
Gülbahar, Y. (2012). Study of developing scales for assessment of the levels of readiness and satisfaction of participants in e-learning environments. Ankara University Journal of Faculty of Educational Sciences, 45(2), 119–137. https://doi.org/10.1501/Egifak_0000001256
Hanafizadeh, P., Hanafizadeh, M. R., & Khodabakhshi, M. (2009). Taxonomy of e-readiness assessment measures. International Journal of Information Management, 29(3), 189–195.
Harrell, I. L., & Bower, B. L. (2011). Student characteristics that predict persistence in community college online courses. American Journal of Distance Education, 25(3), 178–191.
Hart, C. (2012). Factors associated with student persistence in an online program of study: A review of the literature. Journal of Interactive Online Learning, 11(1).
Hurd, S. (2006). Towards a better understanding of the dynamic role of the distance language learner: Learner perceptions of personality, motivation, roles, and approaches. Distance Education, 27(3), 303–329. https://doi.org/10.1080/01587910600940406
Ilgaz, H., & Aşkar, P. (2013). The contribution of technology acceptance and community feeling to learner satisfaction in distance education. Procedia - Social and Behavioral Sciences, 106, 2671–2680. http://dx.doi.org/10.1016/j.sbspro.2013.12.308
Ilgaz, H. & Gülbahar, Y. (2015). A Snapshot of Online Learners: e-Readiness, e-Satisfaction and Expectations. International Review of Research in Open and Distributed Learning, 16(2), 171–187. https://doi.org/10.19173/irrodl.v16i2.2117
Ivankova, N. V., & Stick, S. L. (2007). Students’ persistence in a distributed doctoral program in educational leadership in higher education: A mixed methods study. Research in Higher Education, 48, 93–135. https://doi.org/10.1007/s11162-006-9025-4
Joo, Y. J., Lim, K. Y., & Kim, E. K. (2011). Online university students’ satisfaction and persistence: Examining perceived level of presence, usefulness and ease of use as predictors in a structural model. Computers & Education, 57(2), 1654–1664. https://doi.org/10.1016/j.compedu.2011.02.008
Jung-Wan, L., & Mendlinger, S. (2011). Perceived self-efficacy and its effect on online learning acceptance and student satisfaction. Journal of Service Science & Management, 4(3), 243–252.
Kara, M., Erdoğdu, F., Kokoç, M., & Cagiltay, K. (2019). Challenges faced by adult learners in online distance education: A literature review. Open Praxis, 11(1), 5–22. http://dx.doi.org/10.5944/openpraxis.11.1.929
Ke, F., & Kwak, D. (2013). Constructs of student-centered online learning on learning satisfaction of a diverse online student body: A structural equation modeling approach. Journal of Educational Computing Research, 48(1), 97–122. https://doi.org/10.2190%2FEC.48.1.e
Kline, R. B. (2010). Principles and practice of structural equation modeling (3rd ed.). New York, New York: Guilford Press.
Kuo, Y., Walker, A. E., Belland, B. R., & Schroder, K. E. E. (2013). A predictive study of student satisfaction in online education programs. The International Review of Research in Open and Distributed Learning (IRRODL), 14(1), 16–39. https://doi.org/10.19173/irrodl.v14i1.1338
Kuo, Y.-C., Walker, A. E., Schroder, K. E. E., & Belland, B. R. (2014). Interaction, internet self-efficacy, and self-regulated learning as predictors of student satisfaction in online education courses. The Internet and Higher Education, 20, 35–50. https://doi.org/10.1016/j.iheduc.2013.10.001
Lee, J. (2010). Online support service quality, online learning acceptance, and student satisfaction. The Internet and Higher Education, 13(4), 277–283.
Lee, S., Barker, T., & Kumar, V. S. (2016). Effectiveness of a learner-directed model for e-learning. Educational Technology & Society, 19(3), 221–233.
Lee, S. J., Srinivasan, S., Trail, T., Lewis, D., & Lopez, S. (2011). Examining the relationship among student perception of support, course satisfaction, and learning outcomes in online learning. The Internet and Higher Education, 14(3), 158–163. https://doi.org/10.1016/j.iheduc.2011.04.001
Lee, Y., & Choi, J. (2011). A review of online course dropout research: implications for practice and future research. Educational Technology Research and Development, 59(5), 593–618. https://doi.org/10.1007/s11423-010-9177-y
Lee, Y., & Choi, J. (2013). A structural equation model of predictors of online learning retention. The Internet and Higher Education, 16, 36–42. https://doi.org/10.1016/j.iheduc.2012.01.005
Lim, D. H., Morris, M. L., & Kupritz, V. W. (2007). Online vs. blended learning: Differences in instructional outcomes and learner satisfaction. Journal of Asynchronous Learning Networks, 11(2), 27–42.
Lu, H., & Chiou, M. (2010). The impact of individual differences on e-learning system satisfaction: A contingency approach. British Journal of Educational Technology, 41(2), 307–323.
Lu, J., Yu, C.-S., & Liu, C. (2003). Learning style, learning patterns, and learning performance in a WebCT-based MIS course. Information & Management, 40(6), 497–507. https://doi.org/10.1016/S0378-7206(02)00064-2
McGorry, S. Y. (2003). Measuring quality in online programs. The Internet and Higher Education, 6(2), 159–177.
Mohr, A. T., Holtbrugge, D., & Berg, N. (2012). Learning style preferences and the perceived usefulness of e-learning. Teaching in Higher Education, 17(3), 309–322.
Mutula, S. M., & Van Brakel, P. (2006). An evaluation of e-readiness assessment tools with respect to information access: Towards an integrated information rich tool. International Journal of Information Management, 26(3), 212–223. https://doi.org/10.1016/j.ijinfomgt.2006.02.004
Paechter, M., & Maier, B. (2010). Online or face-to-face? Students’ experiences and preferences in e-learning. Internet and Higher Education, 13(4), 292–297. https://doi.org/10.1016/j.iheduc.2010.09.004
Palmer, S. R., & Holt, D. M. (2009). Examining student satisfaction with wholly online learning. Journal of Computer Assisted Learning, 25(2), 101–113.
Palmer, S., & Holt, D. (2010). Students’ perceptions of the value of the elements of an online learning environment: looking back in moving forward. Interactive Learning Environments, 18(2), 135–151.
Park, J.-H., & Choi, H. J. (2009). Factors influencing adult learners’ decision to drop out or persist in online learning. Educational Technology & Society, 12(4), 207–217.
Pena, M., & Yeung, A. (2010). Satisfaction with online learning: does students’ computer competence matter? International Journal Of Technology, Knowledge & Society, 6(5), 97–108.
Richardson, J. C., & Newby, T. (2006). The role of students’ cognitive engagement in online learning. American Journal of Distance Education, 20(1), 23–37. https://doi.org/10.1207/s15389286ajde2001_3
Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Journal of Asynchronous Learning Network, 7(1), 68–88.
Roach, V., & Lemasters, L. (2006). Satisfaction with online learning: A comparative descriptive study. Journal of Interactive Online Learning, 5(3), 317–332.
Sahin, I., & Shelley, M. (2008). Considering students’ perceptions: The distance education student satisfaction model. Educational Technology & Society, 11(3), 216–223.
Santally, M. I., & Senteni, A. (2013). Effectiveness of personalised learning paths on students learning experiences in an e-learning environment. European Journal of Open, Distance and e-Learning, 16(1), 36–52.
Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research, 8(2), 23–74.
Siddique, A., Durrani, Q. S., & Naqvi, H. A. (2017). Designing adaptive e-learning environment using individual differences. Pakistan Journal of Science, 69(1), 101–109.
Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics. The Internet and Higher Education, 7(1), 59–70. https://doi.org/10.1016/j.iheduc.2003.11.003
Sun, P.-C., Tsai, R. J., Finger, G., Chen, Y.-Y., & Yeh, D. (2008). What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183–1202. https://doi.org/10.1016/j.compedu.2006.11.007
Toral, S. L., Barrero, F., Martinez-Torres, M. R., Gallardo, S., & Duran, M. J. (2009). Modeling learner satisfaction in an electronic instrumentation and measurement course using structural equation models. IEEE Transactions on Education, 52(1), 190–199. https://doi.org/10.1109/TE.2008.924215
Wang, K. H., Wang, T. H., Wang, W. L., & Huang, S. C. (2006). Learning styles and formative assessment strategy: Enhancing student achievement in web-based learning. Journal of Computer Assisted Learning, 22(3), 207–217.
Watkins, R., Leigh, D., & Triner, D. (2004). Assessing readiness for e-learning. Performance Improvement Quarterly, 17(4), 66–79.
Wiesenmayer, R., Kupczynski, L., & Ice, P. (2008). The role of technical support and pedagogical guidance provided to faculty in online programs: Considerations for higher education administrators. Online Journal of Distance Learning Administration, 11(4).
Wu, H., Tennyson, R. D., & Hsia, T. (2010). A study of student satisfaction in a blended e-learning system environment. Computers & Education, 55(1), 155–164. https://doi.org/10.1016/j.compedu.2009.12.012
Yasmin, D. (2013). Application of the classification tree model in predicting learner dropout behaviour in open and distance learning. Distance Education, 34(2), 218–231. https://doi.org/10.1080/01587919.2013.793642
Yukselturk, E., & Bulut, S. (2007). Predictors for student success in an online course. Educational Technology & Society, 10(2), 71–83.
Yukselturk, E., & Yildirim, Z. (2008). Investigation of interaction, online support, course structure and flexibility as the contributing factors to students’ satisfaction in an online certificate program. Educational Technology & Society, 11(4), 51–65.
Zawacki-Richter, O. (2009). Research areas in distance education: a Delphi study. International Review of Research in Open and Distance Learning (IRRODL), 10(3). https://doi.org/10.19173/irrodl.v10i3.674