



Research articles

Development of an Online Teaching Competency Scale for University Instructors


Irfan Simsek, Istanbul University – Cerrahpasa, TR

Sevda Kucuk, Ataturk University, TR

Sezer Kose Biber, Istanbul University – Cerrahpasa, TR

Tuncer Can, Istanbul University – Cerrahpasa, TR


In providing effective online education, it is crucial that instructors have the competence to teach online. The aim of this study is to develop a valid and reliable online teaching competency scale for online instructors. The data were collected from 392 instructors working at a large state university in Turkey (Istanbul University-Cerrahpasa). The instructors had conducted online courses through synchronous and asynchronous methods during the pandemic. The development and evaluation process of the scale included exploratory factor analysis and convergent validity. The scale consists of 15 items representing four factors of online instructors’ competencies: pedagogy, facilitation, technology, and course administration. The scale explained 64% of the total variance, and the internal consistency coefficient of the whole scale was found to be .83 according to the reliability analysis. The results of the study revealed that the scale is valid and reliable for measuring instructors’ online teaching competency. The implications of the study are discussed in detail.

How to Cite: Simsek, I., Kucuk, S., Biber, S. K., & Can, T. (2021). Development of an Online Teaching Competency Scale for University Instructors. Open Praxis, 13(2), 201–212. DOI:
Submitted on 10 Jan 2021            Accepted on 29 May 2021            Published on 20 Oct 2021


Being a symbol of modern culture today, technology has become the most effective and indispensable element of daily life (Menz, 2009; Ozturk & Can, 2013; Zhang, 2017). As in many other fields, new technological concepts are added to the field of education every day. Developments in information and communication technologies, changes in people’s needs, the growing number of students and amount of information, low-cost Internet access, and the widespread use of mobile devices that allow access from anywhere can be considered among the main reasons for the change and transformation in education. The reflection of these changes in the social order onto learning environments has been particularly effective in the differentiation of the teaching methods used (Gurley, 2018; Schmid & Petko, 2019). Concepts related to new teaching methods and techniques, such as web/computer-supported/based teaching, e-learning, virtual/cyber classrooms, and distance teaching, have gained prominence in the education world.

Recent technological developments and changes in social life have caused online teaching to become more widespread all over the world. Especially after the coronavirus disease (COVID-19), which emerged in the last quarter of 2019 and spread worldwide in a very short time, was declared a pandemic by the World Health Organization (WHO), the popularity of distance education increased dramatically. In order to reduce the rate of spread of this epidemic affecting the whole world, the majority of countries adopted strict rules to limit social interaction. Restrictions such as closing places like cinemas, theaters, and shopping malls where people gather in large numbers, working from home, and organizing flexible working hours are some of the measures taken. It is inevitable that such sudden changes in the social and cultural spheres affect countries’ education systems. According to Telli Yamamoto and Altun (2020), education is the field most affected by COVID-19 after health. According to UNESCO (2020) data, while the education of approximately 300 million students (17.1% of students receiving education) was restricted in March 2020 due to the epidemic, this number reached approximately 1.5 billion (84.3%) within a month. As a result, educational institutions had to urgently stop face-to-face education at all levels, from kindergarten to higher education, and switch to distance education practices.

Machynska and Dzikovska (2020) stated that educational institutions trying to carry out their activities by taking urgent measures during the pandemic faced various difficulties. One of them is deciding on the learning platforms to be used in distance education and ensuring that teachers reach the competence to teach in these environments. In providing effective online teaching, it is very important that instructors have the competence to teach in online environments. Accordingly, studies have been carried out in the literature to reveal the competencies of online instructors.

Online Teaching Competencies and Self-efficacy

Online teaching competencies consist of many categories. The categories found in the literature include technology/technical skills, online communication skills, pedagogical knowledge, teaching methods and strategies, online education and content, field expertise, personal characteristics, process management and facilitation, planning and preparation, course management, and evaluation (Aydin, 2005; Denis et al., 2004; Klein & Fox, 2004; Reid, 2002; Richey et al., 2001; Salmon, 2012; Shank, 2004). For effective online teaching, instructors must be proficient in these dimensions. At this point, the self-efficacy of instructors is very important. In the educational process, it is important to understand self-efficacy related to various academic practices, because self-efficacy has a significant impact on participants’ goals, efforts, and achievements (Kundu, 2020). Studies examining the importance of teachers’ self-efficacy in the online teaching process have revealed a strong link between self-efficacy and the potential to use technology (Corry & Stella, 2018; Sun & Chen, 2016). Instructors tend to feel less self-efficacious about online teaching than about teaching in physical classroom settings (Johnson et al., 2020). However, instructional self-efficacy is malleable (Bandura, 1997), and research has shown its relationship to student outcomes (Goddard et al., 2000; Tschannen-Moran & Hoy, 2001). In addition to the importance of quality in both the technology and the curriculum of online education, more research is needed to define and determine the structure of instructional self-efficacy in online education (Corry & Stella, 2018; Ma et al., 2021).

Literature Review

A great deal of valuable information can be extracted from evaluations of distance and online education and instructors, which can benefit learners and be used to enhance learning. With these evaluations, course design and administration can also be optimized for the best learning experience. However, according to Thomas and Graham (2018), given the immense growth of online education, the standardized evaluation of online courses and instructors is extremely underdeveloped. Berk (2013) posits that the available measures, and their quality, lag far behind course production in the context of assessing online courses and the instructors conducting them. Initially, traditional face-to-face student evaluation instruments were used to measure online instructors’ effectiveness (Berk, 2013; Dziuban & Moskal, 2011).

Later, broadly speaking, checklists and rubrics were either developed in-house or acquired from other institutions to evaluate online courses, instructors, and especially course design (Piña & Bohn, 2014). In evaluating online courses and instructors, student evaluations have always been among the most common forms of evaluation in online higher education courses (Thomas, 2018). For instance, to evaluate online instructors, Loveland (2007) adjusted the Student Evaluation of Teaching (SET), widely used and accepted as a valid and reliable instrument for evaluating instructors in face-to-face classrooms, by replacing “oral” communication skills with “written” communication skills. Another measure adapted from traditional evaluations to evaluate distance education is the Electronic Student Instructional Report II (e-SIR II), administered by Educational Testing Services (Klieger et al., 2014). The scale covers planning and course organization, interaction, specific course activities such as grading, exams, and assignments, the teacher’s instruction and course material, course outcomes, student effort and engagement, and the amount of work, pace, and difficulty of the course (Liu, 2012).

In the context of evaluating online teaching, Northcote et al. (2011) sought to define the range of knowledge and skills required of an effective online educator. Bigatel et al. (2012) also sought to identify the competencies for online teaching success. They identified attitude/philosophy, building a learning community, class administration, faculty workload management, teaching and learning, and technology-use abilities as effective online teaching competencies. Kavrat and Turel (2013) developed a scale to determine teacher competencies for online teaching, identifying the teacher’s roles as communicative, technical, social, and pedagogical. Furthermore, Gosselin et al. (2016) examined the threshold concepts, threshold attitudes, and threshold skills of instructors. These included how instructors perceived course design, facilitated interaction, engaged meaningfully in online learning contexts, their self-efficacy and confidence in online teaching, management of the assessment process, setting up and modifying online learning, and tracking student attendance and progress. However, they still believe that more research is needed to clarify the threshold concepts and self-efficacy levels of academic staff within the context of online teaching and learning. Reyes-Fournier et al. (2020) likewise concluded that the available measures and scales that evaluate online and distance teaching efficiency have significant limitations. Thus, existing scales cannot comprehensively shed light on instructors’ competencies across the online teaching process (Wang et al., 2019). Reyes-Fournier et al. (2020) add to this by stressing the lack of research and appropriate tools for evaluating online teaching and asserting that reliability and validity data are insufficient.

Rationale and Importance of the Study

Online teaching activities can generally be carried out synchronously or asynchronously. In asynchronous learning, students and instructors participate in teaching activities at different times and places, while in synchronous distance learning, all participants perform learning activities at the same time, though possibly from different locations (Allen & Seaman, 2008). However, in the online teaching process, most instructors stated that online education was treated as transferring existing teaching materials to the online environment (Wang et al., 2019). Today, all educational institutions have urgently switched to distance education, and in this process teachers have generally focused on replicating face-to-face (F2F) educational activities through live sessions. At this point, video conferencing applications such as Google Meet, Microsoft Teams, and Zoom, storage services such as Google Drive, Dropbox, and Yandex Drive, learning management systems such as Moodle, Google Classroom, and Canvas, and various Web 2.0 applications are used to increase interaction in the course. It is important that instructors have the technical competencies to use such tools in the online education process. Gang and Shanxi (2015) also stated that technology competencies are generally taken into consideration in determining teachers’ competencies in online teaching. However, the online instructor should also have the competence to guide students by organizing their learning activities in the online learning process, as well as bringing educational content to the online environment (Allen & Seaman, 2008; Wang et al., 2019). In this respect, it is clearly necessary to focus on the problem of “reaching the proficiency of teachers in teaching platforms to be used in distance education” stated by Machynska and Dzikovska (2020).
While determining these competencies, the pedagogical competencies of online instructors (Machynska & Dzikovska, 2020), their ability to prepare themselves and their students for online education, to choose the right tools with appropriate teaching methods and techniques, to facilitate learning, and to manage online courses should also be taken into consideration (Wang et al., 2019). In today’s world, where all educators from preschool to higher education assume the role of online teacher, determining teachers’ online teaching competencies is important for improving the online education process. This study focused on determining the online teaching proficiency levels of higher education instructors based on their self-efficacy and confidence in online teaching. The aim of this study is to develop a valid and reliable online teaching competency scale for online instructors.


Method

A correlational research design employing factor analysis techniques was used in the study; more specifically, the exploratory factor analysis (EFA) technique was applied. EFA is used to reduce the number of variables by grouping those with moderate or high correlations with each other (Fraenkel et al., 2012).
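The grouping logic of EFA can be illustrated with a minimal sketch. All data below are simulated and hypothetical, not the study's: responses to six Likert-style items are generated from two latent factors, and the eigenvalues of the item correlation matrix indicate how many factors to retain.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical illustration: simulate responses of 392 instructors to 6
# items, where items 0-2 and items 3-5 each share one latent factor.
n = 392
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
noise = rng.normal(scale=0.7, size=(n, 6))
items = np.column_stack([f1, f1, f1, f2, f2, f2]) + noise

# Correlation matrix of the items and its eigenvalues (descending order).
R = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

# Kaiser criterion: retain components with eigenvalue > 1 (the basis of
# the scree inspection used in factor analyses like this study's).
n_factors = int(np.sum(eigvals > 1))
print(n_factors)
```

With the two simulated factors, only the first two eigenvalues exceed 1, so two factors are retained; the same inspection underlies the study's scree plot.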


Participants

The data were collected from instructors working at a large state university in Turkey, Istanbul University-Cerrahpasa. These instructors experienced online teaching during the pandemic. Having moved from face-to-face to remote teaching due to the emergency, they conducted online courses through synchronous and asynchronous methods during the COVID-19 pandemic; the participants had been teaching their face-to-face courses with online methods since March 2020. The data were collected in the middle of the 2020–2021 fall semester. For EFA, the data were collected from 392 online instructors (209 females, 183 males; aged between 25 and 64 years) working in various departments of the health, engineering, and social sciences faculties. The online survey was e-mailed to all instructors in the university, and volunteers filled in the scale. Table 1 presents the demographic characteristics of the participants.

Table 1

Demographic Characteristics of Online Instructors.

n %


Gender

    Male 183 46.7

    Female 209 53.3


Age

    25–35 47 12

    36–50 204 52

    51 and above 141 36

Teaching Level

    Bachelor’s degree 67 17.1

    Undergraduate 321 81.9

    Graduate 4 1.0

Discipline of Instructors

    Engineering 190 48.5

    Health 136 34.7

    Social Science 66 16.8

Used Online Teaching Platforms (multiple responses possible)

    Zoom 300 76.5

    Google Meet 144 36.7

    BigBlueButton 20 5.1

    Microsoft Teams 18 4.6

    Canvas 179 45.7

    Google Classroom 52 13.3

Data Collection and Analysis

Previous studies in the literature were examined to determine the online instructor competencies in the study. Based on previous studies (Bangert, 2006; Kavrat & Turel, 2013; Reyes-Fournier et al., 2020; Wang et al., 2019) and theoretical frameworks (Bawane & Spector, 2009; Klein & Fox, 2004; Richey et al., 2001; Shank, 2004) in the literature, an item pool of 32 items was created. The scale items were designed as 5-point Likert-type questions (from 1: strongly disagree to 5: strongly agree). For content and face validity, three field experts and one language expert checked the items, and the necessary revisions were made. The scale was piloted with 10 instructors. The 32 items were then sent via email to the 1197 instructors working at the university, and 392 instructors voluntarily participated in the research; their responses were used for the EFA. In addition, convergent validity was checked for the construct validity of the scale in order to verify the factor structure that emerged in the EFA. Cronbach’s alpha was used for the reliability of the scale. Figure 1 summarizes the operations performed during the scale development process.

Figure 1 

The development and validation process of the scale.

For construct validity, EFA was conducted. The composite reliability (CR) and average variance extracted (AVE) values were calculated for convergent validity. IBM SPSS AMOS 24.0 software was employed to analyze the data. First, the exploratory factor analysis assumptions were checked (Field, 2013; Pallant, 2020; Tabachnick et al., 2007): outliers and missing data, normality of the data set, sample size and sampling adequacy, inter-correlations between variables, and linearity were examined. Overall, the data met all the assumptions.


The Results of Exploratory Factor Analysis and Reliability

The EFA was conducted using the principal components approach. Sampling adequacy was confirmed, as the KMO coefficient was found to be .846 and Bartlett’s test of sphericity was significant (χ² = 2417.037, p < .05). In the context of the study, the correlation matrix was examined to determine the relationships between the items, and relationships between the items were found. For this reason, the promax rotation technique, which is used when factors are correlated, was preferred (Brown, 2009). In the literature, it is stated that the cut-off value can be taken as .512 for a sample of 100 and .364 for a sample of 200 (Field, 2013); in this study, the cut-off value was set at .40. Items with a communality value less than .30 should not be handled under any factor (Pallant, 2016), and anti-image correlation values should be greater than .50 (Field, 2013). The anti-image correlation matrix and communality values were checked, and these assumptions were satisfied. In the first factor analysis, the 32 items were collected under 7 factors. The analysis was repeated, removing items that loaded on two or more factors; there should be a difference greater than .10 between the factor loadings of items that appear on two factors (Field, 2013). By checking the communalities and factor loading values each time, the items that needed to be removed were removed sequentially, considering their significance for the scale. As a result, a structure consisting of 15 items under 4 factors emerged. The scree plot showing the breakpoint is presented in Figure 2.

Figure 2 

Scree Plot Graph.
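The KMO statistic reported above can be computed directly from an item correlation matrix: it compares the squared correlations with the squared anti-image (partial) correlations. A minimal sketch, using a hypothetical 3-item matrix rather than the study's data:

```python
import numpy as np

def kmo(R: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin sampling adequacy from a correlation matrix.

    KMO = sum(r_ij^2) / (sum(r_ij^2) + sum(p_ij^2)) over off-diagonal
    elements, where p_ij are the anti-image (partial) correlations
    obtained from the inverse of the correlation matrix.
    """
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                        # partial correlations
    off = ~np.eye(R.shape[0], dtype=bool)     # off-diagonal mask
    r2 = np.sum(R[off] ** 2)
    p2 = np.sum(partial[off] ** 2)
    return r2 / (r2 + p2)

# Toy 3-item correlation matrix for illustration only.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
print(round(kmo(R), 3))
```

Values above roughly .80, like the study's .846, are conventionally considered good sampling adequacy for factor analysis.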

The total explained variance of the scale’s 4 factors is 64.244%. According to the items’ meanings, the first factor was named “pedagogy”, the second “facilitation”, the third “technology”, and the fourth “course administration”. The Cronbach’s alpha coefficient of the whole scale was calculated for reliability (Cronbach’s α = .834), and the reliability of the individual factors ranged between .68 and .83. Reliability coefficients around .70 are sufficient for internally consistent scores (Nunnally & Bernstein, 1994). Therefore, the reliability of the scale is good. The communalities of the scale items, standardized factor loadings, eigenvalues, explained total variance of the factors, and their reliability values are presented in Table 2.
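Cronbach's alpha can be computed from raw item scores with the standard formula; a minimal sketch with hypothetical 5-point Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of total scores), for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows: respondents, cols: items).
scores = np.array([[5, 4, 5, 4],
                   [4, 4, 4, 5],
                   [3, 3, 2, 3],
                   [5, 5, 4, 4],
                   [2, 3, 3, 2],
                   [4, 5, 5, 4]])
print(round(cronbach_alpha(scores), 3))
```

Applied to the actual 15-item response matrix, the same computation would yield the whole-scale α = .834 reported above.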

Table 2

Standardized Factor Loads, Explained Variance, and Reliabilities of the Scale.



Pedagogy

27 I clearly state the learning objectives of the course. .779 .942

26 I clearly state the topics of the course. .750 .931

28 I make the preparations for the course before the live sessions. .677 .787

24 I attend live sessions enthusiastically. .564 .731

29 I use teaching methods and techniques that will enable students to participate in live sessions. .530 .449


Facilitation

32 I support students to build and maintain a learning community. .758 .963

33 I help students develop a positive attitude towards online learning. .664 .803

31 I enable students to establish online learning community relationships with me and each other. .661 .727


Technology

11 I effectively use the hardware tools (computer, tablet, camera, etc.) required by online education. .774 .882

12 I effectively use software tools required by online education. .756 .831

20 I solve the problems that I encounter while using online education tools myself. .460 .750

Course Administration

16 I organize my courses in a modular (weekly, topic, etc.) structure in LMS. .631 .840

14 I effectively use discussion forums in LMS. .661 .819

13 I effectively use the existing components (homework, calendar, etc.) of the LMS. .517 .530

10 I organize activities that will increase communication and interaction in LMS. .454 .456

Eigenvalues 5.33 1.94 1.20 1.18

Explained total variance (Total = 64.244%) 35.50% 12.92% 7.96% 7.86%

Cronbach’s alpha (whole scale α = .834) α = .832 α = .772 α = .702 α = .680

Item-factor Correlation Analysis

The means, standard deviations, and inter-correlations between the factors are presented in Table 3. While the pedagogy factor has the highest mean score (M = 4.68; SD = .40), the course administration factor has the lowest (M = 3.36; SD = .89). According to the Pearson correlation analysis, there were significant relationships among all factors (p < .01): pedagogy, facilitation, technology, and course administration were positively correlated. The scale has internal consistency, and each factor measures a distinct property serving the aim of the scale as a whole.

Table 3

Means, standard deviations, and inter-correlations between the factors.


M SD PE FA TE CA

Pedagogy (PE) 4.68 .40 1

Facilitation (FA) 4.17 .81 .46** 1

Technology (TE) 4.19 .71 .47** .42** 1

Course Administration (CA) 3.36 .89 .30** .40** .36** 1

** p < .01.
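Inter-factor correlations like those in Table 3 are ordinary Pearson correlations between instructors' factor mean scores. A minimal sketch with simulated, hypothetical scores (the shared component stands in for the common competence that links the four factors):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical factor mean scores for 30 instructors on a 1-5 scale
# (columns: pedagogy, facilitation, technology, course administration).
# A shared latent component induces positive inter-factor correlations.
base = rng.normal(size=(30, 1))
scores = np.clip(base + rng.normal(scale=0.8, size=(30, 4)) + 4, 1, 5)

# Inter-factor Pearson correlation matrix, as in Table 3.
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))
```

With the study's real factor scores, the same call reproduces the moderate positive correlations (.30 to .47) shown in Table 3.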

Convergent Validity

A convergent validity study was conducted for the construct validity of the scale. The average variance extracted (AVE) by each dimension and the composite reliability (CR) coefficients were calculated, and it was examined whether the CR coefficients were greater than the corresponding AVE values. According to the criteria, items should have standardized factor loadings and AVE values greater than .50, CR coefficients should be greater than .70 (Nunnally & Bernstein, 1994), and CR coefficients should be greater than the AVE values (Byrne, 2016). Table 4 presents the AVE and CR values of the factors. The scale developed in the study meets the AVE and CR criteria.

Table 4

Convergent Validity Values.


CR AVE

Pedagogy 0.89 0.62

Facilitation 0.87 0.70

Technology 0.86 0.68

Course Administration 0.83 0.57
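For illustration, CR and AVE can be computed from standardized factor loadings with the standard formulas; applying them to the three Technology loadings in Table 2 (.882, .831, .750) reproduces the Technology row of Table 4 (CR = 0.86, AVE = 0.68).

```python
import numpy as np

def cr_ave(loadings: np.ndarray) -> tuple[float, float]:
    """Composite reliability and average variance extracted from
    standardized factor loadings:
    CR  = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))
    AVE = mean(l^2)
    """
    s = loadings.sum()
    err = (1 - loadings ** 2).sum()   # residual (error) variance
    cr = s ** 2 / (s ** 2 + err)
    ave = float(np.mean(loadings ** 2))
    return cr, ave

# Standardized loadings of the Technology factor items (Table 2).
lam = np.array([0.882, 0.831, 0.750])
cr, ave = cr_ave(lam)
print(round(cr, 2), round(ave, 2))  # 0.86 0.68, matching Table 4
```

The same computation on the other factors' loadings yields the remaining rows of Table 4.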


Discussion

As a result of this scale development study conducted to determine online teaching competencies, a 4-factor structure consisting of 15 items emerged. These factors are “Pedagogy”, “Facilitation”, “Technology”, and “Course Administration”. The scale explained 64.244% of the total variance, and its reliability coefficient was Cronbach’s α = .834. In addition, the convergent validity analysis, carried out to check the construct validity of the scale, yielded appropriate values. According to these results, the developed scale is valid and reliable.

When the factor structure of the scale is examined, the explained variance of the “pedagogy” dimension, which expresses the pedagogical knowledge level of instructors, is 35.5%, giving it the most important place in the structure of the scale. Instructors with a high level of pedagogical competence clearly state the important goals and topics of the course in the online course process. It is also very important for instructors to make the necessary preparations before the courses, conduct their courses enthusiastically, and use teaching methods and techniques that ensure students’ participation in the course process (Bawane & Spector, 2009). Indeed, the importance of instructors’ pedagogical competence in the online education process is noted throughout the literature (González-Sanmamed et al., 2014; Gosselin et al., 2016; McAllister & Graham, 2016; Murphy et al., 2011; Wang et al., 2019). For this reason, the pedagogical competence factor of the scale is paramount.

In online teaching, it is very important for instructors to facilitate the process. In this study, the facilitation dimension covered instructors’ support for students in forming and maintaining a learning community, helping them develop a positive attitude towards online learning, and enabling them to establish online learning relationships. Reid (2002), Shank (2004), and Bawane and Spector (2009) highlighted the importance of facilitation in their studies determining the categories of online instructor competencies. Facilitation includes “regular, active, and thoughtful classroom interactions executing planned activities, managing communications, and supervising learning processes” (Blackman et al., 2020). The facilitation dimension of the scale has an important place in revealing how instructors perceive themselves in this respect. This dimension, which has the same items as in the study of Kavrat and Turel (2013), was named the social role there. However, in this study, based on the literature, it was concluded that “facilitation” describes the factor better (Al-Salman, 2011; Bawane & Spector, 2009; Blackman et al., 2020; Reyes-Fournier et al., 2020).

In the distance teaching process, instructors have to use software and hardware tools. The technology dimension of the scale includes instructors’ ability to effectively use the hardware and software tools required by distance education and to solve the technical problems they encounter on their own. Indeed, in the online teaching process, video conferencing applications such as Google Meet, Microsoft Teams, and Zoom, online storage services such as Google Drive, Dropbox, and Yandex Drive, learning management systems such as Moodle, Google Classroom, and Canvas, and various Web 2.0 applications are frequently used to increase interaction in the course. Therefore, it is very important for instructors to have the technical competencies to use such tools in the online education process. Gang and Shanxi (2015) draw attention to the fact that technology competencies should be taken into consideration in determining instructors’ competencies in online teaching. Similarly, in a study on the competencies of online tutors, Roberts (2018) suggested that technology competence is very important and that instructors should be given training to improve their skills in this field. When studies on e-teacher competencies in the literature are examined, concepts such as technical knowledge (Reid, 2002), technical skills (Salmon, 2012), technology knowledge (Denis et al., 2004), and skills with Internet tools (Bailie, 2011) are especially emphasized in the technology dimension of instructors’ competencies. Similarly, the technology dimension is also included in studies of instructors’ roles (Aydin, 2005; Egan & Akdere, 2005; Lee, 2011; Thach & Murphy, 1995; Whitehead, 2018; Wiesenberg & Hutton, 1995; Williams, 2003).

In the online teaching process, courses are conducted on various learning management systems (LMS). The course administration dimension of the scale covers the applications performed by online instructors on the LMS. Learning management systems have features such as modules, calendars, assignments, online exams, discussion forums, and integration with live course systems. Effective use of such features in structuring the online teaching process facilitates course administration. On the other hand, activities carried out over the LMS are very important for effective communication and interaction. In online teaching, administration involves managing time and the course, demonstrating leadership qualities, and establishing rules and regulations (Bawane & Spector, 2009). Nowadays, course administration can be carried out easily via an LMS.

When the correlations between the factors of the scale were examined, moderate positive relationships were found. Indeed, it can be said that competencies in all these factors interact with each other in the effective online teaching process. When the instructors’ online proficiency levels were examined, they saw themselves as most competent in the pedagogy dimension. This may be related to the fact that the instructors in the sample had been teaching face to face at the university for years, and the pedagogical principles of face-to-face teaching are naturally similar to those of online teaching. However, they considered themselves less adequate in course administration over the LMS. This may be because the instructors’ online teaching experience began with the pandemic: as teaching moved completely online, they started to manage their courses on LMSs such as Canvas and Google Classroom.


Conclusion

As a result, in this study, a valid and reliable scale was developed to determine the online teaching competencies of instructors in higher education. The goal of effective survey design is to measure constructs with short, concise, user-friendly questions that produce high response rates (Saleh & Bista, 2017). The developed scale offers an advantageous structure in terms of usability because it contains few items. The factor structure of the scale (pedagogy, facilitation, technology, and course administration) overlaps with the theoretical structures that describe online teaching competencies. In the online teaching process, instructors need to make use of many technological tools. However, a high level of technological knowledge alone does not guarantee an effective online teaching process. In this context, it is a very important result that the pedagogy dimension of the scale came to the fore; the pedagogical competence level of instructors is one of the most important requirements of qualified and effective teaching.

The strength of the study is that the scale was developed with data obtained from a large number of participants working in different disciplines at the higher education level. However, although the sample was sufficient, the study has limitations: data were collected from only one university, confirmatory factor analysis (CFA) was not conducted, and a convenience sample was used. In light of the data obtained from the research, various suggestions for practitioners and researchers are presented. By using this scale, the online teaching proficiency levels of instructors who teach online at the university level can be revealed, and faculties can carry out studies to improve instructors’ competencies according to the results obtained from the scale. In future studies, it is recommended to verify the factor structure of the scale with CFA. The developed scale can also be used to determine the online teaching competencies of teachers at the K-12 level. Based on the items in this scale, a student evaluation form can be created, and a student version of the scale can be developed; in this way, both the instructors’ self-assessment and the students’ evaluation of the teachers can be obtained in parallel. Factors affecting the competence of online instructors can be examined in depth with qualitative studies, and in quantitative studies, SEM analyses can be carried out to reveal the factors that affect online teaching competencies.

Competing Interests

The authors have no competing interests to declare.


  1. Allen, I. E., & Seaman, J. (2008). Staying the course: Online education in the United States. Babson Survey Research Group. 

  2. Al-Salman, S. M. (2011). Faculty in online learning programs: Competencies and barriers to success. Journal of Applied Learning Technology, 1(4). 

  3. Aydin, C. H. (2005). Turkish mentors’ perception of roles, competencies, and resources for online teaching. Turkish Online Journal of Distance Education, 6(3), 58–80. 

  4. Bailie, J. L. (2011). Effective online instructional competencies as perceived by online university faculty and students: A sequel study. MERLOT Journal of Online Learning and Teaching, 7(1), 8. 

  5. Bandura, A. (1997). Self-efficacy: The exercise of control. Worth Publishers. 

  6. Bangert, A. W. (2006). The development of an instrument for assessing online teaching effectiveness. Journal of Educational Computing Research, 35(3), 227–244. DOI: 

  7. Bawane, J., & Spector, J. M. (2009). Prioritization of online instructor roles: Implications for competency-based teacher education programs. Distance Education, 30(3), 383–397. DOI: 

  8. Berk, R. A. (2013). Face-to-face versus online course evaluations: “a consumer’s guide” to seven strategies. Journal of Online Learning and Teaching, 9(1), 140. 

  9. Bigatel, P. M., Ragan, L. C., Kennan, S., May, J., & Redmond, B. F. (2012). The identification of competencies for online teaching success. Journal of Asynchronous Learning Networks, 16(1), 59–77. DOI: 

  10. Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming. Routledge. DOI: 

  11. Corry, M., & Stella, J. (2018). Teacher self-efficacy in online education: A review of the literature. Research in Learning Technology, 26. DOI: 

  12. Denis, B., Watland, P., Pirotte, S., & Verday, N. (2004). Roles and competencies of the e-tutor. Networked Learning 2004: A Research Based Conference on Networked Learning and Lifelong Learning: Proceedings of the Fourth International Conference, Lancaster, 150–157. 

  13. Dziuban, C., & Moskal, P. (2011). A course is a course is a course: Factor invariance in student evaluation of online, blended and face-to-face learning environments. The Internet and Higher Education, 14(4), 236–241. DOI: 

  14. Egan, T. M., & Akdere, M. (2005). Clarifying distance education roles and competencies: Exploring similarities and differences between professional and student-practitioner perspectives. American Journal of Distance Education, 19(2), 87–103. DOI: 

  15. Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th Ed.). Sage Publications, Inc. 

  16. Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2012). How to design and evaluate research in education (7th ed.). McGraw-Hill. 

  17. Gang, L. (2015). Analysis of online tutor’s ability improvement in contemporary distance education. Journal of Shanxi Radio & TV University, 4, 17–19. 

  18. Goddard, R. D., Hoy, W. K., & Hoy, A. W. (2000). Collective teacher efficacy: Its meaning, measure, and impact on student achievement. American Educational Research Journal, 37(2), 479–507. DOI: 

  19. González-Sanmamed, M., Muñoz-Carril, P., & Sangrà, A. (2014). Level of proficiency and professional development needs in peripheral online teaching roles. International Review of Research in Open and Distributed Learning, 15(6), 162–187. DOI: 

  20. Gosselin, K. P., Northcote, M. T., Reynaud, D., Kilgour, P. W., Anderson, M., & Boddey, C. (2016). Development of an evidence-based professional learning program informed by online teachers’ self-efficacy and threshold concepts. Online Learning Journal, 20(3), 178–194. DOI: 

  21. Gurley, L. E. (2018). Educators’ preparation to teach, perceived teaching presence, and perceived teaching presence behaviors in blended and online learning environments. Online Learning, 22(2), 197–220. DOI: 

  22. Johnson, N., Veletsianos, G., & Seaman, J. (2020). U.S. faculty and administrators’ experiences and approaches in the early weeks of the covid-19 pandemic. Online Learning, 24(2), 6–21. DOI: 

  23. Kavrat, B., & Turel, Y. K. (2013). Development of a scale for teachers’ roles and competencies in online distance learning. The Journal of Instructional Technologies & Teacher Education, 1(3), 23–33. 

  24. Klein, J. D., & Fox, E. J. (2004). Performance improvement competencies for instructional technologists. TechTrends, 48(2), 22–25. DOI: 

  25. Klieger, D., Centra, J., Young, J., Holtzman, S., & Kotloff, L. J. (2014). Testing the invariance of interrater reliability between paper-based and online modalities of the SIR II™ student instructional report. Educational Testing Service. 

  26. Kundu, A. (2020). Toward a framework for strengthening participants’ self-efficacy in online education. Asian Association of Open Universities Journal, 15(3), 351–370. DOI: 

  27. Lee, D. Y. (2011). Korean and foreign students’ perceptions of the teacher’s role in a multicultural online learning environment in Korea. Educational Technology Research and Development, 59(6), 913–935. DOI: 

  28. Liu, O. L. (2012). Student evaluation of instruction: In the new paradigm of distance education. Research in Higher Education, 53(4), 471–486. DOI: 

  29. Loveland, K. A. (2007). Student Evaluation of Teaching (SET) in Web-based classes: Preliminary findings and a call for further research. Journal of Educators Online, 4(2), 1–18. DOI: 

  30. Machynska, N., & Dzikovska, M. (2020). Challenges to manage the educational process in the HEI during the pandemic. Revista Romaneasca Pentru Educatie Multidimensionala, 12(1Sup2), 92–99. DOI: 

  31. Ma, K., Chutiyami, M., Zhang, Y., & Nicoll, S. (2021). Online teaching self-efficacy during COVID-19: Changes, its associated factors and moderators. Education and Information Technologies. DOI: 

  32. McAllister, L., & Graham, C. (2016). An analysis of the curriculum requirements for K-12 online teaching endorsements in the US. Journal of Online Learning Research, 2(3), 247–282. 

  33. Menz, R. L. (2009). Changing society: A social and spiritual vision for the year 2020 and beyond. University Press of America. 

  34. Murphy, E., Rodríguez-Manzanares, M. A., & Barbour, M. (2011). Asynchronous and synchronous online teaching: Perspectives of Canadian high school distance education teachers. British Journal of Educational Technology, 42(4), 583–591. DOI: 

  35. Northcote, M., Seddon, J., & Brown, P. (2011). Benchmark yourself: Self-reflecting about online teaching. 

  36. Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill. 

  37. Ozturk, E., & Can, I. (2013). Perceptions of 5th grades on reading e-books. The Journal of Turkish Social Research, 171(171), 137–153. DOI: 

  38. Pallant, J. (2020). SPSS Survival Manual: A step by step guide to data analysis using IBM SPSS. Routledge. DOI: 

  39. Piña, A. A., & Bohn, L. (2014). Assessing online faculty: More than student surveys and design rubrics. Quarterly Review of Distance Education, 15(3), 25. 

  40. Reid, D. (2002). A classification schema of online tutor competencies. Proceedings of International Conference on Computers in Education, 1, 1049–1050. DOI: 

  41. Reyes-Fournier, E., Cumella, E. J., Blackman, G., March, M., & Pedersen, J. (2020). Development and Validation of the Purdue Global Online Teaching Effectiveness Scale. Online Learning, 24(2), 111–127. DOI: 

  42. Richey, R. C., Fields, D. C., & Foxon, M. (2001). Instructional design competencies: The standards (Book No. ED453803; p. 184). ERIC Clearinghouse on Information & Technology. 

  43. Roberts, J. (2018). Future and changing roles of staff in distance education: A study to identify training and professional development needs. Distance Education, 39(1), 37–53. DOI: 

  44. Saleh, A., & Bista, K. (2017). Examining factors impacting online survey response rates in educational research: Perceptions of graduate students. Online Submission, 13(2), 63–74. 

  45. Salmon, G. (2012). E-Moderating: The Key to Online Teaching and Learning. Routledge. DOI: 

  46. Schmid, R., & Petko, D. (2019). Does the use of educational technology in personalized learning environments correlate with self-reported digital skills and beliefs of secondary-school students? Computers & Education, 136, 75–86. DOI: 

  47. Shank, P. (2004). Competencies for online instructors. Learning Peaks LLC. 

  48. Sun, A., & Chen, X. (2016). Online education and its effective practice: A research review. Journal of Information Technology Education: Research, 15, 157–190. DOI: 

  49. Tabachnick, B. G., Fidell, L. S., & Ullman, J. B. (2007). Using multivariate statistics (5th ed.). Pearson. 

  50. Telli Yamamoto, G., & Altun, D. (2020). The coronavirus and the rising of online education. Journal of University Research, 3(1), 25–34. DOI: 

  51. Thach, E. C., & Murphy, K. L. (1995). Competencies for distance education professionals. Educational Technology Research and Development, 43(1), 57–79. JSTOR. DOI: 

  52. Thomas, J. E. (2018). Current state of online teaching evaluation processes in post-secondary institutions. PhD Thesis, Brigham Young University. 

  53. Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17(7), 783–805. DOI: 

  54. UNESCO. (2020, March 4). Education: From disruption to recovery. UNESCO. 

  55. Wang, Y., Wang, Y., Stein, D., Liu, Q., & Chen, W. (2019). Examining Chinese beginning online instructors’ competencies in teaching online based on the activity theory. Journal of Computers in Education, 6(3), 363–384. DOI: 

  56. Whitehead, M. (2018). What is the teacher’s role in promoting online collaborative dialogue in a self-organised learning environment? [Doctoral dissertation, University of Warwick]. 

  57. Wiesenberg, F., & Hutton, S. (1995). Teaching a graduate program using computer-mediated conferencing software. 

  58. Williams, P. E. (2003). Roles and competencies for distance education programs in higher education institutions. American Journal of Distance Education, 17(1), 45–57. DOI: 

  59. Zhang, J. (2017). Life-Oriented Approach. In J. Zhang (Ed.), Life-oriented behavioral research for urban policy (pp. 1–8). Springer. DOI: 
