
Research articles

The Development and Validation of the Zero Cost Textbook Satisfaction Scale (ZSS)

Authors:

Alex Redcay,

Millersville University, US

Nicole Amber Pfannenstiel,

Millersville University, US

Daniel Albert

Millersville University, US

Abstract

This paper aimed to develop and validate the Zero Cost Textbook Satisfaction Scale (ZSS), a measurement tool to assess student satisfaction with Zero Textbook Cost (ZTC) resources. The validated ZSS is available via a CC BY-NC-ND license and is a valuable tool for faculty and institutions seeking to understand student perceptions of their OER/ZTC adoptions and for broad-scale OER/ZTC adoption initiatives seeking to understand student experiences with materials across a wide variety of courses. The ZSS was administered, revised, and validated following DeVellis's (2017) eight steps for validating measurement tools. The secondary data were analyzed using Exploratory Factor Analysis (EFA), which resulted in 11 items loading on a single factor, and Cronbach's alpha showed excellent internal consistency (α = .94). The validated ZSS is designed to be used by faculty who would like to assess student satisfaction with open textbooks and to compare data with other sites of adoption.

How to Cite: Redcay, A., Pfannenstiel, N. A., & Albert, D. (2022). The Development and Validation of the Zero Cost Textbook Satisfaction Scale (ZSS). Open Praxis, 14(4), 291–299. DOI: http://doi.org/10.55982/openpraxis.14.4.487
Submitted on 16 Mar 2022 · Accepted on 20 Dec 2022 · Published on 31 Dec 2022

Introduction

As textbook costs have risen, faculty, librarians, and administrators have turned to free and open textbook materials. Open Educational Resources (OER) and Zero Textbook Cost (ZTC) courses have become mainstream ways of reducing some of the cost burdens in individual classes. OER materials are openly published remixable materials. ZTC materials draw from openly published materials, library books, library materials, and library-accessed articles. While most OER materials and books include Creative Commons licenses that allow printing (especially helpful for accessibility), most ZTC materials do not carry a license that permits remixing. Both create space for faculty and instructors to mix and match resources within their curricular design to meet learner needs. As pointed out by West and Victor (2011), this also means students in an OER or ZTC course need to know about the resources, learn how to access them, and have the technology to support accessing them as they work through courses. Essentially, West and Victor point out that communicating textbook adoption decisions when no commercial textbook needs to be purchased can immediately raise terminology and student-practices issues, affecting students’ perception of a course. When students are already familiar with the idea that a new semester comes with a bookstore trip and a high fee, free textbooks do not necessarily equate to positive perceptions of a course. Instead, these potential hurdles point to a need to understand student perceptions of OER/ZTC materials in higher education courses.

Student perceptions of OER/ZTC materials are one important factor in considering wider-scale adoption of OER by faculty. Faculty awareness of OER has continued to grow. In 2015–16 only 34% of faculty reported that they were aware of OER; by 2021–22, awareness had grown to 57% of faculty (Seaman & Seaman, 2022). The increased awareness of OER has correlated with an increase in OER use in courses. Only 5% of faculty reported using OER in 2015–16, but that figure had grown to 22% of faculty by 2021–22 (Seaman & Seaman, 2022). As use of OER continues to grow, it will be beneficial to measure student perceptions of and satisfaction with the materials across a broad range of courses. This will help measure the interactions of student satisfaction with other critical factors, such as cost, student agency, and student learning outcomes, that are important for successful OER adoption initiatives.

To understand student satisfaction with OER/ZTC materials, this article presents a validated measurement tool focused on student perceptions and use of OER/ZTC materials adopted within a course. Scholarship and approaches to understanding student perceptions of free course materials have a rich history, beginning with the openly shared COUP (Cost, Outcome, Use, Perceptions) framework (Bliss et al., 2013). In his work synthesizing existing research on Open Educational Resources adoption, Hilton notes in both 2016 and 2018 that there is a lack of shareable validated tools for understanding student perceptions. This article and tool seek to fill that gap. The Student Satisfaction tool is provided here and can be used by other institutions to understand student satisfaction (perception and use) in courses adopting OER and/or ZTC materials (see Pfannenstiel et al., 2020 for student data analysis). The shareable nature of this validated tool also allows institutions to compare student perception data and to understand which factors may be institution-, state-, or region-specific. This will also allow adopters and institutions to understand which factors of student perception exist across OER/ZTC adoptions nationally and internationally. The validated measurement tool allows for a validated understanding of perception, usable data to inform course and campus adoptions of OER/ZTC materials, and functional comparison of student perceptions in cross-institutional OER/ZTC adoption efforts.

Measurement Tools

When this measurement tool was developed, the researchers could not find a validated measurement tool for understanding student perceptions and use of OER materials. While much theory-based research has been developed since Bateman, Lane, and Moon (2012) called for theory-based, generalizable research, few validated measurement tools have been shared for cross-semester, cross-disciplinary, and/or cross-institutional comparison of OER/ZTC adoption, despite many adoption initiative programs.

Bliss et al. (2013) developed the COUP framework as one of the first tools to provide perception measures, focusing on the impact of cost, outcomes, use, and perceptions of open digital textbooks. Bliss et al. narrowed the impact to these elements within a framework based on their separability: the impact on reducing cost can be assessed on its own, the impact on use can be evaluated on its own and across a breadth of elements meaningful to specific adoption practices, the impact on students’ perceptions can be assessed on its own, and the impact on outcomes can be evaluated on its own. For each element of the framework, specific adoption practices may lead to narrowing or expanding definitions suitable for the institutional situation. Similarly, each piece may be clearly defined and built into a measurement tool to be used across various student groups for a broader understanding of impact. Finally, Bliss et al. also emphasize that elements within the framework can affect each other; for example, reduced cost can affect learning outcomes if a student works fewer hours or at fewer jobs because the cost of attendance is reduced. Building upon the work of Bliss et al., Colvard et al. (2018) explored the impact of OER on student success metrics such as D/F/W rates and overall grades; these are important metrics, but the data can be better understood when aligned with student perceptions of the resources.

This work created the foundation for our measurement tool; however, we focused specifically on perception and use while designing for validation. The Bliss tool includes essential questions but is limited by the varied measurement formats used throughout the survey. In developing and validating a tool based on the ideas from the Bliss tool, we focused on use and perception from the COUP framework, which will help with cross-semester, cross-disciplinary, and cross-institutional comparisons of student use and perceptions of OER/ZTC materials.

Drawing from this understanding of the COUP framework, we developed and validated a measurement tool to understand both self-reported student perception and the use of OER/ZTC materials. We are calling the connection of these two framework elements student satisfaction with OER/ZTC materials.

As pointed out in both Hilton studies (a synthesis in 2018, a review in 2016), tens of thousands of students have been asked about their perceptions of OER/ZTC materials, and more needs to be done to validate the measurement tools so that perceptions can be compared across institutions. Additionally, as noted by Hilton (2016), many of these perception studies do not also address self-reported use of assigned OER/ZTC materials – what we call satisfaction. Students may overwhelmingly report positive perceptions of free materials, but understanding those perceptions alongside their self-reported use of the materials is important if the field is to use perception data in support of future adoptions and initiatives.

Jaggars et al. (2018) introduce a survey instrument focused on three elements of student perception of OER/ZTC. Jaggars et al. discuss the reliability and predictive validity of their tool, which focuses on student self-reported comparisons between OER/ZTC materials and commercial textbooks. Brandle et al. (2019) focus on student use of OER/ZTC materials, seeking to understand and compare student perceptions of OER across institutions within their system. Moving beyond much of the work synthesized by Hilton, Brandle et al. surveyed students from across the CUNY system campuses on where and how they used OER materials for their courses. Todorinova and Wilkinson (2019) explore OER incentive programs as academic librarians, administering a student perception study as part of program improvement assessment. Consistent with many other findings, their study found that students are concerned about rising textbook costs and generally favor OER in courses. Working in a similar space, we intend this validated survey to give programs like Todorinova and Wilkinson’s an easily accessible student perception metric that produces data an institution can compare to existing data sets. The comparability offered through a validated measure can help textbook affordability program assessment and growth across the nation.

Building on the tools of these early studies and drawing from Hilton’s findings about perception and use, this study develops a validated measurement tool meshing student use and student perceptions. These existing studies and surveys build an important foundation for understanding student use and perceptions. The goal is widespread use of this tool to build a body of student satisfaction data for cross-institutional comparison, to support student learning and success, and to support future adoption initiatives.

Methods

Purpose

The purpose of this paper was to develop and validate the Zero Cost Textbook Satisfaction Scale (ZSS), a measurement tool to assess student satisfaction with Zero Textbook Cost (ZTC) resources.

IRB & Design

An expedited IRB application was approved by the Millersville IRB (Institutional Review Board) in September 2019 for the larger study regarding student satisfaction with ZTC resources (Pfannenstiel et al., 2020). This study posed less than minimal psychological risk. Students were informed about the purpose of the study and were invited to provide their consent before data collection occurred. Data were deleted for students who did not consent to participation but completed the survey anyway. This study was non-experimental, cross-sectional, retrospective, and self-report.

Sampling and Data Collection

Faculty (N = 16) were recruited, incentivized, and mentored to participate in an Open Textbook Initiative (OTI). The OTI issued an open invitation to all faculty at one Pennsylvania university to apply to participate in the OTI. The OTI reviewed, rated, and selected faculty to participate. All faculty were willing to adopt ZTC materials in a course of their choosing for the following semester. Upon completion of the OTI mentoring program, implementation of the materials, submission of the course syllabus, and data collection, faculty received $1000 of professional development funds. Faculty received a link for the survey to send to their students. Students (N = 1142) from 18 courses were invited electronically to participate in the study, and 469 surveys were completed, which resulted in a 41% response rate. The data was cleaned, which resulted in 442 remaining participants.

Initial Development of the ZSS

As part of the OTI initiative, survey questions were designed loosely based on the COUP Framework (Bliss et al., 2013). The purpose of the measurement tool was to assess student satisfaction with the ZTC materials and specifically their perceptions of cost, use, and quality of the zero cost materials. Eight specific steps were followed to develop and validate the ZSS measurement tool, which include: (1) determine the variable to be measured, (2) create a pool of items, (3) determine the measurement format, (4) have the item pool examined by experts, and then by non-expert employees (lay people), (5) include other validated items, (6) distribute the items to a developmental sample, (7) validate the items, (8) adjust the scale length, and generate the final measurement (DeVellis, 2017; Freeman, 2019). The literature was examined to determine if a suitable and validated measurement tool existed. When no adequate measurement tool was found, an item pool was generated to reflect satisfaction (latent variable).

After the items were proposed, three reviewers revised the items to increase clarity, direct language, and simplicity, to reflect a sixth-grade reading level, and to ensure that the items captured all relevant ideas. The response type selected was a 6-point Likert scale, with an even number of responses, from Strongly Disagree (1) to Strongly Agree (6), because it produces higher quality data and encourages respondents to be more reflective (Losby & Wetmore, 2012). The Likert scale was designed to have an even number (6) of response categories because a middle response (i.e., neutral, undecided, neither disagree/agree) may be confusing, provides an imprecise response, encourages impulsive selection, limits the variability of the data, and lowers the quality of the data (DeVellis, 2017; Freeman, 2019; Germain, 2006; Losby & Wetmore, 2012). Items were written from the student perspective as declarative statements followed by varying degrees of agreement (DeVellis, 2017; Freeman, 2019).

Revisions

After the authors created and revised the initial list of items, the items were evaluated by experts and piloted with a group of students and faculty (Germain, 2006; Hertzog, 2008; Johanson & Brooks, 2009). The original items were revised based on feedback to ensure that they represented the latent variable, to increase content validity, and to improve the internal consistency of items (DeVellis, 2017; Germain, 2006). The pilot provided additional feedback on questions that were initially thought to be clear but, based on the data, were not. This survey could not be correlated with other validated scales because none were found in the literature review. The initial and revised items can be found in Appendix A.

Administration

The final ZSS was administered to undergraduate students from 35 different majors, including education (N = 109), biology (N = 66), psychology (N = 59), communication (N = 52), and business (N = 24). The majority (76%, M = 13.8, SD = 2.7) of students reported that they were full time, typically registering for 15 (N = 261, 59%) or 12 (N = 73, 17%) undergraduate credits per semester. A large minority of students registered for summer classes (N = 166, 38%) or winter classes (N = 143, 32%). The majority of students expected. Approximately half of the students had completed 3 or fewer semesters of college (N = 245, 55%, M = 3.9, SD = 3.1). A minority (28%) were male (N = 124; females, N = 314; other, N = 4).

The ZSS had 11 items, and with 442 participants responding, there was an average of 40 participants per item. This is well above the common recommendation of ten participants per item for a quality factor analysis (DeVellis, 2017). Meeting the 1:10 item-to-participant ratio reduces the risk of unstable covariation among items arising from subject variance (DeVellis, 2017).

Results

Cronbach’s alpha

Internal consistency is an indicator of how well the individual scale items are consistent with all other scale items (DeVellis, 2017). Coefficient alpha, or Cronbach’s alpha, is used to assess internal consistency and scale reliability (Bride et al., 2004; Field, 2013). In this step, item-score correlations and item means were examined (DeVellis, 2017). Cronbach’s alpha (α) can be interpreted as follows: above .90 = excellent, .80–.89 = good, .70–.79 = acceptable, .60–.69 = marginal, and below .60 = poor internal consistency (Field, 2013). IBM SPSS 26 was used to calculate Cronbach’s alpha, which showed that the Zero Cost Textbook Satisfaction Scale (ZSS) has excellent internal consistency (α = .94). Individual items were also analyzed to see whether deleting any item would increase the scale consistency, but all items fit well within the scale.
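For readers who want to reproduce this statistic outside of SPSS, the following is a minimal Python sketch of Cronbach’s alpha. The DataFrame name zss_items (one column per ZSS item, one row per respondent, responses coded 1–6) is a hypothetical placeholder, not data from the study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for an item-response matrix (rows = respondents, columns = items)."""
    k = items.shape[1]                              # number of scale items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage on an 11-column ZSS response matrix:
# alpha = cronbach_alpha(zss_items)   # the article reports alpha = .94
```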

Split-half reliability

Split-half reliability involves splitting the items into two groups and then assessing each half of the scale. The 11-item scale was divided into two groups, with 5 items in part one and 6 items in part two. Both part one (α = .91) and part two (α = .90) have excellent internal consistency. The Guttman split-half coefficient was satisfactory at .87.
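As a sketch of the split-half computation, the Guttman split-half coefficient can be obtained from the variances of the two half-scores and the total score. This again assumes the hypothetical zss_items DataFrame described above; the item groupings shown are illustrative.

```python
import pandas as pd

def guttman_split_half(items: pd.DataFrame, half_one: list, half_two: list) -> float:
    """Guttman split-half coefficient for two groupings of scale items."""
    a = items[half_one].sum(axis=1)   # summed score for part one
    b = items[half_two].sum(axis=1)   # summed score for part two
    total_variance = (a + b).var(ddof=1)
    return 2 * (1 - (a.var(ddof=1) + b.var(ddof=1)) / total_variance)

# Hypothetical split of the 11 ZSS items into parts of 5 and 6 items:
# coef = guttman_split_half(
#     zss_items,
#     ["item1", "item2", "item3", "item4", "item5"],
#     ["item6", "item7", "item8", "item9", "item10", "item11"],
# )   # the article reports a coefficient of .87
```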

Kaiser-Meyer Olkin Measure (KMO) and Bartlett’s Test

The KMO statistic is a measure of sampling adequacy, with values closer to 1 indicating that the factor analysis will yield reliable factors (Field, 2013; Freeman, 2019). An acceptable KMO is above .50 (Field, 2013). The KMO for this study was .94, which is excellent. Bartlett’s Test of Sphericity assesses the suitability of the collected responses for factor analysis (Price, 2017; Urdan, 2017). For factor analysis to be recommended as suitable, the significance value of Bartlett’s Test of Sphericity must be less than 0.05. Bartlett’s Test for this study was significant (p < .001), indicating that EFA is an appropriate method to utilize.
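The study obtained these statistics from SPSS; as an illustration only, the open-source factor_analyzer package in Python provides equivalent functions. The zss_items DataFrame below is stand-in random data with the study’s dimensions (442 respondents, 11 items), not the actual responses.

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Stand-in data: random 6-point Likert responses shaped like the study's sample.
rng = np.random.default_rng(0)
zss_items = pd.DataFrame(rng.integers(1, 7, size=(442, 11)),
                         columns=[f"item{i}" for i in range(1, 12)])

# Bartlett's Test of Sphericity: a significant result (p < .05) supports factorability.
chi_square, p_value = calculate_bartlett_sphericity(zss_items)

# Kaiser-Meyer-Olkin measure of sampling adequacy (the study reports KMO = .94).
kmo_per_item, kmo_overall = calculate_kmo(zss_items)

print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.2f}")
```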

Exploratory Factor Analysis (EFA)

EFA is used to explore the underlying factors, or theoretical constructs, that might be represented by a collection of items. EFA is used in measurement construction to determine which scale items are most strongly correlated with each other and to group those items together into factors (Field, 2013; Freeman, 2019; Urdan, 2017; Yong & Pearce, 2013). EFA uses extraction and rotation to determine how many factors to retain and which items belong with which factor (Urdan, 2017). See Table 1 for factor loadings.

Table 1

Factor Loadings for Principal Axis Factoring for the Zero Cost Textbook Satisfaction Scale (ZSS) 11 items.


ITEM | STATEMENT | FACTOR 1 | FACTOR 2
3 | The free electronic textbooks/materials in this course were easy to use. | 0.862 |
6 | The free textbooks/materials for this course were effective to help me learn. | 0.815 |
11 | I would register for a future course that uses free textbooks/materials like the one(s) used in this course. | 0.813 |
7 | I understood this course’s content better using the free textbooks/materials than when using paid textbooks/materials. | 0.801 |
8 | I was able to put more effort into this course because of the free textbooks/materials. | 0.800 |
4 | The free electronic textbooks/materials in this course were easy to understand. | 0.792 |
5 | The quality of the free electronic textbooks/materials for this course were high. | 0.771 |
9 | I was able to take useful notes using the free electronic textbooks/materials just as I would have with a paid textbook. | 0.766 |
2 | I used/read the free course electronic textbooks/materials for this course. | 0.755 |
10 | I read more using free textbooks/materials than if the course required paid textbooks/materials. | 0.667 | 0.410
1 | I was more satisfied to use free electronic textbooks/materials than overpaid textbooks/materials. | 0.613 |

Kaiser’s criterion recommends retaining factors with an eigenvalue (the quantity of information captured by a factor) greater than 1.0 (Field, 2013; Freeman, 2019). The initial factor analysis, using principal axis factoring and an oblique rotation (direct oblimin), produced 2 factors with eigenvalues greater than 1.0. However, the second factor contained only one item, and that item also loaded on the first factor with a higher loading value (Yong & Pearce, 2013). Therefore, it was decided to retain only 1 factor, which indicates that all the questions in the scale reflect a single factor or theoretical construct, specifically satisfaction (Yong & Pearce, 2013).
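The extraction, rotation, and retention steps described above were run in SPSS; for illustration, a roughly equivalent analysis can be sketched with the factor_analyzer package. The zss_items DataFrame is the same hypothetical stand-in used in the earlier sketches, and factor_analyzer’s "principal" extraction is used here as an approximation of the principal axis factoring reported in the article.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Stand-in data mirroring the study's dimensions (replace with real ZSS responses).
rng = np.random.default_rng(0)
zss_items = pd.DataFrame(rng.integers(1, 7, size=(442, 11)),
                         columns=[f"item{i}" for i in range(1, 12)])

# Principal-factor extraction with an oblique (direct oblimin) rotation.
efa = FactorAnalyzer(n_factors=2, rotation="oblimin", method="principal")
efa.fit(zss_items)

# Kaiser's criterion: inspect eigenvalues and retain factors greater than 1.0.
eigenvalues, _ = efa.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))

# Factor loadings per item (compare with Table 1); a one-factor solution can then be refit.
loadings = pd.DataFrame(efa.loadings_, index=zss_items.columns,
                        columns=["Factor 1", "Factor 2"])
print(loadings.round(3))
```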

Discussion

The Student Satisfaction with ZTC Materials Tool provides a validated measure for assessing student experiences with the use of ZTC materials. The tool has been used to assess a wide variety of courses as part of a ZTC initiative. This tool allows practitioners to gain a better understanding of student experiences with ZTC materials as compared with commercial textbook materials. Understanding student interaction with and use of ZTC materials is crucial to understanding the impacts of ZTC and OER initiatives that go beyond the cost savings students experience. Analysis of student responses to this survey has shown that students’ past experiences with commercial textbooks influence their perceptions of ZTC materials (Albert et al., 2021), which could influence the learning impacts of ZTC and OER initiatives.

Measurements of the impact of OER on overall student learning have seen mixed results. A meta-analysis by Clinton and Khan (2019) showed no differences in student learning performance for OER versus commercial textbooks but did show that withdrawal rates were lower for courses that used OER. Much of the power of OER to impact student learning in the classroom is tied to the “Access Hypothesis,” which suggests the impact from OER will be experienced by students who did not previously have access to course materials (Grimaldi et al., 2019). The access hypothesis explains why measuring overall learning gains from OER adoption will prove difficult: many of the students in a course would have had access to materials regardless of whether a commercial or OER textbook was used. It is only the students who gained access through the OER adoption who stand to see the potential benefit.

Our Student Satisfaction Tool makes it possible to see interactions between student satisfaction in OER/ZTC courses and other attributes, such as typical textbook spending and typical interactions with course materials. In previously published work (Pfannenstiel et al., 2020; Albert et al., 2021), we found that students who typically purchase and have access to course materials report higher satisfaction with OER/ZTC materials and rate the materials as more beneficial for learning. These findings suggest that providing access to materials for students who do not typically have access is not enough, since those students report lower satisfaction than peers who typically access materials. This may explain why the expected impacts from the access hypothesis are not always realized (Smith et al., 2020). In measuring the impact of OER adoption initiatives, not only do we need to account for the fact that only a fraction of students are gaining access, but we also need to consider that the students who gain access have built structures to navigate courses without materials. We will not fully see the potential of OER adoption without also providing support for students who newly gained access to course materials and who would benefit from structures that help them navigate a new course experience with access to materials. The broadly applicable ZSS allows these types of impacts to be measured in OER adoption initiatives because it looks at combined student perception and use.

This general tool allows for wide-scale assessment of ZTC student experiences across the college curriculum and a wide variety of disciplines. The validated scale allows adopters of OER and ZTC to measure student perceptions of the course materials, which could lead to faculty ultimately choosing different materials or changing the course structure and instructional support for how students interact with the materials. Faculty changing from commercial textbook materials to ZTC materials have more ownership over the presentation and integration of materials, which makes measuring student perceptions of the materials crucial, since the materials are easier to modify due to the open licenses associated with them. This validated measurement tool facilitates the process of revising and improving a course over time. The more intentional choices faculty make about course materials and presentation could also prime faculty to see the importance of integrating instructional design (Hart, 2020).

Use of a validated instrument for measuring student perceptions of ZTC material implementation in courses and in broad adoption initiatives at a wide variety of institutions will allow for better measurement of the salient features of ZTC adoption. This could lead to improved professional development supporting faculty in adopting ZTC, ultimately leading to improved student perceptions of and interactions with the materials. This enhanced understanding will allow for improved ZTC adoption initiatives and improved student experiences with ZTC.

Conclusions

The validated ZSS is available (see Appendix A) via a CC BY-NC-ND license and is a valuable tool for individual faculty to better understand student perceptions of their OER/ZTC adoptions and for broad-scale OER/ZTC adoption initiatives seeking to understand student experiences with materials across a wide variety of courses. The goal of this validation is to share an easily adoptable measurement tool for faculty to deploy to understand student perceptions in their classrooms. Additionally, the hope is that OER incentive programs will use this tool as part of program assessment, sharing their data and results. With OER adoption increasing from 5% to 22% of faculty (Seaman & Seaman, 2022), understanding student perceptions of the adopted materials is critical to adopters and institutions nationwide. A validated measurement tool allows for important cross-institutional comparison of data. Continual improvement of student experiences with course materials is made easier with ZTC and OER materials. The ZTC Student Satisfaction Survey instrument provides a validated benchmark that faculty and program coordinators can use to facilitate future adjustments in supporting the adoption of ZTC materials.

Appendix A

Original Tool

  1. I was more satisfied to use free text/materials more than paid text/materials.
  2. The free text/materials in this class were easy to use.
  3. I found the free text/materials in this class easy to understand.
  4. I found the quality of the free text/materials for this class high.
  5. I found the free text/materials for this class to be effective to help me learn.
  6. I understood course content using the free text/materials for this class better than when using paid text/materials.
  7. I was able to put more effort into this class because of the free textbook/materials.
  8. I completed more of the readings for this class using free text/materials than if the course required paid text/materials.
  9. I would complete a future course that uses free textbook/materials like the one used in this course.

Revised Tool

  1. I was more satisfied to use free electronic textbooks/materials than overpaid textbooks/materials.
  2. I used/read the free course electronic textbooks/materials for this course
  3. The free electronic textbooks/materials in this course were easy to use.
  4. The free electronic textbooks/materials in this course were easy to understand.
  5. The quality of the free electronic textbooks/materials for this course were high.
  6. The free textbooks/materials for this course were effective to help me learn.
  7. I understood this course’s content better using the free textbooks/materials than when using paid textbooks/materials.
  8. I was able to put more effort into this course because of the free textbooks/materials.
  9. I was able to take useful notes using the free electronic textbooks/materials just as I would have with a paid textbook.
  10. I read more using free textbooks/materials than if the course required paid textbooks/materials.
  11. I would register for a future course that uses free textbooks/materials like the one(s) used in this course.

Competing Interests

The authors have no competing interests to declare.

References

  1. Albert, D. R., Redcay, A., & Pfannenstiel, A. N. (2021). The Impact of Typical Textbook Behaviors on Satisfaction with Zero Textbook Cost Materials. International Journal of Open Educational Resources, 4(1), 79–96. https://ijoer.scholasticahq.com/article/25026-the-impact-of-typical-textbook-behaviors-on-satisfaction-with-zero-textbook-cost-materials 

  2. Bateman, P., Lane, A., & Moon, B. (2012). An emerging typology for analyzing OER initiatives. Proceedings of Cambridge 2012: Innovation and Impact – Openly Collaborating to Enhance Education. OCW Consortium and SCORE, Cambridge, UK, April 16–18 2012, Milton Keynes, The Open University, pp. 19–28. http://oro.open.ac.uk/33640 

  3. Bliss, T. J., Robinson, J., Hilton, J., III, & Wiley, D. (2013). An OER COUP: College Teacher and Student Perceptions of Open Educational Resources. Journal of Interactive Media in Education, 2013(1). p.Art. 4. DOI: https://doi.org/10.5334/2013-04 

  4. Brandle, S., Katz, S., Hays, A., Beth, A., Cooney, C., DiSanto, J., Miles, L., & Morrison, A. (2019). But What Do the Students Think: Results of the CUNY cross-campus zero-textbook cost student survey. Open Praxis, 11(1). DOI: https://doi.org/10.5944/openpraxis.11.1.932 

  5. Bride, B. E., Robinson, M. M., Yegidis, B., & Figley, C. R. (2004). Development and validation of the secondary traumatic stress scale. Research on Social Work Practice, 14(1), 27–35. DOI: https://doi.org/10.1177/1049731503254106 

  6. Clinton, V., & Khan, S. (2019). Efficacy of Open Textbook Adoption on Learning Performance and Course Withdrawal Rates: A Meta-Analysis. AERA Open, 5(3), 1–20. DOI: https://doi.org/10.1177/2332858419872212 

  7. Colvard, N. B., Watson, C. E., & Park, H. (2018). The impact of Open Educational Resources on Various Student Success Metrics. International Journal of Teaching and Learning in Higher Education, 30(2), 262–276. http://www.isetl.org/ijtlhe. 

  8. DeVellis, R. F. (2017). Scale Development: Theory and applications. Sage Publications. 

  9. Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage Publications. 

  10. Freeman, A. (2019). Complex organization trauma: development of a measurement tool. Doctoral Dissertation. Millersville University. Retrieved from https://millersville.tind.io/record/5814 

  11. Germain, M. L. (2006). Stages of Psychometric measure development: The example of the Generalized Expertise Measure (GEM). https://files.eric.ed.gov/fulltext/ED492775.pdf 

  12. Grimaldi, P. J., Basu Mallick, D., Waters, A. E., & Baraniuk, R. G. (2019). Do open educational resources improve student learning? Implications of the access hypothesis. PLoS ONE, 14(3), e0212508. DOI: https://doi.org/10.1371/journal.pone.0212508 

  13. Hart, J. (2020). Importance of Instructional Designers in Online Higher Education. The Journal of Applied Instructional Design, 9(2). DOI: https://doi.org/10.51869/92jeh 

  14. Hertzog, M. A. (2008). Considerations in determining sample size for pilot studies. Research in Nursing & Health, 31, 180–191. DOI: https://doi.org/10.1002/nur.20247 

  15. Hilton, J. (2016). Open educational resources and college textbook choices: a review of research on efficacy and perceptions. Educational Technology Research and Development, 64, 573–590. DOI: https://doi.org/10.1007/s11423-016-9434-9 

  16. Hilton, J. (2018). Open educational resources, student efficacy, and user perceptions: a synthesis of research published between 2015 and 2018. Educational Technology Research and Development, 68, 853–876. DOI: https://doi.org/10.1007/s11423-019-09700-4 

  17. Jaggars, S. S., Folk, A. L., & Mullins, D. (2018). Understanding students’ satisfaction with OERs as course materials. Performance Measurement and Metrics, 19(1), 66–74. DOI: https://doi.org/10.1108/PMM-12-2017-0059 

  18. Johanson, G. A., & Brooks, G. P. (2009). Initial scale development: Sample size for pilot studies. Educational and Psychological Measurement, 70(3), 394–400. DOI: https://doi.org/10.1177/0013164409355692 

  19. Losby, J., & Wetmore, A. (2012). CDC coffee break: Using Likert Scales in evaluation survey work. Centers for Disease Control and Prevention. https://www.cdc.gov/dhdsp/pubs/docs/cb_february_14_2012.pdf 

  20. Pfannenstiel, A., Redcay, A., & Albert, D. (2020). Student Perceptions of Textbooks: Prior behaviors and beliefs can influence zero textbook cost (ZTC) adoption impact. Open Praxis, 12(4), 555–567. DOI: https://doi.org/10.5944/openpraxis.12.4.1119 

  21. Seaman, J. E., & Seaman, J. (2022). Turning Point for Digital Curricula: Educational Resources in U.S. Higher Education. Bay View Analytics. https://www.bayviewanalytics.com/reports/turningpointdigitalcurricula.pdf 

  22. Smith, N. D., Grimaldi, P. J., & Basu Mallick, D. (2020). Impact of Zero Cost Books Adoptions on Student Success at a Large, Urban Community College. Frontiers in Education, 5, 579580. DOI: https://doi.org/10.3389/feduc.2020.579580 

  23. Todorinova, L., & Wilkinson, Z. T. (2019). Closing the loop: Students, academic libraries, and textbook affordability. The Journal of Academic Librarianship, 45(3), 268–277. DOI: https://doi.org/10.1016/j.acalib.2019.03.010 

  24. Urdan, T. C. (2017). Introduction to social science research principles and terminology. Statistics in plain English. Routledge. 

  25. West, P. G., & Victor, L. (2011). Background and action paper on OER: A background and action paper for staff of bilateral and multilateral organisation at the strategic institutional education sector level. Report prepared for The William and Flora Hewlett Foundation. http://www.paulwest.org/public/Background_and_action_paper_on_OER.pdf 

  26. Yong, A. G., & Pearce, S. (2013). A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis. Tutorials in Quantitative Methods for Psychology, 9(2), 79–94. DOI: https://doi.org/10.20982/tqmp.09.2.p079 
