
Innovative practice articles

If at First You Don’t Succeed, Try Closing Another Assessment Loop: Implementing Online Co-Curricular Assessment

Authors:

Heather D. Hussey,

Northcentral University, US

Ashley Babcock,

Northcentral University, US

Tara J. Lehan

Northcentral University, US

Abstract

Higher education institutions are commonly tasked with demonstrating student learning in and out of the classroom. Although academic and student affairs share a common goal of supporting student success, they frequently do not speak the same assessment language. This lack of alignment can lead to miscommunication and missed opportunities to collaboratively promote learning and achievement. Further, it can be a struggle to implement assessment protocols if institutional stakeholders do not value and believe in the importance of their role in the assessment process. In this paper, we discuss how professionals at an online academic success center used the Theory of Planned Behavior to inform and improve an assessment protocol as part of the institution’s overall assessment plan. The steps and strategies used over multiple assessment loops are discussed to demonstrate the path taken to build a collaborative learning environment for students in and out of the online classroom.

How to Cite: Hussey, H. D., Babcock, A., & Lehan, T. J. (2020). If at First You Don’t Succeed, Try Closing Another Assessment Loop: Implementing Online Co-Curricular Assessment. Open Praxis, 12(3), 425–436. DOI: http://doi.org/10.5944/openpraxis.12.3.1093
Submitted on 31 Jan 2020 | Accepted on 31 May 2020 | Published on 30 Sep 2020

Introduction

Early in higher education, student support services were handled by both faculty and staff, intertwining the assessment of learning by design (Hunter & Murray, 2007). As academic institutions grew and fields became more developed and specialized, the two separated. Specifically, faculty focused on the classroom, reflection, and autonomy, whereas support service professionals focused on teamwork, satisfaction, and productivity (Frost, Strom, Downey, Schultz & Holland, 2010). This separation eventually led to the assessment of learning becoming a primary focus inside but not outside the classroom (Green, Jones & Aloi, 2008). The call to return to student affairs involvement in assessment came around 1940, but it took 40 years before it became a central practice, another 10–20 years before best practices emerged in mainstream literature, and another 10 years before student affairs literature and practice regularly emphasized the importance of assessment (Schuh, 2015). Even with this support for co-curricular assessment in the profession, however, many institutions struggled to implement assessment of learning in student affairs. For example, assessment often has been compliance-driven and assigned to specific people or departments instead of being part of an institutional culture of assessment (Levy, Hess & Thomas, 2018; Roper, 2015). Although progress has been made with institutions' using co-curricular assessment results to drive change (Levy et al., 2018), it has been uneven across institutions and departments (Alverson, Schwartz & Shultz, 2019; Roper, 2015; Tait, 2014). Further, assessment of learning has not been universally endorsed (Gilbert, 2015; Lederman, 2019; Suskie, 2018), with some even going as far as to say that "assessment sucks" (Openo, 2018, p. 171).

Online and face-to-face co-curricular learning can also present different opportunities and obstacles (Russell, Rawson, Freestone, Currie & Kelly, 2018), which is important to consider as more schools move to online learning (Lederman & Lieberman, 2019; Seaman, Allen & Seaman, 2018). Especially in online settings, academic programs and support services can operate in silos, which has threatened student success by decreasing university-wide collaboration toward student learning (Nesheim et al., 2007). This lack of collaboration and uneven support for assessment can lead to student affairs professionals' not speaking the same assessment language as faculty, which can result in miscommunication and missed opportunities (Adcroft, 2010; Lazar & Ryder, 2018). This can be especially problematic as more underprepared students who need additional learning support outside the classroom enroll in online courses and programs (Alverson et al., 2019).

At a completely online, predominantly graduate institution, there was a lack of continuity between faculty's and academic coaches' assessments of student learning in and out of the classroom, respectively, which resulted in several challenges. These included a one-way flow of information (from faculty to academic coach only), limited knowledge among faculty regarding what occurred once they encouraged or referred a student to seek learning assistance, and gaps between academic coaching services and (a) the curriculum, (b) faculty knowledge, and (c) student knowledge and competence. Further, without continuity in faculty's and coaches' assessment practices, including operational definitions of competence on the institutional learning outcomes (ILOs) on which they were working with students, it was difficult for them to form purposeful partnerships to promote effective teaching and learning. In this paper, we discuss how a theoretical model of behavior change was used to guide the development and continuous improvement of an assessment protocol in an online academic success center (ASC) through multiple iterations of closing the loop.

Theoretical Framework

The Theory of Planned Behavior (TPB; Ajzen, 1991) has been widely used for decades and shown to be a useful framework for behavior change efforts (e.g., Steinmetz, Knappstein, Ajzen, Schmidt & Kabst, 2016), including behavior change in higher education (e.g., Burns, Houser & Farris, 2018). This framework suggests that (1) the main predictor of behavior is the intention to perform the behavior and (2) intentions are influenced by attitudes toward the behavior, subjective norms, and perceived behavioral control (PBC). Attitudes are the positive or negative ways people view the behavior, subjective norms reflect the perceived social pressure to perform the behavior based on beliefs about how others engage in it, and perceived behavioral control is the extent to which people believe they can perform the behavior (Ajzen, 1991; Steinmetz et al., 2016). Applied to the assessment-of-learning context, it can be assumed that targeting the attitudes, norms, and perceptions of behavioral control of faculty and staff can influence their intention to participate in the process and, therefore, change their behavior. Furthermore, the theory offers areas for targeted interventions should individuals be resistant to behavior change (e.g., Steinmetz et al., 2016).

Closing the Loop with TPB

Below, the closing of four assessment loops is described, including obstacles, outcomes, and opportunities. Using the TPB in each cycle offered targeted ways to support the implementation of co-curricular assessment of learning, based on the behaviors stakeholders exhibited during each loop.

Baseline

A number of initial challenges were encountered at baseline. Although the ASC was focused on supporting students in their learning, co-curricular assessment had not been implemented during its first several years of operation. Instead, the ASC manager at the time directed the coaches to focus on developing self-directed learners who were self-managing, self-monitoring, self-modifying, and self-motivating (referred to as the four Ms). Although these are helpful characteristics for students, this focus contrasted with faculty's focus on skills related to the institutional learning outcomes (ILOs), which were included as embedded assessments in many of the online courses at the beginning, middle, and end of students' programs. This incongruity resulted in inconsistent feedback on student learning and performance between coaches and faculty.

When the next-up leader who supported the ASC team engaged the ASC manager (referred to with the pronoun "they" moving forward for gender neutrality) in subsequent conversation about the move toward assessing student learning, the manager was reluctant to make changes in the department they had built relatively independently. To help foster positive attitudes about assessment, the next-up leader explained the role of the ASC's assessment protocol in the institution's assessment plan. In continued support of autonomy, the manager was encouraged to lead the development of the ASC assessment protocol within parameters set by the institutional assessment plan and accreditation requirements. The manager joined the co-curricular assessment subcommittee, which convened monthly and at which members discussed processes and outcomes related to the developing assessment-of-learning efforts on their teams. The purpose was to share victories and failures (as well as what was learned from them) and hold one another accountable for making progress in their assessment efforts.

The directive to implement assessment of student learning was similarly met with mixed reactions from the coaches when it was announced at a monthly ASC meeting. Therefore, the next-up leader had a one-on-one follow-up conversation with each coach, using active listening to normalize and validate their perceptions and experiences. Most of the coaches stated they believed their role was to increase students' confidence rather than to help students develop competence on ILOs. To respect their expertise and meet them where they were, it was agreed that student confidence would be assessed alongside learning. One coach fully embraced this proposed change, whereas most of them tentatively agreed to participate.

Table 1

Applying TPB to Coaches at Baseline

Attitudes: The ASC manager did not fully value assessment. Four of five coaches questioned its purpose and place in the ASC. Three of five coaches reported that assessment belonged in the classroom.
Norms: Because the ASC was relatively siloed from academic affairs, the coaches mainly had each other for norm comparison. As such, the norm was to not complete rubrics. Further, four of five coaches viewed themselves more as service providers than educators.
Intention: Four of five coaches were not intending to assess student learning.
Behavior: No coaches were assessing student learning.
PBC: Four of five coaches felt they were unable to complete the rubrics.

First Loop

Implementation

The ASC manager and next-up leader held group and individual meetings with the coaches to help address concerns surrounding assessment-related attitudes, norms, and PBC. The next-up leader knew this process would likely include adjusting how ASC professionals were seen, by others as well as by themselves, as not just service providers but also educators (Blake, 2007; Colwell, 2006). Further, to help break down silos and make assessment part of the norm, the manager revised the co-curricular assessment protocol to be informed by the university strategic plan and aligned with the university's mission. Specifically, the focus of measurement was changed from the four Ms to writing and statistics skills. However, the manager and coaches still were not fully on board with assessment of student learning and were not collaborating with faculty to support student learning. Nevertheless, they agreed to pilot a new protocol using a 3-point scale (1 = cannot demonstrate skill with assistance, 2 = can demonstrate skill with assistance, 3 = can demonstrate skill without assistance), recorded in Excel based on student responses at the end of each session.
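
For readers who track a similar protocol in a spreadsheet or script, the pilot boils down to one end-of-session record per skill on the 3-point scale. The following is a minimal sketch of how such records might be logged and summarized; the field names and example rows are hypothetical illustrations, not the ASC's actual Excel layout.

from collections import Counter

# Hypothetical end-of-session records from the pilot protocol:
# 1 = cannot demonstrate skill with assistance,
# 2 = can demonstrate skill with assistance,
# 3 = can demonstrate skill without assistance.
sessions = [
    {"student": "S01", "skill": "writing", "rating": 2},
    {"student": "S02", "skill": "statistics", "rating": 2},
    {"student": "S03", "skill": "writing", "rating": 3},
]

def rating_distribution(records, skill):
    """Count how many end-of-session ratings fell at each point of the 3-point scale."""
    return Counter(r["rating"] for r in records if r["skill"] == skill)

print(rating_distribution(sessions, "writing"))  # Counter({2: 1, 3: 1})

A summary like this also makes the problem reported in the outcomes below visible: when nearly every rating is a 2, there is little variability against which to track improvement.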

Table 2

Applying TPB to Coaches in the First Loop

Attitudes: Four of five coaches viewed assessment negatively. They felt it did not fit in the ASC and would be extra work.
Norms: Because the ASC was somewhat siloed from academic affairs, the coaches mainly had each other for norm comparison. As such, the norm was to not complete rubrics. Further, they viewed themselves more as service providers than educators.
Intention: Four of five coaches were not intending to assess student learning, but all five coaches agreed to implement a student self-rated measure.
Behavior: Five coaches were attempting to measure student learning on the 3-point scale of student-rated confidence to perform a skill.
PBC: Four of five coaches cited many reasons why they could not and should not complete assessments.

Outcomes

Coaches reported a lack of variability in the scores, with most students rated at a 2, which made it difficult to track improvement and, ultimately, to inform practice. Further, without a more nuanced measure, there often still were inconsistent perceptions of students' levels of competence between coaches and faculty.

Improvements

After consulting with the Director of Institutional Assessment and the university assessment committee, which mainly comprised faculty, it was proposed that the ASC's co-curricular assessment protocol leverage classroom assessment practices to further develop assessment norms in the ASC. This included focusing coaching services on promoting competence on two ILOs (written communication and quantitative reasoning). Coaches employed in their sessions the same ILO rubrics that faculty were using as embedded assessments in the online classroom, along with the same general standard of competence for each level. For the assessment of learning, competence was operationalized as a 4 or above on Bloom's taxonomy for master's students and a 5 or above for doctoral students to align with university-wide standards and bridge student performance expectations in and out of the classroom.

To help increase PBC, coaches were trained on the rubrics and had them readily available in their handbooks. The rubrics to assess these ILOs used Bloom’s taxonomy to measure learning on a six-point scale. Using this taxonomy, coaches and faculty could refer to students’ skill development using the same language and university-wide understanding of performance expectations. It was hoped that this improvement would build collaborations and foster communication between faculty and coaches. Moreover, to help reduce negative attitudes related to extra work, coaches’ assessment ratings were entered into WCOnline, the tool used to schedule and document what occurred in coaching sessions. To further capture learning, students were asked to rate their perceived level of performance using the same taxonomy before the session began.
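
Because competence was operationalized as a fixed cut point on the six-level Bloom's taxonomy rubric (4 or above for master's students, 5 or above for doctoral students), the shared faculty/coach standard can be expressed as a simple decision rule. The sketch below illustrates only that rule; the function name and dictionary labels are ours for illustration and are not part of WCOnline or the institution's rubrics.

# Six-level Bloom's taxonomy scale used on the ILO rubrics.
BLOOM_LEVELS = {1: "remember", 2: "understand", 3: "apply",
                4: "analyze", 5: "evaluate", 6: "create"}

# Competence cut points aligned with university-wide standards:
# master's students at level 4 or above, doctoral students at level 5 or above.
COMPETENCE_THRESHOLD = {"masters": 4, "doctoral": 5}

def is_competent(bloom_level: int, degree_level: str) -> bool:
    """Apply the shared standard of competence to a single rubric rating."""
    if bloom_level not in BLOOM_LEVELS:
        raise ValueError(f"Bloom's rating must be 1-6, got {bloom_level}")
    return bloom_level >= COMPETENCE_THRESHOLD[degree_level]

print(is_competent(4, "masters"))   # True
print(is_competent(4, "doctoral"))  # False

Expressing the cut points this way underscores the design choice described above: coaches and faculty apply the same thresholds, so a rating of "analyze" means the same thing in and out of the classroom.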

Second Loop

Implementation

The ASC adopted the written communication and quantitative reasoning ILOs as co-curricular learning outcomes and used the same ILO rubrics to rate students during coaching sessions as faculty used in the classroom. To further inform assessment norms, both students and coaches rated student performance using these rubrics at the beginning and end of each coaching session. Around this time, the manager left the institution, and an associate director with learning center experience joined the team.

Table 3

Applying TPB to Coaches in the Second Loop

Attitudes: Four of five coaches continued to push back about assessing student learning, as they fundamentally did not see this as part of their role.
Norms: Although the team met to discuss the assessment findings (or lack thereof), it was not clear to the coaches who was or was not completing the rubrics.
Intention: Two coaches intended to use the ILO rubrics.
Behavior: For written communication, of 387 possible pre/post assessments, 156 were completed by students and 202 by coaches. For quantitative reasoning, of 143 possible pre/post assessments, 44 were completed by students and 21 by coaches.
PBC: Although trainings had been held, three coaches still seemed unsure of how to apply the assessment rubrics.

Outcomes

Analyzing the results of completed assessment ratings proved difficult because the pre- and post-session rating processes for students and coaches were not equivalent. Specifically, at the start of a session students could select all levels of Bloom's taxonomy that applied, whereas at the end of the session coaches were instructed to select only the highest level at which the student could perform. Further, the wording of the questions posed to students and coaches varied slightly, which could also have affected scoring. In addition, faculty appeared to continue to refer students to the ASC, but most faculty did not communicate directly with coaches to support student learning at the ASC, nor did coaches reach out to faculty.

Improvements

It was proposed that exactly the same questions be asked of students and coaches and that each select only the one level they perceived reflected the student's highest level of performance. Trainings were held to increase coaches' PBC with the updated assessment protocols and the language they could use with students as well as faculty. To further solidify their knowledge and behavioral control, coaches then trained students on how to complete their self-ratings (Duran, 2017). To increase accountability and strengthen norms, a quality assurance (QA) mechanism was implemented during the second loop to evaluate the extent to which coaches were following the assessment protocol as expected. The QA included a 3-point rubric (exceeds expectations, meets expectations, does not meet expectations) with six criteria. Coaches who did not meet expectations on one or more criteria had coaching session(s) with the associate director. These sessions often resulted in short-term improvements in the assessment of student learning, but the improvements did not endure.
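
Under the revised protocol, each rater contributes at most one pre-session and one post-session level per session, so matched pre/post pairs (like the matched counts reported for the later loops) can be assembled straightforwardly. A minimal sketch follows, assuming a hypothetical flat export of ratings; the record layout is illustrative rather than WCOnline's actual format.

from statistics import mean

# Hypothetical export: one row per rating, on the six-level Bloom's scale.
ratings = [
    {"session": "A", "rater": "student", "phase": "pre",  "level": 2},
    {"session": "A", "rater": "student", "phase": "post", "level": 3},
    {"session": "A", "rater": "coach",   "phase": "post", "level": 3},
    {"session": "B", "rater": "coach",   "phase": "pre",  "level": 3},
    {"session": "B", "rater": "coach",   "phase": "post", "level": 4},
]

def matched_pairs(records, rater):
    """Return (pre, post) level pairs for sessions where the given rater provided both."""
    by_session = {}
    for r in records:
        if r["rater"] == rater:
            by_session.setdefault(r["session"], {})[r["phase"]] = r["level"]
    return [(v["pre"], v["post"]) for v in by_session.values()
            if "pre" in v and "post" in v]

coach_pairs = matched_pairs(ratings, "coach")
print(len(coach_pairs))                               # number of matched sessions
print(mean(post - pre for pre, post in coach_pairs))  # average pre-to-post gain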

Third Loop

Implementation

In the enhanced co-curricular assessment protocol, coaches and students completed parallel pre-/post-session ratings using the same ILO rubrics for written communication and quantitative reasoning.

Table 4

Applying TPB to Coaches in the Third Loop

Attitudes: Four of five coaches continued to show reluctance about assessing student learning, as they did not see this as part of their role.
Norms: Four of five coaches appeared to believe that assessment was not required and that others were not completing the rubrics.
Intention: Three coaches claimed to be intending to use the ILO rubrics. Two coaches said they would attempt to remember to use the rubrics.
Behavior: For written communication, of 557 possible pre/post assessments, 472 were completed by students and 400 by coaches, with 329 having both pre/post assessments. For quantitative reasoning, of 226 possible pre/post matched assessments, 144 were completed by students and 53 by coaches, with 39 having both pre/post matched assessments.
PBC: Four of five coaches had been trained on how to use the rubrics but lacked knowledge of why it was important to use them.

Outcomes

Students and coaches still provided ratings for only a low number of sessions. There also continued to be misperceptions among students, faculty, and coaches regarding when skill competence had been reached. Toward the end of the third loop, three coaches left (two writing and one statistics), all of whom had displayed negative attitudes toward the assessment protocol. When hiring their replacements, the associate director and next-up leader purposefully crafted interview questions about assessment of learning and hired individuals who were both qualified to coach in their respective areas (writing and statistics) and displayed positive attitudes toward assessment of learning. Two additional coaches were hired for a total of seven coaches (five new). Of the two remaining coaches, one had always displayed positive attitudes toward assessment and one expressed the intention to implement the rubrics.

Improvements

To help increase student scoring completion and coaches' PBC, a protocol was developed whereby coaches would use the first five minutes of the session to have the student report a pre-session level of competence if this had not been done beforehand. This also gave coaches an opportunity to teach students about the rubric and why it was completed. Steps were also taken to reduce obstacles related to coaches' beliefs about completing assessments, with the aim of increasing both PBC and positive attitudes toward assessment. For example, the associate director invited the Director of Institutional Assessment to attend a monthly meeting to discuss the importance of assessment and how the data were used at the university to inform continuous improvements. To further develop assessment norms, the QA protocol, which by then had been in use for three months, held coaches accountable for scores of "does not meet" on any of the criteria. This QA process helped demonstrate to coaches the expectation that all would be completing the assessments.
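
The QA mechanism itself reduces to a simple rule: a score of "does not meet expectations" on any of the six criteria triggers a follow-up coaching session with the associate director. The sketch below illustrates only that check; the criterion names are hypothetical placeholders, since the six criteria are not enumerated here.

# Three-point QA scale applied to each of the six criteria.
QA_SCALE = ("does not meet expectations", "meets expectations", "exceeds expectations")

# Hypothetical monthly QA ratings for one coach across the six criteria.
qa_review = {
    "criterion_1": "meets expectations",
    "criterion_2": "exceeds expectations",
    "criterion_3": "does not meet expectations",
    "criterion_4": "meets expectations",
    "criterion_5": "meets expectations",
    "criterion_6": "meets expectations",
}

def needs_follow_up(review):
    """Flag a coach for a session with the associate director if any criterion
    was scored 'does not meet expectations'."""
    return any(score == QA_SCALE[0] for score in review.values())

print(needs_follow_up(qa_review))  # True: criterion_3 did not meet expectations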

Fourth Loop

Implementation

The next co-curricular assessment loop included the beginning-of-session protocol in which coaches and students reviewed and completed the rubrics, the quality assurance protocol, and the delineation of Bloom's taxonomy scores for master's and doctoral students in written communication and quantitative reasoning. Additionally, the staff consisted of seven coaches, including the new hires, who were experienced in and supportive of assessment of student learning.

Table 5

Applying TPB to Coaches in the Fourth Loop

Attitudes: Six of seven coaches appeared to have favorable attitudes toward assessment.
Norms: Six of seven coaches appeared to believe that completing assessment rubrics was important and that they would be evaluated monthly through the QA protocol on their ability to meet these expectations.
Intention: All seven coaches claimed to be intending to use the ILO rubrics.
Behavior: For written communication, of 423 possible pre/post assessments, 347 were completed by students and 423 by coaches, with 347 having both pre/post matched assessments. For quantitative reasoning, of 216 possible pre/post assessments, 200 were completed by students and 196 by coaches, with 182 having both pre/post matched assessments.
PBC: All seven coaches had been trained, and six of them appeared confident in their ability to meet or even exceed expectations in completing the assessments.

Outcomes

Coaches' attitudes toward assessment appeared to become more positive, and there was an increase in rubric completion rates as well as greater consistency in the way the rubrics were scored. Use of the quality assurance protocol revealed that six of the seven coaches were meeting expectations for participating in the assessment of learning. By the end of the fourth loop, the last coach who displayed negative behaviors toward completing the assessment of learning opted to leave. However, collaborations between faculty and coaches to support student learning were still lacking.

Improvements

Coaches' more positive attitudes toward assessment of student learning, a shared expectation of completing the rubrics, and greater PBC to complete the rubrics resulted in assessment data that could be analyzed and used to inform continuous improvement efforts. Using the data, the ASC team determined that next steps should involve developing a personalized long-term coaching plan to assist all students in developing competence on the ILO on which they were working with coaches. Learning outcomes data were also shared with coaches weekly to help inform possible improvements to their coaching practices. By going through the process of refining this co-curricular assessment tool, a new protocol was developed that allowed for greater collaboration and communication among students, faculty, and coaches through a collaboration form rather than simply a referral model (Figure 1).

Figure 1 

New collaborative coaching cycle informed by ASC co-curricular learning assessment loops.

Conclusion

Through many iterations, a new protocol for determining, assessing, and supporting online students' needs at the ASC was developed to include long-term coaching plans that better engage faculty (Banta & Blaich, 2010) as well as students in the assessment of learning through the use of common language in collaboration forms (Falchikov, 2004; Tait, 2014). The first loop included the addition of end-of-session ratings by coaches on whether a certain skill could be demonstrated (or not) independently or with assistance. In the second loop, attempts were made to parallel classroom assessment by choosing two ILOs on which to focus and assessing learning using ILO rubrics developed and used by faculty in the classroom. Coaches were trained on the rubrics, and scores were captured at the beginning and end of sessions by students and coaches. Three ASC coaches left during the third loop. Improvements made during this loop included revising student and coach scoring prompts to be the same and limiting rubric scoring to one level. Additional trainings were held with coaches to support rubric completion as well as how to discuss the rubrics with students. A quality assurance protocol was also put in place to ensure coaches were following the assessment plan. A fourth coach resigned in the fourth loop. To further support co-curricular assessment, the first five minutes of sessions were dedicated to students and coaches scoring the ILO rubric and discussing scores. The Director of Institutional Assessment attended ASC meetings to discuss the importance of assessment and how the data were used to inform continuous improvement efforts. In the final loop discussed, personalized coaching plans were developed that could be shared with faculty.

This form and process help bridge teaching and learning in and out of the online classroom as well as the assessment of learning, as the same rubrics are used in both settings. Further, the use of such forms helps drive collaboration in an online learning environment where silos can easily form (cite). In the last two decades, greater effort has been devoted to tearing down silos and having student support professionals and faculty work as partners to support student learning (Levy et al., 2018; Manning, Kinzie & Schuh, 2006). Student affairs professionals must also see themselves as educators (Blake, 2007; Colwell, 2006) and value the importance of assessing student learning outside the classroom (Levy et al., 2018). Such professionals can offer support services that provide students with opportunities to engage with their course curriculum using different media, relearn concepts, and request further explanation (Alverson et al., 2019; Arendale, 2010; Fullmer, 2012). Given that faculty frequently report that they do not have sufficient time to complete all their job requirements optimally (Berebitsky & Ellis, 2018) and more underprepared students are attending college (Alverson et al., 2019), academic coaches might represent an opportunity to promote not only learning but also engagement as well as persistence, retention, and completion among students (Bettinger & Baker, 2014; Lehan, Hussey & Shriner, 2018).

Several challenges had to be overcome in closing the assessment loops at this ASC. Even with targeted supports and interventions, buy-in was difficult to achieve initially, and the manager and four of the five original coaches ultimately left. Coaches were trained on how to complete the assessment rubrics and then trained the students. In the first three loops, four of five coaches displayed a reluctant and often negative attitude, and this reluctance could have resulted in a lack of training for their students. One advantage of this turnover in staff is that it allowed for the hiring of individuals who valued co-curricular assessment and intended to complete the ILO rubrics. Once there were five new coaches, all of whom displayed positive attitudes toward assessment, student and coach rubric completion increased.

With this support, the associate director was able to take additional steps to better support student learning at the center. For example, a formal needs assessment determined what gaps existed between current and desired conditions relating to ASC services from the student, faculty, and administrator perspectives (Babcock, Lehan & Hussey, 2019). Based on the findings, new protocols, outreach initiatives, services, and programs were developed. Academic coaching sessions became more practice-based with time for “show what you know” activities. To further bridge the learning happening in and out of the classrooms, students were given the option to share their personalized coaching plan with their faculty member, and faculty were encouraged to participate in group and individual sessions with their students. Along with speaking the same assessment language (Adcroft, 2010; Lazar & Ryder, 2018), faculty were now also engaged in the teaching-learning experience outside the classroom (Banta & Blaich, 2010). The increase in faculty attending the academic coaching sessions with their students also presents future opportunities to examine academic coach and faculty co-teaching collaborations to better bridge learning in and out of the classroom. However, there can be resistance from professionals in student affairs who might not see themselves as educators and question the role of assessment outside the classroom (Levy et al., 2018). It will be important to have plans and supports in place to implement and maintain continuous cycles of assessment to inform improvements (Banta & Blaich, 2010). This may be especially the case when it takes many iterations or loops (Blimling, 2013).

References

  1. Adcroft, A. (2010). Speaking the same language? Perceptions of feedback amongst academic staff and students in a school of law. The Law Teacher, 44(3), 250–266. 

  2. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. 

  3. Alverson, J., Schwartz, J., & Shultz, S. (2019). Authentic assessment of student learning in an online class: Implications for embedded practice. College & Research Libraries, 80(1), 32–43. https://doi.org/10.5860/crl.80.1.32 

  4. Arendale, D. R. (2010). Access at the crossroads: Learning assistance in higher education. ASHE Higher Education Report, 35(6). San Francisco, CA: Jossey-Bass. 

  5. Babcock, A., Lehan, T., & Hussey, H. D. (2019). Mind the gaps: An online learning center’s needs assessment. Learning Assistance Review, 24(1), 27–58. 

  6. Banta, T. W., & Blaich, C. (2010). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22–27. 

  7. Berebitsky, D., & Ellis, M. K. (2018). Influences on personal and professional stress on higher education faculty. Journal of the Professoriate, 9(2), 88–110. 

  8. Bettinger, E. P., & Baker, R. B. (2014). The effects of student coaching: An evaluation of a randomized experiment in student advising. Educational Evaluation and Policy Analysis, 36(1), 3–19. https://doi.org/10.3102/0162373713500523 

  9. Blake, J. H. (2007). The crucial role of student affairs professionals in the learning process. New Directions for Student Services, 117, 65–72. https://doi.org/10.1002/ss.234 

  10. Blimling, G. S. (2013). Challenges of assessment in student affairs. New Directions for Student Services, 2013(142), 5–14. https://doi.org/10.1002/ss.20044 

  11. Burns, M. E., Houser, M. L., & Farris, K. L. (2018). Theory of planned behavior in the classroom: An examination of the instructor confirmation-interaction model. Higher Education, 75(6), 1091–1108. https://doi.org/10.1007/s10734-017-0187-0 

  12. Colwell, B. W. (2006). Partners in a community of learners: Student and academic affairs at small colleges. New Directions for Student Services, 116, 53–66. https://doi.org/10.1002/ss.225 

  13. Duran, D. (2017). Learning-by-teaching. Evidence and implications as a pedagogical mechanism. Innovations in Education and Teaching International, 54(5), 476–484. https://doi.org/10.1080/14703297.2016.1156011 

  14. Falchikov, N. (2004). Involving students in assessment. Psychology Learning & Teaching, 3(2), 102–108. https://doi.org/10.2304/plat.2003.3.2.102 

  15. Frost, R. A., Strom, S. L., Downey, J., Schultz, D. D., & Holland, T. A. (2010). Enhancing student learning with academic and student affairs collaboration. The Community College Enterprise, 16(1), 37–51. 

  16. Fullmer, P. (2012). Assessment of tutoring laboratories in a learning assistance center. Journal of College Reading and Learning, 42(2), 67–89. 

  17. Gilbert, E. (2015, August 14). Does assessment make colleges better? Who knows? The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/Does-Assessment-Make-Colleges/232371 

  18. Green, A. S., Jones, E., & Aloi, S. (2008). An exploration of high-quality student affairs learning outcomes assessment practices. NASPA Journal, 45(1), 133–157. 

  19. Hunter, M. S., & Murray, K. A. (2007). New frontiers for student affairs professionals: Teaching and the first-year experience. New Directions for Student Services, 2007(117), 25–34. https://doi.org/10.1002/ss.230 

  20. Lazar, G., & Ryder, A. (2018). Speaking the same language: Developing a language-aware feedback culture. Innovations in Education and Teaching International, 55(2), 143–152. https://doi.org/10.1080/14703297.2017.1403940 

  21. Lederman, D. (2019, April 17). Harsh take on assessment…from assessment pros. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2019/04/17/advocates-student-learning-assessment-say-its-time-different-approach 

  22. Lederman, D. & Lieberman, M. (2019, March 20). How many public universities can ‘go big’ online? Inside Higher Ed. Retrieved from https://www.insidehighered.com/digital-learning/article/2019/03/20/states-and-university-systems-are-planning-major-online 

  23. Lehan, T. J., Hussey, H. D., & Shriner, M. (2018). The influence of academic coaching on persistence in online graduate students. Mentoring & Tutoring: Partnership in Learning, 26(3), 289–304. https://doi.org/10.1080/13611267.2018.1511949 

  24. Levy, J., Hess, R., & Thomas, A. (2018). Student affairs assessment & accreditation: History, expectations, and implications. The Journal of Student Affairs Inquiry, 4(1), 4284. 

  25. Manning, K., Kinzie, J., & Schuh, J. H. (2006). One size does not fit all: Traditional and innovative models of student affairs practice. New York, NY: Routledge. 

  26. Nesheim, B. E., Guentzel, M. J., Kellogg, A. H., McDonald, W. M., Wells, C. A., & Whitt, E. J. (2007). Outcomes for students of student affairs-academic affairs partnership programs. Journal of College Student Development, 48, 435–454. 

  27. Openo, J. A. (2018). Assessment blues: How authentic assessments saved my teaching soul. Journal for Research and Practice in College Teaching, 3(2), 171–174. 

  28. Roper, L. D. (2015). Student affairs assessment: Observations of the journey, hope for the future. The Journal of Student Affairs Inquiry, 1(1), 1–13. 

  29. Russell, F., Rawson, C., Freestone, C., Currie, M., & Kelly, B. (2018). Parallel lines: A mixed methods impact analysis of co-curricular digital literacy online modules on student results in first-year nursing. College & Research Libraries, 79(7), 948–971. 

  30. Seaman, J. E., Allen, I. E., & Seaman, J. (2018). Grade increase: Tracking distance education in the United States. Babson, MA: Babson Survey Research Group and Quahog Research Group, LLC. 

  31. Schuh, J. H. (2015). Assessment in student affairs: How did we get here? The Journal of Student Affairs Inquiry, 1(1), 1–10. 

  32. Steinmetz, H., Knappstein, M., Ajzen, I., Schmidt, P., & Kabst, R. (2016). How effective are behavior change interventions based on the theory of planned behavior? A three-level meta-analysis. Zeitschrift für Psychologie, 224(3), 216–233. 

  33. Suskie, L. (2018, February 28). Why do (some) faculty hate assessment? A Common Sense Approach to Assessment in Higher Education. Retrieved from https://www.lindasuskie.com/apps/blog/show/45448477-why-do-some-faculty-hate-assessment- 

  34. Tait, A. (2014). From place to virtual space: Reconfiguring student support for distance and e-learning in the digital age. Open Praxis, 6(1), 5–16. http://dx.doi.org/10.5944/openpraxis.6.1.102 
