Research articles

Personalizing Feedback Using Voice Comments


Kjrsten Keane,

SUNY Empire State College, US

Daniel McCrea,

SUNY Empire State College, US

Miriam Russell

SUNY Empire State College, US


While text-based feedback is normally used by college instructors to help students improve their written assignments, it is important to consider using voice comment tools for further personalization. New and easily-accessible technologies provide this option. Our study focused on surveying undergraduates who received voice comments on their written assignments. Students were queried on their preferences for feedback delivery and survey questions probed student responses both quantitatively and qualitatively. Two voice comment tools were used: Adobe Acrobat Reader and Kaizena voice comments. Results showed the majority (66.7%) of students surveyed preferred the addition of voice comment feedback over written comments alone. Appendices supply tool information, full data sets and extensive student commentary regarding their experience after receiving voice comments.

How to Cite: Keane, K., McCrea, D., & Russell, M. (2018). Personalizing Feedback Using Voice Comments. Open Praxis, 10(4), 309–324. DOI:
Submitted on 25 Jul 2018
Accepted on 30 Nov 2018
Published on 31 Dec 2018


Introduction

Institutions of higher learning are using innovative options to meet the needs of their diverse student populations, many of whom struggle academically, manage a disability, or have learned English as a second language (ESL). When educators rethink and retool their traditional lecture methods, all students can ultimately benefit. Questions emerge during the process, such as “how do we teach?” and “what should 21st century higher education look like?”

As educators who support students through critical writing processes, we strive to come down from the traditional “sage on the stage” role to become the “guide on the side” (King, 1993, p. 30). As distance learning becomes more prevalent, online courses utilize carefully designed learning management systems (LMS) rich with multimedia, accessible resources, authentic activities and assessments. Despite the many learning successes these environments have supported, more can be done to increase the number of students who meet course expectations in a positive learning environment.

Technological applications and add-ons outside of the LMS are an essential aspect of identifying potential responses to the questions posed above. Many tools surface that appeal to diverse learning styles and allow for enhanced skill development, easier research and greater understanding of assignments. Miller (2014) reported that “technology allows us to amplify and expand the repertoire that effective teachers use to elicit the attention, effort and engagement that are the basis for learning” (p. 11). In addition, Tcherepashenets (2015) observed the “liberating” power of technology lies “in its ability to provide personalized, individualized approach to learning” (p. 258). Equally important, individualized feedback is a significant part of the personalized approach implemented by innovative educators in the 21st Century.

The purpose of our study was to examine tools for individualizing feedback that have the potential to unleash the “liberating power” identified by Tcherepashenets (2015, p. 258). Our primary research question was “what is student perception of formative feedback when audio is added to traditional text?” We chose Kaizena and Adobe Acrobat Reader as two available technologies to implement in our exploration. After careful experimentation, we regularly used them in several instructional scenarios and surveyed our students to learn more about their experiences. Mainly, we wanted to know how easily students could access the audio feedback and if they had a preference for formative audio feedback over text alone.

Literature Review

Importance of Individualized Feedback

Though feedback is a complicated issue with many variables, the value of formative feedback for students is well established (Shute, 2008). In contrast to group feedback, or grades alone, individualized formative feedback provides specific comments that highlight the strengths and weaknesses of student work. Moreover, numerous studies indicate that grades alone do little to promote learning improvement (Campbell & Cabrera, 2014; Kohn, 2011; Pulfrey, Buchs, & Butera, 2011). Individualization and attention to quality take learning further, regardless of the instructional mode. Whether feedback is face-to-face, synchronous, or asynchronous, attention to the affective domain using positive commentary is necessary to ensure that a student will persist in his or her efforts to learn (Bastian, 2017). Non-confirming feedback can be effective as well if it contains specific corrective information. Conversely, praise for the student’s personal attributes without specific examples should be avoided (Hattie & Timperley, 2007). One of the many recommendations from Shute’s (2008) extensive review of formative feedback is to take advantage of the potential of multimedia in crafting such feedback to minimize cognitive overload (p. 179). Johanson (1999) found that voice comment feedback on written assignments was a welcome addition to the practice of student-teacher conferences because of its ability to personalize instruction. Shute’s (2008) multimedia recommendation may enhance the feedback process substantially in distance learning, where face-to-face conferencing around individual assignments and tutoring is often impossible.

One of the greatest challenges for learning institutions and instructors when designing and implementing online courses is to “provide a sense of community with constructive feedback and provide open forthcoming communications as well as recognizing membership and feelings of friendship, cohesion, and satisfaction among learners” (Desai, Hart, & Richards, 2008, p. 333). Walther, Anderson, and Park (1994) determined as early as 1994 that “when cues are filtered out, communication becomes more task oriented, cold and less personal [online] than face-to-face communication” (p. 461). Casey (2008) also reported that “one of the major complaints about computer-mediated communication in general is the lack of social cues” (p. 50).

More specifically, a goal of utilizing enhanced feedback may be to establish student perception of social presence. Short, Williams, and Christie (1976) defined social presence as “the degree to which a person is perceived as a real person in a mediated situation” (p. 427). When an asynchronous instructor is available in the form of a recorded voice, feedback is more easily understood to be coming from a “real” person. Therefore, a lack of social presence might affect learners’ performance and outcomes because they don’t recognize instructor feedback as real or genuine. Social presence has also been defined as “the perceived presence or salience of others in online discussion” (What is Social Presence?, n.d.). Desai, Hart and Richards (2008) observed, “…social presence is a strong communication component that reduces isolation between the distant learner and other learners and instructor” (p. 328). Ice, Curtis, Phillips and Wells (2007) found that development of social presence was one of the key advantages of using audio feedback. In their study,

“over 450 students in courses taught by these instructors have now received audio feedback. According to these instructors, approximately one third of their students have submitted unsolicited feedback expressing a strong preference for this technique over text-based feedback. No negative feedback has been received” (p. 19).

The authors noted that such a preference was “significant” (p. 18).

Aragon, Johnson and Shaik (2002) and Young (2006) established that student outcomes in well-designed online courses are generally similar to face-to-face classes. Indeed, Driscoll, Jicha, Hunt, Tichavsky and Thompson (2012) concluded that the quality of learning is not determined by the medium, but by the course delivery pedagogy, which includes feedback modalities. Instructional feedback can be as brief as a grade, or as extensive as a page of paragraphs describing the student’s strengths and weaknesses in assignment efforts. According to Ice et al. (2007), students receiving voice feedback remotely feel more involved because of a perceived decrease in transactional distance, as first identified by Moore (1997). At the onset of online education, Moore addressed the problem of the separation caused by distance between learners and instructors. He concluded, “there appears to be a relationship between dialog, structure and learner autonomy” (1997, p. 24). He further observed the benefit of increasing dialogue through the use of teleconferencing. At the time, teleconferencing was not easily available to students and instructors at a distance.

The Evolution of Audio Feedback

In an investigation by Eom and Ashill (2016), student-instructor interaction was shown to be the most important factor in student satisfaction with online learning. When audio feedback is used to supplement feedback in the face-to-face classroom, student learning is enhanced, with the additional advantage of being able to listen more than once (Johanson, 1999). Sloan, Stratford and Gregor (2006) maintain that students who need academic support benefit when there is more than one mode for conveying information across learning media. They especially noted the benefit of using voice comments along with video support as an opportunity for distance educators to meet diverse learning needs in a personalized manner that increases social cues and personal interaction (Sloan, Stratford & Gregor, 2006).

Students perceive and implement audio feedback in different and more meaningful ways than written feedback, for two reasons: 1) it is easier to understand, since handwriting is often illegible; and 2) it has more depth, because possible strategies for solving problems are included and the instructor can convey caring more easily (Merry & Orsmond, 2008).

Before the advent of online learning, Pearce and Ackley (1995) reviewed a number of studies of audio feedback, which were unanimous regarding its effectiveness and positive results. Clark recommended audiotaped feedback as “an alternative to personal conferences” not only because it saved time, but also because it is “more effective than sprinkling the essay with undecipherable, anxiety-producing marks” (qtd. in Pearce & Ackley, 1995, p. 122). They also reported that instructors recorded their oral assessments on cassette tapes and by sundry other methods. Dental students and 9th graders, undergraduate business students and graduate students all expressed strong preferences for voice comments. Although it took a bit of time to refine their skills, instructors found that the method of recording lessons on cassette tapes was “adaptable and can be focused on important features about responding to writing…” (Pearce & Ackley, 1995, p. 32). This expanded on an earlier finding by Kirschner (1991), whose research found that the extra time spent recording feedback compared with writing it was minimal, but that the time instructors spent preparing for it was significant given the available technology. Later studies found such preparation investment to be worthwhile, even as advancements in technology reduced the training teachers required. As recently as 2015, trends were changing: McCarthy (2015) found that “it took slightly longer” to provide written feedback than to record audio feedback (p. 166).

Klammer (1973) and King, McGugan and Bunyan (2008) reported, via qualitative data, satisfaction on the part of students receiving recorded feedback. Benefits identified by the instructors included: a) time saved, because speaking is faster than writing; b) avoidance of stress related to structuring a written argument; c) softened criticism; and d) encouraging tones. Student observations have also been positive: less cryptic feedback, motivating vocal intonations, and greater permanence compared to physical meeting outcomes (Kirschner, 1991). Pearce and Ackley’s (1995) studies confirmed similar findings. With voice comments, tone and inflection provide additional meaning. For example, if a student writes a paragraph with two direct quotes but cites only the first, the instructor/tutor could reply, “What is the source here?” Depending upon how the words “source here” are uttered, the question could come across as exasperated or as a sign that the instructor/tutor is sincerely interested; in text alone, the tone is left to the student’s imagination. Wolsey (2008) found evidence that students prefer formative feedback embedded in their written work rather than in a separate document. Research by Ice, Swan, Diaz, Kupczynski and Dagen (2010) substantiated that there is student preference for diversity in feedback since

“a marked majority or a plurality of respondents indicated a preference for a combination of written and audio feedback. However, it should be noted that at the micro level, the preference for written feedback increased significantly, indicating that perhaps a small amount of audio and a large amount of written feedback is most effective…” (p. 126).

Application to Diverse Learning Environments

Among the pedagogical approaches recommended by the National Postsecondary Education Cooperative (NPEC) is the encouragement of “various forms of electronic technologies” (Kuh, Kinzie, Buckley, Bridges & Hayek, 2006, p. 67). The addition of recorded voice technology supplies another mode of educational technology to help all students discover their strengths and weaknesses with an advantage over face-to-face conferencing: hearing the feedback more than once. Student preferences and ability to make use of the varied feedback is of interest for all educators but holds special application for educators of adults. As Knowles (1980) observed,

“…the main thrust of modern adult-educational technology is in the direction of inventing techniques for involving adults in ever-deeper processes of self-diagnosis of their own needs for continued learning, in formulating their own objectives for learning, in sharing responsibility for designing and carrying out their learning activities, and in evaluating their progress toward their objectives” (p. 56).

According to Olesova, Weasenforth, Richardson and Meloni (2011), asynchronous audio feedback is highly effective for ESL learners as well as native speakers. In teaching English composition to ESL students, Johanson (1999) described using audio files as “Rethinking the Red Ink” to help students construct meaning in a way that complements face-to-face conferencing regarding written assignments (p. 33). In the process of creating the audio file, Johanson (1999) found he was becoming more of a “coach,” as if in an office conference. In addition, as noted above, audio files can be played back more than once to allow multiple reviews, which is a distinct advantage over an in-person office conference.

By using voice comments, instructors provide another platform outside of the LMS to expand the student learning and assessment experience. While the addition of voice comments to text would seem to be appreciated and desired, according to our review of the literature there is a need for more empirical evidence exploring student reasoning for this preference, including perhaps a preference for one tool over another. Noting strategies to improve online students’ writing skills, Straub and Vasquez (2015) used the advantages of Google Docs and Adobe Connect (a voice conferencing application) for synchronous instruction, without adding an asynchronous voice feedback option. However, as the technological options for sharing audio comments expand rapidly, our research focuses on the use of two programs that have proven suitable for asynchronous academic applications: sound files in Adobe Acrobat Reader and Kaizena voice comments (Skylar, 2009; Trust, 2018).

Kaizena and Adobe Acrobat Reader embed feedback by the use of recorded audio voice comments linked to specific ideas or writing style issues within the text. In addition, they can provide nuances linked to instruction that are most easily communicated through voice while decreasing the transactional distance between the student and the instructor. Both tools have the potential to provide feedback containing a more personal tone and positive emotion. With training, feedback via these tools may also be delivered more rapidly and include additional information without increasing the time spent in the process.

Study Design

Our study was partly designed to address recommendations made by Delante (2017): “another stimulating research direction to take is to do a comparative study between online written feedback and voice/video or chat feedback” (p. 28). We initiated the study with the assumption that all surveyed students had experienced written feedback at this point in their educational studies. Therefore, students’ opinions were solicited on their use of one of two types of recorded audio feedback for improving writing skills within college writing courses. Along with students in regular academic online classes, students at the college who sought writing assistance through the Office of Academic Support were also asked to participate.

The three primary investigators collaborated, offering formative feedback to students between January 2016 and December 2017. The investigators used one of two technologies during the student writing process, in addition to text-based feedback: Kaizena voice comments or Adobe Acrobat Reader. As students completed the term in which they received audio feedback, they were sent a survey created with Google Forms. A total of 125 students were sent the form as part of the study. The results of all survey respondents (n=44) were compiled at the end of the research period. Although differences emerged as we used the tools, our primary research question could be answered by students exposed to either technology: “what is student perception of formative feedback when audio is added to traditional text?”

Introduction to the Technology: Adobe Acrobat Reader

Adobe Acrobat Reader is a program for opening and reading PDF files and has proven useful for sharing one-way audio feedback files with students. The program is a free download that offers limited capabilities without purchase of the full Adobe Acrobat program. No account set-up or additional online hosting is required. After an instructor saves a submitted student document as a PDF file and opens it with Adobe Acrobat Reader, recorded comments may be inserted where desired. The comments are represented by small speaker icons throughout the written assignment. When saved, the document and audio files become a single packaged document that may be shared back as an attachment via email or the LMS.

Introduction to the Technology: Kaizena Voice Comments

Kaizena, a web application and add-on for Google Drive, extends and enhances the interaction between student and instructor. Kaizena voice comments are based on “conversations” around a text, which is easily available and linked within an email notification. As a result, a two-way dialogue becomes accessible online, replacing traditional face-to-face office hours as a writing assignment is reviewed in depth. Instructors initiate feedback via text and/or audio attached to a document, and students may respond in either format as well.

Kaizena’s audio feedback tools align with all four levels of the SAMR Model (Puentedura, 2014), while Adobe Acrobat Reader supports levels one, two and four, as it lacks conversational replies.

  1. Substitution: Instructor provides text-based comments on Kaizena and student replies to the comments (rather than a face-to-face discussion about the student’s work).
  2. Augmentation: Instructor includes links to additional resources in a comment, which enhances the interactivity of feedback.
  3. Modification: The instructor and student can engage in a conversation about the student’s work anytime and from anywhere. This fosters ongoing learning.
  4. Redefinition: Kaizena redefines learning by delivering feedback through technology, enabling learners to advance through continuous improvement (Trust, 2018).

Participants and Setting

Subjects (n=125) were undergraduates in a variety of regular academic online classes at SUNY Empire State College. The participants included international and stateside students, many seeking writing assistance through the school’s Office of Academic Support. Three investigators sent email invitations to students asking them to complete a voluntary online survey created with Google Forms regarding audio feedback on an assortment of written assignments. All participants were enrolled in one or more online courses.

Data Collection and Analysis

The survey consisted of six Likert-scale questions, each offering from two to five response choices, and was expected to take about ten minutes to complete. Of the total, 12 students were identifiable as ESL. No compensation was offered; 44 students responded and 64 did not. Seventeen of the 64 non-responses were the result of invalid email addresses. A second request was sent to the valid email addresses of those who did not initially respond. Our final response rate was 35%. With three investigators sending surveys to students regarding feedback on written assignments for a variety of online courses, our study incorporated both investigator and environmental triangulation, as defined by Guion (2002).
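The response-rate arithmetic above can be sanity-checked with a short script. This is a sketch using only the counts reported in this section; the reported 35% figure corresponds to taking all 125 invitations, including the 17 invalid addresses, as the denominator.

```python
# Survey accounting, using the counts reported in this section.
invitations_sent = 125
invalid_emails = 17
responses = 44

deliverable = invitations_sent - invalid_emails  # 108 valid addresses
rate_of_sent = responses / invitations_sent      # denominator used in the text
rate_of_valid = responses / deliverable          # alternative denominator

print(f"response rate (of all sent): {rate_of_sent:.0%}")   # 35%
print(f"response rate (of valid):    {rate_of_valid:.0%}")  # 41%
```

Reporting the rate against deliverable addresses only would yield the higher figure; the 35% in the text uses the full mailing list.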

Survey Results

Quantitative Results

The following charts represent the results of survey questions regarding student feedback preference received between January 2016 and December 2017.

The data in Figure 1 illustrate the number of times the voice comment tool was accessed by survey respondents. Respondents who accessed the feedback more often became more familiar with the technology. Students who received voice feedback more than once were also more likely to participate in the survey than students who received voice feedback only once.

Figure 1 

How often have you received feedback from your instructor on writing assignments?

Figure 2 shows that of 44 students who responded, 13.3% used Adobe Acrobat Reader sound files. A larger number (80%) of respondents accessed Kaizena voice comments. A small percentage (6.7%) could not identify the program utilized.

Figure 2 

Do you know which program was used to deliver your voice feedback?

Figure 3 shows the majority of students accessing voice comments (55.6%) found the process easy or mostly easy. A much smaller percentage (13.3%) found the process difficult. As we teased out more detailed responses from the survey data, we found that 38 of the 44 respondents identified as Kaizena users while 6 identified as Adobe Acrobat Reader users. While the Kaizena user experience was similar to the chart below (Figure 3), the smaller number of Adobe Acrobat Reader responses skews their experience when reported as percentages. Of the 6 Adobe Acrobat Reader users, 3 reported difficulty with the process. (See Appendix 1: Ease of Access and Student Preferences).
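The skew noted above is a small-denominator effect. A minimal sketch, using the respondent counts from this paragraph, shows how the same three difficulty reports read very differently as a share of all respondents versus a share of the small Adobe Acrobat Reader subgroup:

```python
# Difficulty reports by tool, using the respondent counts in this paragraph.
kaizena_users = 38
adobe_users = 6
adobe_difficulty_reports = 3

total_respondents = kaizena_users + adobe_users  # 44

# Three reports are a small share of all 44 respondents...
overall_share = adobe_difficulty_reports / total_respondents
# ...but half of the six-person Adobe Acrobat Reader subgroup.
adobe_share = adobe_difficulty_reports / adobe_users

print(f"{overall_share:.1%} of all respondents")           # 6.8%
print(f"{adobe_share:.0%} of Adobe Acrobat Reader users")  # 50%
```

This is why percentages reported for the Adobe group should be read against its small absolute size.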

Figure 3 

How easily did you access the voice feedback?

Figure 4 indicates most students who accessed the voice comments did so with a PC or Mac computer or laptop. A much smaller percentage used an alternative personal electronic device. The identified iPad user was not able to access Kaizena via the device.

Figure 4 

What device/technology did you use to access the feedback?

Figure 5 suggests there may be an incompatibility with using the Chrome browser to open up the Adobe Acrobat Reader PDF containing voice comments, via certain online Learning Management Systems. Four of the Adobe Acrobat Reader users could not access their audio feedback via the LMS at all using Chrome. Saving the file and opening it outside the browser with the Adobe Acrobat program provided a workaround but increased the steps and complications for feedback access between these programs.

Figure 5 

What browser did you use to access the feedback?

Figure 6 indicates a strong preference for personalized formative audio feedback specifically related to students’ written work. Of the 44 students who responded, 66.7% preferred audio feedback to accompany their text-based feedback, while 33.3% preferred written feedback alone.

Figure 6 

Do you prefer voice comments or written comments from your instructor or writing coach?

Qualitative Results

Our study included an optional written feedback section for comments (Appendix 2: Qualitative Responses).

Examples of positive views found in the comment section of the survey:

I thought it was great. I had a choice of recording or writing a comment. I was also able to see what specific areas you were referring to.

Its great. The fact that I can hear your voice makes online courses that much more interactive.

Ok I just did it and it was a lot better than reading the comments on the side it worked well and I more fully understand the comments that you made!

I loved working with the Kaizena App and hearing your remarks one-by-one was so useful when making corrections to my paper.

The voice message is understood better by me. English is my second language and the wording could be slightly difficult to understand.

Actually felt more like one-on-one learning.

Examples of negative views:

A total of three students noted that it would be easier if the voice comments were embedded in the document.

I prefer written comments because it is easy to edit my work as I read through.

It is hard to edit my work while listening to voice comments.

Would prefer if voice appeared in the Google Doc.

Note: Since these results were compiled, the Kaizena capabilities have extended to include embedded voice comments.

Several negative responses complained about the “glitches” or lack of clear audio:

Some were difficult to understand.

Speakers can become muffled.

Another student noted “glitches” as well as the following addition about providing both text and voice comments: “a bit cumbersome, but both are appreciated.”

Note: It can be expected that future advances in Internet audio transmission would make the problem of “glitches” less troublesome.


Discussion

Reasons to use voice feedback abound in the literature and in the positive results of our study. However, weaknesses were documented. In our case, “glitches” primarily concerned access to functioning technology. Drawbacks noted by Henderson and Phillips (2015) in their study utilizing video feedback were similar: some students lacked access to videos because their computers and devices weren’t equipped to play them. In addition, some students were concerned that they lacked the privacy required for paper revision when feedback was accessed in open office settings. Others felt some anxiety about seeing and hearing the feedback from the video, which some students perceived as more direct and confrontational than written feedback (Henderson & Phillips, 2015). Ice et al. (2007) stressed the importance of using the student’s name in recorded feedback, something often overlooked under the press of time and the number of assignments to be assessed. Humor, personal examples, and generally respectful discourse can more easily personalize, encourage, and shape perspectives on student work when voice is added to formative feedback. Kolowich (2015) observed that video voice feedback might provide too much information that is not easily reviewed more than once. A negative comment collected in our qualitative data set echoed this observation, though we are not sure whether the comment refers to a student preference for making written edits based on written feedback or to trouble accessing the audio feedback repeatedly.

Instructors who use today’s digitally provided feedback options are likely to find they spend the same amount of time delivering audio feedback as written comments alone; however, they can provide more detail directed toward individualized needs within that time frame. Ice et al. (2007) likewise found that creating asynchronous feedback for individual writing assignments takes about the same amount of time, yet, in addition to noting errors, instructors can include more positive comments and suggestions for improvement via additional resources such as charts and YouTube videos. Using the Kaizena Lessons feature, resources directed toward specific writing issues can be supplied with ease. For example, many students need extra help to avoid run-on sentences; an embedded web video lesson can provide additional instruction for that particular composition issue.

Our study supports Pearce and Ackley’s (1995) finding that “students liked or were motivated better by taped feedback” (p. 32). In addition, they noted that audio feedback was more detailed than written. According to Boling, Hough, Krinsky, Saleem and Stevens (2012), “…high levels of interaction typically need to be present for learners to have a positive attitude and greater satisfaction” (p. 119). Henderson and Phillips (2015) identified themes in their data set, including student interpretation of recorded feedback as “individualized and personal” as well as “supportive: seen as caring and felt to be motivating” (p. 58). Students appreciated the degree of detail provided in terms of how their work was evaluated.

Positive student perceptions of the feedback technology are further anchored by rapidly advancing technological capabilities. Newer tools like Kaizena voice comments are saved directly in a Google Document, where students can apply the recommendations immediately in their written assignments. A major hurdle may be keeping abreast of new versions of digital feedback tools as they appear. Nevertheless, despite the mounting evidence supporting its use, few instructors are providing technically enhanced feedback (Khalil, 2013; Kolowich, 2015). Harvey and Broyles provided a list of 20 resistance factors, along with an antidote for each, that should be considered (qtd. in Khalil, 2013). As more accessible tools become available, greater instructor usage can be expected. In fact, Lunt and Curran (2010) predicted that audio feedback tools would increasingly be used to substitute for face-to-face conferencing. Kaizena’s voice comments reflect this to the extent that the feedback and replies are characterized as “conversations”. Feedback as dialogue means that the student not only receives initial feedback information, but also has the opportunity to engage the teacher in discussions about that feedback.

Crews, Bordonada and Wilkinson (2017) urged online instructional designers and instructors to encourage student feedback regarding their online content and courses as well as for their own assignments. Tait (2014) predicted that, given the newer technologies, “new entrants to e-and online learning will leapfrog their predecessor with an improved student experience” (p. 15). While there may be some minor technological “glitches,” as with all internet tools and personal computing, our study results strongly suggest that voice feedback tools are a positive addition to the instructional process. Exploring more than one tool provided further support for this suggestion. Kaizena voice comments and Adobe Acrobat Reader are both able to provide formative feedback with detailed instruction in grammatical or essay structures for written assignments. Truly, enhanced feedback with recorded voice comments has been shown to be an asset to learning.


Developing formative feedback through voice comments can be a powerful way to build greater self-motivation for online learning. Our investigation supported the advantages of voice comment feedback over text alone. Formative audio tools are timely and efficient, providing authentic, high-quality observations and encouraging dialogue that is engaging and rich in both quality and quantity (Merry & Orsmond, 2008). In this study’s survey comment section, one student saw the voice tool as saving time for both students and instructors, noting, “It is quick…a new approach on hearing both in voice, but not taking up time of both when one or the other is occupied…” Audio files are permanent, enabling a student to review the original feedback as needed, a distinct advantage over face-to-face conferencing. The asynchronous voice feedback format applied in our study also met the requirements of Chickering and Ehrmann’s (n.d.) seven principles for technology-based delivery methods, which call on instructors to:

  1. Encourage contact between students and faculty.
  2. Develop reciprocity and cooperation among students.
  3. Use active learning techniques.
  4. Give prompt feedback.
  5. Emphasize time on task.
  6. Communicate high expectations.
  7. Respect diverse talents and ways of learning (qtd. in Beldarrain, 2006, p. 144).

Our findings have provided direction for future research. Specifically, the following questions offer rich investigative opportunities.

  • How does audio feedback encourage more instructor-student dialog about written student work?
  • Some instructors complain that students seem interested only in their grade, while eschewing narrative written feedback. Does the addition of audio feedback make students more likely to access and absorb instructor suggestions?
  • Does the use of audio feedback make students more likely to apply the suggestions to their assignments over the use of text-based feedback alone?
  • Does applying voice feedback take more or less time than written feedback?

Timely formative feedback can benefit future “thoughts, feelings and/or actions” (Getzlaf, Perry, Toffner, Lamarche & Edwards, 2009, p. 4). Indeed, audio feedback may propel students forward into a series of positive and successful learning experiences. Reflecting on his own research comparing kinds of feedback, Delante (2017) recommended “a comparative study between online written feedback and voice/video or chat feedback” (p. 28). This study partially fulfills that call and supports the increasing effort by instructors to use recorded voice comments for more effective formative feedback. Our results point to a student preference for audio feedback tied directly to the text of written assignments. Additional investigations into the effectiveness of using voice comments to personalize formative feedback should yield similar results, further supporting our assertion and the continued goal of promoting social presence and narrowing transactional gaps in online learning.


References

  1. Aragon, S.R., Johnson, S.D., & Shaik, N. (2002). The influence of learning style preferences on student success in online versus face-to-face environments. The American Journal of Distance Education, 16(4), 227-243. 

  2. Bastian, H. (2017). Student affective responses to ‘bringing the funk’ in the first-year writing classroom. College Composition and Communication, 69(1), 6-34. Retrieved from 

  3. Beldarrain, Y. (2006). Distance education trends: Integrating new technologies to foster student interaction and collaboration. Distance Education, 27(2), 139-153. 

  4. Boling, E.C., Hough, M., Krinsky, H., Saleem, H., & Stevens, M. (2012). Cutting the distance in distance education: Perspectives on what promotes positive, online learning experiences. The Internet and Higher Education, 15(2), 118-126. doi: 

  5. Campbell, C.M., & Cabrera, A.F. (2014). Making the mark: Are grades and deep learning related? Research in Higher Education, 55(5), 494-507. 

  6. Casey, D.M. (2008). A journey to legitimacy: The historical development of distance education through technology. TechTrends: Linking Research and Practice to Improve Learning, 52(2), 45-51. 

  7. Chickering, A.W., & Ehrmann, S.C. (n.d.). Implementing the seven principles: Technology as lever. Boston University School of Public Health Office of Teaching, Learning and Technology Teaching Resources. Retrieved from 

  8. Crews, T.B., Bordonada, T.M., & Wilkinson, K. (2017). Student feedback on quality matters standards for online course design. Educause Review, 5. Retrieved from 

  9. Delante, N.L. (2017). Perceived impact of online written feedback on students’ writing and learning: A reflection. Reflective Practice, 18(6), 772-804. 

  10. Desai, M.S., Hart, J., & Richards, T.C. (2008). E-learning: Paradigm shift in education. Education, 129(2), 327-334. 

  11. Driscoll, A., Jicha, K., Hunt, A.N., Tichavsky, L., & Thompson, G. (2012). Can online courses deliver in-class results? Teaching Sociology, 40(4), 312-331. 

  12. Eom, S.B., & Ashill, N. (2016). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An update. Decision Sciences Journal of Innovative Education, 14(2), 85-116. 

  13. Getzlaf, B., Perry, B., Toffner, G., Lamarche, K., & Edwards, M. (2009). Effective instructor feedback: Perceptions of online graduate students. Journal of Educators Online, 6(2). Retrieved from 

  14. Guion, L.A. (2002). Triangulation: Establishing the validity of qualitative studies. University of Florida Extension: Institute of Food and Agricultural Sciences. Retrieved from 

  15. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. 

  16. Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: Scarily personal. Australasian Journal of Educational Technology, 31(1), 51-66. 

  17. Ice, P., Curtis, R., Phillips, P., & Wells, J. (2007). Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. Journal of Asynchronous Learning Networks, 11(2), 3-25. 

  18. Ice, P., Swan, K., Diaz, S., Kupczynski, L. & Dagen, A. (2010). An analysis of students’ perceptions of the value and efficacy of instructors’ auditory and text-based feedback modalities across multiple conceptual levels. Journal of Educational Computing Research, 43(1), 113-134. 

  19. Johanson, R. (1999). Rethinking the red ink: Audio-feedback in the ESL writing classroom. Texas Papers in Foreign Language Education, 4(1), 31-38. Retrieved from 

  20. Khalil, S.M. (2013). From resistance to acceptance and use of technology in academia. Open Praxis, 5(2), 151-163. 

  21. King, A. (1993). From sage on the stage to guide on the side. College Teaching, 41(1), 30-35. 

  22. King, D., McGugan, S., & Bunyan, N. (2008). Does it make a difference? Replacing text with audio feedback. Practice and Evidence of Scholarship of Teaching and Learning in Higher Education, 3(2), 145-163. Retrieved from 

  23. Kirschner, P.A. (1991). Audiotape feedback for essays in distance education. Innovation in Higher Education, 15(2), 185-95. 

  24. Klammer, E. (1973). Cassettes in the classroom. College English, 35(2), 179-189. 

  25. Knowles, M.S. (1980). The modern practice of adult education. Association Press. 

  26. Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28-33. 

  27. Kolowich, S. (2015, January 26). Could video feedback replace the red pen? The Chronicle of Higher Education, 26. Retrieved from 

  28. Kuh, G.D., Kinzie, J., Buckley, J.A., Bridges, B.K., & Hayek, J.C. (2006). What matters to student success: A review of the literature. National Post-Secondary Education Cooperative (NPEC), 2. Retrieved from 

  29. Lunt, T., & Curran, J. (2010). Are you listening please? The advantages of electronic audio feedback compared to written feedback. Assessment & Evaluation in Higher Education, 35(7), 759-69. 

  30. McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, 25(2), 153-169. Retrieved from 

  31. Merry, S., & Orsmond, P. (2008). Students’ attitudes to and usage of academic feedback provided via audio files. Bioscience Education, 11(3). 

  32. Miller, M.D. (2014). Minds online: Teaching effectively with technology. Harvard University Press. 

  33. Moore, M.G. (1997). Theory of transactional distance. In D. Keegan (Ed.), Theoretical principles of distance education (pp. 23-38). Routledge. 

  34. Olesova, L.A., Weasenforth, D., Richardson, J.C., & Meloni, C. (2011). Using instructional audio feedback in online environments: A mixed methods study. MERLOT Journal of Online Teaching and Learning, 7(1), 30-42. Retrieved from 

  35. Pearce, G.C., & Ackley, J.R. (1995). Audiotaped feedback in business writing: An exploratory study. Business Communication Quarterly, 58(3), 31-34. 

  36. Puentedura, R.P. (2014). Building transformation: An introduction to the SAMR model. Retrieved from 

  37. Pulfrey, C., Buchs, C., & Butera, F. (2011). Why grades engender performance-avoidance goals: The mediating role of autonomous motivation. Journal of Educational Psychology, 103(3), 683-700. 

  38. Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. Wiley. 

  39. Shute, V.J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189. 

  40. Skylar, A.A. (2009). A comparison of asynchronous online text-based lectures and asynchronous interactive web conferencing lectures. Issues in Teacher Education, 18(2), 69-84. 

  41. Sloan, D., Stratford, J., & Gregor, P. (2006). Using multimedia to enhance the accessibility of the learning environment for disabled students: Reflections from the skills for access project. Research in Learning Technology, 14(1). 

  42. Straub, C., & Vasquez, E., III (2015). Effects of synchronous online writing instruction for students with learning disabilities. Journal of Special Education Technology, 30(4), 213-222. 

  43. Tait, A. (2014). From place to virtual space: Reconfiguring student support for distance and e-learning in the digital age. Open Praxis, 6(1), 5-16. 

  44. Tcherepashenets, N. (2015). Globalizing on-line: Telecollaboration, internationalization, and social justice (4th ed.). Peter Lang. 

  45. Trust, T. (2018). Assessment-centered Tools. Online tools for teaching and learning (Blog). Retrieved from 

  46. Walther, J.B., Anderson, J.F., & Park, D.W. (1994). Interpersonal effects in computer-mediated interaction: A meta-analysis of social and antisocial communication. Communication Research, 21(4), 460. 

  47. What is Social Presence? (n.d.). IGI Global (Idea Group Inc.). Retrieved from 

  48. Wolsey, T.D. (2008). Efficacy of instructor feedback on written work in an online program. International Journal on E-Learning, 7(2), 311-329. 

  49. Young, S. (2006). Student views of effective online teaching in higher education. American Journal of Distance Education, 20(2), 65-77. 

Appendix 1. Ease of Access and Student Preferences - Quantitative Responses

Table 1

Question #3 Kaizena Responses

Kaizena ease of access          Number of students    Prefer voice comments    Prefer written comments
Easy                            22                    16                       6
Mostly easy                     10                    6                        4
Neither difficult nor easy      4                     3                        1
Difficult                       0                     0                        0
Not able to access (iPad)       1                     0                        1
Total                           37                    25                       12

Note: Out of 37 students receiving Kaizena voice comments, 12 preferred written comments and 25 preferred voice comments. One student could not access the voice comments using an iPad. Four found access neither difficult nor easy, ten found access mostly easy, and the largest group (n=22) found access easy.

Table 2

Question #3 Adobe Acrobat Responses

Adobe Acrobat ease of access    Number of students    Prefer voice comments    Prefer written comments
Easy                            2                     2                        0
Mostly easy                     1                     0                        1
Neither difficult nor easy      0                     0                        0
Difficult                       1                     0                        1
Not able to access              2                     0                        2
Total                           6                     2                        4

Note: Out of 6 students receiving Adobe Acrobat comments, 4 preferred written comments and 2 preferred voice comments.

Appendix 2. Ease of Access and Student Preferences - Qualitative Responses

Note: Responses to survey questions 7 and 8 are reported together for each student, identified by a number for data purposes. #15 was the only student using an iPad and #21 and #22 were the respondents who used Smartphones.

Responses to questions:

7. Additional comments about receiving Kaizena voice feedback on your writing.

8. Could you explain why you prefer one or the other?

#7 less intrusive

#15 written feedback was much easier, because the voice feedback is difficult to download in the iPad

#17 If editing could be done on the same page with the comments, it will be better than the written comments. I prefer written comments because it is easy to edit my work as I read through. It is hard to edit my work while listening to voice comments as I have to alternate the browser.

#21 (Student identified with English as a Second Language) I enjoyed it. It is quick in my point of view on the professor and the student. A new approach on hearing both in voice but, not taking up time of both being on the phone. When one or the other is occupied they can both be responsive on their time. The voice message is understood better by me. English is my second language and the wording could be slightly difficult to understand.

#25 Some of the voice comments were difficult to understand

#26 I would prefer to have the written comments on my document because then I could save the document to look at afterwards. The comments on Kaizena also don’t attach the voice comments on the document’s intended location. For example, I have noticed I would receive a comment on how I should reword a certain sentence, but I don’t know where to look since I only have her voice to show me. The feedback is helpful but I would prefer the comments to be written. If there was anyway to save the feedback to my computer with the locations being discussed, then I wouldn’t mind using Kaizena

#28 Easier to read then listen to because speakers can become muffled.

#29 They were helpful but, there were some issues. (Glitches) So I can see and process the comments better.

#30 Written comments were Easy for me to understand.
