EDU800 – Week Nine

This week I critically reviewed a paper entitled “Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness” by S. Erhel & E. Jamet (2013). My analysis follows.

Problem

  1. The article does a poor job of stating a specific problem to be explored; the reader has to dig for it. From the title alone, the reader would expect to see facts about digital game-based learning (DGBL), instructions, and feedback in reference to motivation and learning. After reading the paper twice, I determined the main thesis to be at the end of section 1.1. It states: “For this author, one of the medium’s key characteristics is the ‘coming together’ of serious learning and interactive entertainment. In other words, digital learning games can be regarded as an entertainment medium designed to bring about cognitive changes in its players” (Erhel & Jamet, 2013). Yet the main thesis does not cover what is mentioned in the title; namely, deep learning, what qualifies as instruction, and what is involved in feedback. I found other sentence fragments within the text that can add up to a proper thesis; in question seven I rewrite them as a paragraph. Meanwhile, there are many issues with the thesis as a problem of study, which I will discuss in the course of this review.
  2. No, the authors do not present enough of a need for this study in their given research. When Erhel & Jamet briefly mention the need for “serious learning and interactive entertainment” (2013) as well as “cognitive changes in its players” (p. 156), they are on the verge of presenting good information, yet they do little with these statements over the course of their study and article. I would like to see more about instructions, learning qualifications, and motivation at the beginning. Instruction isn’t mentioned until the second page, and it greatly informs the thesis. More should be written here about learning qualifications, for example what deep learning means and how to quantify it. From my own learning I know that motivation is twofold: intrinsic and extrinsic (Deci & Ryan, 2000). The authors cite Deci & Ryan but only to discuss intrinsic motivation and flow. Extrinsic motivation is left out, and flow is irrelevant; yet both are cited as examples tied to interest/amusement or challenge. The ideas do not go together in a logical manner, which is another issue with the paper.
  3. The problem as stated by Erhel & Jamet, “Digital learning games can be regarded as an entertainment medium designed to bring about cognitive changes in its players” (2013), would imply a Likert test or eLLS scale at the very least: tools used to measure attitudes, usually ranked from Less Likely to Very Likely or similar on a five- or seven-point scale (a minimal sketch of scoring such a scale appears after this list). Such instruments can also capture more than rankings, for example write-in answers from test subjects, which would benefit this research as well. Instead, the authors rely on an extensive literature review. This would be fine if the information corroborated what they state at the outset regarding instructions, feedback, motivation, and learning effectiveness. There is an attempt at clarifying information from the title to the abstract and into the introduction when the authors say that, in DGBL, learning instruction produces deeper learning than entertainment instruction but is negative for motivation, and that entertainment instruction, given feedback, leads to deeper learning. In my mind I run the two, learning instruction and entertainment instruction, in parallel, with deep learning in a circle beneath both; the circle has an asterisk for feedback beneath entertainment instruction. Still, there is no mention of cognitive changes, only of deeper learning. This confusion is another problem with the paper.
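
To make the measurement point concrete, here is a minimal sketch of scoring a hypothetical seven-point Likert-type questionnaire in Python. The item ids, ratings, and reverse-coded item are my own illustration, not anything drawn from Erhel & Jamet.

```python
SCALE_MAX = 7  # 7-point scale: 1 = "Very Unlikely" ... 7 = "Very Likely"

def score_likert(responses, reverse_items=()):
    """Average a participant's Likert responses into a single score.

    responses: dict mapping item id -> integer rating in [1, SCALE_MAX]
    reverse_items: item ids whose wording is negated and must be flipped
    """
    scored = []
    for item, rating in responses.items():
        if not 1 <= rating <= SCALE_MAX:
            raise ValueError(f"item {item}: rating {rating} out of range")
        # Reverse-coded items: 1 becomes 7, 7 becomes 1, and so on.
        scored.append(SCALE_MAX + 1 - rating if item in reverse_items else rating)
    return sum(scored) / len(scored)

# Example: one participant's motivation items, with item "q3" reverse-coded.
print(score_likert({"q1": 6, "q2": 5, "q3": 2}, reverse_items={"q3"}))  # ~5.67
```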

Theoretical Perspective and Literature Review 

  4. A conceptual framework should, in this order: clearly state a research question; identify key concepts, (in)dependent variables, and anything that will influence the study; fit a framework to the study that shows an interconnected model of ideas; illustrate the relationships (diagram or narrative) showing how variables relate to one another; and be refined against existing knowledge. It should also clearly explain the rationale behind the relationships stated. This paper has no clear research question delineated; it is obscured in many locations within the article and decidedly unclear. The key concepts are likewise spread throughout the article and quite obscure. The authors discuss some key concepts early in the paper (section 1.1 contains the makings of a proper thesis), some in the middle (section 1.2 discusses motivation, engagement, and goal-setting; section 1.3 briefly discusses motivation and engagement again, but also learning and game environments; and section 1.4 discusses the value-added approach to DGBL, instructions, and types of learning), and some even in the Experiment sections (namely instruction, motivation, learning, and KCR feedback). It is very confusing for the reader.
  5. While the authors attempt to tie the study to a great deal of prior research, not all of it is relevant to the problem they’re investigating. As stated in question two, for example, Deci & Ryan (2000) treat motivation as intrinsic but also extrinsic, and their work has nothing to do with flow. The research cited falls under four major foci: learning itself, gaming at all levels, feedback in a synopsis, or other (and “other” ranges from instructional games to children and gaming in the classroom). I have learned that the studies in a literature review must either all relate to one another or be organized into categories. By this I mean the authors should choose literature that ties to all facets they’re investigating; here that would be instructions, feedback, motivation, and learning in the context of DGBL. Alternately, the authors could have organized the review into categories (instructions, feedback, motivation, and learning) but related them all back to DGBL. I give the authors credit for attempting this in section 1, subsections 1.1 Digital game-based learning, 1.2 Motivational benefits of DGBL, 1.3 Benefits of digital learning games compared with conventional media, and 1.4 Using instructions to improve the learning effectiveness of digital games. However, given their research (see the Reference section), I would have titled the subsections 1.1 Learning effects of DGBL, 1.2 Instructional impact on DGBL, 1.3 Impact of feedback on DGBL, and 1.4 Motivational benefits of DGBL. From there I could better cover what is in the literature review.
  6. Yes, the literature review does conclude with a summary but, because the review is ongoing throughout the paper and left rather open-ended, the summary was something to search for. It appears after Experiment 2, just before section 4, General discussion. It says: “The usefulness of the present study therefore lies in its demonstration that knowledge of correct response (KCR) feedback, coupled with an entertainment instruction, can promote deep cognitive processing during learning, all the while enhancing learners’ motivational investment” (Erhel & Jamet, 2013, p. 164). This single sentence hits on every aspect of the authors’ study and makes their intention clear.
  7. The research questions and hypotheses are hidden and somewhat tacked on within the context of the paper, but they are there. I would add the following paragraph at the end of section 1.1: “One of the medium’s key characteristics is the ‘coming together’ of serious learning and interactive entertainment. In other words, digital learning games can be regarded as an entertainment medium designed to bring about cognitive changes in its players. Taking learners’ goals, mastery and performance, into account can help us understand the reasons behind their engagement in DGBL. Depending on their nature, instructions can therefore play a key role during the cognitive processing of educational content. The usefulness of the present study therefore lies in its demonstration that KCR feedback, coupled with entertainment instruction, can promote deep cognitive processing during learning, all the while enhancing learners’ motivational investment. The participants who were given the entertainment instruction seem to have experienced less fear of failure than those who were given the learning instruction… they were less frightened of failure and, by so doing, led them to adopt more effective learning strategies. KCR feedback revealed how an entertainment instruction can improve the management of fear of failure.”

Research Design and Analysis

  8. The study for Experiment One was designed to test DGBL through a game called ASTRA. Forty-six participants were given differing sets of instructions to “learn” or “play” the game. The results were then tested to see whether the instructions had an effect on learner motivation and, if they did, whether participants’ intrinsic motivation scores would be higher and provide for deeper learning scores, pursuit of mastery goals, and demonstrated game performance. If they did not, entertainment instruction would prompt lower deep-learning scores and promote a fear of failing. In my opinion, these do not relate well to the research questions and hypotheses, which state, in part, “digital learning games can be regarded as an entertainment medium designed to bring about cognitive changes in its players” (p. 156). According to Drost, “The reliability coefficient is the correlation between two or more variables (tests, items, or raters) which measure the same thing” (p. 108). Time matters as well; the test-retest method is also used to measure reliability (Drost, p. 110). With Erhel & Jamet’s study, the split-half approach could also have been utilized but was not (a minimal sketch of it appears after this list). Instead, the authors took one test’s results and formulated a better test with all approaches staying the same.
  9. The study’s sampling methods are too narrow overall. First, the authors only used a homogeneously grouped demographic of college students aged 18-26. Next, the authors used only 46 participants from one college area, none of whom “knew too much” (p. 159). The groupings were not random; some preference was given to balancing the genders so they would stay even across groups (a sketch of stratified random assignment, which achieves both randomness and balance, appears after this list). According to the authors, there were three more problems with the study. First, the ASTRA game environment had limited interactivity, which means the “play” operators didn’t get much out of the game. Second, learners didn’t receive much feedback because their answers were usually correct, so feedback was almost a moot point. Third, in methodology, the authors assumed offline data would “measure effects of instruction choice” (p. 165) but found that further studies are needed to (dis)prove this assumption. Overall, in my opinion, the results garnered may differ given a larger, more heterogeneously grouped demographic of all-age students at other colleges or in the greater population.
  10. The study’s procedures and materials were underwhelming. The materials were a five-part presentation with “oral information giving [participants] instructions at the start of the simulation, commenting on the sequences displayed on the TV screen, probing learners’ reactions to certain symptoms, and testing the learners in quizzes” (p. 159). Other than the quizzes, instructions were manipulated to tell the groups whether to “play” or “learn” ASTRA. Participants were also given a motivation questionnaire with associated levels of mastery and a knowledge questionnaire with explicit questions not found in ASTRA (p. 160). The interview protocol did weed out those who knew too much about the topics to be studied (namely Alzheimer’s disease, Parkinson’s disease, myocardial infarction, and stroke), like pre-med students, but it went too far, also excluding regular students who happened to know the topics well. Data analysis procedures included Levene’s test and an ANOVA, but those tests showed equality of variances and no significance, though this was for the pre-test. When the Mann-Whitney test was applied, there was no problem with the inference scores; on those inferential scores the authors did find that the “play” ASTRA group scored higher than the “learn” ASTRA group (a sketch of this test sequence appears after this list). What this means should be explained in the numbers but remains unclear. My problem is with the math: nothing adds up as it should. I can follow the logic of the numbers and assume what is meant, but there is no guarantee I am right without raiding Erhel & Jamet’s notes.
  11. The measures used were of decent quality but only semi-valid. I know from reading Drost that, to be valid, “there are four types of validity that researchers should consider: statistical conclusion validity, internal validity, construct validity, and external validity” (p. 115). Regarding statistical conclusion validity, no relationship exists between learning and instruction; at least, the authors have not established one. For internal validity, again the authors have not proven a causal (rather than merely casual) relationship between learning and instruction. Construct validity is at play here, but not for both of Drost’s main validity types, translation validity and criterion-related validity (p. 116), nor for all the subtypes, namely face validity and content validity. Face validity is present in Erhel & Jamet’s study, as it is very subjective. Content validity is here as well, though in the negative: no judges were consulted to give guidance to Erhel & Jamet’s study. Their methods were not fully informed and suffer for it.
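
Regarding the split-half approach from question eight, here is a minimal sketch of how it could be computed. The quiz data are made up for illustration (Erhel & Jamet published no item-level scores); the method splits one test into halves, correlates them, and applies the standard Spearman-Brown correction.

```python
# Split-half reliability with the Spearman-Brown correction (question eight).
import numpy as np
from scipy.stats import pearsonr

# Rows = participants, columns = quiz items (1 = correct, 0 = incorrect).
# These values are invented for the sketch.
items = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
])

# Split the test into odd- and even-numbered items and total each half.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlate the two halves, then correct for the halved test length:
# Spearman-Brown: r_full = 2r / (1 + r).
r, _ = pearsonr(odd_half, even_half)
print(f"half-test correlation r = {r:.2f}")
print(f"Spearman-Brown reliability = {2 * r / (1 + r):.2f}")
```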
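
For the grouping problem in question nine, this is a minimal sketch (my own illustration, not the authors' procedure) of stratified random assignment: participants are shuffled within gender strata and dealt out to conditions, which keeps genders balanced without hand-picking.

```python
import random

def assign_stratified(participants, conditions=("learn", "play"), seed=0):
    """Randomly assign participants to conditions within gender strata."""
    rng = random.Random(seed)
    assignment = {}
    # Group participants by gender.
    strata = {}
    for name, gender in participants:
        strata.setdefault(gender, []).append(name)
    # Shuffle each stratum, then deal its members out round-robin so each
    # condition receives an (almost) equal share of every gender.
    for members in strata.values():
        rng.shuffle(members)
        for i, name in enumerate(members):
            assignment[name] = conditions[i % len(conditions)]
    return assignment

# Hypothetical roster: (name, gender) pairs.
roster = [("p1", "F"), ("p2", "F"), ("p3", "M"), ("p4", "M"),
          ("p5", "F"), ("p6", "M")]
print(assign_stratified(roster))
```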
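
And for the test sequence from question ten, a minimal sketch of the same pattern on hypothetical scores: Levene's test checks equality of variances, a one-way ANOVA compares group means, and the non-parametric Mann-Whitney U serves as the fallback when distributional assumptions fail. The group scores below are invented for illustration only.

```python
from scipy.stats import levene, f_oneway, mannwhitneyu

# Hypothetical inference-quiz scores for the two instruction groups.
learn_group = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]
play_group = [6, 7, 5, 8, 6, 7, 5, 6, 7, 6]

stat, p = levene(learn_group, play_group)
print(f"Levene:       p = {p:.3f}  (p > .05 -> variances treated as equal)")

stat, p = f_oneway(learn_group, play_group)
print(f"ANOVA:        p = {p:.3f}")

# Non-parametric alternative when scores are not normally distributed.
stat, p = mannwhitneyu(learn_group, play_group, alternative="two-sided")
print(f"Mann-Whitney: p = {p:.3f}")
```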

Interpretation and Implications of Results 

  12. In the discussion, the authors do acknowledge methodological and conceptual limitations of the results; as noted in question nine, there were three. First, the ASTRA game environment had limited interactivity, meaning the “play” operators didn’t get much out of the game. Second, learners received little feedback because their answers were usually correct, so feedback was almost a moot point. Third, in methodology, the authors assumed offline data would “measure effects of instruction choice” (p. 165) but found that further studies are needed to (dis)prove this assumption.
  13. The authors’ conclusion is inconsistent with the reported results and not comprehensive. By this I refer to the sentence, “The usefulness of the present study therefore lies in its demonstration that KCR feedback, coupled with an entertainment instruction, can promote deep cognitive processing during learning, all the while enhancing learners’ motivational investment” (Erhel & Jamet, 2013, p. 164). The sentence itself is very good but does not convey what happened in the studies or what the authors wrote about in the literature review or experiments.
  14. The authors do relate the results to the literature review and the study’s theoretical basis, but without support; the connections are merely statements. For example, on page 164, Erhel & Jamet write, “The participants who were given the entertainment instruction seem to have experienced less fear of failure than those who were given the learning instruction… they were less frightened of failure and, by so doing, led them to adopt more effective learning strategies” (2013). Nowhere in the literature review is fear of failure mentioned; it comes out of thin air. Likewise, on page 165, Erhel & Jamet write, “KCR feedback revealed how an entertainment instruction can improve the management of fear of failure” (2013). This, too, seemingly comes from nowhere.
  15. The significance of this study lies in learning effectiveness, and its primary implications for theory are in the realm of DGBL specifically. Taking a closer look at learning effectiveness, we see pieces of what the authors are trying to write about and research. Regarding future research, the study itself points to needed information on learning effectiveness. Finally, for practice, this study would do well to expand on DGBL in the literature review and form a cohesive thesis for a new study.
