In this report West contends that
current school evaluations suffer from several limitations. Many of the typical pedagogies provide little immediate feedback to students, require teachers to spend hours grading routine assignments, aren’t very proactive about showing students how to improve comprehension, and fail to take advantage of digital resources that can improve the learning process. (p. 1)
He further argues that “data-driven approaches make it possible to study learning in real-time and offer systematic feedback to students and teachers.” (p. 1)
I found it intriguing (and something like heartening) to see West cite a survey of instructors who use WebQuest (“an online activity that teachers employ to send students to the web to find information or solve particular problems”) reporting that “most instructors believed students were engaged with these types of assignments because they enjoyed their collaborative and interactive nature. As opposed to looking for general Internet information on their own, students had to talk with one another to fulfill the assignment” (p. 4). How does such a learning environment work with (or, say, sync with) the data-driven approach that West is advocating? How do you capture or measure the complex learning scenario that occurs when students interact and problem-solve in this way?
Hofstein and Rosenfeld argue that “informal science experiences—in school-based field trips, student projects, community-based science youth programs, casual visits to informal learning settings, and the press and electronic media—can be effectively used to advance science learning” (p. 106). The authors describe their definition of “informal learning” as a “hybrid” one that “highlights an important distinction between learning contexts and learning methods” (p. 106). Their ultimate argument for “blending” formal and informal learning experiences in school science is “that in addition to enriching the repertoire of learning opportunities, such blending can help meet the challenge of ‘science for all,’ i.e., providing science education tailored to diverse and heterogeneous populations of future citizens” (pp. 106-107).
Hofstein and Rosenfeld’s section on “Press and Electronic Media” (pp. 104-106) has some interesting points, but I can’t help but wonder, of course, how any and all of these considerations should be looked at today—and not just because of the growth and development of Internet technologies. The proliferation of television channels devoted wholly or largely to science (or, rather, to scientific issues, topics, themes, etc.) must have great implications for this section.
In this chapter, Gee aims to “further develop the argument that computer and video games have a great deal to teach us about how to facilitate learning, even in domains outside games” (p. 45). “[If] only to sell well,” Gee continues, “good games have to incorporate good learning principles in virtue of which they get themselves well learned. Game designers build on each other’s successes and, in a sort of Darwinian process, good games come to reflect better and better learning principles” (p. 45). Gee uses the real-time strategy (RTS) game Rise of Nations (RoN) as his illustrative example, noting that “[i]n a good game like RoN there is never a real distinction between learning and playing” (p. 61).
Gee’s analysis of “sandbox tutorials,” where the player “is protected from quick defeat and is free to explore, try things, take risks, and make new discoveries,” even though, for example, the player may look to be in great danger (p. 56), had me wondering what narrative-scripting challenges this might create for trying to advance a game’s story.
On the whole I found Gee’s analysis extremely smart and compelling, though the self-hating gamer in me couldn’t help but wonder if he is reaching a bit at times. I’ll be interested to see if he—or anyone else—has developed this argument further as games have grown more advanced (and in some instances, more subtle or hidden in their complexity) over the past half-dozen years. I will say, though, that his statement, “For humans, real learning is always associated with pleasure, is ultimately a form of play—a principle almost always dismissed by schools” (p. 61), is a somewhat different (and very compelling and inspiring) take on the fun vs. educational question I’ve had to consider throughout my classes so far.
In this chapter, Smith and Ragan discuss the evaluation of instructional materials, which usually happens at two bookending points in the instructional development process. The first kind, “formative evaluation,” “evaluates the materials to determine the weakness in the instruction so that revisions can be made to make them more effective and efficient” (p. 327). From this evaluation the designer can then determine “whether the instructional materials are ‘there’ yet, or whether [the designer needs] to continue the design process” (p. 327). The second kind, “summative evaluation,” is conducted “after the materials have been implemented into the instructional contexts for which they were designed,” at which point “designers may be involved in the process of evaluating the materials in terms of their effectiveness in order to provide data for decision makers who may adopt or continue to use the materials” (p. 327).
I haven’t yet seen too much behind the scenes (or too much of how the sausage is made, perhaps) of primary and secondary education. All the factors involved in design, implementation, and evaluation seem extraordinarily complex. And, as Smith and Ragan note throughout their discussion in this chapter, budgetary concerns are an omnipresent force in all these considerations. I cannot imagine being an educator having to conduct all of these activities amidst budget crises.
In this chapter Smith and Ragan look at three kinds of “assessment[s] of student learning, or in common language, ‘testing’”—“entry skills (to see if learners are ready for the instruction), preassessments (to see what learners already know of the material to be taught), and postassessments (to see what learners learned from instruction)” (p. 123). Following this exploration, they examine “the characteristics of assessment instruments: validity, reliability, and practicality,” finding that “trade-offs must frequently be made among these qualities in designing assessments” (p. 123).
I found particularly interesting Smith and Ragan’s sketch of the difficulties of using the essay as a format for assessment (p. 114). Their concern about objectivity makes sense, of course, but seems at once to overthink and under-think the problem. Having only taught college-level composition, I’d be interested to learn how essays in primary and secondary schools are used and evaluated these days (both before and after the Common Core launch). In the composition courses I was trained to teach, the basis for evaluating an essay was argument development, connections between the texts considered, and overall critical thinking.
“The process of task analysis,” Smith and Ragan note, “transforms goal statements into a form that can be used to guide subsequent design” (p. 76). This form “describe[s] what the learners should know or be able to do at the completion of instruction and the prerequisite skills and knowledge that learners will need in order to achieve those goals” (p. 76). The primary steps in conducting such an analysis are:
1. Write a learning goal.
2. Determine the types of learning of the goal.
3. Conduct an information-processing analysis of that goal.
4. Conduct a prerequisite analysis and determine the type of learning of the prerequisites.
5. Write learning objectives for the learning goal and each of the prerequisites.
6. Write test specifications. (p. 76)
I also found useful the authors’ inclusion of Robert Gagné’s taxonomy of learning outcomes, comprising five “domains”: “verbal information (or declarative knowledge), intellectual skills; cognitive strategies, attitudes, and psychomotor skills” (p. 79). Again, I don’t come from an extensive education background, but I do have some training on this front, and I’m surprised (and a little annoyed) that I haven’t come across these notions (i.e., this taxonomy) before.
Corry, Frick, and Hansen make a case for rapid prototyping and usability testing in the process of web design. They do this by way of the “Illustrative Case Study” of the article’s subtitle, which describes how a large midwestern university went about redesigning its website. The team tasked with analyzing the existing site was (interestingly enough) “[a]n interdisciplinary team of faculty, graduate students, and staff” (p. 65). (I wonder, a decade-and-a-half of web development later, whether any such institutional body would still staff a project this way.) Their process was an iterative one that moved from paper prototyping to testing elements of the site itself online.
One of the more helpful aspects of this article was the four-point definition of usability that the authors (citing Dumas and Redish, 1993) put forward: “(a) usability means focusing on users; (b) people use products to be productive; (c) users are busy people trying to accomplish tasks; and (d) users decide when a product is easy to use” (p. 66).
Smith and Ragan consider the ways in which “management concerns intersect with instructional design” and examine “two primary aspects of management…1) management of instructional design projects, and 2) management concerns related to the instructional process itself, as an instructional strategy element” (p. 313). Across these two sections there emerges a certain type of person (or a certain set of characteristics) who (or that) seems valuable in both realms (management and instructional design)—the project manager, who must possess a “synthesis of a diverse set of skills” (p. 313). (This profile also struck me, having spent some time working in television production, as similar to the characteristics I’d seen in the more successful producers. I often had—and still do have—some difficulty describing exactly what it is a producer does. “They do all sorts of stuff” or “They get stuff done” were/are my most common descriptions.) In action, this project manager, Smith and Ragan note, “is concerned with groups of variables represented by four basic constraints: performance, cost, time, and scope… [and the f]ive essential components of the project manager’s role are managing project: integration, scope, time, cost, and human resources” (p. 313).
The authors also cast a critical eye on the great number of “software packages” designed to “assist…project managers in the field” (p. 324), warning that a “danger for the novice project manager in grabbing software tools too soon is to become dependent on the tool rather than refining skills and capacity for advanced management thinking” (p. 325). I’ve seen this happen in the nonprofit world numerous times. It can lead to—or add up to—extraordinary inefficiencies.
Gee notes that his two main points in this chapter are 1) “that good video games… represent a technology that illuminates how the human mind works,” and 2) “that good video games incorporate good learning principles and have a great deal to teach us about learning in and out of schools, whether or not a video game is part of this learning” (p. 22). A couple of the ways that games accomplish these two things are: “a) they distribute intelligence via the creation of smart tools, and b) they allow for the creation of ‘cross functional affiliation,’ a particularly important form of collaboration in the modern world” (p. 26). This point, in particular, stuck out for me, in part because of Gee’s further explanation: “This form of affiliation—what I will call cross-functional affiliation—has been argued to be crucial for the workplace teams in modern ‘new capitalist’ workplaces, as well as in modern forms of social activism” (p. 28). This formulation made me think of Hardt and Negri’s political conceptualization of Empire and Multitude, and made me want to see if anyone has written anything about such a potential connection.
I found Dickey’s article to be one of the most exciting we’ve read all term. I now realize that one of my unconscious hopes in entering the DMDL program was that someone in the field was thinking about (and acting on) many of the issues that Dickey raises. Dickey’s thesis is that the strategies and tactics that computer and video game designers use to thoroughly engage players in gameplay can and should be examined for potential uses in education—and instructional design—today. The path that Dickey takes through this landscape of possibilities (vast in 2005; even more vast today) is “an overview of the trajectory of player positioning or point of view, the role of narrative, and methods of interactive design” (p. 67). One point that stuck with me more than others along this path is the notion of utilizing narrative devices such as backstory and cut scenes in designing for engaged learning. Considering where in the design field I might take myself in the future, I wonder if the following statement by Dickey is still true: “little has been written about the pragmatic application of narrative in instructional materials, and how to create compelling narratives to support multiple learning activities in complex, multifaceted environments, and to sustain interest over time” (p. 74). This point—as well as many others throughout the article, really—also made me think of (and made me want to look more into) the Quest to Learn charter school in Manhattan.