Too often, technology enthusiasts like me are eager to try the latest feature our favorite website or app offers. Whether it is interactive multiplayer review games on Quizlet & Kahoot or “advanced differentiated” digital reading programs, we’re always looking for ways to engage our students and improve their learning. Technology can play a powerful role in the classroom. However, we have to be more deliberate about how, when, and why we use it.
I believe form should fit function.
Once we have decided what the purpose of a particular learning activity is, we can decide how to structure it to help students achieve that outcome. Furthermore, the environment you’re teaching in and your students’ backgrounds will shape the design of the learning activity. For example, if your favorite LMS or discussion board gave students the option to rate each other’s posts or to earn points for the quantity of their posts, here are some aspects to consider:
- If the goal of an activity is to produce the largest number and the broadest variety of views on a specific topic, it might be helpful to publicize the number of times each student has posted. What gets measured, gets done.
- If your purpose is not to generate a broad range of views but to provoke the deepest level of thought, then it would be helpful to churn people’s ideas until the clearest, most thoughtful insights emerge. In this case, a peer- or instructor-led rating system would be helpful.
- If the students did not choose to be in the class (see the factors of motivation in Dan Pink’s book Drive) and therefore lack intrinsic motivation (and the teacher has done little to draw out their innate curiosity), then extrinsic motivators like public praise (or punishment), as embodied in a public leaderboard for discussion posts, would be appropriate. However, we have to be aware that while such a system might prompt more students to post, or even to write longer posts (if one earns “points” that way), the quality of the writing will continue to lag. Further, students may become discouraged that, despite their best efforts, they are still at the bottom of the pack. When true learning is the goal, extrinsic motivators generally lead to short-term gains or unintended consequences.
A rating system could introduce the best views or simply ones that many people agreed with.
This would lead us into the trap that Eli Pariser (TED Talk: “Beware of Online Filter Bubbles”) and Cass Sunstein (Boston Review: “The Daily We”) warn us about. Although a human, peer-generated ranking system could be better than the algorithmic ones used by Google or Facebook (though their human curators seem to have landed them in hot water recently), we could still remain in what Pariser calls the “filter bubble” if there aren’t robust criteria for how and why certain content is favored over other content.
Similarly, Sunstein worries that digital spaces have led to a narrowing of political views rather than the broader open frontier we generally hold up as the promise of the Internet. I see this too often in Twitter chats (and even in chats in grad classes). Since most participants hold similar views (or they wouldn’t have joined the chat or taken the course) or worry about social ostracism if they present a contrary opinion, most online chats become echo chambers. This leads participants simply to hold stronger “versions of the same view with which they began,” as Sunstein points out. Since self-selected online spaces can limit social influences and argument pools, “there is legitimate reason for concern.”
Therefore, we must consider:
- What should be the balance between personalization and serendipitous experiences?
- To what extent should we use algorithms to differentiate materials for students based on interest and ability versus deliberately introducing students to topics they might not be interested in or even explicitly disagree with?
Technology holds potential but requires careful examination
In an online discussion there are opportunities for more dissenting views. Many people post contrary (and often vitriolic) views in the comments sections of news articles, so I would say there is still a significant range of opinions online. However, the comments on an article are often skewed toward the political leanings of the publication (e.g., The New York Times vs. The Wall Street Journal) because readers often choose to read (and comment on) newspapers whose coverage matches their own biases. Further, online rating algorithms could make it less likely that you see views you disagree with, especially if the criteria by which posts are ranked are not made clear.
An algorithm based on other users’ feedback would be helpful for sifting through a forum (or product reviews) numbering in the hundreds or thousands of posts. Perhaps this will be the case in some MOOCs. However, courses could also be designed around smaller cohorts of interacting students, which reduces the number of points of view presented but makes an algorithm less necessary and helps students escape the filter bubble.
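To make concrete how much the ranking criteria matter, here is a minimal sketch (the post names and vote counts are hypothetical, and this is not any real LMS or forum API) comparing two ways of ordering the same posts: raw net votes, which surface whatever is most popular, versus the lower bound of the Wilson score interval, one well-known alternative that surfaces posts we can be statistically confident are well regarded even when they have few votes:

```python
import math

def raw_score(up: int, down: int) -> int:
    """Naive ranking: net votes. Tends to favor popular or agreeable posts."""
    return up - down

def wilson_lower_bound(up: int, down: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval at ~95% confidence.

    Ranks by how confident we are that a post is well regarded,
    rather than by raw vote volume, so a small post with a high
    approval rate can outrank a heavily contested popular one.
    """
    n = up + down
    if n == 0:
        return 0.0
    p = up / n
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# Hypothetical forum posts: (upvotes, downvotes)
posts = {
    "popular-but-contested": (200, 100),  # lots of votes, 2/3 approval
    "small-but-solid": (19, 1),           # few votes, 95% approval
}

top_by_raw = max(posts, key=lambda k: raw_score(*posts[k]))
top_by_wilson = max(posts, key=lambda k: wilson_lower_bound(*posts[k]))
# The two criteria promote different posts to the top of the forum.
```

The point is not the particular formula but that the choice of criterion decides which voices students see first, which is exactly why those criteria should be explicit.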
The more advanced technology becomes, the more aware we have to be of how it is shaping our views and, ironically, the more effort we must make to overcome the blind spots technology creates.
It’s interesting how we are becoming ever more dependent on technology to organize the explosion of information it helped create. Perhaps that’s the trade-off we have to live with: if we want access to the broadest range of views, more than we might be able to sort ourselves, we have to rely on an algorithm. Otherwise, we have to limit our access to information to a quantity we can sift ourselves, which potentially limits the diversity of views and could reinforce our preconceived notions unless we deliberately seek out points of view that differ from our own.
[NOTE: This post was inspired by a discussion in Prof. Burbules’ course on Education and Technological Reform. The coursework is part of the New Learning program at the University of Illinois at Urbana-Champaign]