More than 80% of the students participating in a 2016 Stanford History Education Group study failed to question the legitimacy of a photograph of deformed daisies accompanied by the caption “Fukushima Nuclear Flowers.” The photo was posted on a photo sharing website and was framed to imply the flowers had mutations caused by the Fukushima Daiichi nuclear disaster, but in reality the deformity shown, fasciation, is a common mutation in daisies.
In a 2020 study released by the Stanford History Education Group, over two-thirds of undergraduate students failed to identify a “news” story as satirical, and 95% never located the PR firm behind a supposedly “nonpartisan” website.
The traditional information literacy skills that students learn in college within an academic context may not carry over to their personal and civic lives because the information ecosystem is changing so rapidly. To be information literate outside of college, students need to learn how information works in the age of algorithms (Head et al., 2020).
Algorithms are “lines of coding you don’t see that are intentionally used by many online platforms to personalize content to match users likes and dislikes” (Head et al., 2020, p. 43).
Algorithms limit the information choices we see, and they often rely on incomplete or inaccurate data that perpetuate existing bias and societal prejudices to paint an unbalanced picture (Noble, 2018; O’Neil, 2016; Eubanks, 2018).
For example, search engine results often reinforce prejudiced attitudes (Noble, 2018), and some algorithms used in the areas of employment, loans, advertising, and criminal justice are based on past biases coded in historical data (O’Neil, 2016; Eubanks, 2018).
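The personalization mechanism described above can be sketched with a toy example (the catalog, ranking function, and click behavior here are all hypothetical simplifications; real platforms use far more complex signals). An engagement-based ranker that feeds each user's past clicks back into what it shows next quickly narrows the range of topics that user ever sees:

```python
from collections import Counter

# Hypothetical toy model of engagement-based personalization.
# The platform shows a feed, records which topic the user clicks,
# and ranks future items by how often that topic was clicked before.

catalog = ["politics", "sports", "science", "politics", "music",
           "politics", "sports", "science", "music", "politics"]

def rank_by_history(items, click_history):
    """Rank items so topics the user clicked before come first."""
    counts = Counter(click_history)
    return sorted(items, key=lambda topic: counts[topic], reverse=True)

# Simulate a user who always clicks the top item in the feed.
history = []
for _ in range(5):
    feed = rank_by_history(catalog, history)
    history.append(feed[0])  # the click is fed back into the ranker

print(Counter(history))  # the feed collapses onto a single topic
```

After the first click, the ranker keeps promoting the same topic, so the user never sees sports, science, or music at the top of the feed again. This is the feedback loop behind the "unbalanced picture" the passage describes: the limitation comes not from the data's accuracy but from the algorithm optimizing for what the user already engaged with.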