Americans are terrible at using technology.
Yes, that’s a sweeping generalization that needs unpacking, but it got some empirical evidence this week when the Program for the International Assessment of Adult Competencies (PIAAC)–which is coordinated by the Organization for Economic Cooperation and Development (OECD)–released its latest findings. Among the things the nationally representative study of 24 countries tries to measure is “problem solving in technology-rich environments,” by which it means (citing an OECD definition) “using digital technology, communication tools, and networks to acquire and evaluate information, communicate with others, and perform practical tasks.” When it comes to such skills, folks in the U.S. are dead last among the countries studied.
Setting aside issues of methodology and measurement for the moment (the example questions are well worth reading, though), a couple of things struck me as interesting about this effort.
First, it’s notable that the attempt to measure such skills now sits alongside measures of more traditional literacy skills and basic mathematical and computational abilities, two other areas the study measures. The study’s authors note:
…the Internet has increased instantaneous access to large amounts of information and has expanded instant voice, text, and graphics capabilities across the globe. In order to effectively operate in these environments, it is necessary to have:
- knowledge of how various technological environments are structured (e.g., an understanding of the basics of the environment, including how to use command names, drop-down menus, naming protocols for files and folders, and links in a web page); and
- the ability to interact effectively with digital information; understand electronic texts, images, graphics, and numerical data; and locate, evaluate, and critically judge the validity, accuracy, and appropriateness of the accessed information. These skills constitute the core aspects of the problem solving in technology-rich environments domain.
The ability to use technology matters, and efforts to understand how well people are learning these skills will only increase as we move forward.
Among the many questions this raises: Where are colleges and universities in this process? Are we making digital competencies an explicit part of our curriculum–both as specialized topics and as skills integrated into the curriculum broadly? Where and how are such life-long learning skills–and critical thinking about technology–being taught? Who is responsible for ensuring that faculty–let alone students–can problem-solve in technology-rich environments? We’re only beginning to grapple meaningfully with such questions.
Second, the basic rubric for measuring skills (bottom of this post) is fascinating in that higher level skills are understood to involve:
- multiple steps and operations
- multiple technology platforms and applications
- respondent-defined goals
- unexpected outcomes and impasses
In contrast, so much of the technology in higher education aims to offer simple, easy-to-use, fully integrated platforms with pre-determined results that avoid “unexpected outcomes and impasses.” In other words, much of the technology use in higher ed does little to prepare students for real-life problem solving with technology.
Certainly, courses need to be “managed” and some basic functionality needs to be standardized and made easy to use. But the real stuff of learning–not course management–involves grappling with new and changing information, problem solving, and developing skills usable in ever-changing circumstances and environments. That stuff is too often neglected.
We hear it all the time: technology is “hard,” “complicated,” and “confusing.” Yes, it is…as is anything worth learning. We need to do a better job of embracing those difficulties and learning how to manage them, rather than assuming this is the domain of specialized tech-geeks. Otherwise, we’re failing to educate ourselves for our time and failing to prepare students for the world that awaits them.