Reconceptualizing Learning II
Bill Long 11/15/06
The 21st Century "Problem"
Those of us brought up in the 20th century (which I assume is all of you) were taught to learn in libraries by doing research through card catalogues, readers' guides to periodical literature, dictionaries, encyclopedias, bibliographies and other aids to learning. We were taught to learn in the classroom through textbooks, casebooks, or occasional primary and secondary texts describing a phenomenon under consideration. Characteristic of all of these methods, however, was a sense that knowledge would only be yielded up according to the categories by which it was arranged in the storage system we were using. If we wanted to learn about Shakespeare, for example, we had to learn based on the way that Bradley had organized his work on Shakespearean tragedy. If we wanted to learn about a plant, a rule of physics, a chemical element, or an event from the past, we could look it up in an encyclopedia, and we were required to learn the phenomenon according to the way the author of the article wanted to explain it. Many times that explanation satisfied us, since we really didn't know what the important questions were regarding the thing we were looking up; but in all cases we were confined in our knowledge to the explanation and method provided by the author we were reading.
The purpose of this and the next essay is to argue that the new revolution in learning, which will be enabled by Internet developments still in their infancy and as yet unfocused, will be based on the proposition that the learner, rather than the provider of knowledge, is king. This simple reorientation in learning, dare I say a Baconian reorientation, requires a little space to explain and illustrate.
Beginning with a Newspaper Article
My thoughts were focused more particularly by a Nov. 12, 2006 NY Times article by John Markoff. In the article, entitled "Entrepreneurs See a Web Guided by Common Sense," Markoff shows how leading computer companies today, such as Google and IBM, are trying to devise Internet searches that actually "think" rather than simply return piles of documents matching a search criterion or two. As anyone who has used the Internet for search purposes knows, you decide on your search criteria, type them into Google, hit the return key, and come up with a pile of results. So far, in the dozen or so years that the Internet has been active, the major technical advancement has been to weight the quality of the results, so that we get the "most important" result for our search in the first place. For example, if I wanted to look up a professor, I would type in his or her name, and the first result is almost always the professor at his or her place of work--a link to a university-generated web page that introduces us to the person and gives us information on his or her work and how to get in touch. The first result is not someone's critical review of scholarship that only tangentially mentions the scholar we are seeking.
That is, the first decade of Internet life has focused on making searches more precise, with the quality of the results corresponding to the nature of the search. But the fundamental weakness or inadequacy of these searches is that they can only produce a "pile" of results. That is, they can only point you to whatever already exists in full-text form on the Net that deals with the search you are pursuing. Often this is sufficient for almost everyone's needs. Most of our learning, I would contend, is "need to know" learning--generated by quick questions or desires to gain some basic knowledge in a few minutes. Thus, the enhanced Google searches satisfy most people.
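The idea of weighting results so the "most important" one comes first can be pictured with a toy sketch. Everything here is invented for illustration (the URLs, the scores, and the use of inbound-link counts as a stand-in for importance); real search engines use far more elaborate signals.

```python
# Toy illustration: rank a pile of search results by a crude
# importance score, so the "best" result appears first.
# All URLs and link counts are hypothetical.

results = [
    {"url": "review-essay.example.edu", "inbound_links": 4},
    {"url": "faculty-page.example.edu", "inbound_links": 120},
    {"url": "conference-program.example.org", "inbound_links": 15},
]

def rank(results):
    """Order results by how many other pages link to them (a crude proxy for importance)."""
    return sorted(results, key=lambda r: r["inbound_links"], reverse=True)

for r in rank(results):
    print(r["url"])
```

Note that even a perfect ranking still only reorders the pile; it does not change what the pile contains, which is the limitation the essay goes on to describe.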
But they don't satisfy everyone, and not by a long shot. That is the burden of Markoff's article cited above. Its central proposition is as follows:
"Their (i.e., computer scientists) goal is to add a layer of meaning on top of the existing Web that would make it less of a catalog and more of a guide -- and even provide the foundation for systems that can reason in a human fashion. That level of artificial intelligence, with machines doing the thinking instead of simply following commands, has eluded researchers for more than half a century."
This is known as Web 3.0, an effort that is still in its infancy. One person has described it as going from a world of connected databases to a world of connected data. An example of how this might eventually be useful, Markoff says, is that a person could actually ask the computer a fairly sophisticated question and get a focused and precise answer. Let us say that a parent desires to take a vacation. Today, in order for a search to be useful, you first have to know where you are going. Then you can type in separate searches for hotels, cars, entertainment, etc., and gradually "build" a vacation. This method of searching the web is so superior to what we had a decade ago (books and phone calls) that almost everyone believes we have just about reached a Nirvanic state of searching.
But we haven't. The goal of Web 3.0 is to answer future searches of the following type: "I am looking for a warm place to vacation, I have a budget of $3,000, I want to be gone a week, and I have an 11-year-old child." In other words, the basic principle of searching will change: instead of our trying to conform our requests to the way information is already arranged, information will try to conform itself to our requests.
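The contrast can be made concrete with a small sketch. This is purely illustrative: the "packages," field names, and matching logic are all invented, and a real Web 3.0 system would first have to interpret the natural-language request before anything like the constraint query below could run.

```python
# Toy contrast between old-style keyword search and the
# constraint-style query Web 3.0 envisions.
# All data and field names are hypothetical.

packages = [
    {"destination": "Phoenix", "climate": "warm", "cost": 2400, "days": 7, "kid_friendly": True},
    {"destination": "Oslo",    "climate": "cold", "cost": 2800, "days": 7, "kid_friendly": True},
    {"destination": "Cancun",  "climate": "warm", "cost": 3500, "days": 7, "kid_friendly": True},
]

def keyword_search(query):
    """Old style: return every package whose text mentions a query word."""
    words = set(query.lower().split())
    return [p for p in packages
            if words & {p["destination"].lower(), p["climate"]}]

def constraint_search(climate, budget, days, kid_friendly):
    """Web 3.0 style: the request is understood as constraints to satisfy."""
    return [p for p in packages
            if p["climate"] == climate
            and p["cost"] <= budget
            and p["days"] == days
            and p["kid_friendly"] == kid_friendly]

# "warm place, $3,000 budget, one week, 11-year-old child"
print(constraint_search("warm", 3000, 7, True))
```

The keyword search for "warm" returns both warm destinations, including one over budget; the constraint search returns only what actually fits the request, which is the reorientation the essay describes.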
Though the only examples that Markoff gives are in the economics and consumer sphere (where most people seem to spend most of their time), I can envision a way in which this new technology might be useful to help us reconceptualize what we know and how we learn. My final essay deals with that subject.
Copyright © 2004-2007 William R. Long