It’s a privilege to listen to someone with a really powerful and awesomely quick mind. Such was neuroscientist Professor Susan Greenfield, who was just on Andrew Denton’s Enough Rope. Read the transcript when it becomes available if you didn’t catch the program.
One of the many subjects she talked about was the way the world was changing from the “people of the book” to the “people of the screen”, the way that changed individual and collective perceptions, and the different skills that will be needed in tomorrow’s world (and to a considerable extent today’s) as a result.
Greenfield suggested that the wider advent of voice-activated interactive technologies may eventually make writing (including with a keyboard) redundant, however appalling that prospect may seem to many of us. Not so long ago I actually thought that too. I was an early adopter of speech recognition software almost 10 years ago, when my keyboard skills were very limited. But as those skills improved, I found it more comfortable to type than to speak. Now I only use speech recognition software to transcribe bulk text (and even then it’s easier to scan it if it’s more than a page or two). But maybe that’s because I remain irreversibly one of the People of the Book.
It occurs to me that I’m actually a member of the crossover generation; partly of the book and partly of the screen. So are pretty well all bloggers over 30. And I may well be a quite extreme version of that crossover. My teaching and research require a lot of reading of both paper and electronic texts, and administration of CDU’s online law degree program involves staring at a screen most of the day. My blogging hobby/addiction amplifies that and extends it into the night.
And my habits and choices reflect that duality in ways that seem quite strange even to myself when I think about them. For instance, I watched Susan Greenfield expound about the people of the book and screen with the sound turned down on my TV, reading her words in subtitles (since I finally invested in a digital set-top box) so I could listen to ABC Classic FM at the same time. It seems like the most natural thing in the world, and a much better way to digest the ideas in an interview program.
Greenfield’s discussion about chemical and behavioural influences on brain function and the sources of creativity was also fascinating and challenging. But none of it was developed. Ideas were only hinted at but not explored. That’s partly a limitation of the genre, and partly a function of Denton’s engaging but rather superficial interviewing style. I wanted to know more about Greenfield’s work and thinking. Fortunately as one of the People of the Screen, Google lets me find out here and here and here.
She was pretty impressive wasn’t she? I am glad I stumbled across this interview (don’t usually watch Denton) but like you, was left hanging out for more.
P.S. About her work and schedule in Adelaide
here
I’m unconvinced that book vs screen is a relevant distinction. The real issue is text vs image, and we are seeing the surprising resurgence of (hyper) text, contrary to the predictions of McLuhan and others.
I had a long piece on this in the Fin ReView a while ago, and will try to dig out a link.
John
Yes, I guess that was what I was trying to say, albeit without your clarity. That’s why I reverted to typing rather than talking as soon as I could functionally do so, and that’s in part why I often prefer reading the subtitles on TV rather than listening to the intrusive soundtrack (now that I can with digital TV). I allowed for the possibility that this is because of my childhood conditioning as one of the people of the book (or text if you like), and that subsequent generations will be different, but it’s conceivable that it’s something more basic. For example, text has greater utility, and is a more efficient and precise conveyor of meaning.
On the other hand, image is superior for some purposes. Especially when image is used in a creative/artistic way (as Greenfield discussed) to draw new connections and associations and insights, make us think and see things in new ways.
Text, of course, can also do this when used artistically. We’re all familiar with the phenomenon of visualising a character in fiction and then being surprised and sometimes disappointed by someone else’s realisation of that character when the book is made into a movie. Sometimes it’s better to stick with the imaginative capabilities of your own brain. One of the best things about the Lord of the Rings trilogy, I found, was that I didn’t experience that disappointment. Jackson’s sensibility was similar to mine and his imagination streets ahead in detail and realisation.
But even if we accept, as I do, that text will almost certainly remain prominent (and that Greenfield is wrong to the extent that she is predicting its decline), the method of its delivery and our interaction with it will keep changing in major ways, and so will the skills we need to learn and perfect.
John, I’d certainly be interested in reading the article you wrote if you can dig out a reference.
Marc Prensky wrote two interesting articles about this called Digital natives, digital immigrants. Note these are PDF files.
Part A
Part B
Thanks Some Dude. I found those articles by Prensky really helpful. I might even email him and find out how many mega-dollars he’d charge to design computer games to impart skills needed by law students.
But I remain to be convinced that the rapid-fire, intuitive acquisition of knowledge will ever completely supplant sequential, logical, structured assimilation of text quite to the extent Prensky argues. I think he’s right for mechanical/practical skills learning, but not quite so much for more abstract and analytical knowledge/thinking (although even there I agree there’s scope for conveying concepts through flashes of intuition and through doing rather than passive “chalk and talk” learning).
And the other problem I have with Prensky’s narrative is that the so-called “digital native” generation is nowhere near as natively homogeneous in their acquisition of a digital sensibility as he suggests. My job involves teaching and supporting students in adapting to an online learning environment, and developing the online approaches to be used. And one of the clearest lessons is that there are some young students (i.e. school leavers) who are extraordinarily digitally literate and intuitive (i.e. digital natives) and others who are almost complete foreigners to it, immeasurably less comfortable with it than this “digital immigrant”.
Probably part of the answer lies in Prensky’s idea of “legacy” versus “future” content delivery methods. We’ll probably need to continue providing both for the foreseeable future.
Feel free to send me an email to discuss your ideas, Ken. Until recently I taught computer games development at QANTM and have seen this concept put into practice successfully on a small scale. I have been exploring this idea in other fields (unrelated to specifically teaching hardware and software related units) and would be interested in your ideas about law.
There’s an article in today’s SMH about how the use of SMS has revolutionised communication for deaf people, by providing a technology they can use on the same basis as anyone else and by providing a means to communicate with the non-deaf without a third party.
smh.com.au/articles/2004/09/13/1094927513503.html
Well I’m one of these types who started playing with computers when I was 6 or 7: a digital native. These days I refuse to handwrite if I’m offered the choice.
Depending on one’s skill, typing can easily be faster than regular speech.
A little-known fact about the brain is that it has a number of mechanisms which modify time perception to assist “whole mind integration”. For instance, sensory perceptions can take up to a third of a second to process. Those impressions are essentially “back-dated” by the brain so that you don’t get a noticeable lagging effect. Errors in this mechanism are suspected of causing déjà vu.
I also understand that similar mechanisms create the subjective experience of speech. Remember that speech has existed for far longer than writing or typing; it makes sense that the brain has inborn wiring to make it seem easier and faster than it objectively is.
Speech seems natural and fluid compared to writing for most people. In the latter there is a higher awareness of the passage of objective time. It seems like typing is slower, even though for accomplished typists it is much faster.
Most of the perceived difference appears in the gap between forming the words and performing them. In speech there’s a modification of subjective time which makes them appear to happen simultaneously, or nearly so. In typing one can be consciously aware of the gap between thinking of the words and writing them. In my case, I find I am thinking several words ahead of what I’m typing, and I tend to miss words here and there.
This is all dimly remembered from psych, mind you. I’m probably wrong in a lot of details.
A serious accident recently forced me to turn to voice dictation software to meet my computing needs. As a result of the speed at which I can now operate various computer functions (including dictation), I seriously doubt I would fully return to my old all-keyboard ways of doing things even if that were an option. From what I can tell, voice dictation software really took off in capability and function (besides dictation) in the last two years. The software is now very capable. Persistence is necessary to train the software to recognise your voice, but once achieved, accuracy of 95–99% or better is possible. That’s probably the sort of accuracy rate I used to have when typing, and errors are just as easy, and as quick, to correct. Of course, I dictate much faster than I could ever type, and I suspect most people, besides professional typists, would be the same. (See previous comment for a dissenting opinion.)
I recently noticed Defence was moving towards incorporating voice dictation software into their new joint headquarters so that real-time transcription of meetings would be available (digitally). Technologies such as these could take hold fairly quickly in the next few years. We may look back and wonder what all the fuss was about.
Stan
My Dragon NaturallySpeaking program is trained to a very high level of performance (better than 98%) because I’ve been using it for a long time. And it’s true that it produces text considerably faster than I can type (although it’s also true that errors aren’t quite as quick to correct, which brings the effective speed down quite a bit).
But even so, somehow I find that being physically “in touch” with what I’m creating is more satisfactory at least for work involving structured thinking and composition. I’m not sure why that is, but it’s certainly true for me. On the other hand, more mindless or mechanical text creation tasks (e.g. transcribing existing text; writing a form letter) are better done with speech recognition software. Obviously your choice will be different if you’re restricted by some physical disability.
Yes, what you say is true, Ken, though I think if you were forced to use the software to correct your errors (rather than cheating by typing) you’d probably get amazingly good. Someone once told me it was better to get your ideas down on paper first and come back to format them later. I rarely did that when I typed, but I now tend to do it more when I dictate. In my case, that’s probably an improvement. Certainly to get the best out of dictation software you do have to experiment with certain styles of composition and decide which is best in which situations. Typing will always have its uses, and I still have a keyboard.
“I think he’s right for mechanical/practical skills learning, but not quite so much for more abstract and analytical knowledge/thinking”
I’ve often wondered about this. I don’t have a quote, but there’s a good line of evidence that while unnormalised IQs have been increasing since IQ measurement began, the rate of increase has jumped since people started staring at TV screens. There could be any number of reasons for this, but one obvious candidate is that TV viewers decode massive numbers of visual images into 3D “spatialisations”. People use these spatialisations to work out all kinds of problems, even ones with no real connection to 3D Euclidean space, e.g. complex language-based tasks, human management, psychological analysis, music, or even problems in the weird “spaces” of abstract mathematics. Indeed, often the “getting to” a problem involves cracking a good visualisation.
That said, I’m definitely a spatial type so my take is probably skewed. Maybe other people “see” it differently.
Ah Jim,
you see the visualisation aspect because that is how you think. One of the intriguing things about television is that it also encourages people to decide on emotional or narrative grounds; I, for instance, tend to create the logic of my life in terms of stories rather than visualisations.
Is this a function of television, or just that our lives have become more intense with increasing information flow, and so exercise whatever dominant perceptual mode we have?
Hard to know, but our education is certainly more and more unlike the kind of teaching in, say, a traditional village Islamic school, where they do a lot of rote learning that is verbal, physically still and involves just one mental channel. We deal in multidimensional experience.
Ken,
As a retired seafarer with a solid 50 years’ experience of letter writing, I can identify with your notion that you relate better to the keyboard than to voice. I keenly recall bittersweet long-distance phone calls from Hamburg home to Australia, and the frustration of not being able to pack one’s head and heart into three minutes of expensive call. The keyboard doesn’t lend itself quite as well as a distinctive handwritten letter, but it beats three minutes from a dockyard open-air phone. Then again, it doesn’t have the instant gratification that a phone has. Still, a rotten connection can spoil your money’s worth in more ways than one.
Frank Quinlan