In the introduction to his translation of the Analects of Confucius, Pierre Ryckmans likened that ‘literary classic’ to a coat hook that has over the centuries acquired so many layers of coats that it can no longer be seen: it has become so big that it completely obscures the corridor it was hung in. That is not a bad metaphor for ‘copyright’ itself. Something that started in 1709 as a fairly simple statute, “An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors ...” for “... the Term of One and twenty Years”, has by now become such a huge, multi-layered, intertwined spaghetti cake that it is virtually impossible to sanely approach it as a totality. I am not going to try.
One copyright problem that has cropped up in many discussions about the mass scanning of library books and manuscripts derives, in part (fn1), from the fact that the Berne Convention prohibits “formalities”: in other words, you do not have to formally register a copyright over ‘a work’ for it to be covered by copyright. This means there is no such thing as a national or international register of copyright holders, and therefore finding the rights-holders of many out-of-print but still-in-copyright works, so as to ask their permission to scan them, can be very time consuming, if not simply impossible. And that is a significant problem if you want to scan most of a large library in a few months (rather than in a few decades).
The US has, over the past few years, effectively run a long consideration of possible fixes for this problem. One option considered was that a (smallish) guild of American authors and some publishers could, on behalf of virtually all authors, known and unknown, sign a settlement granting worldwide ‘consent’ by all to the scanning and wide distribution of whole in-copyright library books by Google. This option was ultimately rejected; the proposition was too ambitious (more here). Another option considered was some sort of legislative initiative. However, copyright is a 304-year-old layer cake that connects many disparate competing elements, involves a lot of money, and is by now intrinsically impossible to fully comprehend. The legislative option was, and remains, too hard.
In the end the option that was adopted (almost by default) was that mass scanning of books for some purposes is legal: it passes what the US calls the “fair use” test. The fair use test centers on four main factors; roughly, they are: the purpose and character of the use, the nature of the copyrighted ‘thing’, the amount of the work that is used, and the effect of the use upon the potential market for the copyrighted work outside of that fair use purpose. For example, producing Braille editions of in-copyright books should pass the fair use test: blind people are a special-purpose category, making a Braille edition is a significant transformation of the original print edition, and making Braille editions freely available is unlikely to affect sales of normal print editions.
I think fair use was a good call. It has a number of advantages over the other options considered: it is fairly minimalist, so it is less likely to result in unexpected knock-on consequences, and it is cheaper, more efficient and more flexible than a ‘global’ licensing management system could ever be. And the option of major legislative surgery is not really there in practice; it is much too complicated.
Australia currently has ‘fair dealing’; modifying our system so that it works like the open-ended US fair use test looks like the best option for us as well.
Footnote 1: The extremely long current term of copyright is another major factor in the mix. Also, many books contain numerous “inclusions”: copyrights linked to authors other than the ‘author’ of the book itself, and that can get very complicated.
PS: If you have a few hours, the Public Index provides all you could ever want to know about the strange life and death of the Google Books Settlement ‘elephant’ (and much more besides).