Part II

Submissions by: Judy Anderson, Heather S. Miller, Bryan Baldus, Allen Mullen, Deanna Briggs, John J. Riemer, Diana Brooking, Karina Ricker, Lloyd ...
Please note that this staff has gradually increased from a cut-back 'low' of one professional and ½ paraprofessional in 1996, following attrition of two other professional positions and reassignment of other paraprofessionals.
From: Bryan Baldus <email@example.com>
Date: Aug 7, 2007 8:55 AM
Subject: Testimony for the Working Group on the Future of Bibliographic Control
To: "firstname.lastname@example.org" <email@example.com>
Disclaimer: These thoughts are my own and do not reflect those of my employer.
More and more libraries, particularly school and public libraries, rely on vendor cataloging and processing to receive shelf-ready materials. At the same time, many of these libraries seem to regard vendor cataloging as something that should be included for free or almost so--they are unwilling to pay much for good-quality cataloging records. Part of this may be due to low expectations for the quality of records some vendors provide.
One major source of high-quality cataloging, until recently, has been records from the Library of Congress. Vendors often would pass along "best available" records, often CIP-level or minimal-level records, contributing to the image of "filthy vendor" cataloging.
For vendor catalogers like my employer, Quality Books Inc. (QBI), who take cataloging seriously and strive to produce full-level records, following AACR2 and the LCRIs, as close to LC full-level quality as possible, it has become increasingly difficult to maintain such high standards. Often we hear the justification that if LC is not doing it, then why should we? We are in this to make a profit, so if the libraries don't complain about the lower quality of records, why bother?
If we lowered our standards, one reason for the lack of complaints from customers (assuming there were no complaints) could be that so many libraries do not have someone on staff who is knowledgeable enough about cataloging to know whether what they are receiving is of sufficient quality to allow their users to find the materials the libraries have acquired. Another reason might be that, if they have received the records for free, they aren't expecting much, so lesser-quality records are simply accepted. Perhaps some of them have someone on staff who cleans up the records after the books have been received, or after a user complains about a failed search for something known to be in the catalog. Of course, another explanation could be that they really don't need high-quality records from their vendors, and therefore they will accept just about anything. One sign of this could be the increase in the use of PromptCat and other OCLC cataloging partner programs, where libraries demand only OCLC records and don't care what those records look like.
As the quality of LC's cataloging declines, the amount of time catalogers, including vendor catalogers, must spend cleaning up records to meet the standards developed to facilitate record sharing increases. Elaine Sanchez, in her comments to the Working Group (as posted to AUTOCAT, Friday, August 03, 2007 3:50 PM), makes this point very well.
A project we are working on for one of our library customers involves finding a record that is an exact match for the item we are selling, either in the library's consortial catalog or in OCLC. Finding an exact match, particularly for small and independent press materials, is not a simple task. Publishers are notorious for reusing ISBNs, for publishing new editions that have only minor differences, etc. As a result, determining whether the record in the library's database or in OCLC represents an exact match vs. a different edition can be a time-consuming process. Attempting to find a match for a video recording, especially a DVD, is even more complicated, partly due to the lack of adequate rules and training for video cataloging. LC's lack of leadership in video cataloging leaves it to other organizations, such as PCC and OLAC, to develop best practices. Contributing to the problems of video cataloging is the necessity on the part of some to catalog from the container, rather than from the program screens, due either to lack of equipment or to the time required to view the screens. This leads to several records for the same thing in shared databases like OCLC, each with a different title, perhaps with different dates or other information, yet the exact same content on the disc itself. Things become even more complicated when the content is licensed to another publisher (or rereleased by the same publisher) who republishes the material with minor modifications--changing the credits screens, DVD menu, or packaging, or simply updating one of the dates and adding their own publishing information.
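The matching problem described above can be sketched in code. The following is a minimal, hypothetical illustration (the field names and rules are my own invention, not QBI's actual matching logic): a candidate record is accepted as an exact match only when the ISBN is corroborated by title, edition statement, and date, which is precisely why a reused ISBN alone is not enough.

```python
# Hypothetical sketch: deciding whether a candidate record is an exact
# match for the item in hand. Real matching (e.g. against OCLC) weighs
# many more fields; this only illustrates why ISBN alone is unreliable.

def normalize(text):
    """Lowercase and strip punctuation so trivial differences don't block a match."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def is_exact_match(item, record):
    """Both arguments are dicts with isbn, title, edition, and date keys."""
    if item["isbn"] != record["isbn"]:
        return False
    # Publishers reuse ISBNs, so the ISBN must be corroborated by
    # the title, the edition statement, and the publication date.
    return (
        normalize(item["title"]) == normalize(record["title"])
        and normalize(item.get("edition", "")) == normalize(record.get("edition", ""))
        and item["date"] == record["date"]
    )

item = {"isbn": "9780000000002", "title": "Local History", "edition": "2nd ed.", "date": "2007"}
reused = {"isbn": "9780000000002", "title": "Local History", "edition": "", "date": "2004"}
same = {"isbn": "9780000000002", "title": "Local history", "edition": "2nd ed.", "date": "2007"}

print(is_exact_match(item, reused))  # reused ISBN on an earlier edition: no match
print(is_exact_match(item, same))    # same edition, trivially different title casing: match
```

The point of the sketch is the corroboration step: an automated matcher that trusts the ISBN by itself will silently merge distinct editions.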
Deanna Marcum has stated that the Library of Congress has no budget line for cataloging--it has been a service they have provided without a specific budget for doing it. Why is this the case? Why not add such a line, or ask Congress for support to do so?
Instead of diminishing the quality of LC cataloging by cutting staff, doing away with series authority control, and reducing the amount of verification of copy cataloged or CIP upgrade items, among other things, LC should demand more support for hiring professional and paraprofessional catalogers to replace those who are or will be retiring.
At the same time, I agree that more cooperation is needed between libraries, vendors, and others (such as OCLC). I would agree with many of the suggestions presented by Allen Mullen in his comments to the Working Group (as posted to AUTOCAT, August 03, 2007 11:40 AM). Some elements currently added to cataloging records, such as table of contents notes (505), might better be added as post-cataloging enhancements, rather than as part of the initial record, particularly at the CIP stage. In my experience, publishers are often prone to change the TOC titles between the time they submit information for CIP and the final publication. As a result, those doing the CIP upgrade must spend a good deal of time reviewing and editing the 505, or delete it if it differs significantly enough. While having the 505 may facilitate keyword access to the record, in a well-designed OPAC, the TOC content could be made available and accessible in a more readable format outside the record.
During the 3rd meeting of the Working Group, Rick Lugg (R2 Consulting) discussed duplicative costs, such as reviewing "perfectly good LC records," where libraries spend time making sure 100s, 245s, 260s, 490s, and other fields are right. While I would agree that much duplication may occur, and we need a solution to reduce the need for excessive double-checking, currently this is necessary for several reasons. For one, we have seen a growing decline in the quality of these LC records. If the headings or 245 are inaccurate, then how can someone find the record (and as a result the resource), and how can one expect to easily match the record against the item the cataloger has in hand, particularly by automated means? Also, since LC's series decision, libraries wishing to provide uniform access to all items in a particular series must verify the 490 (or even 440/830) since LC is no longer doing this, and at times not even transcribing properly from the finished piece (or not updating from CIP during verification). As he continued speaking, Mr. Lugg also brought up video cataloging and seemed to question the need for viewing videos in order to catalog them. Perhaps cataloging from the container is sufficient for mass-market popular feature films, but, as discussed above, this is certainly not the case for the kinds of materials (independently produced, mainly nonfiction, or lesser known fictional feature films) my employer carries.
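Some of the double-checking described above could be triaged automatically. The sketch below is my own illustration (using a plain dict in place of a real MARC structure such as pymarc's Record, and a field list chosen only for this example): it flags incoming copy whose key fields--100, 245, 260, 490--are missing or empty, so that human review can concentrate on records that fail.

```python
# Hypothetical sketch: flag incoming copy whose key fields are absent or
# empty so that only suspect records get a manual review. A record here is
# a dict mapping MARC tag -> subfield dict; real code would use pymarc.

KEY_FIELDS = {
    "100": "main entry (personal name)",
    "245": "title statement",
    "260": "publication information",
    "490": "series statement",
}

def problems(record):
    """Return a list of human-readable issues with the record's key fields."""
    issues = []
    for tag, label in KEY_FIELDS.items():
        field = record.get(tag)
        if not field or not any(v.strip() for v in field.values()):
            issues.append(f"{tag} ({label}) missing or empty")
    return issues

copy = {
    "100": {"a": "Doe, Jane."},
    "245": {"a": "An example title /", "c": "Jane Doe."},
    "260": {"a": "Oregon, Ill. :", "b": "Example Press,", "c": "2007."},
    # 490 absent: the series statement was never transcribed
}

for issue in problems(copy):
    print(issue)
```

A check like this cannot confirm that a transcribed 245 or 490 is accurate against the piece in hand--that still takes a person--but it can cheaply separate obviously deficient copy from copy worth trusting.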
While electronic resources have become increasingly important, and will continue to do so, significant numbers of print resources will be acquired by libraries for the foreseeable future. These analog materials require digital surrogates, in the form of high-quality cataloging records, to facilitate access for library users. By contrast, while electronic resources also benefit from similar treatment through the addition of controlled vocabulary, this does not seem as essential when full text is available and access can be provided through alternative means. While LC does need to continue to be a leader in digital projects, including digitization, these should not come at the expense of the funding and support devoted to traditional cataloging.
Thank you for your time,

Bryan Baldus
Cataloger
Quality Books Inc.
The Best of America's Independent Presses
1-800-323-4241 x460
firstname.lastname@example.org

LC FoBC WG – Economics and Organization of Bibliographic Data
Deanna Briggs, MLS, MBA candidate
email@example.com
Senior Bibliographic Data Specialist
Copyright Clearance Center
7/27/2007
1) In the second public meeting of the WG, cataloging was characterized as a "public good," implying that it should be supported as a public service regardless of cost. However, libraries and other stakeholders do operate within budgets which constrain the services that they can offer and maintain. Considering this reality, just what are the economic challenges facing different stakeholders in regard to the creation, maintenance, and/or sharing of bibliographic data and related structures and standards? What trade-offs are being made between quality and economic constraints? How can the allocation of human, technological, and monetary resources, at both the collective and individual levels, be optimized to meet consumer and management needs, as described in the WG's first public meeting?

I think that economic improvement in bibliographic control requires transparent communication and cooperation among those within the larger "bibliographic control supply chain." Who are the exemplars in certain areas, and what can we (the library community) learn from them? (E.g., see my notes on Q4 regarding HarperCollins, which is doing well in marketing.) I also very much agree with Karen Coyle's observation on her blog: I wish we could quantify which parts of the bibliographic record we control provide us with the greatest benefits. We (librarians) often think it is everything, but it is not, and I presume the answer would really differ among works with varying containers. For an ebook record, for example, the user probably wants to know the file type, since this piece of metadata may tell the user whether he/she already possesses the means to access the work. Therefore we should spend more time ensuring that "critical" data elements are consistently supplied and well-formed. However, I understand that the "critical elements" for various items would differ greatly, and defining even those would be difficult.
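The idea that "critical elements" vary by container could be made concrete as a simple validation table. The sketch below is purely illustrative--the element lists are my own guesses for this example, not a proposed standard--but it shows how a format-keyed table lets each carrier type declare which elements must be present before a record is considered usable.

```python
# Hypothetical sketch: "critical elements" differ by carrier, so the
# validation table is keyed by format. The element lists here are
# illustrative guesses, not a proposed standard.

CRITICAL_ELEMENTS = {
    "print book": ["title", "isbn", "publisher", "date"],
    "ebook": ["title", "isbn", "file_type"],  # the user needs the file type
    "dvd": ["title", "date", "region", "running_time"],
}

def missing_critical(carrier, record):
    """Return the critical elements absent from the record for this carrier."""
    return [e for e in CRITICAL_ELEMENTS[carrier] if not record.get(e)]

ebook = {"title": "An Example", "isbn": "9780000000019"}  # no file_type supplied
print(missing_critical("ebook", ebook))
```

Under a scheme like this, effort shifts from filling in every possible field to guaranteeing that the short, format-specific critical list is always supplied and well-formed.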
2) The WG would like to clarify the commonalities, as well as differences, in cataloging practices among the various types of stakeholders involved in bibliographic control. How do different stakeholders function within the larger cataloging community? What challenges face the varied stakeholders in terms of cataloging practice? What relationships and/or collaborations currently exist or need to exist to help meet those challenges? What additional challenges do stakeholders foresee in the coming years?

It seems to me that different stakeholders take either a "metadata" or a "cataloging" approach. Libraries, for the foreseeable future, must at least take the cataloging approach, i.e. describing an item on a one-up basis, which is very costly. Vendors take a metadata approach, i.e. they view their particular bibliographic sphere in the aggregate and do not (typically) engage in item-in-hand cataloging or description. The vendors use data that has, at some point in time, been created or improved by those engaged in the cataloging approach. If you want strong metadata, this seems unavoidable. Over time, if the cataloging approach becomes very weak, all of our institutions using the data (libraries, vendors, etc.) will suffer. This is a great burden on those who take the "cataloging" approach.

I think we need more open communication with the publishing community, to make it more of an ally than an adversary. Sometimes we get caught up in the nuances of publisher "bibliographic behavior" (i.e., "publisher x doesn't know what they're talking about"--myself included), and that does us little service. We all make mistakes and live in ignorance in some capacity. For instance, I think of this example from O'Reilly: http://radar.oreilly.com/archives/2006/10/the_persistence_of_bad_data.html . If the wider bibliographic community received CIP/early publishing data (ONIX), we could use that data to begin describing an item early.
But how can we trust the data? Isn’t trust what every data conversation is about? One thing that could help us get some trust in this regard is to communicate and understand the publisher’s data flow. What systems do the publishers use? How many different systems handle the data, at one point or another, which would eventually fall into an ONIX feed? That’s why I like reading publisher blogs like O’Reilly – you learn about their data operations.
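One way to start building that trust is simply to exercise the data. The sketch below parses a simplified ONIX 2.1-style fragment (the tag names follow ONIX reference names, but the snippet itself is invented sample data) and pulls out the ISBN-13 and title that could seed an early description of the item:

```python
# Hypothetical sketch: extracting an ISBN-13 and a title from a simplified
# ONIX 2.1-style product record. The XML below is invented sample data;
# real ONIX feeds carry far more elements and need real validation.
import xml.etree.ElementTree as ET

ONIX_SAMPLE = """
<Product>
  <RecordReference>example-0001</RecordReference>
  <ProductIdentifier>
    <ProductIDType>15</ProductIDType>
    <IDValue>9780000000026</IDValue>
  </ProductIdentifier>
  <Title>
    <TitleType>01</TitleType>
    <TitleText>An Invented Example Title</TitleText>
  </Title>
</Product>
"""

def seed_description(product_xml):
    """Return (isbn13, title) from one ONIX-style Product element."""
    product = ET.fromstring(product_xml)
    isbn = None
    for ident in product.findall("ProductIdentifier"):
        if ident.findtext("ProductIDType") == "15":  # code 15 = ISBN-13 in ONIX
            isbn = ident.findtext("IDValue")
    title = product.findtext("Title/TitleText")
    return isbn, title

print(seed_description(ONIX_SAMPLE))
```

Working through a feed this way--seeing which identifiers, title types, and dates a given publisher actually supplies, and how consistently--is exactly the kind of understanding of the publisher's data flow that makes trust possible.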