A Few Good Reasons Why We Use MARC Report

 

(Here are some unsolicited remarks from one of our Florida customers.)

 

The primary purpose of using MARC Report (MRT) in cataloging is to improve the quality and accessibility of the library's bibliographic database.

 

MARC records obtained from any and every source contain errors. Those downloaded from vendors and from other library databases are especially problematic, but Library of Congress records also frequently contain significant errors.

 

The coding and tagging errors found by MRT affect the display and indexing of MARC records, as well as the ability to retrieve records for reports. With MRT, records are validated against current MARC21 and OCLC standards, which are necessary both for record sharing and for migrating records when online systems change. Good records migrate; bad records do not.

 

Correcting errors before the records go into the database is far more efficient and cost-effective than correcting them after they are loaded. Problems created by incorrect coding and tagging in a MARC record tend to become magnified once the record is part of a complex online system. Items become effectively "invisible" in a database if their records are incorrectly coded.

 

An extremely valuable feature of MRT is that it can be customized by the user to validate local practices, such as local cataloging source codes, local subject headings for special types of materials, etc. It can also be configured to check for locally indexed subject thesauri in records (and to disallow others, e.g. Sears headings, NLM, etc.).

 

The Quick Review feature of MRT allows each cataloger to run customized reports when working on batches of similar records, to make sure that all necessary features are included. For example, a batch of Large Print books can be checked to make sure that all fields requiring special coding and/or content for LP are present (008 reproduction code; 020 ISBN (lg. print); 300 tag xxx p. (large print); 650 tag Large type books; and so forth).
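The Large Print batch check described above can be sketched as a simple rule table run against each record. This is only an illustration of the idea, not MRT's actual Quick Review syntax; the dict-based record layout and the wording tests are hypothetical:

```python
# Hypothetical sketch of a Large Print batch check, in the spirit of
# MRT's Quick Review (not its real configuration format).  A record is
# modeled as a dict mapping a MARC tag to a list of field strings.

LP_CHECKS = {
    # tag -> test that the field content carries the Large Print wording
    "020": lambda v: "lg. print" in v.lower() or "large print" in v.lower(),
    "300": lambda v: "large print" in v.lower(),
    "650": lambda v: "large type books" in v.lower(),
}

def large_print_problems(record: dict) -> list:
    """Return the tags that are missing or lack the required LP wording."""
    problems = []
    for tag, wording_ok in LP_CHECKS.items():
        values = record.get(tag, [])
        if not values or not any(wording_ok(v) for v in values):
            problems.append(tag)
    return problems
```

A clean record returns an empty list; a record missing, say, the 650 Large type books heading would come back flagged with "650", so a cataloger can fix the whole batch before loading.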

 

In general, using the default configuration of MRT, the number of "records in error" in any given file of MARC records averages 25-50%. For example, if MRT were run on a file of 100 Library of Congress MARC records (obtained from any source), it would probably find errors in 25-50 of them, depending on the type and complexity of the records. Original cataloging records created locally contain a much higher percentage of errors, sometimes even when created from templates. Vendor-supplied records usually have a 90-100% error rate.

 

Some of the errors found by MRT that significantly affect the way in which an online database indexes and displays records are:

 

•  Non-filing indicator errors in title fields - cause titles to be lost in the title browse

•  Invalid or missing indicators for indexing, print constants, subject thesauri, etc. - affect display and indexing

•  Missing or invalid subfield delimiters and/or content designators - affect display and indexing

•  Type of record (in the MARC leader) that does not match other record content, or is invalid - affects search qualification by type of material in the OPAC (very common in vendor records for AV/audio materials)

•  Incorrect coding in the 007 tag for AV material - affects display of material type and search qualification by material type in OPACs (e.g. CD vs. cassette)

•  Dates of publication in the 008 that do not match the 260 $c - affects search qualification by date in the OPAC

•  Invalid ISBNs - many systems use the ISBN as an overlay criterion and for acquisitions

•  Missing or invalid subject headings - checks can be customized for locally indexed subject thesauri

•  Punctuation errors in all tags and fields - can seriously affect the clarity of information in brief and full record displays
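To illustrate the kind of mechanical check behind the invalid-ISBN point above, here is a minimal sketch (not MRT's actual code) of ISBN-10 validation using the standard mod-11 check digit. ISBN-13 uses a different mod-10 scheme and would need its own routine:

```python
def isbn10_is_valid(isbn: str) -> bool:
    """Validate an ISBN-10 (hyphens allowed) via its mod-11 check digit."""
    chars = [c for c in isbn.replace("-", "").replace(" ", "")
             if c.isdigit() or c in "xX"]
    if len(chars) != 10:
        return False
    total = 0
    for i, c in enumerate(chars):
        if c in "xX":
            if i != 9:           # 'X' (value 10) is legal only as the check digit
                return False
            value = 10
        else:
            value = int(c)
        total += (10 - i) * value  # weights run 10 down to 1
    return total % 11 == 0
```

A validator like this catches transcription errors (a wrong or transposed digit changes the weighted sum), which matters because, as noted above, many systems use the ISBN as an overlay criterion.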

 

MARC Report assists inexperienced technical services and cataloging staff in learning careful and thorough MARC coding and tagging, while freeing them to concentrate more on the cataloging logic and principles that validation programs cannot check.