Showing posts with label Library Catalogs (ILS).
Wednesday, March 12, 2014
Raiders of the Lost Archives - Cataloging zines
Love this idea... Reminds me of the cataloging flashmob I participated in...
Tagged ->
Library Catalogs (ILS),
Video
Friday, March 7, 2014
Linked data and libraries survey
For those doing linked data work in libraries:
http://bibliothek.univie.ac.at/limesurvey/
Tagged ->
3D,
Big Data,
Data,
Library Catalogs (ILS)
Tuesday, January 7, 2014
Thoughts on cataloging, RDA, and metadata at Netflix
I have so many thoughts on this Netflix article, but they can all be summed up as humans and machines working together to organize, describe, and provide relevance (the best of both worlds!): semantic cataloging. Of course, libraries have been organizing, categorizing, and describing materials from the beginning, but RDA is a big step forward. With the end of print card catalogs and record limits (for the most part), the amount of data within a library catalog record can be much more expansive. Other library databases, like repositories and digital libraries, generally have not faced record limits, nor have they been tied to MARC (which has its own pros and cons). Quantity doesn't always equal quality, either, but under RDA, we can provide as much description as we would like.
Another aspect of RDA is breaking more data up into smaller bits. Information that might once have appeared only in a free-text note field, or been omitted from the library catalog record entirely, may now be included, in some cases as part of a controlled vocabulary, such as relator codes. These codes describe the relationship of a particular person to a work and can be used to build different kinds of linking, relevance, and all sorts of things! Libraries could create mechanisms so that users and others can more easily use the data to dynamically build lists or collections that are relevant to them (there's the semantic aspect!). Of course, in order to use the data to make new things, the data has to be open.
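As a quick illustration of what those relationship bits look like in practice, here is a minimal sketch (my addition, not from the article) that pulls relator terms ($e) and codes ($4) out of MARC name fields using the pymarc library; the file name records.mrc is a placeholder:

# A minimal sketch, assuming a file of MARC records named "records.mrc"
# (a placeholder). Relator terms live in subfield $e and relator codes
# in subfield $4 of the 100/700 name fields; pymarc does the parsing.
from collections import defaultdict
from pymarc import MARCReader

roles = defaultdict(list)  # person -> list of (role, title) pairs

with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        f245 = record.get_fields("245")
        title = f245[0].get_subfields("a")[0] if f245 else "(no title)"
        for field in record.get_fields("100", "700"):
            names = field.get_subfields("a")
            name = names[0] if names else "(no name)"
            for role in field.get_subfields("e", "4"):
                roles[name].append((role.strip(" ,."), title))

# Each person now links to works through an explicit role: the raw
# material for "more by this illustrator"-style lists.
for name, entries in roles.items():
    print(name, entries)

Once the role is a discrete, controlled data bit rather than a free-text note, machines can group and link on it, which is exactly the dynamic list-building described above.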
Netflix has had a similar evolution in metadata. Thinking about what our next-gen library catalog systems could be like, let's look at what Netflix has done (and what a few folks have done with its data, which could only happen because at least some of that data is open).
Tagging/Data
It starts with people creating data and machine data collection:
"They [workers] capture dozens of different movie attributes. They even rate the moral status of characters. When these tags are combined with millions of users viewing habits, they become Netflix's competitive advantage. "
Much like traditional cataloging work, tagging is only as good as the tagger. The advantage that libraries have had is that the staff who do this sort of work (cataloging) most likely have some sort of training or relevant education.
In most popular social media (Facebook, Twitter, etc.) and image gallery sites (Flickr, YouTube, etc.), sub-tags, if any, are limited. The most common are: geographic (GIS, frequently from phone or camera GPS coordinates in the EXIF metadata), subjects (topics as input by the uploader or tagger), names (the user who uploaded the item or who tags other users in it), dates (when the item was uploaded), access (public/private/select user group), system file information (file format, name, etc.), and rights (copyright, permissions, etc.). Some image sites load EXIF data in automatically, most frequently the date, type of camera, file information, and general image specs (size, resolution, etc.); other information, such as rights (copyright), is less likely to be picked up. Facebook's support of metadata is marginal* (EXIF metadata is stripped out), and while Flickr supports the most metadata for images*, it relies primarily on the user to fill out the forms correctly to describe and assign the metadata. (See photometadata.org for more information about EXIF and social media.)
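Here is a small hedged sketch (mine, not the post's) of what reading those embedded EXIF tags looks like with the Pillow imaging library in Python; the file name photo.jpg is a placeholder:

# A minimal sketch, assuming Pillow is installed and "photo.jpg" exists.
# EXIF stores numeric tag IDs; ExifTags.TAGS maps them to readable names
# such as DateTime, Model, and Copyright.
from PIL import Image, ExifTags

with Image.open("photo.jpg") as img:
    exif = img.getexif()  # an empty mapping if a site stripped the metadata
    for tag_id, value in exif.items():
        print(ExifTags.TAGS.get(tag_id, tag_id), value)
    # GPS coordinates live in a separate IFD, keyed by tag 0x8825 (GPSInfo):
    gps = exif.get_ifd(0x8825)
    print({ExifTags.GPSTAGS.get(k, k): v for k, v in gps.items()})

If a site has stripped the metadata on upload (as Facebook does), the loop above simply prints nothing.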
In terms of search, crowdsourced metadata can be a challenge. It is only as good (and complete!) as the user who creates it. If you have ever searched for hashtags on Twitter, or tags on Flickr, you will see they are used every way imaginable. Hashtags are used as statements (#fail, #thisisstupid, #greatread), duplicated (#ala, where multiple things share the same keyword), or misspelled (#teh for "the"), with little in the way of quality control placed on them.
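A trivial sketch (my own, with invented tags) of the minimal normalization a search system has to do before crowdsourced tags become even somewhat reliable:

# A minimal sketch with invented sample tags.
raw_tags = ["#Fail", "#teh", "#ALA", "#ala", "#GreatRead", "#greatread"]

def normalize(tag: str) -> str:
    return tag.lstrip("#").casefold()  # strip the marker, fold case

print(sorted(set(normalize(t) for t in raw_tags)))
# -> ['ala', 'fail', 'greatread', 'teh']
# Misspellings ("teh") and ambiguity ("ala": the association? a name?)
# survive normalization; they need human or authority-file control.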
Structure
However, there is some structure in place, which facilitates searching by hashtag/tag as opposed to just by date.
While libraries have had better systems, in that the metadata was created by experts and experienced staff, much of the data in a traditional MARC record is unstructured. Funny, no? We think of MARC as being so structured, and it is in terms of field order and use, and in the fixed field (character placement is essential there), but it is not so structured within some fields, like the 5XX note fields or even the 245 (title/statement of responsibility) field. As long as the indicators are correct and the subfields are input correctly, the content within such a field is really a type of free text, albeit with some rules for inputting. For example, while the 245 was, and remains under RDA, a transcription field (key it as you see it), there are still "shortcuts" (i.e., ways to minimize the data recorded) under RDA (see this nice overview of changes between AACR2 and RDA). So, while it's transcription, it's not exactly ALWAYS word for word (albeit more so with RDA).
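As a small example of that mix of structure and free text (my illustration, not the post's): in the 245 field, the second indicator counts nonfiling characters, turning a free-text transcribed title into something sortable:

# A minimal sketch: MARC 245's second indicator records how many leading
# characters (e.g., "The ") to skip when filing, while the subfield text
# itself is free-text transcription.
def filing_form(title: str, nonfiling: int) -> str:
    """Drop the leading nonfiling characters to get the sort key."""
    return title[nonfiling:].casefold()

# 245 14 $a The photographed cat : ...  (second indicator 4 skips "The ")
print(filing_form("The photographed cat :", 4))  # -> "photographed cat :"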
The third major component is that the data is open, or at least partially open. With siloed data, this experiment would not have been possible. Siloing data decreases its ability to be used by others, as well.
So, how was Netflix able to make this successful from a metadata standpoint?
- a defined (controlled) vocabulary (subject headings, authorities): "The same adjectives appeared over and over. Countries of origin also showed up, as did a larger-than-expected number of noun descriptions like Westerns and Slasher..."
- a structure (for catalogers, a similarity to how subject headings are formatted in a traditional library catalog); in Netflix:
- Region, Awards named first (at least for Oscars)
- Adjectives (Keywords, subject headings)
- Dates and places named last (akin to a geographic subdivision)
In fact, there was a hierarchy for each category of descriptor. Generally speaking, a genre would be formed out of a subset of these components:
Region + Adjectives + Noun Genre + Based On... + Set In... + From the... + About... + For Age X to Y
Akin to traditional subject headings (see the sketch after this list):
651 0 $a Sardinia (Italy) $v Maps $v Early works to 1800.
650 0 $a Beach erosion $z Florida $z Pensacola Beach $x History $y 20th century $v Bibliography.
- data bits that can be repackaged: the little "packets of energy" that compose each movie, a.k.a. "microtags" (the smaller the data bits, the more ways they can be repackaged)
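Here is a tiny sketch (mine, with invented component values) of how a grammar like Netflix's, or like an LCSH string with its ordered subdivisions, assembles small controlled bits into one composite heading:

# A minimal sketch with invented sample data: compose a genre string from
# ordered, controlled components, much as an LCSH string orders its
# subdivisions ($z place, $x topic, $y period, $v form).
GRAMMAR = ["region", "adjectives", "noun_genre", "based_on",
           "set_in", "from_the", "about", "for_ages"]

def build_genre(parts: dict) -> str:
    """Join whichever components are present, in grammar order."""
    return " ".join(parts[slot] for slot in GRAMMAR if slot in parts)

print(build_genre({
    "region": "Italian",
    "adjectives": "Gritty",
    "noun_genre": "Crime Dramas",
    "from_the": "from the 1970s",
}))
# -> "Italian Gritty Crime Dramas from the 1970s"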
Thinking back to next-gen systems: RDA provides a fairly good foundation to go beyond the traditional catalog. When done right (not just more vs. less, but quality AND quantity), cataloging will net structured data bits that can be repackaged, and relationship information that can provide links between previously unrelated items (at least within the catalog), provided the data is open to be used and mechanisms are built so that users can create their own catalog experience. In that world, cataloging truly becomes semantic.
References:
Open Bibliographic Data, http://opendefinition.org/bibliographic/
Photometadata.org, http://photometadata.org/
AACR2 compared to RDA, field by field: http://www.rda-jsc.org/docs/5sec7rev.pdf
How Netflix reverse engineered Hollywood: http://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/
*Disclaimer: I have no idea what the backend systems of sites do with metadata; my thoughts are based upon the user experience.
Tagged ->
3D,
Big Data,
Data,
Library Catalogs (ILS),
metadata,
rda,
semantic web
Thursday, August 8, 2013
Linked data presentations
Reading list: linked data & ex-libris
- Linked data and Ex Libris products – introduction - Lukas Koster, University of Amsterdam, Netherlands
- Publishing Aleph data as linked open data - Silke Schomburg, HBZ, Germany
- Linked open dedup vectors – An experiment with RDFa in Primo - Corey Harper, New York University, USA
- Exploiting DBPedia for use in Primo - Ulrike Krabo, OBVSG, Austria
- Linking library and theatre data - Lukas Koster, University of Amsterdam, Netherlands
- Linked data and Ex Libris products – summary - Lukas Koster, University of Amsterdam, Netherlands
- Ex Libris – linked data outlook - Axel Kaschte, Ex Libris
Tagged ->
3D,
Big Data,
Data,
Library Catalogs (ILS),
metadata,
semantic web
Wednesday, August 7, 2013
RDA/FRBR reading list
Lots of RDA/FRBR in this list:
- Karen Coyle: Understanding the Semantic Web: Bibliographic Data and Metadata, Chapters 1 and 2 http://alatechsource.metapress.com/content/g212v1783607/?p=b4700bc9fec34b12a3f42a94a9fd9d4f&pi=0
- Diane Hillmann, Karen Coyle, Jon Phipps and Gordon Dunsire: RDA Vocabularies: Process, Outcome, Use http://dlib.org/dlib/january10/hillmann/01hillmann.html
- Barbara Tillett: What RDA Is and Isn't http://www.loc.gov/bibliographic-future/rda/trainthetrainer.html
- RDA Prospectus: http://www.rda-jsc.org/rdaprospectus.html
- (Presentation with Slides and Notes) Tom Delsey: Moving Cataloguing into the 21st Century. http://tsig.wikispaces.com/Pre-conference+2010
- RDA Scope and Structure http://www.rda-jsc.org/docs/5rda-scoperev4.pdf
Tagged ->
3D,
Big Data,
Data,
Library Catalogs (ILS),
metadata,
RDA/FRBR,
social media,
tutorials
Wednesday, May 8, 2013
Global Change Queue (Batch edit) @ELUNA 2013 notes
Global Data Change Queue Notes
http://works.bepress.com/julene/ (many batch edit presentations)
Eluna presentation: http://works.bepress.com/julene/5/
What can GDC do?
- can edit MARC tags and fields
- can delete, edit, add
- can set preferences
- can limit by user name, including letting a user create rules but not implement them - so one person could create rules while someone else has the authority to run them; can define by user role what can be edited (R note: could be useful for a review/test process)
Examples:
- all records must have ____ (specific criteria; R note: in the case of POs, 910 = PA + lacking 245 indicators)
- like a global find and replace (R note: YES! yes! So, we could fix typos in 5xx fields! or invalid MARC tagging in POs; looks useful - see the sketch after these notes)
How to do it:
- create a record set (R note: we could use old provisional records with incorrect MARC indicators as a test)
- RULE: create a rule using if/then statements
- further define rules through sets (R note: daisy-chain them together) to edit multiple fields - one rule for each field to be changed
- Preview /Review before change
- Will highlight changes
- Jump through set of records (e.g., 10 records at a time - your choice)
- If you find something that doesn't belong, you can remove it manually during preview
- If rule doesn't work, you will get a notice
- Update or review changes before you actually run
Run the job or schedule it
More powerful/easier to use than MarcEdit
More examples:
- update authorized headings (RDA)
- fixed fields
- add OCLC #s
- clean up recon
- add/remove standard notes
- changed locations - use pick and scan for items, though (of course you have to have the barcode... but you don't have to have the piece in hand - R note); doesn't interfere with cataloging work, because whoever has the record open has it ("locked", sort of); can schedule
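For a rough feel of the rule/preview/run pattern described in these notes, here is a sketch (my own, over plain Python dicts with invented sample data, not Voyager's actual GDC implementation):

# A minimal sketch of an if/then batch-edit rule with a preview pass.
# "records" is invented sample data standing in for a GDC record set.
records = [
    {"id": "b100", "500": ["Includes bibliogrpahical references."]},
    {"id": "b101", "500": ["Cover title."]},
]

def rule(rec):   # IF: any 500 note contains the typo
    return any("bibliogrpahical" in note for note in rec.get("500", []))

def change(rec):  # THEN: fix the typo in place
    rec["500"] = [n.replace("bibliogrpahical", "bibliographical")
                  for n in rec["500"]]

# Preview: show what would change before actually running the job.
matches = [rec for rec in records if rule(rec)]
for rec in matches:
    print("would change", rec["id"], rec["500"])

# Run (after review):
for rec in matches:
    change(rec)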
Tagged ->
3D,
Big Data,
Data,
Library Catalogs (ILS),
meetups/conferences,
metadata,
notes,
procedures,
Technology,
tips,
Train,
Training
Friday, April 12, 2013
Cataloging: Cuttering resources
I put this together for someone else and thought I would share it with you too!
cutter tables at
http://www.itsmarc.com/crs/mergedProjects/cutter/cutter/basic_table_cutter.htm
and the cataloging calculator is a pretty nifty tool:
http://calculate.alptown.com/
This is a good overall resource:
http://www.itsmarc.com/crs/mergedProjects/cutter/cutter/contents.htm
One of the main things to be aware of in cuttering is the local shelflist. ;-)
As for creating call numbers (for us, that means LC Classification), there is the subject analysis part to get the class, and then the cutter. LCSH can be browsed via this list:
http://www.biblio.tu-bs.de/db/lcsh/index.htm
I'm not sure how detailed it is, but it seems like a good overall tool.
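For the curious, here is a deliberately simplified sketch (mine) of the Cutter logic behind tools like the cataloging calculator. It implements only the "other consonants" row and the expansion row of the LC Cutter table linked above, and real cuttering still gets adjusted against the local shelflist:

# A simplified sketch of LC Cutter number logic. It handles only names
# beginning with a consonant other than S or Qu (vowels and S/Qu have
# their own rows in the full table), and it is an assumption-laden
# illustration, not a drop-in replacement for the table.
SECOND = {"a": "3", "e": "4", "i": "5", "o": "6", "r": "7", "u": "8", "y": "9"}
EXPANSION = [("abcd", "3"), ("efgh", "4"), ("ijkl", "5"), ("mno", "6"),
             ("pqrs", "7"), ("tuv", "8"), ("wxyz", "9")]

def expand(ch: str) -> str:
    """Expansion row: used for the third and later letters."""
    for letters, digit in EXPANSION:
        if ch in letters:
            return digit
    return "9"

def cutter(name: str, num_digits: int = 2) -> str:
    name = name.lower()
    out = name[0].upper()
    if len(name) > 1:
        out += SECOND.get(name[1], "9")  # second-letter row (simplified)
    for ch in name[2:]:
        if len(out) - 1 >= num_digits:
            break
        out += expand(ch)
    return out

print(cutter("Fay"))     # -> F39 (a -> 3, y -> 9)
print(cutter("Miller"))  # -> M55 (i -> 5, l -> 5)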
Friday, December 21, 2012
Survey on Research practices of historians
Ithaka S+R’s Research Support Services for Scholars program has released the report of their NEH-funded study, Supporting the Changing Research Practices of Historians (http://www.sr.ithaka.org/news/understanding-historians-today-%E2%80%94-new-ithaka-sr-report). Here’s a brief description of the project from the report’s Executive Summary:
In 2011-2012, Ithaka S+R examined the changing research methods and practices of academic historians in the United States, with the objective of identifying services to better support them. Based on interviews with dozens of historians, librarians, archivists, and other support services providers, this project has found that the underlying research methods of many historians remain fairly recognizable even with the introduction of new tools and technologies, but the day to day research practices of all historians have changed fundamentally. Ithaka S+R researchers identified numerous opportunities for improved support and training, which are presented as recommendations to information services organizations including libraries and archives, history departments, scholarly societies, and funding agencies.
Tagged ->
3D,
Big Data,
Data,
Library Catalogs (ILS)
Monday, November 26, 2012
Bibliographic Framework Initiative (MARC replacement) update
New document released from LoC:
Bibliographic Framework as a Web of Data: Linked Data Model and Supporting Services
http://www.loc.gov/marc/transition/pdf/marcld-report-11-21-2012.pdf
The new, proposed model is simply called BIBFRAME, short for Bibliographic Framework. The new model is more than a mere replacement for the library community's current model/format, MARC. It is the foundation for the future of bibliographic description that happens on, in, and as part of the web and the networked world we live in. It is designed to integrate with and engage in the wider information community while also serving the very specific needs of its maintenance community - libraries and similar memory organizations. It will realize these objectives in several ways:
1. Differentiate clearly between conceptual content and its physical manifestation(s) (e.g., works and instances)
2. Focus on unambiguously identifying information entities (e.g., authorities)
3. Leverage and expose relationships between and among entities
In a web-scale world, it is imperative to be able to cite library data in a way that not only differentiates the conceptual work (a title and author) from the physical details about that work's manifestation (page numbers, whether it has illustrations) but also clearly identifies entities involved in the creation of a resource (authors, publishers) and the concepts (subjects) associated with a resource. Standard library description practices, at least until now, have focused on creating catalog records that are independently understandable, by aggregating information about the conceptual work and its physical carrier and by relying heavily on the use of lexical strings for identifiers, such as the name of an author. The proposed BIBFRAME model encourages the creation of clearly identified entities and the use of machine-friendly identifiers which lend themselves to machine interpretation for those entities.
And thus we start our march toward semanticizing our bibliographic data by looking to linked data, which will allow us more flexibility in constructing records (and relationships); better authority and bibliographic control (fix data in one place, and the change is propagated across records, which consist of aggregated data presented in a framework - most likely, in the near future, fields); and the ability for our data bits to be harvested (if our data is open) and used outside of traditional library catalogs...
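To make the work/instance split concrete, here is a hedged sketch (mine, not LC's) of the kind of linked data triples the report describes, using the rdflib library; the namespace URI and identifiers are illustrative placeholders, not the official BIBFRAME vocabulary:

# A minimal sketch using rdflib. The bf: namespace and the identifiers
# are invented for illustration; they are not the published BIBFRAME
# vocabulary from the report.
from rdflib import Graph, Literal, Namespace, URIRef

BF = Namespace("http://example.org/bibframe/")
g = Graph()
g.bind("bf", BF)

work = URIRef("http://example.org/works/moby-dick")               # conceptual work
instance = URIRef("http://example.org/instances/moby-dick-1851")  # physical manifestation

g.add((work, BF.title, Literal("Moby-Dick")))
g.add((work, BF.creator, URIRef("http://example.org/agents/melville")))
g.add((instance, BF.instanceOf, work))  # instance links back to its work
g.add((instance, BF.extent, Literal("635 p.")))

print(g.serialize(format="turtle"))

Fix the creator in one place (one triple) and every instance that links to the work picks up the change: the propagation described above.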
Tagged ->
3D,
Big Data,
Data,
Library Catalogs (ILS),
linkeddata,
metadata,
semantic web,
social media,
tutorials
Wednesday, September 19, 2012
You can get there from here: AACR2 / MARC>RDA / FRBR / Semantic web
Although my graphics didn't turn out too nicely on SlideShare, overall I think this covers what I'd like my staff to understand about RDA and FRBR in terms of foundation knowledge. We build from here...
RDA intro.pptx from robin f (SlideShare)
Tagged ->
3D,
Big Data,
Data,
FRBR,
Library Catalogs (ILS),
metadata,
my projects,
rda,
semantic web
Thursday, August 30, 2012
Linked data, Libraries - How, Why & MARC
Two excellent presentations together: the first focuses on MARC and linked data; the second focuses on user experience, in terms of how linked data and the FRBR (Functional Requirements for Bibliographic Records) model could impact that experience.
The first (Philip E. Schreur, Stanford) focuses on bibliographic data and MARC. Really great overview and explanation of linked data and its potential impact on libraries with a focus on MARC records and library data, name authority records/control, bibliographic data, and how linking data works. Also discusses challenges of traditional data control (siloed data, etc.) and how linked data can address those challenges. Great example using a music bib record (~13 minutes).
The second presentation (Jennifer Bowen, Univ. of Rochester, at about 16 minutes in) starts out with a study of users but then moves to whether linked data can meet their needs (and how). Examples of tools include Drupal (the newest version of which does have semantic web functionality built in). The FRBR (Functional Requirements for Bibliographic Records) data model/linked data discussion starts at about 37 minutes.
http://www.infodocket.com/2012/08/28/new-video-from-cni-linked-data-for-libraries-why-should-we-care-where-should-we-start/
Tagged ->
Libraries,
Library Catalogs (ILS),
linkeddata,
semantic web,
social media,
tutorials
Friday, August 10, 2012
AACR2 The Movie (Humor)
I don't even have a comment for this... I do so hope they do RDA the Movie.
AACR2 Trailer from David Ross on Vimeo.
Tagged ->
Libraries,
Library Catalogs (ILS),
metadata
Monday, August 6, 2012
Search terms (a case study: article on automation and staffing)
I always find it interesting to see what search terms people use to find what they are looking for (or not looking for).
On that note, I thought I would share a quick behind-the-scenes look at keyword searching for my article, The Effect of Automation on Academic Library Staffing: A Discussion. There is no abstract or keywords attached to this title, so the results are truly from the full-text searching. These stats are generated through the bepress statistics tool.
The paper was originally presented at COMO in October 2011 and published as part of Georgia Library Quarterly in the Spring issue (April 2012). There have been 79 downloads of the article. Many of the downloads were generated from direct links, such as via social media tools, the journal issue table of contents, and emails.
I've loosely grouped the searches together in terms of overall theme.
I've loosely grouped the searches together in terms of overall theme.
- Search queries include:
- on staffing:
- library staffing unit
- responsibility of cataloging unit in academic library
- the duties of automated staff in academic library (hmm... we are robots?)
- change (impact) in nature of work:
- the effect of automation on production
- academic libraries maintaining shelf list cards why
- library cataloging department academic making changes
- outsourcing cataloging in libraries 2012
- on academic libraries (general):
- recommendation given in an automated academic libraries
- academic library white paper or position paper
- URLS:
- My library's URL (that's interesting - I guess they wanted to see what had been published from library staff and faculty from my library)
- the paper URL (did not download the paper)
Tuesday, June 5, 2012
OCLC Reclamation Project - what it is and how it works
Nice little presentation about the OCLC Reclamation Project from Pierce College Library (a Voyager ILS library). What is the reclamation project? It syncs and sets holdings in OCLC.
Tagged ->
Libraries,
Library Catalogs (ILS),
metadata
Sunday, May 6, 2012
Discovery Tools Free E-forum
Discovery Tool Implementation and Selection
May 15-16, 2012
Hosted by Nara Newcomer and Bill Walsh
Please join us for an e-forum discussion. It’s free and open to everyone!
Registration information is at the end of the message.
Each day, sessions begin and end at:
Pacific: 7am – 3pm
Mountain: 8am – 4pm
Central: 9am – 5 pm
Eastern: 10am – 6pm
Description
The discovery interface market is exploding, with current products including EBSCO Discovery Service, Serials Solutions Summon, Ex Libris Primo Central, OCLC WorldCat Local, Blacklight, VuFind, AquaBrowser, and more. These tools (both vendor-supplied and open-source) promise faceted browsing as well as the ability to integrate multiple content silos; some come with vendor-provided cloud indexes to articles, e-books, and other electronic content. The newest development is a cloud-based back end designed to replace the ILS, including OCLC’s WorldShare, Serials Solutions Intota, and Ex Libris Alma. In today’s world, tools differ not just in the quality of features provided, but in overarching functionality and in scope of coverage. The e-forum will address factors to consider when selecting a tool, strategies for evaluating tools, and keys to successful implementation.
Nara Newcomer is Assistant Music Librarian at East Carolina University, where her duties include cataloging, public services work, and maintaining the music library’s web pages. Nara has worked closely with discovery selection and implementation at ECU, including Serials Solutions Summon, WorldCat Local, and the SirsiDynix Symphony e-library OPAC. She is leading the creation of the Music Library Association’s “Music Discovery Requirements” document, which explores and provides recommendations for meeting the unique demands music materials pose for discovery.
Bill Walsh is Head of Technical Services at Georgia State University Library, which implemented EBSCO Discovery Service last summer after evaluating other options. He is co-chair of the library’s discovery group.
*What is an e-forum?*
An ALCTS e-forum provides an opportunity for librarians to discuss matters of interest, led by a moderator, through the e-forum discussion list. The e-forum discussion list works like an email listserv: register your email address with the list, and then you will receive messages and communicate with other participants through an email discussion. Most e-forums last two to three days. Registration is necessary to participate, but it's free. See a list of upcoming e-forums at: http://bit.ly/upcomingeforum.
*To register:*
Instructions for registration are available at: http://bit.ly/eforuminfo. Once you have registered for one e-forum, you do not need to register again, unless you choose to leave the email list. Participation is free and open to anyone.
Tagged ->
Libraries,
Library Catalogs (ILS)
Thursday, April 19, 2012
Reflections on OCLC (Jay Jordan retires)
Some interesting thoughts sprinkled throughout the interview portion...
----------
OCLC now serves 72,035 libraries in 170 countries, with more than 260 million records—up from 39 million records in 1998. Its businesses have shifted radically, with an increasing proportion of its revenues derived from the library automation business rather than metadata services.
Jordan’s tenure has not been without controversy. Some in the automation field charge that OCLC has an unfair competitive advantage as a nonprofit. Its records use policy has been a point of contention, too, and OCLC is still involved in a lawsuit over members sharing of bibliographic data.
But Jordan’s track record has also been marked by innovative R&D and hires of respected librarians to facilitate that. Under his leadership, OCLC has taken a role in major surveys of customer perceptions of libraries worldwide that have impacted library service. It launched WebJunction, an online learning community that has particular value for staff in small and rural libraries, as well as the Geek the Library community awareness campaign. In addition to the WorldShare platform and WorldShare Management Services, earlier this year, it debuted the beta cloud-based Website for Small Libraries, so they can build low-cost websites with basic patron inventory management features.
more at
http://lj.libraryjournal.com/2012/04/people/end-of-an-era-at-oclc-jay-jordan-reflects-on-his-14-year-tenure/
----------
Tagged ->
Leadership,
Libraries,
Library Catalogs (ILS),
metadata,
social media
Tuesday, April 10, 2012
A history of automation and its impact on staffing (White paper/case study)
This white paper, written by Virginia Feher and me, started out as a conversation about the changes brought about by automation within the UGA Libraries. Over the years, I have documented some of those changes (including 5 migrations/conversions of the ILS, the library catalog!) through discussions with colleagues within the Libraries and outside. The article not only provides a good overview of the impact of automation on one technical services unit within a library, but also a brief history of that automation. I also believe these changes are analogous to what has happened across technical services librarianship. We frame our conversation around a question posited by Horny in 1985.
----------------------------
To improve access to their collections, academic libraries automated cataloging functions, replacing the card catalog with the integrated library system (ILS), greatly impacting the day-to-day activities of library staff. How does automation affect staffing in an academic library? Horny (1985), while discussing the effects that changing technologies might have on librarianship, speculated that libraries would require support staff with “higher levels of knowledge and skill,” which would result in “more interesting and lucrative” jobs, “attracting an excellent caliber of staff” (p. 57).
For the purpose of examining the effects of automation on academic library staffing, this paper will provide a discussion of changes in workflow and staffing at the University of Georgia (UGA) Libraries Cataloging Department starting in the late 1970s, focusing on the Database Maintenance (DBM) Section. The discussion will demonstrate how an increasingly automated environment at the UGA Libraries resulted in the reorganization of duties and, because of the need for employees with greater technical expertise, the re-classification of staff positions to higher levels.
read the full article at
COMO White Paper - The effect of automation on academic library staffing: A discussion
http://digitalcommons.kennesaw.edu/glq/vol49/iss2/7/
----------------------------
Recommended Citation
Fay, Robin and Feher, Virginia C. (2012) "COMO White Paper - The effect of automation on academic library staffing: A discussion," Georgia Library Quarterly: Vol. 49: Iss. 2, Article 7.
Available at: http://digitalcommons.kennesaw.edu/glq/vol49/iss2/7
Tagged ->
History of Technology,
Libraries,
Library Catalogs (ILS),
metadata,
my projects
Wednesday, February 15, 2012
OCLC/WorldCat discussion paper on RDA
Many points of interest in the OCLC discussion paper on RDA, including how catalogers may use OCLC records post-RDA implementation:
Proposed Future Cataloging Policy for Member Contribution to WorldCat after RDA Implementation
- Catalogers are not required to update or upgrade existing records to RDA.
- Catalogers may re-catalog items according to RDA if it is considered useful. Such recataloging should only be done with access to the item. All descriptive fields would need to be reconsidered and revised to conform to RDA instructions. The revised record would then be changed to Desc (Leader/18) coded as c or i as appropriate with 040 $e rda added.
- Catalogers may update individual fields in pre-RDA records to reflect RDA practices if it is considered useful. Fields involving the transcription of data require access to the item in order to change transcribed data. The partially changed record would retain the indication of the rules under which it was initially cataloged, i.e., no changes would be made to the coding of Desc (Leader/18) and 040 $e would not be added or changed.
- Catalogers should use access points as established in the authority file, whether those forms are coded as RDA or AACR2.
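Those two signals (Desc in Leader/18 plus 040 $e) also make it easy to sort RDA records from AACR2 ones programmatically. A small sketch of that check (mine, using pymarc; records.mrc is a placeholder file name):

# A minimal sketch: classify records as RDA by Leader/18 ("c" or "i")
# plus 040 $e containing "rda", per the policy quoted above.
from pymarc import MARCReader

def is_rda(record) -> bool:
    desc_ok = str(record.leader)[18] in ("c", "i")
    f040 = record.get_fields("040")
    conventions = f040[0].get_subfields("e") if f040 else []
    return desc_ok and "rda" in conventions

with open("records.mrc", "rb") as fh:  # placeholder file of MARC records
    for record in MARCReader(fh):
        print("RDA" if is_rda(record) else "pre-RDA")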
The paper in full is available at http://www.oclc.org/us/en/rda/discussion.htm
Tagged ->
Library Catalogs (ILS),
metadata,
rda
Monday, September 19, 2011
COMO/GLA Paper presentation schedule
GEORGIA LIBRARY ASSOCIATION
ACADEMIC PAPER PRESENTATIONS
October 6, 2011
Olympia I Room, 10:00a – 11:50 a
10:00 Pete Bursi: YBP Award Winner
Why We Still Matter
10:12 Jackie Radebaugh
Using the Social Design Model to Enhance Electronic Browsing and
Document Linking in Online Journal Databases
10:24 Jason Puckett
Open Source Software and Librarian Values
10:36 Emily Rogers
Teaching Government Information in Information Literacy Credit Classes
10:48 Charles Forrest
Information, Learning, Research:
Evolution of the Academic Library Commons
11:00 Break
11:10 Jon Bodnar
Questioning Social Media in Academic Libraries: Reflections on Directions for Future Research
11:22 Virginia Feher, Robin Fay
The effect of automation on academic library staffing: A Discussion
11:34 Yadira Payne, LuMarie Guth, and Chris Sharpe: Ebsco Award Winners
No melting pot: Results and Reflections from the 2011 Southeastern Federal Depository Coordinators Salary Survey Project
11:46 Adjourn
Tagged ->
Library Catalogs (ILS),
my projects
Thursday, June 9, 2011
New initiative on bibliographic framework
This may also be of interest to those with semantic web interests:
---------
“Transforming our Bibliographic Framework” (http://www.loc.gov/marc/transition/news/framework-051311.html), a statement from the Library of Congress (LC). A web site has been established for the Bibliographic Framework Transition Initiative (http://www.loc.gov/marc/transition), and that will be the central place for plans, news, and progress.
---------
Tagged ->
Libraries,
Library Catalogs (ILS),
metadata