Going to the Co-Op: converting authority headings from local to global headings using data from OCLC Worldshare at the University of Liverpool

Martin Kelleher, Metadata Manager, University of Liverpool

Where we were at, and why we did it

For a long time the University of Liverpool Library had used its own local form of authority control. The basis of this authority was what we called a “fullest form” approach, and our headings did not necessarily match the typical format of incoming records, despite a ruling early on in the development of RDA (Resource Description and Access) preferring fuller forms of names to abbreviations where there was a question as to the most commonly used or preferred form of name. To explain what I mean by fullest form: we would use the fullest form of the name known, together with any birth/death dates if known. This was to maximise the opportunity to differentiate between authors, and had the advantage of avoiding uncertainty over such dilemmas as deciding the commonest form of name, as is common practice in authority designation based on frequency of use. So, for example:

We’d get: Lewis, C. S.$q(Clive Staples),$d1898-1963
We’d use: Lewis, Clive Staples,$d1898-1963

Another reason why this practice persisted was our OPAC (Online Public Access Catalogue) display. Brackets didn’t display, which as far as we were aware was standard in Innovative (Interfaces Inc.) OPACs, so the ‘NACO’ (Name Authority Cooperative program) style entry above would appear as:

Lewis, C S Clive Staples, 1898-1963

...which would make such authors appear confusingly as if they had multiple middle names. Another reason why there was little will to change was that there was no awareness of available automatic updating software, so there was no use for authority records other than as guides for staff and users to distinguish used entries from unused entries.

Many libraries struggled to maintain authority control during the explosion in e-resource content at the beginning of the century. The period 2005-2010 saw a tremendous increase in e-resource provision in libraries, bringing with it increased metadata and, as a result, an increased workload for cataloguers. The University of Liverpool was an early and enthusiastic adopter of e-resources, and we elected to catalogue as much e-material as possible; but, partly as a result of workload management approaches, the on-the-job tidying approaches of the cataloguing team, and effective use of Sierra’s powerful global update utility, this significant workload was effectively accommodated.

Problems began to appear, however, as a result of a reduction in cataloguing staff during a reorganisation in 2014. Cataloguing staff were repurposed to other positions, and individually purchased materials were copy-catalogued by Acquisitions library assistants, who did not undertake the same degree of correction and tidy-up as the preceding cataloguing team. The lone remaining (part-time) cataloguer dealt primarily with difficult and original cataloguing, I as the metadata manager dealt primarily with en masse collections, and only the Metadata Library Assistant (also part-time) dealt with manually tidying up the large e-book packages we continued to acquire. We developed global update approaches, with which she manages some aspects of package data, and have efficient solutions to some aspects, such as ISBN and series formatting consistency, but attempting this approach for authority data proved less effective.



Further reasons to change

In June 2017, I undertook Innovative’s load table training,¹ a 3-day course. During the training, we dealt with authority load tables, and through discussion with the trainer, discovered that there was automatic authority updating functionality available - and not only that, but that we had it available as part of our software, although it was not installed and active. This was good news, not least because, in addition to workload issues, there were changes in system functionality, and those changes, it had already become clear, had not been kind to our fullest form approach to authority control. The left-anchored browse search had worked well with our records, which, even if they did not match searches as entered by users, would appear in result sets, generally neatly compiled in single entries to select and peruse, near whatever had been searched for. This had been replaced, however, with keyword functionality, and searches with initials did not necessarily match reliably with records where the author heading had none. Further, by then we were already using a discovery system, EBSCO’s EDS (EBSCO Discovery Service), in which clicking on a name entry would mostly limit to results that were exact matches, which caused particular problems for a decreasingly maintained authority index.

A more positive reason for looking for a more standard approach was the possibilities lurking on the horizon in the already heralded linked data approach. The idea of data linking based upon universal identifiers was already offering a future functionality of interlinking resources with significant appeal, but it seemed this would primarily be based, as the term implied, on large-scale universalised data, which local authority data, essentially by definition, was not.

User survey confirmed concerns (2015)

These concerns were supported further by a user survey we undertook in 2015.² There were a number of complaints as part of the survey, comments such as:

“If you just type in the author it can be really, really hard to find what you want. You just get confronted with a great big long list”

“Quite a lot of my authors are listed two or three times and I don’t know which one I need for the specific book that I want … the author had three different entries and it was only one of them had the book that I wanted on it”

Such comments lent some credence to the idea that a decrease in consistency in author headings was not only a pedantic cataloguing concern, but had direct consequences for the users. The report deduced that:

“For the most part, these issues could be alleviated with the use of better search techniques. However, the following excerpt also appears to suggest that this is also in part a consequence of inconsistencies within the author authority headings on the Library Management System and the limited resources in position to properly address these”

This comment was very much in line with the concerns of myself and others regarding the reduction in time available for catalogue data consistency management and its impact on customer experience.

1. Innovative Interfaces, Inc. Load Profile Training Manual. (Innovative Interfaces, Inc., 2016).

2. Woods, Jeff. 2015. Discover: Survey, Usability Testing and Focus Group Report. https://livrepository.liverpool.ac.uk/3003105/


October 2018* – the project begins

As a result of these concerns, I put authority control on the metadata strategic plan, and in October 2018 we began the project. Very early on, the project team elected to go for NACO, partly in line with a general departmental strategy towards standardisation, it being the most ubiquitous and heavily used standard in terms of incoming data. To allow us to use NACO headings, we also quite early on investigated and resolved the “bracket” issue with OPAC display, by investigating other Innovative libraries, finding those who had brackets in entries on the relevant displays, and asking how such functionality was managed (the staff at the University of Warwick being those that came to our assistance and helped us to discover a resolution). My line manager undertook a literature review, and both she and I emailed various lists to survey practices in the industry as a whole. I emailed both UK (UK-bibs) and international (AUTOCAT) email lists, and amassed significant data, not least by being provided with data from another enquirer into the same, Wendy Gunther. Our findings were that there were various approaches being undertaken, and those undertaking a fully manual approach were struggling to manage their indexes in the same way we were (the reduction in cataloguing staff was an industry trend, so many were similarly feeling the impact on workflows). Some institutions were undertaking approaches facilitated by commercial metadata vendors, the two main providers being Backstage³ and MARCIVE⁴, and reported good results from both vendors.

There was also a semi-automatic approach used at Akron University⁵, feeding catalogue data through MarcEdit to attain authority records. The literature⁶,⁷ from our LMS provider Innovative seemed to assume that commercial vendors such as Backstage or MARCIVE would be used for large-scale conversion and authority load, so there were no instructions on how to manage large-scale conversion or authority record creation otherwise. I did, however, also have some prior knowledge regarding possible approaches, and had been interested in a semi-automatic approach used at Aston University which had been demonstrated at the CILIP CIG conference in Edinburgh a few months earlier.⁸ I was also aware that OCLC (Online Computer Library Center) had indicated that they could provide data, including author data, through master record provision, which it had been indicated would be costed.⁹ In order to compare with other vendors I checked with OCLC regarding an up-to-date price of author data provision, and it was indicated that we could, by that point, access NACO-compliant OCLC data at no extra cost, so effectively freely as part of our existing subscription.

*Correction of original conference presentation: project began October 2018, not December 2018

3. Backstage Library Works. MARS 2.0 Authority Control. Condensed Planning Guide. V. 2013.05a. (Backstage Library Works, 2013). http://ac.bslw.com/mars/guide [Accessed: 16/10/2018]
4. MARCIVE, Inc. Authorities and Bibliographic Data Processing: a detailed description. (San Antonio: MARCIVE Inc., 2018).
5. Monaco, Michael J. 2018. “Automating Authority Work”. (Ohio Valley Group of Technical Services Librarians Annual Conference 2018) https://events.library.nd.edu/ovgtsl2018/slides/Automating%20Authority%20Work.pdf [Accessed: 15/10/2018]
6. Sanders, Martha. 2016. “How Automatic Authority Control Processing (AACP) Works”. https://iii.rightanswers.com/portal/app/portlets/results/viewsolution.jsp?solutionid=160818165428070&page=1&position=4&q=aacp [Accessed: 15/10/2018]
7. Innovative Interfaces, Inc. “Plan an authority control project”. https://www.csdirect.iii.com/documentation/authcontrol.php [Accessed: 15/10/2018]
8. Peaden, Will. 2018. “Magic of MarcEdit, or, how I learned to stop worrying and love metadata”. Catalogue & Index, 193, 56-69. https://cdn.ymaws.com/www.cilip.org.uk/resource/collection/F71F19C3-49CF-462D-8165-B07967EE07F0/C&I_193.pdf
9. OCLC. Authorities: Format and Indexes. (OCLC). https://www.oclc.org/support/services/worldcat/documentation/authorities/authformat.en.html [Accessed: 13/3/2019]


Options

We spent a while evaluating options. Those employing manual approaches were generally not faring well in terms of managing their authority indexes, and the semi-automatic approaches of Akron and Aston also seemed comparatively work-intensive on the scale we’d want to apply them, in terms of the conversion of the catalogue from local format to NACO. The primary option then considered was one popular with libraries: using either MARCIVE or Backstage to convert records, supply authority data, and continue to update the same as part of an ongoing approach. This obviously had great appeal, especially in terms of efficiency of local workload, one of the primary factors and problems that had driven the need for the project. There was, however, the issue of expense, and we could only afford a budget for a proportion of our holdings. We evaluated the extent of our physical holdings, permanent e-resources, and higher priority subscribed material, and determined that they could be converted, and further holdings of the same priority maintained and provided for on an ongoing basis.

OCLC, although providing master record data as part of our subscription, and so effectively for free, did so without further management on a general basis, so using OCLC as a vendor would require significantly greater local effort. They were, however, more fully embedded in the NACO process, as the provided literature made clear. The fact that they were the only option which seemed likely to be feasible for converting a significant portion of our data, largely on economic grounds, made them my primary choice from this point onwards.

Testing

The next stage of the project was testing. It was decided to benchmark the three options we were considering against each other, using the same set of data sent to all three vendors. This would be a 1000-record sample, and we determined various different kinds of records to request data for, of different resource formats, which were selected by myself and the cataloguer. I decided to additionally check the entirety of a section of the catalogue, “Smith, I”, to the extent provided as part of the service. In addition to this selection, I procured the extra data that would be supplied by OCLC for material we would not be able to afford as part of the paid selections, to compare the impact of partial conversion by the vendors with full (but less controlled) data from OCLC. I loaded these three datasets into the catalogue, where they could be viewed and checked with full catalogue functionality and compared to existing records. On top of that, I decided to compile the entirety of the master record headings that I downloaded from OCLC master records of our holdings, with the hope of impressing the rest of the project team with the extent of the OCLC option, which I and the Metadata Library Assistant laboriously compiled into Excel spreadsheets.


This preparation took significant time, and while undertaking the work, I provided the rest of the project team with relevant instructions and a description of what there was, how to access it, and so forth, and we subsequently had a meeting to discuss findings. However, having spent my time preparing the data for assessment by others on the project, I undertook a minimum of evaluation myself, and at the point of evaluating the findings, discovered that the rest of the project team had been expecting me to undertake the majority of the evaluation too, and had made few assessments themselves, other than the Special Collections cataloguer, whose findings were fairly critical. My own position was similarly one of not being entirely convinced by any of the vendors, and, at this point, we were in a state of real doubt over the viability of the providers, when my line manager decided we should take a step back and devise a more systematic form of evaluation. We compiled a collection of tests, the cataloguer compiled some examples from the sample representing various resource types, and we split the task of evaluation between myself, my line manager, and the head of the section.

At around the same time, I followed up the director of CCD’s comment that the deputy director of the library had complained regarding author discoverability, and decided to investigate his complaint. His example, a significant author tellingly known prevalently by initials (E.M. Forster), confirmed and worsened my fears regarding the issues our fullest form data presented to our now keyword-based systems (which didn’t even seem to pick up initials or initialised names from statements of responsibility), and strengthened my resolve to convert to a more standard approach. So, we ran through the data from the sample, applying some fairly basic evaluations and scoring, each of us comparing a number of different elements and then comparing final cumulative scores.

…and the winner was OCLC

Evaluating on both quality and consistency, I was surprised by the result. I’d been primarily hoping for OCLC to succeed as the chosen system on the basis of extent, so was surprised to find it also turned out to be slightly more consistent than either of the costed vendors, although one of these came a close second. The third place was significantly behind, however. Interestingly, this evaluation and my earlier evaluations had also found OCLC to be the only one of the three not to apply an author heading inaccurately to a work, one vendor doing so more than the other. It made the choice an easy one, which by that point was a relief.

Implementation – one into many

As mentioned, the OCLC approach had been expected to be more labour-intensive than either of the fully costed services, and it presented a particular problem. OCLC master records had one record for each expression, where we tend towards one per manifestation, or sometimes more. Our local system bibliographical numbers were the only local data on the OCLC records (in the 907 field), so would be the best data to match on. Where multiple local records matched a single OCLC record, multiple local bib(liographic) numbers would be provided on a single master record. So, looking at the list below, each line representing a master record 907 field, you can see the kind of matching problems we had.
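As an indication of what such a list looks like (the bib numbers below are invented for illustration, following Sierra’s “.b” record number convention, rather than taken from our data), some master record 907 fields carried a single local bib number while others carried several repeated $a subfields:

907 $a.b11111111
907 $a.b22222221$a.b22222222
907 $a.b33333331$a.b33333332$a.b33333333$a.b33333334$a.b33333335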


We therefore had an issue of only being able to match on one bib number per record imported, whereas we needed separate incoming records for each local bib number to successfully overwrite all matching records.

Workaround mania

We used MarcEdit for preparation, and I formulated an approach of splitting off multiple bib numbers into additional fields, then using the “select records” feature in MarcEdit to create additional record sets for each entry in an additional field. Where duplicates went past 4-5, the dataset for the greatest number would then be manually edited by the Metadata Library Assistant, there being too many local bib numbers per record to want to process globally (quite a slow process in MarcEdit), making manual creation more efficient. So, to go through this approach step by step, I’d:

a) Change the first bib number’s $a in the 907 field to $b - easy to undertake by matching on the earlier data in the field:
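Using the same invented bib numbers as above, a 907 field with three local bib numbers would change from:

907 $a.b33333331$a.b33333332$a.b33333333

to:

907 $b.b33333331$a.b33333332$a.b33333333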


b) Use the swap field utility to swap the remaining bib numbers (still in $a) into a nonstandard 999 field, where they would now be separated by spaces:
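Continuing the invented example, the record would now contain:

907 $b.b33333331
999 $a.b33333332 .b33333333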

c) Use the modify subfield option to add a subfield $b to any further duplicates in the 999 field, leaving the first in the field in $a:
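In the invented example, the 999 field becomes:

999 $a.b33333332$b.b33333333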


d) Use the swap field utility to switch the additional local bib numbers into a 998 field. This process was then repeated into 997 and 996. Subsequent additional bib numbers (representing the 6th duplicate and more of each record) would then be manually deduplicated. The result would then be a number of additional local system numbers separated out into additional fields, which could then be used to generate additional records to match the configuration on our system:
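So, for a record with (in this invented example) five local bib numbers, the end state would be along these lines, with one local system number per field:

907 $b.b33333331
999 $a.b33333332
998 $a.b33333333
997 $a.b33333334
996 $a.b33333335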


e) I’d been to a presentation by Terry Reese (the creator of MarcEdit),¹⁰ who hadn’t immediately come up with an alternative to the above process, but did suggest or streamline the succeeding process. Having split off the local bib numbers into separate fields, I could then create additional records, one per bib number, by using the select records function in MarcEdit repeatedly on each of the 99x fields to create an additional record set for each, then changing the relevant 99x field in each file into a 907 field. In this way, I managed to circumvent the one-to-many issue and proceed onto the next part of the process.
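For anyone wanting to reproduce the effect of steps a)-e) outside MarcEdit, the sketch below shows the same one-to-many splitting in Python, working on a MarcEdit mnemonic (.mrk) export rather than on binary MARC. This is not the workflow described above, just an illustration of its logic; the file names are hypothetical, and it assumes the local bib numbers sit in repeated $a subfields of a single 907 field per record.

import re

def split_records(text):
    """Yield each record in a .mrk file as a list of field lines."""
    record = []
    for line in text.splitlines():
        if line.strip():
            record.append(line)
        elif record:
            yield record
            record = []
    if record:
        yield record

def explode_by_bib(record):
    """Return one copy of the record per local bib number found in its 907 $a."""
    for i, field in enumerate(record):
        if field.startswith('=907'):
            bibs = re.findall(r'\$a(\.b\w+)', field)
            if not bibs:
                break
            copies = []
            for bib in bibs:
                copy = list(record)
                copy[i] = '=907  \\\\$a' + bib  # one bib number per output record
                copies.append(copy)
            return copies
    return [record]  # no 907 or no bib numbers: pass the record through unchanged

if __name__ == '__main__':
    # File names are hypothetical.
    with open('worldshare_export.mrk', encoding='utf-8') as fh:
        records = list(split_records(fh.read()))
    exploded = [copy for rec in records for copy in explode_by_bib(rec)]
    with open('worldshare_split.mrk', 'w', encoding='utf-8') as fh:
        fh.write('\n\n'.join('\n'.join(rec) for rec in exploded) + '\n')

Note that the MarcEdit route above leaves the first bib number in $b rather than $a; whichever subfield the load table actually matches on would need to be kept consistent.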

Safety first

In preparation for the load, I backed up the whole catalogue in MARC files on the staff hard drive. While I had been prepping the Worldshare MARC files in MarcEdit, I’d duplicated the name headings into 911-918 fields:

100=911  110=912  111=913
600=914  610=915  611=916
700=917  710=918  711=918 (plus $7conf – see below)
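The duplication itself was done in MarcEdit; purely as an illustration of the mapping, and reusing the record-as-list-of-lines representation from the earlier sketch (nothing here is the actual MarcEdit configuration), it amounts to copying each name heading field under a 91x tag, with the extra $7conf marker on copied 711 fields:

# Illustrative only: copy name heading fields into 911-918 backup tags.
BACKUP_TAGS = {
    '100': '911', '110': '912', '111': '913',
    '600': '914', '610': '915', '611': '916',
    '700': '917', '710': '918', '711': '918',
}

def add_backup_fields(record):
    """Append a 91x copy of each 1XX/6XX/7XX name heading in a .mrk record."""
    copies = []
    for field in record:
        tag = field[1:4] if field.startswith('=') else ''
        if tag in BACKUP_TAGS:
            copy = '=' + BACKUP_TAGS[tag] + field[4:]
            if tag == '711':
                copy += '$7conf'  # distinguish 711s sharing the 918 tag
            copies.append(copy)
    return record + copies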

10. Reese, Terry. 2019. “MarcEdit and metadata trends”. Presentation. (UCL, London, 6/6/2019.) https://www.youtube.com/watch?v=PFO9tDLxXFg (Part 1) [Accessed: 28/11/2020] https://www.youtube.com/watch?v=dXN5bLsud8E (Part 2) [Accessed: 28/11/2020]


I then created a new load table to load these nonstandard local fields in, rather than directly overwriting the active, standard fields. The new load table brought in these fields and otherwise did nothing to the record, matching on local bib number. Somehow I couldn’t arrange the insertion line in the load table to cover the whole range of field numbers, so put in an entry for each field, which hit a limit of 8 entries, hence the requirement to use 918 for both 710 and 711 as indicated above, the addition of a “$7conf” subfield entry being an effective way of differentiating between the two fields. I then loaded in all the records. Once the data was in, I would copy the existing data into 921-929:

100=921  110=922  111=923
600=924  610=925  611=926
700=927  710=928  711=929

...and then copied all the 911-918 incoming data into the relevant fields. This way the data was doubly backed up, both by the initial bulk exports and also in each record. In a conversion project of this scale, to back up both at a full collection and an individual record level seemed reasonable, and was proven to be so.


Load, check and roll back when needed

The whole process was implemented in early September 2019, in the two weeks before the students arrived. All records updated were given a 599 field indicating they had been overwritten, although for records without any such entries it could not be determined whether they had been matched or not (since they may well have been “overwritten” with nil incoming data, and so remained literally unchanged). 85% of our approximately 2,000,000 records were overwritten in the first attempt. Records which had been managed by our Special Collections staff within a recent period had already been converted to RDA, so were omitted from the switchover. Various staff across multiple sections and subsections of the library were asked to do some checking, particularly Special Collections, and we received much positive feedback, so the project was considered to be a success.

But…

Even though the newly input authority data was generally very effective, there were some problems. The basic records for print offprints were often mismatched, and still require correction. Similarly, a large selection of science fiction article records (the University hosts one of the largest science fiction archives in the world) are overmatched, in that a single example of a regular column would often be repeatedly matched incorrectly against multiple records for different issues, misrepresenting the coverage of all but a single issue’s instalment of the article. Further issues also seemed to particularly plague science fiction, with a higher than typical error rate in accuracy and consistency, to the point that some of the data seemed to be science fiction itself. For example...


John Wyndham did write time travel science fiction, but that’s no excuse for having died 250 years before he was born. Similarly, for Philip K Dick, his middle name was changed in the blink of a global update from Kindred to Kendred. Or did it? Maybe it was the other way round? Who can tell? Maybe it was both…

…And I’d decided to start bringing in $0s to indicate definite NACO entries, and also to use them to generate authority records, but these subfields (translated as |0s; Sierra uses pipes rather than dollar signs for subfields) had to be transformed into nonstandard |7s, since |0s ended up displaying on the catalogue and affecting the indexing of results, whereas |7s did neither.
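So a heading whose entry ended in a |0, along these lines (an invented example; the identifier shown is a placeholder, not a real NACO URI):

Lewis, C. S.|q(Clive Staples),|d1898-1963|0http://id.loc.gov/authorities/names/n00000000

had to be changed to end in a |7 instead:

Lewis, C. S.|q(Clive Staples),|d1898-1963|7http://id.loc.gov/authorities/names/n00000000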


Furthermore, there was the remaining unmatched 15%. I queried this missing data with Paul Shackleton from OCLC, and he indicated that a large amount of data was not matching because of nonstandard publication data in incorrect fields and/or subfields (split between 260 and a nonstandard 262). I edited this data in Sierra and MarcEdit and corrected it into an AACR2 format, which, after the data was sent back for more matching attempts, resulted in a substantial addition to the records we could load master record data to. However, because of the lag between resubmitting records and receiving the master record supply from OCLC, we had to perform a rollback where Special Collections had edited some of their records – a slightly messy operation, assessing the extent of the problem and determining the criteria for records to roll back, so I decided against further attempts to tidy up the remaining non-updated records. Nonetheless, this little operation upped the converted records rate by a significant 5%, from 85% to 90%, leaving only a minority of records unconverted. Finally, OCLC don’t seem to use $t analytic entries, so local records with such entries had to be manually reverted, but these would then have the original local format of heading (from the 927 data), so manual corrections were required to bring these old headings up to NACO headings. So, the project had included some pockets of problems, but, overall, it seemed to have been a definite success.

Ongoing (early 2020)

So this was the core of the conversion undertaken, but there was still work to be done for ongoing maintenance. We still didn’t have the authority module active, or the authority records created or loaded. I also hadn’t wholly firmed up an update procedure ... We always have numerous projects in process at any one time, and several of these took priority for a while, including an enhancement project being partly brought forward to upgrade pagination data in 300 fields to facilitate automated stock management with IMMS software, and work for a GreenGlass project, but more delay was to come because of…

Takeover! Strike! Plague!

Our negotiations to activate the authority module took a downturn when we were told we didn’t already have access to the module after all, so we were waiting for a quote to purchase it. While awaiting the quote, the LMS provider Innovative was taken over by ProQuest, at which point they became understandably preoccupied with internal reorganisation, so progress with module negotiation ceased. Furthermore, UCU (the University and College Union) were on strike, and I’m a member, so I dutifully disrupted the project by being on strike. Thirdly, and almost immediately upon return from strike, COVID-19 hit us, of course, just like it hit the rest of the world, and contingency plans for operations dominated our time for a while. Worse, the grim financial outlook resulting from lockdown meant a new financial restraint which threatened the purchase of the authority module. Luckily, I’d been in contact with Martha Sanders, Innovative’s principal authority expert, and she came to the rescue. She contacted Innovative’s Dublin boss and it was discovered that we had purchased the module after all in 2016, so the load profile trainer had been right all along and further questionable expense was averted, ironically providing a silver lining to a sky full of clouds.


Authority record creation

As we settled into lockdown, we continued to progress ongoing maintenance. I decided to use Will Peaden’s authority record generation approach over Akron’s, not least because the latter required the authority module to have already been installed, and also involved data in SQL, whereas the Aston approach was all in the more familiar MARC and .txt formats. The two approaches were similar, as was a further approach suggested by Martha Sanders (by Stacey Wolf, University of North Texas¹¹,¹²), but Aston’s approach to authority generation seemed fairly user-friendly, so I went with that. Trialling the process, I discovered I could skip part of the procedure: Aston’s approach required generation of NACO URIs as step one, but I already had that data from the imported |0s, so could go straight to the second stage of the operation, after exporting the data from Sierra, tidying it up with Notepad find/replace, and de-duplicating using that function in Excel. We then went straight to the Z39.50/SQL client, which seemed to work erratically in MarcEdit 7, so I switched back to using MarcEdit 6, as Aston had been using at the time of the demonstration of the process nearly two years earlier. It’s a slow process, undertaken in batches of about 2500, and takes long periods to generate the authority record data, but being at home because of the plague helped, not least because we could keep running generation into the evening since we were all working from home.
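The tidy-up and de-duplication step was done by hand with Notepad and Excel; a small script can do the same job. The sketch below is not a record of the actual procedure: the file names are hypothetical, and it simply assumes the identifiers sit in pipe-delimited |0 (or, after the conversion described above, |7) subfields in an exported field file. It writes one unique identifier per line, ready to feed into MarcEdit’s Z39.50 retrieval.

import re

def unique_identifiers(lines, codes=('0', '7')):
    """Collect unique identifier values from pipe-delimited |0 / |7 subfields."""
    seen = []
    for line in lines:
        for code in codes:
            for value in re.findall(r'\|' + code + r'([^|]+)', line):
                value = value.strip()
                if value and value not in seen:
                    seen.append(value)
    return seen

if __name__ == '__main__':
    # File names are hypothetical.
    with open('sierra_heading_export.txt', encoding='utf-8') as fh:
        ids = unique_identifiers(fh)
    with open('naco_identifiers.txt', 'w', encoding='utf-8') as fh:
        fh.write('\n'.join(ids) + '\n')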

Consultancy and Module Activation

At around this time we were undertaking an Innovative consultancy to improve metadata efficiencies, so took the opportunity to ask for the best approach to implementation of the authority process and module.¹³ We sketched out a schedule for authority module implementation, and the consultant, Eva, liaised with the head office in Dublin to finally get our authority module installed and activated, which happened at the end of July, after which I sorted out an overwriting load table. I wanted a load table that overwrote rather than added duplicate records, so went for the “anam” over the more standard “a” load table, and localised the table by adding a missing field, although why it had been omitted wasn’t clear. We then loaded the name authority records. So, finally, the system was in place, almost bringing us up to date, in August 2020.

11. Wolf, Stacey. 2019. “Automating the Authority Control Process.” (Ohio Valley Group of Technical Services Librarians Annual Conference 2019) https://digital.library.unt.edu/ark:/67531/metadc1506776/ [Accessed: 12/11/2019]
12. Wolf, Stacey. 2020. “Automating Authority Control Processes”. Code4Lib Journal, 47. https://journal.code4lib.org/articles/15014 [Accessed: 18/2/2020]
13. Lachonius, Eva. University of Liverpool Consultation. (Innovative Interfaces Inc., 2020)


So are we finished?

Not quite. There still remain a few areas to correct, install, and tidy up. We still need to check the system to ensure it’s working, and still need to finish loading the other authority records. We only loaded the personal name authorities, likely the largest aspect, but we still need to load the Corporate, Conference, and Subject authority records. I’ve got a plan for how to establish a practice of regular authority record generation with $0 data from a continuing OCLC record supply, but whether this approach is adequate, or whether other approaches may be needed, still has to be investigated. We were meant to undergo NACO training in March, but virus planning led to a delay, so we will now undertake the training toward the end of the month (September 2020). Also, not to have big ambitions, but I’m looking to establish a UK NACO Funnel to input into and benefit from the updating of ‘live’ authority data, all of which is very exciting!

If in doubt, ask

So, that’s where we are, and a summary of plans. I’d just like to add that something which was confirmed during this project was the value of asking the experts where beneficial, so I’m just going to list them here, and thank them directly now for their help.

Thanks:
OCLC services – Paul Shackleton, also George Bingham
MarcEdit – programmer Terry Reese at the CILIP session
Innovative – Colin Shaper for the original info, Martha Sanders (authority consultant), Eva Lachonius during the consultancy
Brackets issue – Graeme Leng-Ward and Ed Kirkland, University of Warwick
Plans for generating authority records – Will Peaden from Aston University, also Mike Monaco from the University of Akron, Stacey Wolf from the University of North Texas
Email lists (particularly Wendy Gunther), other colleagues, users via survey
