Wednesday, September 29, 2010

Stroyan, Sullivan Named Candidates for ALA Presidency


Susan Stroyan and Maureen Sullivan, two academic librarians, have been named candidates for the 2012-13 presidency of the American Library Association (ALA), according to an ALA press release.
Stroyan, a 34-year veteran who's served in public, special, multitype, and academic libraries, is now the Information Services Librarian at the Ames Library at Illinois Wesleyan University in Bloomington, IL.
Her service to the profession includes chairing the ALA Awards Committee and serving three times on the National Conference Executive Committee of the Association of College and Research Libraries (ACRL). She is a former president of the Illinois Library Association.
She has a B.S. in library science from Illinois State University, and an M.S. and Ph.D. in library science from the University of Illinois.

"I have worked in libraries since I was 16 years old," Stroyan said. "My front line experiences have provided me the opportunity to perform or supervise most aspects of library work. I am honored, humbled, and excited at the possible opportunity to serve ALA as President."
Sullivan, who serves as a consultant to libraries of all types, is a professor of practice in the Ph.D. program, Managerial Leadership in the Information Professions, at the Simmons College Graduate School of Library and Information Science.
She spent 12 years as the human resources administrator in the libraries at the University of Maryland and at Yale University. She is a past president of ACRL and was ACRL's Academic/Research Librarian of the Year (see LJ interview). She co-chaired ALA President Roberta Stevens' initiative, Our Authors, Our Advocates.
She has a BA in history and an MLS from the University of Maryland.
"I'm delighted to have this opportunity," she said. "The new strategic plan offers an excellent framework for ALA to lead in the digital world. ALA must be at the table when key decisions are made that will affect the future of intellectual freedom, access to information, literacy, and lifelong learning."

Sue Stroyan

HISTORY OF SUSAN STROYAN

BLOOMINGTON, Ill. – Illinois Wesleyan University Information Services Librarian Susan Stroyan has been named as one of two candidates for the 2012-13 presidency of the American Library Association (ALA).
Founded in 1876, the ALA is the oldest and largest library association in the world, with members in academic, public, school, government and special libraries. It serves the more than 122,000 libraries in the United States, working to strengthen libraries, the profession, and the public’s access to information.
“I am honored, humbled, and excited at the possible opportunity to serve ALA as President,” said Stroyan. “I’m inspired by our colleagues around the country that have achieved high levels of success by leading their communities to new heights of library awareness in these difficult economic times.”
Results of the election will be announced in May 2011.
Employed at Illinois Wesleyan since 1992, Stroyan was named Illinois Academic Librarian of the Year in 2000. She earned a bachelor’s degree in library science from Illinois State University in 1972, and a master’s degree and doctorate from the University of Illinois at Urbana-Champaign in 1973 and 1986 respectively. She was named Illinois State University Honored Alumna in 2002.
Active in the ALA since 1975, Stroyan has served as a member-at-large on ALA Council and on the ALA Self Study Committee. She chaired the ALA Awards Committee and was a member of the National Conference Executive Committee.
Stroyan has also promoted education outside the ALA. She has been a participant and mentor in the Small College Mentor Program, served as president of the Beta Phi Mu International Library and Information Studies Honor Society, and has held offices in state and regional library associations, including serving as president of the Illinois Library Association from 1995 to 1996.

Tuesday, September 28, 2010

Local bodies fail to remit library cess to the LLA

 
 
CHENNAI: The recently inaugurated Anna Centenary Library (ACL), built at Kotturpuram in the city with world-class facilities, has won appreciation from various quarters, but the state public library department is grappling with such a severe financial crunch that it struggles even to procure books, owing to huge arrears in the remittance of library cess by the local bodies.

A whopping sum of Rs 116 crore, including Rs 60.42 crore from the ten municipal corporations across the state, was due to the Local Library Authorities (LLA) as of March 31, 2010. The Chennai Corporation is a major defaulter, owing Rs 22.90 crore as library cess, officials of the public library department said.

The library cess is levied under section 12(1)(a) of the Tamil Nadu Public Libraries Act 1948 in the form of a surcharge on the property tax or house tax levied under the Tamil Nadu District Municipalities Act 1920. As per a G.O. issued by the Education Department in 1992, the library cess was enhanced from five paise to ten paise per Re 1, working out to 10% of the total property tax collected.

The entire amount of the library cess has to be transferred to the LLA by the local bodies; from it, expenditures such as the purchase of books and periodicals and the development of infrastructure at the public libraries are met.

The Rs 200-crore ACL has been constructed using LLA funds, with the government contributing Rs 20 crore as a grant. Nearly 5 lakh books have been procured at an estimated cost of Rs 80 crore. However, due to the paucity of funds, a sum of Rs 50 crore had to be borrowed from the finance department to make payments to publishers, the officials added.

The public library department has drawn up an ambitious plan to stock about 12 lakh books in the ACL over a period of five years, which would make it the second largest library in the country after the National Library in Kolkata, so officials are concerned about the lukewarm response of local bodies towards remittance of library cess to the LLA.

When contacted, Chennai Corporation commissioner Rajesh Lakhoni told TOI that the corporation had been allocating some amount for the library every year after getting the council's approval. But, there was no government order that 10% of the property tax collected should be remitted to the LLA, he argued.

However, a G.O. issued by the school education department on April 23, 2008, had clearly instructed that the Chennai Corporation should collect library cess at the rate of 10 paise per rupee while levying property tax with effect from April 1, 2008, the officials pointed out.


Read more: Local bodies fail to remit library cess - The Times of India: http://timesofindia.indiatimes.com/city/chennai/Local-bodies-fail-to-remit-library-cess/articleshow/6639646.cms

Monday, September 20, 2010

RSS: Really Simple Syndication

 

What is RSS?

RSS stands for "Really Simple Syndication." It is a Web feed format that websites use to distribute their content: sites that support RSS publish special XML files, called "feeds", that are updated periodically to contain the site's latest information.
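
For the technically curious: a feed is just an XML file, so it can be read with very little code. The minimal sketch below (the feed URL is a placeholder, and element layouts vary slightly between RSS versions) fetches a feed and prints the title and link of each item using only Python's standard library.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/feed.xml"  # placeholder; substitute a real feed URL

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# In RSS 2.0 each entry is an <item> element under <channel>.
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(no title)")
    link = item.findtext("link", default="")
    print(title, "-", link)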

 

What does that have to do with me?

Using RSS feeds is an easy, efficient way to keep track of when a site adds new content. Instead of checking Site A, Site B, and Site C every day to see if there's anything new, you can subscribe to each site's RSS feed and will be instantly notified by your RSS Reader when new content is added.

 

I think I get it. But... what's an RSS Reader?

An RSS Reader is an application you use to view RSS feeds. Similar to how an email application can gather messages from multiple email accounts, an RSS Reader gathers feeds from multiple sites so you can view them in one location.
There are two types of RSS Readers: Web Readers and Desktop Readers. The difference between the two is fairly minimal. The main advantage of Web Readers is that they store all of your subscribed feeds on the internet, so you can access them on any computer. On the other hand, Desktop Readers download subscribed feeds to your hard drive, so you can view them even if you don't have an internet connection.
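
To make the idea concrete, here is a minimal sketch of what any reader does behind the scenes: it polls a list of feeds and collects the entries it has not shown before. The subscription URLs are placeholders, and a real reader adds scheduling, error handling and persistent storage on top of this.

import urllib.request
import xml.etree.ElementTree as ET

# Placeholder subscriptions; substitute real feed URLs.
SUBSCRIPTIONS = [
    "http://site-a.example.com/feed.xml",
    "http://site-b.example.com/feed.xml",
]

seen_links = set()  # a real reader would persist this between runs

def poll(feed_url):
    """Return (title, link) pairs for items not seen on earlier polls."""
    with urllib.request.urlopen(feed_url) as response:
        root = ET.parse(response).getroot()
    new_items = []
    for item in root.iter("item"):
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            seen_links.add(link)
            new_items.append((item.findtext("title", default="(no title)"), link))
    return new_items

for url in SUBSCRIPTIONS:
    for title, link in poll(url):
        print("New from", url, "-", title, "-", link)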

 

I'm on board. Where can I get an RSS Reader?

Google Reader (Web), FeedDemon (Windows), and NewsFire (Mac) are popular readers that are available for free. Our guides have written plenty about other readers if you would like more options.
Once you select an RSS Reader, you can start subscribing to site feeds.

 

Okay, I've got an RSS Reader. Now how do I subscribe to a site's feed?

If a feed exists for a site, many modern browsers will display an RSS icon in the right corner of the location bar. Also, most sites display an RSS icon somewhere on the page. Clicking on either of these icons will allow you to subscribe to the site's feed.
On About.com, if you scroll to the bottom of just about any page and look to the right, you will see our RSS icon. Clicking on this icon will take you to a page with links to four different feeds. Each About.com GuideSite offers RSS feeds for the site's latest headlines, hottest articles, and most popular articles. In addition, you can also subscribe to About Today, which is a daily-updated feed of interesting articles from the entire About.com network. If you use My Yahoo!, Google Reader, or My AOL, there are special links that will add the selected feed directly to your Reader. Otherwise, click on the "RSS" link. Depending on which browser you use, you will either be prompted to add the feed to your default RSS Reader or you will have to copy the URL and enter it manually into your RSS Reader.

 

You've been so helpful and polite. However, I still have a few more questions. Where can I get more information?

Many of the About.com guides have written articles about RSS, both from a user's perspective and from a developer's or blogger's point of view. If you're still not satisfied, an About.com search for 'What is RSS?' brings up even more information from other guides.

Thursday, September 16, 2010

Chennai now boasts South Asia’s largest library


In a big boost to book lovers, the publishing industry and to the public library networking concept, the Anna Centenary Library (ACL), a magnificent eight-storey structure said to be South Asia’s largest and most elegantly designed state-of-the-art library, was unveiled here on Wednesday evening.

The new library was inspired by an ambition to match the most impressive library in the region, the National University Library in Singapore. With it, the DMK government has added another notable public building to the city's architectural landscape, after its prestigious new Assembly complex.

Built over a massive 3.75 lakh sq ft in Kotturpuram, the library has cost the state PWD about Rs 180 crore, a big-ticket expenditure.

The ACL, which was declared open here on Wednesday by Chief Minister M Karunanidhi in the presence of Finance Minister K Anbazhagan and School Education Minister Thangam Thennarasu among other top ministers and dignitaries, is designed to stock a massive 1.20 million books in all major languages of the world, besides providing access to two lakh ‘e-books’ and 20,000 ‘e-journals’.

The ACL will not only be networked to all other public libraries across the state, led by the famous Connemara Public Library in Chennai, but will also accommodate the country's oldest manuscript library, 'The Oriental Manuscripts Library', officials said. The latter is now housed in the Madras University Library complex.

A “very special feature” of the ACL is that it will have a digital edge from the start, being a partner of the World Digital Library (WDL) project, says Thennarasu. This will give it access to primary sources of knowledge of countries and cultures across the globe. So far, the Allama Iqbal Library of the University of Kashmir is the only library in the country connected to the WDL network.

To start with, four floors of the centrally air-conditioned library are ready. The project, more than two years in the making, will take some more time to be fully operational. However, sources said, this delay has been offset by a number of novel, world-class features incorporated in the ACL. These include a Braille section for the visually impaired, a captivating separate section for children's books with a huge replica of the "Tree of Knowledge" rooted in its heart, a 1,280-capacity auditorium, two conference halls for major seminars and an amphitheatre that can hold more than 800 people at a time. A food court with a range of cuisine is another facility. Officials hoped all of this would attract more visitors, young people in particular.

Tuesday, September 14, 2010

Intellectual freedom

Intellectual freedom is the right to freedom of thought and of expression of thought. As defined by Article 19 of the Universal Declaration of Human Rights, it is a human right. Article 19 states:


Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.[1]

Intellectual freedom is promoted by several professions and movements. These entities include, among others, librarianship, education, and the Free Software Movement.

Issues


Intellectual freedom is a broad topic covering many areas. Some of these topics are academic freedom, Internet filtering, and censorship.

Intellectual freedom and librarianship


The profession of librarianship views intellectual freedom as a core responsibility. The International Federation of Library Associations and Institutions' (IFLA) Statement on Libraries and Intellectual Freedom "calls upon libraries and library staff to adhere to the principles of intellectual freedom, uninhibited access to information and freedom of expression and to recognize the privacy of library users." IFLA urges its members to actively promote the acceptance and realization of intellectual freedom principles. IFLA states: "The right to know is a requirement for freedom of thought and conscience; freedom of thought and freedom of expression are necessary conditions for freedom of access to information."[3]


Individual national library associations expand upon these principles when defining intellectual freedom for their constituents. For example, the American Library Association's Intellectual Freedom Q & A defines intellectual freedom as: "[T]he right of every individual to both seek and receive information from all points of view without restriction. It provides for free access to all expressions of ideas through which any and all sides of a question, cause or movement may be explored. .... Intellectual freedom encompasses the freedom to hold, receive and disseminate ideas."[4]


The Canadian Library Association's Position Statement on Intellectual Freedom states that all persons possess "the fundamental right ... to have access to all expressions of knowledge, creativity and intellectual activity, and to express their thoughts publicly."[5] This right was enshrined in law in British Columbia in 2004, granting libraries protection against litigation over their holdings.[6]


Many other national library associations have similarly adopted statements on intellectual freedom.

Intellectual freedom under authoritarian rule


Intellectual freedom is often suppressed under authoritarian rule.[7] Such governments typically claim to uphold intellectual freedom, although the freedom they grant is often only nominal and its extent is a matter of dispute. The former USSR, for example, claimed to provide intellectual freedom, but some analysts in the West have stated that the degree of intellectual freedom there was nominal at best.[8]



"Although true-blue defenders of communism and fascism differed in their professed objectives relative to human welfare, the systems were alike in two essential respects: in the suppression of civil liberties, representative government, and intellectual freedom.... This was generally recognized in the United States.

Wednesday, September 8, 2010

Natural Language Processing (NLP)

Natural language processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages.[1] In theory, natural-language processing is a very attractive method of human-computer interaction. Natural-language understanding is sometimes referred to as an AI-complete problem, because natural-language recognition seems to require extensive knowledge about the outside world and the ability to manipulate it.




NLP has significant overlap with the field of computational linguistics, and is often considered a sub-field of artificial intelligence.

History
The history of NLP generally starts in the 1950s, although work can be found from earlier periods. During the 1970s many programmers began to write 'conceptual ontologies', which structured real-world information into computer-understandable data: MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert, 1981).

During this time, many chatterbots were written including PARRY, Racter, and Jabberwacky.
Starting in the late 1980s, as computational power increased and became less expensive, more interest began to be shown in statistical models for machine translation.

Tasks and limitations


Although NLP may encompass both text and speech, work on speech processing has evolved into a separate field. Natural language generation systems convert information from computer databases into readable human language. Natural language understanding systems convert samples of human language into more formal representations such as parse trees or first-order logic structures that are easier for computer programs to manipulate. Many problems within NLP apply to both generation and understanding; for example, a computer must be able to model morphology (the structure of words) in order to understand an English sentence, and a model of morphology is also needed for producing a grammatically correct English sentence.



 Subproblems

Speech segmentation

In most spoken languages, the sounds representing successive letters blend into each other, so the conversion of the analog signal to discrete characters can be a very difficult process. Also, in natural speech there are hardly any pauses between successive words; the location of those boundaries usually must take into account grammatical and semantic constraints, as well as the context.

Text segmentation

Some written languages like Chinese, Japanese and Thai do not have single-word boundaries either, so any significant text parsing usually requires the identification of word boundaries, which is often a non-trivial task.
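
A simple baseline for finding word boundaries (far from the state of the art, and the toy dictionary below is invented purely for illustration) is greedy maximum matching: at each position, take the longest dictionary word that matches, falling back to a single character.

def max_match(text, dictionary):
    """Greedy longest-match word segmentation for text written without spaces."""
    words, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            # Prefer the longest dictionary match; fall back to one character.
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

# English with the spaces removed, standing in for an unsegmented script.
dictionary = {"the", "table", "down", "there"}
print(max_match("thetabledownthere", dictionary))  # ['the', 'table', 'down', 'there']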

Part-of-speech tagging
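
Given a sentence, the task is to determine the part of speech of each word; many common words can serve as several parts of speech. As a concrete illustration, here is a minimal sketch using NLTK, the Python toolkit listed under software tools below; it assumes the standard NLTK tokenizer and tagger models have already been downloaded.

import nltk

# Requires the NLTK models for tokenization and tagging, which can be
# fetched once with nltk.download("punkt") and
# nltk.download("averaged_perceptron_tagger").
sentence = "Time flies like an arrow"
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# The tagger commits to a single reading (for example, tagging "flies" as a
# verb) even though the sentence is ambiguous, as discussed further below.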

Word sense disambiguation

Many words have more than one meaning; we have to select the meaning which makes the most sense in context.
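
One simple, classical approach is the Lesk algorithm, which picks the WordNet sense whose dictionary gloss overlaps most with the surrounding words. NLTK ships an implementation; the sketch below assumes the WordNet corpus has been downloaded (nltk.download("wordnet")) and uses an invented example sentence.

from nltk import word_tokenize
from nltk.wsd import lesk

# Disambiguate "bank" in a financial context; lesk() returns a WordNet
# synset, or None if no sense can be chosen.
context = word_tokenize("I went to the bank to deposit my money")
sense = lesk(context, "bank")
print(sense, "-", sense.definition() if sense else "no sense selected")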



Syntactic ambiguity

The grammar for natural languages is ambiguous, i.e. there are often multiple possible parse trees for a given sentence. Choosing the most appropriate one usually requires semantic and contextual information. Specific problem components of syntactic ambiguity include sentence boundary disambiguation.

Imperfect or irregular input

Foreign or regional accents and vocal impediments in speech; typing or grammatical errors, OCR errors in texts.

Speech acts and plans

A sentence can often be considered an action by the speaker. The sentence structure alone may not contain enough information to define this action. For instance, a question is sometimes the speaker requesting some sort of response from the listener. The desired response may be verbal, physical, or some combination. For example, "Can you pass the class?" is a request for a simple yes-or-no answer, while "Can you pass the salt?" is requesting a physical action to be performed. It is not appropriate to respond with "Yes, I can pass the salt," without the accompanying action (although "No" or "I can't reach the salt" would explain a lack of action).

 Statistical NLP

Main article: statistical natural language processing

Statistical natural-language processing uses stochastic, probabilistic and statistical methods to resolve some of the difficulties discussed above, especially those which arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of corpora and Markov models. Statistical NLP comprises all quantitative approaches to automated language processing, including probabilistic modeling, information theory, and linear algebra[4]. The technology for statistical NLP comes mainly from machine learning and data mining, both of which are fields of artificial intelligence that involve learning from data.
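
As a minimal illustration of the statistical approach, the sketch below estimates a bigram (first-order Markov) model from a toy corpus; the corpus is invented purely for the example, and real systems use large corpora plus smoothing to handle unseen word pairs.

from collections import Counter, defaultdict

# Toy corpus (an assumption for illustration only).
corpus = [
    ["time", "flies", "like", "an", "arrow"],
    ["fruit", "flies", "like", "a", "banana"],
]

# Count how often each word follows each preceding word (or sentence start).
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    for prev, word in zip(["<s>"] + sentence, sentence + ["</s>"]):
        bigram_counts[prev][word] += 1

def bigram_prob(prev, word):
    """Maximum-likelihood estimate of P(word | prev); no smoothing."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][word] / total if total else 0.0

print(bigram_prob("flies", "like"))  # 1.0 in this toy corpus
print(bigram_prob("like", "an"))     # 0.5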



 Major tasks in NLP

Automatic summarization

Foreign language reading aid

Foreign language writing aid

Information extraction

Information retrieval (IR) - IR is concerned with storing, searching and retrieving information. It is a separate field within computer science (closer to databases), but IR relies on some NLP methods (for example, stemming). Some current research and applications seek to bridge the gap between IR and NLP.

Machine translation - Automatically translating from one human language to another.

Named entity recognition (NER) - Given a stream of text, determining which items in the text map to proper names, such as people or places. Although in English, named entities are marked with capitalized words, many other languages do not use capitalization to distinguish named entities.

Natural language generation

Natural language search

Natural language understanding

Optical character recognition

Anaphora resolution

Query expansion

Question answering - Given a human-language question, the task of producing a human-language answer. The question may be closed-ended (such as "What is the capital of Canada?") or open-ended (such as "What is the meaning of life?").

Speech recognition - Given a sound clip of a person or people speaking, the task of producing a textual transcription of the speech. (The opposite of text-to-speech.)

Spoken dialogue system

Stemming

Text simplification

Text-to-speech

Text-proofing

 Concrete problems

Some concrete problems existing in the field include part-of-speech tag disambiguation (or tagging), word sense disambiguation, parse tree disambiguation, and anaphora resolution. While there are typically attempts to treat such problems individually, the problems can be shown to be highly intertwined. This section attempts to illustrate the complexities involved in some of these problems.



 Part of speech tagging and Word sense disambiguation

An early AI goal was to give a computer the ability to parse natural language sentences into the type of sentence diagrams that grade-school children learn. One of the first such systems, developed in 1963 by Susumu Kuno of Harvard, was interesting in its revelation of the depth of ambiguity in the English language. Kuno asked his computerized parser what the sentence "Time flies like an arrow" means. In what has become a famous response,[5] the computer replied that it was not quite sure. It might mean:





The common simile: time moves quickly just like an arrow does;

measure the speed of flies like you would measure that of an arrow ('time' being an imperative verb and 'flies' being the insects) - i.e. (You should) time flies as you would (time) an arrow;

measure the speed of flies like an arrow would - i.e. Time flies in the same way that an arrow would (time them);

measure the speed of flies that are like arrows - i.e. Time those flies that are like arrows;

A type of flying insect, "time-flies," enjoys a single arrow (compare Fruit flies like a banana);

And, in the comparison sentence, all fruit flies in the same manner - as bananas do.

 Parse tree disambiguation

English and several other languages don't specify which word an adjective applies to. Consider, for example, the string "pretty little girls' school":





Does the school look little?

Do the girls look little?

Do the girls look pretty?

Does the school look pretty?

Does the school look pretty little? ("pretty" here meaning "quite" as in the phrase "pretty ugly")

Do the girls look pretty little? (same comparison applies)

This is essentially a problem of how to structure the sentence into a parse tree, and many factors may influence which is the correct tree.
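
To make the parse-tree view concrete, the sketch below uses NLTK's chart parser with a tiny, invented grammar (an assumption for illustration, not a grammar taken from the text) in which an adjective can attach either to the next word or to a larger noun phrase; each tree printed corresponds to a different reading of the phrase.

import nltk

# Deliberately ambiguous toy grammar; the possessive apostrophe of "girls'"
# is dropped for simplicity.
grammar = nltk.CFG.fromstring("""
NP  -> Adj NP | NP NP | N
Adj -> 'pretty' | 'little'
N   -> 'girls' | 'school'
""")

parser = nltk.ChartParser(grammar)
phrase = ['pretty', 'little', 'girls', 'school']

# Each distinct tree is one way of attaching the adjectives.
for tree in parser.parse(phrase):
    print(tree)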



 Anaphora resolution

The sentences "We gave the monkeys the bananas because they were hungry" and "We gave the monkeys the bananas because they were over-ripe" have the same surface grammatical structure. However, the pronoun they refers to monkeys in one sentence and bananas in the other, and it is impossible to tell which without semantic knowledge (i.e., knowledge of the real-world properties of monkeys and bananas).



 Intonation

NLP is often done as a form of text processing. Even speech input is typically transformed into a text string by a speech recognizer. However, there is much information included in the prosodic, or intonational, properties of an utterance.



An example of this is that a speaker will often imply additional information in spoken language by the placement of emphasis on individual words. The sentence "I never said she stole my money" demonstrates the importance emphasis can play in a sentence, and thus the inherent difficulty a natural language processor can have in parsing it. Depending on which word the speaker places the stress, this sentence could have several distinct meanings:



"I never said she stole my money" - Someone else said it, but I didn't.

"I never said she stole my money" - I simply didn't ever say it.

"I never said she stole my money" - I might have implied it in some way, but I never explicitly said it.

"I never said she stole my money" - I said someone took it; I didn't say it was she.

"I never said she stole my money" - I just said she probably borrowed it.

"I never said she stole my money" - I said she stole someone else's money.

"I never said she stole my money" - I said she stole something of mine, but not my money.

 Evaluation of natural language processing

 Objectives

The goal of NLP evaluation is to measure one or more qualities of an algorithm or a system, in order to determine whether (or to what extent) the system meets the goals of its designers or the needs of its users. Research in NLP evaluation has received considerable attention, because the definition of proper evaluation criteria is one way to specify an NLP problem precisely, going beyond the vagueness of tasks defined only as language understanding or language generation. A precise set of evaluation criteria, which includes mainly evaluation data and evaluation metrics, enables several teams to compare their solutions to a given NLP problem.

Different types of evaluation


Depending on the evaluation procedures, a number of distinctions are traditionally made in NLP evaluation.



Intrinsic vs. extrinsic evaluation

Intrinsic evaluation considers an isolated NLP system and characterizes its performance mainly with respect to a gold standard result, pre-defined by the evaluators. Extrinsic evaluation, also called evaluation in use, considers the NLP system in a more complex setting, either as an embedded system or serving a precise function for a human user. The extrinsic performance of the system is then characterized in terms of its utility with respect to the overall task of the complex system or the human user. For example, consider a syntactic parser that is based on the output of some new part of speech (POS) tagger. An intrinsic evaluation would run the POS tagger on some labelled data, and compare the system output of the POS tagger to the gold standard (correct) output. An extrinsic evaluation would run the parser with some other POS tagger, and then with the new POS tagger, and compare the parsing accuracy.
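
A minimal sketch of the intrinsic case, under the assumption that the gold standard and the system output are token-aligned lists of (word, tag) pairs (the data below is invented for illustration):

def tagging_accuracy(system_tags, gold_tags):
    """Intrinsic evaluation: fraction of tokens whose predicted tag matches the gold standard."""
    assert len(system_tags) == len(gold_tags)
    correct = sum(s == g for s, g in zip(system_tags, gold_tags))
    return correct / len(gold_tags)

gold   = [("time", "NN"), ("flies", "VBZ"), ("like", "IN"), ("an", "DT"), ("arrow", "NN")]
system = [("time", "NN"), ("flies", "NNS"), ("like", "IN"), ("an", "DT"), ("arrow", "NN")]
print(tagging_accuracy(system, gold))  # 0.8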



Black-box vs. glass-box evaluation

Black-box evaluation requires one to run an NLP system on a given data set and to measure a number of parameters related to the quality of the process (speed, reliability, resource consumption) and, most importantly, to the quality of the result (e.g. the accuracy of data annotation or the fidelity of a translation). Glass-box evaluation looks at the design of the system, the algorithms that are implemented, the linguistic resources it uses (e.g. vocabulary size), etc. Given the complexity of NLP problems, it is often difficult to predict performance only on the basis of glass-box evaluation, but this type of evaluation is more informative with respect to error analysis or future developments of a system.



Automatic vs. manual evaluation

In many cases, automatic procedures can be defined to evaluate an NLP system by comparing its output with the gold standard (or desired) one. Although the cost of producing the gold standard can be quite high, automatic evaluation can be repeated as often as needed without much additional cost (on the same input data). However, for many NLP problems the definition of a gold standard is a complex task, and can prove impossible when inter-annotator agreement is insufficient. Manual evaluation is performed by human judges, who are instructed to estimate the quality of a system, or most often of a sample of its output, based on a number of criteria. Although, thanks to their linguistic competence, human judges can be considered the reference for a number of language processing tasks, there is also considerable variation across their ratings. This is why automatic evaluation is sometimes referred to as objective evaluation, while the human kind appears to be more subjective.
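
One common way to quantify inter-annotator agreement (not prescribed by the text above, but widely used) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch with invented annotations from two judges:

from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Agreement between two annotators over the same items, corrected for chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability that both annotators independently pick the same label.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

judge_1 = ["pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos"]
judge_2 = ["pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "pos"]
print(round(cohen_kappa(judge_1, judge_2), 3))  # about 0.583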

Standardization in NLP


An ISO sub-committee is working to ease interoperability between lexical resources and NLP programs. The sub-committee is part of ISO/TC37 and is called ISO/TC37/SC4. Some ISO standards are already published, but most are still under development, mainly on lexicon representation (see LMF), annotation and the data category registry.



 Journals

Computational Linguistics

International Conference on Language Resources and Evaluation

Linguistic Issues in Language Technology
 
 
Software tools


Main article: Natural language processing toolkits

General Architecture for Text Engineering (GATE)

Modular Audio Recognition Framework

MontyLingua

Natural Language Toolkit (NLTK): a Python library suite

Thursday, September 2, 2010

WorldCat : "Web Scale" discovery and delivery

"Web scale" discovery and delivery of library resources


OCLC, as a longtime advocate of the use of technology to make library collections more discoverable and manageable, has consistently investigated how people's relationships to information have evolved with the advent of the Web. Not surprisingly, the results have shown a preference for self-service on this global medium. The 2003 OCLC membership report "Environmental Scan: Pattern Recognition" found that most people, when asked to draw an association, still think mainly of "books" rather than the electronic content and services that are increasingly available. The follow-up report "Perceptions of Libraries and Information Resources" in 2005 determined that users do not rely on Web-based library resources very often—nor do they particularly equate libraries with the Web.



Our 2007 report on "Sharing, Privacy and Trust in Our Networked World" found further that people did not perceive a role for libraries in the Web's newly "social" universe, where users promote themselves and share content within massive user communities. (Librarians largely agreed with that assessment.)



Without a strategy, the Web's too big

The issue is scale. Many libraries have set up individual Web presences. Taken together, however, these have not had the desired impact, owing to the sheer size of the Web landscape and the lack of tactics for placing library-service links in the information environments where users congregate. A more unified, programmatic approach is necessary so that libraries can have an effective footprint.



As a worldwide union catalog, WorldCat has helped its contributing libraries give patrons access to a much larger cooperative collection, achieving a scale that no single institution could reach by itself. Now, WorldCat is building an even more expansive Web scale that takes this behind-the-scenes content network and moves it outside the library environment into the all-digital lives of today's information seekers and creators.



How large is this public? Consider that every day:



•More than 2 billion Web searches are performed

•eBay and Amazon.com are both visited by approximately 2 million shoppers

•Facebook grows by 250,000 user accounts

The Web has many tools for putting knowledge in front of these users, and many more that let them organize or add to a knowledge base. By using the tools strategically, WorldCat pervasively distributes data about—and opens new pathways into—the catalogs, services and reliable electronic content of its member institutions. Libraries are integrated into the wider Web experience, and a segment of this tremendous global traffic is captured and connected to them.



WorldCat.org: A platform and program for Web exposure

WorldCat.org is the focal point of OCLC's Web-scale strategy. Both a Web portal to the WorldCat catalog and a supporting program of data syndication that draws users from other popular Web destinations, it presents a common, relevant and compelling Web presence for libraries that promotes local content and value.



Access to library materials on a highly useful, usable and universal platform

The variety of services available on WorldCat.org and easy access to holdings for thousands of libraries encourages users to return to the site even as they move from one physical location to another.



Higher visibility on the most popular Web sites

Partnerships with key search engines such as Google, Google Books, Yahoo! Search and Windows Live Search—which index WorldCat data for popular and unique works—mean Web users see authoritative library content amongst search results for regular Web content.



More traffic to your online services

Collectively, the utility of WorldCat.org is demonstrated by one key metric: click-throughs to participating libraries. More than 2 million users each month connect an average of 700,000 times to materials in local libraries.



Seamless delivery of materials

Users don't want to search—they want to get to the information. On WorldCat.org, they can quickly localize their search for specific content and reach a local catalog record plus other fulfillment options. IP-authenticated users can link right to electronic full text, OpenURL resolvers and other local and group services.



A potent toolset for discovery

Functionality embedded in the WorldCat.org interface helps people better find and evaluate materials, browse collections and perform research. They can:



•Use a powerful advanced search, or search-result faceted refinement, to target specific items or a narrow range of materials

•More quickly localize to a library with suggested locations based on IP-geomapping

•Obtain or export bibliographic citations for individual items and lists

•For any author or creative principal, explore that person's associations with specific subject matter and other works and people via the WorldCat Identities profiling utility

A user-centric environment with social networking tools

Wherever they go on the Web, people have come to expect "Amazon-like" features that let them create their own information experiences and rely upon the opinions and expertise of online peers. WorldCat.org joins their lineup of Web workspaces by letting them contribute relevant content such as ratings, reviews and lists of library-owned items. And users easily cross-link WorldCat.org content with accounts at social bookmarking Web sites such as Del.icio.us and Digg.



People can put WorldCat where they want it

Easy-to-install plug-ins for browser toolbars and Facebook pages let Web users have access to WorldCat searching even when they're away from WorldCat.org. Also, any blogger, organization or library can post the modular WorldCat search box to a site and share WorldCat with their online audience.



A system for managing and distributing institutional metadata

Web-scale exposure of information that describes libraries—rather than the things they own—is achieved through the WorldCat Registry, a free service that lets any library centrally maintain and share data about its identity across common audiences such as vendors and consortia. (For participating libraries, Registry data also controls deep links to local services on the WorldCat.org platform).

WorldCat

WorldCat is a union catalog which itemizes the collections of 71,000 libraries in 112 countries[1] which participate in the Online Computer Library Center (OCLC) global cooperative. It is built and maintained collectively by the participating libraries.

History


Created in 1971, it contains more than 150 million different records pointing to over 1.4 billion physical and digital assets in more than 470 languages.[1] It is the world's largest bibliographic database. OCLC makes WorldCat itself available free to libraries, but the catalog is the foundation for other fee-based OCLC services (such as resource sharing and collection management). OCLC itself was founded by Fred Kilgour in 1967.[2]



In 2003, OCLC began the "Open WorldCat" pilot program, making abbreviated records from a subset of WorldCat available to partner Web sites and booksellers, to increase the accessibility of its member libraries’ collections. In 2006, it became possible to search WorldCat directly at its website. In 2007, WorldCat Identities began providing pages for 20 million 'identities', predominantly authors and persons who are the subjects of published titles.



 Limitations

Eastern European and Eurasian library holdings are not well represented in the system.