Crawling the Web
Creating Data Indices
Today’s lecture
• Crawling
• Connectivity servers
Basic crawler operation
• Begin with known “seed” URLs
• Fetch and parse them
–Extract URLs they point to
–Place the extracted URLs on a queue
• Fetch each URL on the queue and repeat
Sec. 20.2
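This loop is small enough to sketch directly. A minimal Python version (standard library only; the timeout, page cap, and decoding choices are assumptions, and politeness, robots.txt, and duplicate elimination are deliberately left for later slides):

```python
# Minimal sketch of the basic crawl loop: seeds in, links out.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href targets of <a> tags while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=100):
    frontier = deque(seeds)      # queue of URLs to be crawled
    seen = set(seeds)            # URLs already placed on the queue
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()             # pick a URL from the frontier
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue                         # fetch failed; move on
        fetched += 1
        parser = LinkExtractor()
        parser.feed(html)                    # parse: extract outgoing links
        for link in parser.links:
            absolute = urljoin(url, link)    # expand relative URLs
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)    # place extracted URL on the queue
```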
Crawling picture
[Figure: seed pages feed URLs into the URL frontier; URLs crawled and parsed grow outward into the unseen Web]
Sec. 20.2
Simple picture – complications
• Web crawling isn’t feasible with one machine
–All of the above steps must be distributed
• Malicious pages
–Spam pages
–Spider traps, including dynamically generated ones
• Even non-malicious pages pose challenges
–Latency/bandwidth to remote servers vary
–Webmasters’ stipulations: how “deep” should you crawl a site’s URL hierarchy?
–Site mirrors and duplicate pages
• Politeness – don’t hit a server too often
Sec. 20.1.1
What any crawler must do
• Be Polite: Respect implicit and explicit politeness considerations
–Only crawl allowed pages
–Respect robots.txt (more on this shortly)
• Be Robust: Be immune to spider traps and other malicious behavior from web servers
Sec. 20.1.1
What any crawler should do
• Be capable of distributed operation: designed to run on multiple distributed machines
• Be scalable: designed to increase the crawl rate by adding more machines
• Performance/efficiency: permit full use of available processing and network resources
Sec. 20.1.1
What any crawler should do (cont.)
• Fetch pages of “higher quality” first
• Continuous operation: Continue fetching fresh copies of a previously fetched page
• Extensible: Adapt to new data formats, protocols
Sec. 20.1.1
Updated crawling picture
[Figure: seed pages, URLs crawled and parsed, the URL frontier, and the unseen Web, with multiple crawling threads consuming the frontier]
Sec. 20.1.1
URL frontier
• Contains URLs to be crawled
• Can include multiple pages from the same host
• Must avoid trying to fetch them all at the same time
• Must try to keep all crawling threads busy
Sec. 20.2
Explicit and implicit politeness
• Explicit politeness: specifications from webmasters on what portions of a site can be crawled
– robots.txt
• Implicit politeness: even with no specification, avoid hitting any site too often
Sec. 20.2
Robots.txt
• Protocol for giving spiders (“robots”) limited access to a website, originally from 1994
– www.robotstxt.org/wc/norobots.html
• The website announces its requests on what can(not) be crawled
– It places a file robots.txt at the site root
– This file specifies access restrictions
Sec. 20.2.1
Robots.txt example
• No robot should visit any URL starting with “/yoursite/temp/”, except the robot called “searchengine”:
User-agent: *
Disallow: /yoursite/temp/
User-agent: searchengine
Disallow:
Sec. 20.2.1
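Python’s standard-library parser evaluates exactly these rules; a quick check (the agent name somebot is made up):

```python
# Checking the example rules above with urllib.robotparser.
# (A crawler would normally fetch http://host/robots.txt itself.)
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("somebot", "/yoursite/temp/page.html"))       # False
print(rp.can_fetch("searchengine", "/yoursite/temp/page.html"))  # True
```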
Processing steps in crawling
• Pick a URL from the frontier (but which one?)
• Fetch the document at the URL
• Parse the fetched document
– Extract links from it to other docs (URLs)
• Check if the URL has content already seen
– If not, add to indexes
• For each extracted URL
– Ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.)
– Check if it is already in the frontier (duplicate URL elimination)
Sec. 20.2.1
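A sketch of these steps in Python (the sha1 fingerprint, the .edu filter, and the index()/extract_links() helpers are illustrative assumptions; shingle-based near-duplicate detection is discussed later):

```python
# Sketch of per-document processing: content-seen test, URL filter,
# and duplicate URL elimination.
import hashlib
import re
from urllib.parse import urljoin

doc_fingerprints = set()     # fingerprints of content already seen
frontier_seen = set()        # URLs already passed to the frontier
URL_FILTER = re.compile(r"^https?://[^/]+\.edu/")   # e.g., only crawl .edu

def index(url, html):
    pass                     # stand-in for the real indexer

def extract_links(base, html):
    # crude href extraction; a proper parser was sketched earlier
    return [urljoin(base, m) for m in re.findall(r'href="([^"]+)"', html)]

def process(url, html, frontier):
    fp = hashlib.sha1(html.encode("utf-8")).hexdigest()
    if fp in doc_fingerprints:          # content already seen?
        return
    doc_fingerprints.add(fp)
    index(url, html)                    # if not, add to indexes
    for link in extract_links(url, html):
        if not URL_FILTER.match(link):  # URL filter test
            continue
        if link in frontier_seen:       # duplicate URL elimination
            continue
        frontier_seen.add(link)
        frontier.append(link)
```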
Basic crawl architecture
[Figure: basic crawl architecture. The URL frontier feeds a fetch module (with DNS resolution) that retrieves pages from the WWW; pages are parsed; a “content seen?” test checks doc fingerprints (FPs); surviving links pass the URL filter (with robots filters) and duplicate URL elimination (against the URL set) before re-entering the frontier]
Sec. 20.2.1
DNS (Domain Name System)
• A lookup service on the internet
– Given a host name, retrieve its IP address
– Service provided by a distributed set of servers – thus, lookup latencies can be high (even seconds)
• Common OS implementations of DNS lookup are blocking: only one outstanding request at a time
• Solutions
– DNS caching
– Batch DNS resolver – collects requests and sends them out together
Sec. 20.2.2
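A sketch of the caching idea (batching is not shown; the cache here never expires, which is an assumed simplification):

```python
# DNS caching wrapped around the blocking OS resolver.
import socket

_dns_cache = {}   # host -> IP address

def resolve(host):
    if host not in _dns_cache:
        _dns_cache[host] = socket.gethostbyname(host)   # blocking lookup
    return _dns_cache[host]
```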
Parsing: URL normalization
• When a fetched document is parsed, some of the extracted links are relative URLs
• E.g., at http://en.wikipedia.org/wiki/Main_Page we have a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
• During parsing, we must normalize (expand) such relative URLs
Sec. 20.2.1
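The standard library’s urljoin performs exactly this expansion; the Wikipedia example from this slide:

```python
# Expanding a relative link against the page it was found on.
from urllib.parse import urljoin

base = "http://en.wikipedia.org/wiki/Main_Page"
print(urljoin(base, "/wiki/Wikipedia:General_disclaimer"))
# http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
```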
Content seen?
• Duplication is widespread on the web (~30% of pages)
• If the page just fetched is already in the index, do not process it further
• This is verified using document fingerprints or shingles
Sec. 20.2.1
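A toy sketch of the shingle approach (the 4-word shingle size is an assumed choice; fingerprinting instead hashes whole documents and catches only exact duplicates):

```python
# Near-duplicate detection with word 4-shingles and Jaccard overlap.
def shingles(text, k=4):
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a or b else 0.0

s1 = shingles("a rose is a rose is a rose")
s2 = shingles("a rose is a rose is the rose")
print(jaccard(s1, s2))   # 0.6: high overlap, likely near-duplicates
```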
Filters and robots.txt
• Filters – regular expressions for URLs to be crawled or not
• Once a robots.txt file is fetched from a site, need not fetch it repeatedly
–Doing so burns bandwidth, hits web server
• Cache robots.txt files
Sec. 20.2.1
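A sketch of such a cache (the one-day lifetime is an assumed policy, not part of the protocol):

```python
# Per-host robots.txt cache: each site's file is fetched at most
# once per cache lifetime.
import time
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

_robots_cache = {}            # host -> (parser, time fetched)
ROBOTS_TTL = 24 * 3600        # seconds before a cached copy goes stale

def robots_for(url):
    host = urlparse(url).netloc
    hit = _robots_cache.get(host)
    if hit and time.time() - hit[1] < ROBOTS_TTL:
        return hit[0]                       # cached copy: no extra fetch
    rp = RobotFileParser(f"http://{host}/robots.txt")
    rp.read()                               # one fetch per host per TTL
    _robots_cache[host] = (rp, time.time())
    return rp

# usage: robots_for(url).can_fetch("mycrawler", url)
```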
Duplicate URL elimination
• For a non-continuous (one-shot) crawl, test to see if an extracted+filtered URL has already been passed to the frontier
• For a continuous crawl – see details of frontier implementation
Sec. 20.2.1
Distributing the crawler
• Run multiple crawl threads, under different processes – potentially at different nodes
– Geographically distributed nodes
• Partition the hosts being crawled among the nodes
– A hash of the host name determines the partition
• How do these nodes communicate?
Sec. 20.2.1
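A sketch of the hash partition (NUM_NODES is an assumed cluster size; md5 is used because Python’s built-in hash() is randomized per process, and nodes must agree):

```python
# Hash-based host partitioning across crawler nodes.
import hashlib
from urllib.parse import urlparse

NUM_NODES = 4

def node_for(url):
    host = urlparse(url).netloc
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODES

# Same host -> same node, so politeness state for a host lives in one place:
assert node_for("http://example.edu/a") == node_for("http://example.edu/b")
```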
Communication between nodes
• The output of the URL filter at each node is sent to the Duplicate URL Eliminator at all nodes
[Figure: the per-node architecture of the earlier slide, with a host splitter inserted after the URL filter; URLs whose hosts belong to other nodes are sent “to other hosts”, URLs received “from other hosts” join the local duplicate URL elimination (URL set) and URL frontier]
Sec. 20.2.1
URL frontier: two main considerations
• Politeness: do not hit a web server too frequently
• Freshness: crawl some pages more often than others
– E.g., pages (such as news sites) whose content changes often
These goals may conflict with each other.
(E.g., simple priority queue fails – many links out of a page go to its own site, creating a burst of accesses to that site.)
Sec. 20.2.3
Politeness – challenges
• Even if we restrict each host to a single fetching thread, that thread can still hit the host repeatedly
• Common heuristic: insert a time gap between successive requests to a host that is >> the time taken by the most recent fetch from that host
Sec. 20.2.3
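A sketch of this heuristic (the factor 10 is an assumed choice for “much greater than”):

```python
# Gap heuristic: after fetching from a host, wait c times the
# duration of that fetch before contacting the host again.
import time

next_allowed = {}   # host -> earliest time it may be contacted again

def record_fetch(host, fetch_duration, c=10):
    next_allowed[host] = time.time() + c * fetch_duration

def may_contact(host):
    return time.time() >= next_allowed.get(host, 0.0)
```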
URL frontier: Mercator scheme
[Figure: incoming URLs enter a prioritizer feeding K front queues; a biased front queue selector and back queue router move URLs into B back queues, a single host on each; a back queue selector serves the crawl thread requesting a URL]
Sec. 20.2.3
Mercator URL frontier
• URLs flow in from the top into the frontier
• Front queues manage prioritization
• Back queues enforce politeness
• Each queue is FIFO
Sec. 20.2.3
Front queues
[Figure: the prioritizer feeds front queues 1 … K; the biased front queue selector / back queue router drains them]
Sec. 20.2.3
Front queues
• The prioritizer assigns each URL an integer priority between 1 and K
– It appends the URL to the corresponding queue
• Heuristics for assigning priority
– Refresh rate sampled from previous crawls
– Application-specific (e.g., “crawl news sites more often”)
Sec. 20.2.3
Biased front queue selector
• When a back queue requests a URL (in a sequence described below), the selector picks a front queue from which to pull a URL
• This choice can be round robin biased toward queues of higher priority, or some more sophisticated variant
– Can be randomized
Sec. 20.2.3
Back queues
[Figure: the biased front queue selector / back queue router feeds back queues 1 … B; the back queue selector picks among them using a heap]
Sec. 20.2.3
Back queue invariants
• Each back queue is kept non-empty while the crawl is in progress
• Each back queue only contains URLs from a single host
– Maintain a table from hosts to back queues
[Table: host name → back queue ID (1 … B), e.g., one host mapped to queue 3]
Sec. 20.2.3
Back queue heap
• One entry for each back queue
• The entry is the earliest time t_e at which the host corresponding to the back queue can be hit again
• This earliest time is determined from
– Last access to that host
– Any time buffer heuristic we choose
Sec. 20.2.3
Back queue processing
• A crawler thread seeking a URL to crawl:
– Extracts the root of the heap
– Fetches the URL at the head of the corresponding back queue q (looked up from the table)
– Checks if queue q is now empty – if so, pulls a URL v from the front queues
• If there’s already a back queue for v’s host, appends v to that queue and pulls another URL from the front queues; repeat
• Else adds v to q (updating the host table)
– When q is non-empty, creates a new heap entry for it
Sec. 20.2.3
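A sketch of this selection step with Python’s heapq (front_queues.pull() and the fixed politeness gap are assumptions; Mercator derives the next-hit time t_e from the actual fetch, as the previous slide describes):

```python
# Back-queue selection: pop the soonest-ready host, serve the head of
# its queue, and re-assign the queue to a new host when it drains.
import heapq
import time
from collections import deque
from urllib.parse import urlparse

B = 3                                          # number of back queues
back_queues = {q: deque() for q in range(B)}   # queue id -> URLs, one host each
host_of = {}                                   # queue id -> its current host
queue_of = {}                                  # host -> queue id (the table above)
heap = []                                      # entries: (earliest hit time t_e, queue id)

def next_url(front_queues, gap=2.0):
    t_e, q = heapq.heappop(heap)               # extract the root of the heap
    time.sleep(max(0.0, t_e - time.time()))    # wait until the host may be hit
    url = back_queues[q].popleft()             # head of back queue q
    if not back_queues[q]:                     # q drained: re-assign it
        del queue_of[host_of.pop(q)]
        while True:
            v = front_queues.pull()            # pull a URL v from front queues
            host = urlparse(v).netloc
            if host in queue_of:               # v's host already has a queue:
                back_queues[queue_of[host]].append(v)  # append v there, repeat
            else:
                host_of[q] = host              # q now serves v's host
                queue_of[host] = q
                back_queues[q].append(v)
                break
    heapq.heappush(heap, (time.time() + gap, q))   # q non-empty: new heap entry
    return url
```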
Number of back queues B
• Keep all threads busy while respecting politeness
• Mercator recommendation: three times as many back queues as crawler threads
Sec. 20.2.3
Web Search in 2020?
• Type keywords into a search box?
• Social or “human powered” search?
• The Semantic Web?
• Intelligent search/semantic search/natural language search?
Intelligent Search
Instead of merely retrieving Web pages, read ’em!
Machine Reading = Information Extraction + tractable inference
Alan Smithson will give a talk at the UW database seminar on Friday Dec 5
• IE(sentence) = who did what? – speaking(Alan Smithson, UW)
• Inference = uncover implicit information – Will Alan visit Seattle?
Application: Information Fusion
• What kills bacteria?
• Which west coast, nano-technology companies are hiring?
• What is a quiet, inexpensive, 4-star hotel in Vancouver?
Opinion Mining
• Opine (Popescu & Etzioni, EMNLP ’05)
• IE(product reviews)
– Informative
– Abundant, but varied
– Textual
• Summarize reviews without any prior knowledge of the product category
TextRunner Extraction
• Extract a triple representing a binary relation (Arg1, Relation, Arg2) from a sentence:
Internet powerhouse, EBay, was originally founded by Pierre Omidyar.
→ (Ebay, Founded by, Pierre Omidyar)
Numerous Extraction Challenges
• Drop non-essential info:
“was originally founded by” → founded by
• Retain key distinctions:
Ebay founded by Pierre ≠ Ebay founded Pierre
• Non-verb relationships: “George Bush, president of the U.S…”
• Synonymy & aliasing: Albert Einstein = Einstein ≠ Einstein Bros.
Question Answering (QA) from Open-Domain Text
• An idea originating from the IR community
• With massive collections of full-text documents, simply finding relevant documents is of limited use: we want answers from textbases
• QA: give the user a (short) answer to their question, perhaps supported by evidence.
• The common person’s view? [From a novel]
– “I like the Internet. Really, I do. Any time I need a piece of shareware or I want to find out the weather in Bogota … I’m the first guy to get the modem humming. But as a source of information, it sucks. You got a billion pieces of data, struggling to be heard and seen and downloaded, and anything I want to know seems to get trampled underfoot in the crowd.”
– M. Marshall. The Straw Men. HarperCollins Publishers, 2002.
People Want to Ask Questions…
Examples from AltaVista query log
who invented surf music?
how to make stink bombs
where are the snowdens of yesteryear?
which english translation of the bible is used in official catholic liturgies?
how to do clayart
how to copy psx
how tall is the sears tower?
Examples from Excite query log (12/1999)
how can i find someone in texas
where can i find information on puritan religion?
what are the 7 wonders of the world
how can i eliminate stress
What vacuum cleaner does Consumers Guide recommend
Questions make up around 12–15% of query logs.
The Google answer #1
• Include question words etc. in your stop-list
• Do standard IR
• Sometimes this (sort of) works:
• Question: Who was the prime minister of Australia during the Great Depression?
• Answer: James Scullin (Labor) 1929–31.
[Search results: a page about Curtin (WW II Labor Prime Minister) from which the answer can be deduced; another page about Curtin that lacks the answer; a page about Chifley (Labor Prime Minister) from which the answer can be deduced]
But often it doesn’t…
• Question: How much money did IBM spend on advertising in 2002?
• Answer: I dunno, but I’d like to …
[Search results: lots of ads on Google these days! No relevant info – a marketing firm’s page, a magazine page on an ad exec, a magazine page on MS-IBM]
[2011 search results for “How much money did IBM spend on advertising in 2002?” – still no answer]
The Google answer #2
• Take the question and try to find it as a string on the web
• Return the next sentence on that web page as the answer
• Works brilliantly if this exact question appears as a FAQ question, etc.
• Works lousily most of the time
• Reminiscent of the line about monkeys and typewriters producing Shakespeare
• But a slightly more sophisticated version of this approach has been revived in recent years with considerable success…
A Brief (Academic) History
• In some sense, question answering is not a new research area
• Question answering systems can be found in many areas of NLP research, including:
– Natural language database systems: a lot of early NLP work on these (e.g., LUNAR)
– Spoken dialog systems: currently very active and commercially relevant
• The focus on open-domain QA is fairly new
– MURAX (Kupiec 1993): encyclopedia answers
– Hirschman: reading comprehension tests
– TREC QA competition: 1999–
Question Answering at TREC
• Question answering competition at TREC
• Until 2004, consisted of answering a set of 500 fact-based questions, e.g., “When was Mozart born?”
• For the first three years, systems were allowed to return 5 ranked answer snippets (50/250 bytes) to each question
– IR-style thinking
– Mean Reciprocal Rank (MRR) scoring:
• 1, 0.5, 0.33, 0.25, 0.2 for a correct answer at rank 1, 2, 3, 4, 5; 0 otherwise
– Mainly Named Entity answers (person, place, date, …)
• From 2002, systems were only allowed to return a single exact answer, and the notion of confidence was introduced
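For concreteness, a tiny sketch of MRR over a question set:

```python
# MRR scoring: each question contributes 1/rank of its first correct
# snippet (ranks 1-5), else 0; the mean is taken over all questions.
def mrr(first_correct_ranks):
    # entries: rank of the first correct answer (1-5), or None if absent
    return sum(1.0 / r for r in first_correct_ranks
               if r is not None and r <= 5) / len(first_correct_ranks)

print(mrr([1, 3, None, 2]))   # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.458
```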
[Screenshot: an exact answer returned with supporting context]
The TREC Document Collection
• The retrieval collection uses news articles from the following sources:
• AP newswire, 1998-2000
• New York Times newswire, 1998-2000
• Xinhua News Agency newswire, 1996-2000
• In total there are 1,033,461 documents in the collection (about 3 GB of text)
• This is a lot of text to process entirely with advanced NLP techniques, so systems usually run an initial information retrieval phase followed by more advanced processing
• Many systems supplement this text with the web and other knowledge bases
Sample TREC questions
1. Who is the author of the book "The Iron Lady: A Biography of Margaret Thatcher"?
2. What was the monetary value of the Nobel Peace Prize in 1989?
3. What does the Peugeot company manufacture?
4. How much did Mercury spend on advertising in 1993?
5. What is the name of the managing director of Apricot Computer?
6. Why did David Koresh ask the FBI for a word processor?
7. What debts did Qintex group leave?
8. What is the name of the rare neurological disease with symptoms such as: involuntary movements (tics), swearing, and incoherent vocalizations (grunts, shouts, etc.)?