1. Foreign-Language Source Material (Original Text)

Efficient URL Caching for World Wide Web Crawling

Andrei Z. Broder, IBM TJ Watson Research Center, 19 Skyline Dr, Hawthorne, NY 10532
Marc Najork, Microsoft Research, 1065 La Avenida, Mountain View, CA 94043
Janet L. Wiener, Hewlett Packard Labs, 1501 Page Mill Road, Palo Alto, CA 94304

ABSTRACT
Crawling the web is deceptively simple: the basic algorithm is (a) Fetch a page (b) Parse it to extract all linked URLs (c) For all the URLs not seen before, repeat (a)-(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 7% per week) move this plan from a trivial programming exercise to a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test (c) must be done well over ten thousand times per second against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.

A crucial way to speed up the test is to cache, that is, to store in main memory a (dynamic) subset of the “seen” URLs. The main goal of this paper is to carefully investigate several URL caching techniques for web crawling. We consider both practical algorithms: random replacement, static cache, LRU, and CLOCK, and theoretical limits: clairvoyant caching and infinite cache. We performed about 1,800 simulations using these algorithms with various cache sizes, using actual log data extracted from a massive 33-day web crawl that issued over one billion HTTP requests.

Our main conclusion is that caching is very effective: in our setup, a cache of roughly 50,000 entries can achieve a hit rate of almost 80%. Interestingly, this cache size falls at a critical point: a substantially smaller cache is much less effective while a substantially larger cache brings little additional benefit. We conjecture that such critical points are inherent to our problem and venture an explanation for this phenomenon.

1. INTRODUCTION
A recent Pew Foundation study [31] states that “Search engines have become an indispensable utility for Internet users” and estimates that as of mid-2002, slightly over 50% of all Americans have used web search to find information. Hence, the technology that powers web search is of enormous practical interest. In this paper, we concentrate on one aspect of the search technology, namely the process of collecting web pages that eventually constitute the search engine corpus.

Search engines collect pages in many ways, among them direct URL submission, paid inclusion, and URL extraction from non-web sources, but the bulk of the corpus is obtained by recursively exploring the web, a process known as crawling or SPIDERing. The basic algorithm is
(a) Fetch a page
(b) Parse it to extract all linked URLs
(c) For all the URLs not seen before, repeat (a)-(c)
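As a concrete illustration of steps (a)-(c), the sketch below implements the basic algorithm in Python with an in-memory “seen” set. The regex-based link extraction, the max_pages cap, and the omission of politeness rules, robots.txt handling, and parallelism are our own simplifications for illustration and are not prescribed by the paper.

```python
import re
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen

HREF_RE = re.compile(r'href="([^"#]+)"', re.IGNORECASE)  # crude link extraction

def fetch(url):
    # Step (a): download the page (no retries or politeness in this sketch).
    with urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def crawl(seed_urls, max_pages=100):
    # The 'seen' set is the membership test of step (c); at web scale it no
    # longer fits in main memory, which is what motivates URL caching.
    seen = set(seed_urls)
    frontier = deque(seed_urls)
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()
        try:
            html = fetch(url)                      # step (a)
        except OSError:
            continue
        fetched += 1
        for link in HREF_RE.findall(html):         # step (b)
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)                 # step (c)
                frontier.append(absolute)          # repeat (a)-(c)
    return seen
```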
Crawling typically starts from a set of seed URLs, made up of URLs obtained by other means as described above and/or made up of URLs collected during previous crawls. Sometimes crawls are started from a single well connected page, or a directory such as yahoo.com, but in this case a relatively large portion of the web (estimated at over 20%) is never reached. See [9] for a discussion of the graph structure of the web that leads to this phenomenon.

If we view web pages as nodes in a graph, and hyperlinks as directed edges among these nodes, then crawling becomes a process known in mathematical circles as graph traversal. Various strategies for graph traversal differ in their choice of which node among the nodes not yet explored to explore next. Two standard strategies for graph traversal are Depth First Search (DFS) and Breadth First Search (BFS); they are easy to implement and taught in many introductory algorithms classes. (See for instance [34].)
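The difference between the two strategies lies only in how the frontier of unexplored nodes is managed: BFS treats it as a FIFO queue, DFS as a LIFO stack. A minimal sketch over a toy graph (the graph and neighbor function are placeholders, not crawl data):

```python
from collections import deque

def traverse(start, neighbors, depth_first=False):
    # BFS pops from the front of the frontier (queue); DFS pops from the back
    # (stack). Everything else, including the seen-set test, is identical.
    seen = {start}
    frontier = deque([start])
    order = []
    while frontier:
        node = frontier.pop() if depth_first else frontier.popleft()
        order.append(node)
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(traverse("A", lambda n: graph.get(n, [])))                    # BFS: A B C D
print(traverse("A", lambda n: graph.get(n, []), depth_first=True))  # DFS: A C B D
```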
However, crawling the web is not a trivial programming exercise but a serious algorithmic and system design challenge because of the following two factors.
1. The web is very large. Currently, Google [20] claims to have indexed over 3 billion pages. Various studies [3, 27, 28] have indicated that, historically, the web has doubled every 9-12 months.
2. Web pages are changing rapidly. If “change” means “any change”, then about 40% of all web pages change weekly [12]. Even if we consider only pages that change by a third or more, about 7% of all web pages change weekly [17].

These two factors imply that to obtain a reasonably fresh and complete snapshot of the web, a search engine must crawl at least 100 million pages per day. Therefore, step (a) must be executed about 1,000 times per second, and the membership test in step (c) must be done well over ten thousand times per second, against a set of URLs that is too large to store in main memory.
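For concreteness, the arithmetic behind these rates is roughly 10^8 pages/day / 86,400 s/day ≈ 1,160 fetches per second, and, assuming each fetched page yields on the order of ten or more extracted links (our assumption, not a figure stated in this section), roughly 1,160 × 10 ≈ 10,000 or more membership tests per second.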
In addition, crawlers typically use a distributed architecture to crawl more pages in parallel, which further complicates the membership test: it is possible that the membership question can only be answered by a peer node, not locally.
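One common way to organize such a distributed membership test, sketched below purely as an illustration under our own assumptions (the paper does not prescribe this particular scheme), is to partition the URL space among the crawler nodes by hashing, so that exactly one node owns the “seen” state for any given URL and batches of extracted URLs are routed to their owners:

```python
import hashlib

def owner_node(url, num_nodes):
    # Map a URL to the single crawler node responsible for its "seen" state.
    # Hashing by host instead of by full URL is another common choice.
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

def route(urls, num_nodes):
    # Batch URLs per destination node so membership questions can be sent
    # (and answered) in bulk rather than one network round trip per URL.
    batches = {i: [] for i in range(num_nodes)}
    for url in urls:
        batches[owner_node(url, num_nodes)].append(url)
    return batches
```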
A crucial way to speed up the membership test is to cache a (dynamic) subset of the “seen” URLs in main memory. The main goal of this paper is to investigate in depth several URL caching techniques for web crawling. We examined four practical techniques: random replacement, static cache, LRU, and CLOCK, and compared them against two theoretical limits: clairvoyant caching and infinite cache when run against a trace of a web crawl that issued over one billion HTTP requests. We found that simple caching techniques are extremely effective even at relatively small cache sizes such as 50,000 entries and show how these caches can be
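As an illustration of the kind of cache these policies describe, the sketch below implements the classical CLOCK replacement policy over URL keys. It is our own minimal rendering of the textbook algorithm, not the authors' implementation; LRU differs only in that it evicts exactly the least recently used entry rather than an approximation of it.

```python
class ClockCache:
    # CLOCK approximates LRU cheaply: each slot carries a reference bit; on a
    # miss the "hand" sweeps forward, clearing bits, and evicts the first slot
    # whose bit is already 0.
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []   # each slot is [key, reference_bit]
        self.index = {}   # key -> slot position
        self.hand = 0

    def contains(self, key):
        # Returns True on a hit; on a miss, inserts the key (evicting if full).
        pos = self.index.get(key)
        if pos is not None:
            self.slots[pos][1] = 1            # hit: mark as recently used
            return True
        if len(self.slots) < self.capacity:   # cache not yet full
            self.index[key] = len(self.slots)
            self.slots.append([key, 1])
            return False
        while self.slots[self.hand][1]:       # sweep: give used slots a second chance
            self.slots[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        victim, _ = self.slots[self.hand]     # evict the first unreferenced slot
        del self.index[victim]
        self.slots[self.hand] = [key, 1]
        self.index[key] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return False
```

Measuring the hit rate of such a cache over a stream of extracted URLs, e.g. hits = sum(cache.contains(u) for u in url_stream) with cache = ClockCache(50000), is essentially the kind of simulation described above (url_stream here stands for a hypothetical trace of the crawl log).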