Computer Science Foreign-Language Translation --- Efficient URL Caching for Web Crawling



Original Foreign-Language Text

Efficient URL Caching for World Wide Web Crawling

Andrei Z. Broder, IBM TJ Watson Research Center, 19 Skyline Dr, Hawthorne, NY 10532
Marc Najork, Microsoft Research, 1065 La Avenida, Mountain View, CA 94043
Janet L. Wiener, Hewlett Packard Labs, 1501 Page Mill Road, Palo Alto, CA 94304

ABSTRACT
Crawling the web is deceptively simple: the basic algorithm is (a) fetch a page, (b) parse it to extract all linked URLs, and (c) for all the URLs not seen before, repeat (a)-(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 7% per week) move this plan from a trivial programming exercise to a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test (c) must be done well over ten thousand times per second against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.

A crucial way to speed up the test is to cache, that is, to store in main memory a (dynamic) subset of the “seen” URLs. The main goal of this paper is to carefully investigate several URL caching techniques for web crawling. We consider both practical algorithms: random replacement, static cache, LRU, and CLOCK, and theoretical limits: clairvoyant caching and infinite cache. We performed about 1,800 simulations using these algorithms with various cache sizes, using actual log data extracted from a massive 33-day web crawl that issued over one billion HTTP requests. Our main conclusion is that caching is very effective in our setup: a cache of roughly 50,000 entries can achieve a hit rate of almost 80%. Interestingly, this cache size falls at a critical point: a substantially smaller cache is much less effective, while a substantially larger cache brings little additional benefit. We conjecture that such critical points are inherent to our problem and venture an explanation for this phenomenon.
1. INTRODUCTION

A recent Pew Foundation study [31] states that “Search engines have become an indispensable utility for Internet users” and estimates that, as of mid-2002, slightly over 50% of all Americans have used web search to find information. Hence, the technology that powers web search is of enormous practical interest. In this paper, we concentrate on one aspect of the search technology, namely the process of collecting web pages that eventually constitute the search engine corpus.

Search engines collect pages in many ways, among them direct URL submission, paid inclusion, and URL extraction from non-web sources, but the bulk of the corpus is obtained by recursively exploring the web, a process known as crawling or SPIDERing. The basic algorithm is (a) fetch a page, (b) parse it to extract all linked URLs, and (c) for all the URLs not seen before, repeat (a)-(c). Crawling typically starts from a set of seed URLs, made up of URLs obtained by other means as described above and/or made up of URLs collected during previous crawls. Sometimes crawls are started from a single well-connected page, or a directory such as ..., but in this case a relatively large portion of the web (estimated at over 20%) is never reached. See [9] for a discussion of the graph structure of the web that leads to this phenomenon.

If we view web pages as nodes in a graph, and hyperlinks as directed edges among these nodes, then crawling becomes a process known in mathematical circles as graph traversal. Various strategies for graph traversal differ in their choice of which node among the nodes not yet explored to explore next. Two standard strategies for graph traversal are Depth First Search (DFS) and Breadth First Search (BFS); they are easy to implement and taught in many introductory algorithms classes. (See for instance [34].)
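To make the basic algorithm concrete, the sketch below implements a minimal BFS-order crawler: a FIFO frontier of URLs to visit and a “seen” set performing the membership test of step (c). The fetching and link-extraction details (urllib plus a crude regular expression, the example.com seed) are illustrative assumptions, not anything prescribed by the paper.

    from collections import deque
    from urllib.parse import urljoin
    from urllib.request import urlopen
    import re

    # Crude href extractor; a production crawler would use a real HTML parser.
    HREF_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)

    def crawl_bfs(seed_urls, max_pages=100):
        """Minimal BFS crawl: (a) fetch, (b) extract links, (c) enqueue unseen URLs."""
        frontier = deque(seed_urls)   # FIFO queue gives breadth-first order
        seen = set(seed_urls)         # the membership-test set of step (c)
        fetched = 0
        while frontier and fetched < max_pages:
            url = frontier.popleft()
            try:
                page = urlopen(url, timeout=5).read().decode("utf-8", "replace")  # (a)
            except Exception:
                continue
            fetched += 1
            for href in HREF_RE.findall(page):                    # (b)
                link = urljoin(url, href)
                if link.startswith("http") and link not in seen:  # (c)
                    seen.add(link)
                    frontier.append(link)
        return seen

    # Example (hypothetical seed): crawl_bfs(["https://example.com/"], max_pages=10)

At web scale the seen set is exactly the structure that no longer fits in main memory, which is what motivates the caching techniques studied in this paper.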

However, crawling the web is not a trivial programming exercise but a serious algorithmic and system design challenge because of the following two factors.

1. The web is very large. Currently, Google [20] claims to have indexed over 3 billion pages. Various studies [3, 27, 28] have indicated that, historically, the web has doubled every 9-12 months.

2. Web pages are changing rapidly. If “change” means “any change”, then about 40% of all web pages change weekly [12]. Even if we consider only pages that change by a third or more, about 7% of all web pages change weekly [17].

These two factors imply that to obtain a reasonably fresh and complete snapshot of the web, a search engine must crawl at least 100 million pages per day. Therefore, step (a) must be executed about 1,000 times per second, and the membership test in step (c) must be done well over ten thousand times per second, against a set of URLs that is too large to store in main memory.
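The rates quoted above follow from simple arithmetic; the check below assumes an average of roughly ten outgoing links per page (an illustrative assumption, not a figure given in this excerpt) to show why the membership test must run an order of magnitude more often than the fetches.

    PAGES_PER_DAY = 100_000_000           # crawl target stated in the text
    SECONDS_PER_DAY = 24 * 60 * 60

    fetches_per_second = PAGES_PER_DAY / SECONDS_PER_DAY
    print(f"step (a): about {fetches_per_second:,.0f} fetches per second")        # ~1,157

    LINKS_PER_PAGE = 10                   # assumed average, for illustration only
    tests_per_second = fetches_per_second * LINKS_PER_PAGE
    print(f"step (c): about {tests_per_second:,.0f} membership tests per second")  # ~11,574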

In addition, crawlers typically use a distributed architecture to crawl more pages in parallel, which further complicates the membership test: it is possible that the membership question can only be answered by a peer node, not locally.

A crucial way to speed up the membership test is to cache a (dynamic) subset of the “seen” URLs in main memory. The main goal of this paper is to investigate in depth several URL caching techniques for web crawling. We examined four practical techniques: random replacement, static cache, LRU, and CLOCK, and compared them against two theoretical limits: clairvoyant caching and infinite cache, when run against a trace of a web crawl that issued over one billion HTTP requests. We found that simple caching techniques are extremely effective even at relatively small cache sizes such as 50,000 entries, and show how these caches can be …
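Of the four practical policies named above, LRU is the simplest to sketch. The snippet below is a toy illustration built on Python's OrderedDict, not the paper's implementation: contains_or_add reports a cache hit, and on a miss records the URL, evicting the least recently used entry once the capacity is reached.

    from collections import OrderedDict

    class LRUUrlCache:
        """Fixed-size LRU cache of 'seen' URLs (illustrative sketch only)."""

        def __init__(self, capacity=50_000):
            self.capacity = capacity
            self._entries = OrderedDict()   # ordered from least to most recently used

        def contains_or_add(self, url):
            """Return True on a hit; on a miss, insert the URL and evict if full."""
            if url in self._entries:
                self._entries.move_to_end(url)     # refresh: now most recently used
                return True
            self._entries[url] = None
            if len(self._entries) > self.capacity:
                self._entries.popitem(last=False)  # drop the least recently used URL
            return False

    # On a miss, a crawler would consult the full "seen" store (disk or a peer node).
    cache = LRUUrlCache(capacity=3)
    for u in ["a", "b", "a", "c", "d", "b"]:
        print(u, cache.contains_or_add(u))   # the second "b" misses: it was evicted

CLOCK, also examined in the paper, approximates LRU with a circular buffer and per-entry mark bits, trading a small loss in accuracy for much cheaper bookkeeping on each hit.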

