A simple spider (scraper) program based on Scrapy

This article presents a worked example of a simple spider (scraper) program based on Scrapy, shared here for your reference. The details are as follows:

# Standard Python library imports

# 3rd party imports
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

# My imports
from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'

class PoetryParser(object):
    """
    Provides common parsing method for poems formatted this one specific way.
    """
    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        hxs = HtmlXPathSelector(response)
        item = PoetryAnalysisItem()
        # All poetry text is in <pre> tags
        text = hxs.select('//pre/text()').extract()
        item['text'] = ''.join(text)
        item['url'] = response.url
        # head/title contains "<title> - a poem by <author>"
        title_text = hxs.select('//head/title/text()').extract()[0]
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        item['date'] = hxs.select("//p[@class='small']/text()").re(self.date_pattern)
        return item

class PoetrySpider(CrawlSpider, PoetryParser):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    root_path = 'someuser/poetry/'
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    rules = [Rule(SgmlLinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                  callback='parse_poem'),
             Rule(SgmlLinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]
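The spider imports PoetryAnalysisItem from poetry_analysis.items, a file the original article does not show. A minimal sketch of what that items module could look like, assuming only the five fields populated in parse_poem (text, url, title, author, date), is:

# poetry_analysis/items.py -- hypothetical sketch, not shown in the original article
from scrapy.item import Item, Field

class PoetryAnalysisItem(Item):
    # Fields filled in by PoetryParser.parse_poem
    text = Field()
    url = Field()
    title = Field()
    author = Field()
    date = Field()

With a definition like this in place, the spider can be run from the project directory with "scrapy crawl example.com_poetry", optionally adding "-o poems.json" to export the scraped items.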

We hope this article is helpful for your Python programming.

Related articles for "A simple spider (scraper) program based on Scrapy":

A character-based display interface for 360 implemented in Python

A simple usage example of the mechanize library in Python

A simple image browser example implemented with wxPython

A simple example of inter-process communication in Python

Example code for a Minesweeper game implemented in Python

Python basics tutorial: implementing a rock-paper-scissors game

A Python script for logging in to and operating Kaixin.com

A simple perpetual calendar example implemented in Python

An example of a web crawler using RabbitMQ in Python

A Python script that generates a description of itself (a very interesting program)
