How to avoid duplicate scraping with a custom Scrapy middleware module in Python

This article demonstrates, with a working example, how to avoid scraping the same item pages twice by writing a custom Scrapy spider middleware. It is shared here for your reference; the details are as follows:

from scrapy import log
from scrapy.http import Request
from scrapy.item import BaseItem
from scrapy.utils.request import request_fingerprint
from myproject.items import MyItem

class IgnoreVisitedItems(object):
    """Middleware to ignore re-visiting item pages if they were already
    visited before.

    The requests to be filtered have a meta['filter_visited'] flag enabled,
    and may optionally define an id to use for identifying them, which
    defaults to the request fingerprint, although you'd want to use the
    item id, if you already have it beforehand, to make it more robust.
    """
    FILTER_VISITED = 'filter_visited'
    VISITED_ID = 'visited_id'
    CONTEXT_KEY = 'visited_ids'

    def process_spider_output(self, response, result, spider):
        context = getattr(spider, 'context', {})
        visited_ids = context.setdefault(self.CONTEXT_KEY, {})
        ret = []
        for x in result:
            visited = False
            if isinstance(x, Request):
                if self.FILTER_VISITED in x.meta:
                    visit_id = self._visited_id(x)
                    if visit_id in visited_ids:
                        log.msg("Ignoring already visited: %s" % x.url,
                                level=log.INFO, spider=spider)
                        visited = True
            elif isinstance(x, BaseItem):
                visit_id = self._visited_id(response.request)
                if visit_id:
                    visited_ids[visit_id] = True
                    x['visit_id'] = visit_id
                    x['visit_status'] = 'new'
            if visited:
                ret.append(MyItem(visit_id=visit_id, visit_status='old'))
            else:
                ret.append(x)
        return ret

    def _visited_id(self, request):
        return request.meta.get(self.VISITED_ID) or request_fingerprint(request)
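The core of the middleware is the bookkeeping in `process_spider_output` and `_visited_id`: each request gets a stable id (an explicit `meta['visited_id']` if provided, otherwise the request fingerprint), the first sighting is recorded as `'new'`, and any later request with the same id is treated as `'old'` and skipped. A minimal standalone sketch of that logic, runnable without Scrapy (the `FakeRequest` class and `fake_fingerprint` hash are stand-ins invented for illustration, not Scrapy APIs):

```python
import hashlib

class FakeRequest:
    # Stand-in for scrapy.http.Request: just a url plus a meta dict.
    def __init__(self, url, meta=None):
        self.url = url
        self.meta = meta or {}

def fake_fingerprint(request):
    # Simplified stand-in for scrapy.utils.request.request_fingerprint:
    # a stable hash of the URL identifies the request.
    return hashlib.sha1(request.url.encode("utf-8")).hexdigest()

def visited_id(request):
    # Mirrors IgnoreVisitedItems._visited_id: prefer an explicit
    # meta['visited_id'], fall back to the fingerprint.
    return request.meta.get("visited_id") or fake_fingerprint(request)

def visit_status(request, visited_ids):
    # Mirrors the middleware's bookkeeping: the first sighting of an id
    # is recorded and reported as 'new'; repeats are 'old'.
    vid = visited_id(request)
    if vid in visited_ids:
        return "old"
    visited_ids[vid] = True
    return "new"

visited = {}
r1 = FakeRequest("http://example.com/item/1", meta={"filter_visited": True})
r2 = FakeRequest("http://example.com/item/1", meta={"filter_visited": True})
print(visit_status(r1, visited))  # new
print(visit_status(r2, visited))  # old
```

In a real project you would enable the middleware through `SPIDER_MIDDLEWARES` in `settings.py` (the module path depends on where you place the class) and set `request.meta['filter_visited'] = True` on the requests you want deduplicated, since the middleware only filters requests carrying that flag.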

I hope this article is helpful to readers in their Python programming.
