
Avoiding duplicate scraping with a custom Scrapy middleware in Python

Source: 动视网  Editor: 小采  Date: 2020-11-27 14:39:48

This article presents a worked example of a custom Scrapy spider middleware, written in Python, that avoids scraping the same item pages more than once. It is shared here for reference; the details follow.
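The core idea is to key a dictionary of already-visited ids on a stable fingerprint of each request, and skip any request whose fingerprint has been seen before. A minimal, Scrapy-free sketch of that idea follows; `simple_fingerprint` here is a simplified stand-in for `scrapy.utils.request.request_fingerprint`, not the real implementation:

```python
import hashlib

def simple_fingerprint(method, url, body=b""):
    """Hash the canonical parts of a request into a stable hex id
    (a simplified stand-in for Scrapy's request_fingerprint)."""
    h = hashlib.sha1()
    h.update(method.encode("utf-8"))
    h.update(url.encode("utf-8"))
    h.update(body)
    return h.hexdigest()

visited_ids = {}

def should_skip(method, url):
    """Return True if this request was already seen; otherwise record it."""
    fp = simple_fingerprint(method, url)
    if fp in visited_ids:
        return True
    visited_ids[fp] = True
    return False

print(should_skip("GET", "http://example.com/item/1"))  # False: first visit
print(should_skip("GET", "http://example.com/item/1"))  # True: duplicate
```

The middleware below applies the same pattern, but stores the visited set on the spider and lets the spider override the fingerprint with an explicit id.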

from scrapy import log
from scrapy.http import Request
from scrapy.item import BaseItem
from scrapy.utils.request import request_fingerprint
from myproject.items import MyItem

class IgnoreVisitedItems(object):
    """Middleware to ignore re-visiting item pages if they
    were already visited before.
    The requests to be filtered have a meta['filter_visited']
    flag enabled, and optionally define an id to use
    for identifying them, which defaults to the request fingerprint,
    although you'd want to use the item id,
    if you already have it beforehand, to make it more robust.
    """
    FILTER_VISITED = 'filter_visited'
    VISITED_ID = 'visited_id'
    CONTEXT_KEY = 'visited_ids'

    def process_spider_output(self, response, result, spider):
        # Keep the set of visited ids on the spider, creating it on first use.
        context = getattr(spider, 'context', {})
        visited_ids = context.setdefault(self.CONTEXT_KEY, {})
        ret = []
        for x in result:
            visited = False
            if isinstance(x, Request):
                # Only filter requests that opted in via meta['filter_visited'].
                if self.FILTER_VISITED in x.meta:
                    visit_id = self._visited_id(x)
                    if visit_id in visited_ids:
                        log.msg("Ignoring already visited: %s" % x.url,
                                level=log.INFO, spider=spider)
                        visited = True
            elif isinstance(x, BaseItem):
                # An item was scraped: record its page as visited.
                visit_id = self._visited_id(response.request)
                if visit_id:
                    visited_ids[visit_id] = True
                    x['visit_id'] = visit_id
                    x['visit_status'] = 'new'
            if visited:
                # Emit a stub item marking this page as already seen.
                ret.append(MyItem(visit_id=visit_id, visit_status='old'))
            else:
                ret.append(x)
        return ret

    def _visited_id(self, request):
        # Prefer an explicit id from meta; fall back to the request fingerprint.
        return request.meta.get(self.VISITED_ID) or request_fingerprint(request)
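To wire this into a project, the middleware would be registered in the project's settings, and the spider would set the meta flag on the requests it wants filtered. A sketch under assumptions: the module path `myproject.middlewares` and the priority value 500 are illustrative, not prescribed by the original code.

```python
# settings.py: register the spider middleware (priority 500 is arbitrary)
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.IgnoreVisitedItems': 500,
}
```

In the spider, a request opts in to filtering via `meta={'filter_visited': True}`; it can additionally pass `meta={'visited_id': some_item_id}` to identify the page by a known item id instead of the request fingerprint, which the docstring above suggests is the more robust choice when the id is available beforehand.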

I hope this article is of some help to your Python programming.
