
A Simple Spider Program Based on Scrapy

Source: 动视网 | Editor: 小采 | Date: 2020-11-27 14:32:42


This article presents a simple spider (scraper) program implemented with Scrapy, shared here for your reference. The details are as follows:

# Standard Python library imports
# 3rd party imports
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
# My imports
from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'

class PoetryParser(object):
    """
    Provides a common parsing method for poems formatted this one specific way.
    """
    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        hxs = HtmlXPathSelector(response)
        item = PoetryAnalysisItem()
        # All poetry text is in pre tags
        text = hxs.select('//pre/text()').extract()
        item['text'] = ''.join(text)
        item['url'] = response.url
        # head/title contains "title - a poem by author"
        title_text = hxs.select('//head/title/text()').extract()[0]
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        # date_pattern is a class attribute, so it must be reached through self
        item['date'] = hxs.select("//p[@class='small']/text()").re(self.date_pattern)
        return item

class PoetrySpider(CrawlSpider, PoetryParser):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    root_path = 'someuser/poetry/'
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    # Follow .html links under each start URL and parse each one as a poem
    rules = [Rule(SgmlLinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                  callback='parse_poem'),
             Rule(SgmlLinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]
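
Running the spider end to end also requires the item class the code imports. The original items.py is not shown in the article; the following is a minimal sketch inferred from the fields assigned in parse_poem:

import scrapy

class PoetryAnalysisItem(scrapy.Item):
    # Fields inferred from the assignments in parse_poem above
    title = scrapy.Field()
    author = scrapy.Field()
    text = scrapy.Field()
    url = scrapy.Field()
    date = scrapy.Field()

With the project in place, the spider is run from the project root with the standard Scrapy command line, for example:

scrapy crawl example.com_poetry -o poems.json

Note that the listing above targets an old Scrapy release: the scrapy.contrib package and SgmlLinkExtractor were removed in Scrapy 1.0+, and HtmlXPathSelector has been superseded by response.xpath(). A minimal sketch of the same spider against the current API, folding the PoetryParser mixin into the spider for brevity, might look like this:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'

class PoetrySpider(CrawlSpider):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    # One rule can carry both allow patterns in the modern LinkExtractor
    rules = [Rule(LinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME,
                                       start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]
    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        item = PoetryAnalysisItem()
        # All poetry text is in pre tags
        item['text'] = ''.join(response.xpath('//pre/text()').getall())
        item['url'] = response.url
        # head/title contains "title - a poem by author"
        title_text = response.xpath('//head/title/text()').get()
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        item['date'] = response.xpath("//p[@class='small']/text()").re(self.date_pattern)
        yield item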

Hopefully this article is of some help to your Python programming.
