
Parsing JavaScript pages with Scrapy in Python (example)

Source: 动视网 | Editor: 小采 | Published: 2020-11-27 14:30:01

Intro: This example combines Scrapy's CrawlSpider with Selenium RC. The spider follows article links matching a URL pattern, then opens each page in a real browser so that JavaScript-rendered content is fully loaded before parsing.


The code is as follows:

    # This example targets the old Scrapy (pre-1.0) and Selenium RC APIs (Python 2).
    from selenium import selenium
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import Selector

    class MySpider(CrawlSpider):
        name = 'cnbeta'
        allowed_domains = ['cnbeta.com']
        start_urls = ['http://www.gxlcms.com']

        rules = (
            # Follow links matching '/articles/*.htm' and parse each one
            # with parse_page; follow=True keeps crawling from those pages.
            Rule(SgmlLinkExtractor(allow=(r'/articles/.*\.htm',)),
                 callback='parse_page', follow=True),
        )

        def __init__(self):
            CrawlSpider.__init__(self)
            self.verificationErrors = []
            # Connect to a Selenium RC server running locally on port 4444.
            self.selenium = selenium("localhost", 4444, "*firefox",
                                     "http://www.gxlcms.com")
            self.selenium.start()

        def __del__(self):
            self.selenium.stop()
            print self.verificationErrors

        def parse_page(self, response):
            self.log('Hi, this is an item page! %s' % response.url)
            from webproxy.items import WebproxyItem  # the project's item class

            # Let Selenium load the page so that its JavaScript is executed.
            sel = self.selenium
            sel.open(response.url)
            sel.wait_for_page_to_load("30000")

            # Give any remaining asynchronous scripts time to finish.
            import time
            time.sleep(2.5)

The snippet ends here in the original article; extracting fields into WebproxyItem is left incomplete.
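The Rule above decides which links to follow purely by matching each candidate URL against the regex in `allow`; the link extractor performs an unanchored search, so the pattern may match anywhere in the URL. A minimal sketch of that check using only Python's standard `re` module (the helper name `rule_allows` and the example URLs are made up for illustration):

```python
import re

# The same pattern passed to allow=(...) in the Rule above.
pattern = re.compile(r'/articles/.*\.htm')

def rule_allows(url):
    # Hypothetical helper approximating the extractor's allow check:
    # an unanchored regex search over the full URL.
    return pattern.search(url) is not None

# Made-up example URLs:
# an article page matching '/articles/*.htm' is followed,
# a category page is skipped.
print(rule_allows('http://www.cnbeta.com/articles/12345.htm'))  # True
print(rule_allows('http://www.cnbeta.com/category.php'))        # False
```

Because the search is unanchored, the domain part of the URL does not matter; only the `/articles/.*\.htm` substring does.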

Tags: js, example, python