
Example: scraping Tianya forum post content with Python multithreading

Source: 动视网  Editor: 小采  Published: 2020-11-27 14:38:30
This example uses re, urllib, and threading to scrape the content of a Tianya forum post with multiple threads. Set url to the first page of the Tianya post you want to scrape, and file_name to the name of the file the content is saved to.
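The extraction step relies on re.findall with re.DOTALL so that each (time, body) pair is captured even when a post body spans several lines. A minimal sketch against a made-up HTML fragment (the tag names here are illustrative, not Tianya's real markup):

```python
import re

# Hypothetical forum-page fragment; the real Tianya markup differs.
html = '''
<span>时间:2014-01-01 10:00</span>
<div class="content">first reply
spanning two lines</div>
<span>时间:2014-01-02 11:00</span>
<div class="content">second reply</div>
'''

# re.DOTALL lets .*? cross newlines, so the body is captured even when it
# wraps onto several lines; non-greedy .*? keeps each match to one post.
pattern = re.compile(r'<span>时间:(.*?)</span>.*?<div class="content">(.*?)</div>',
                     re.DOTALL)
posts = pattern.findall(html)  # list of (time, body) tuples
```

Without re.DOTALL, `.` would stop at the first newline and the two-line body would not match at all.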

The code is as follows:


#coding:utf-8

import urllib
import re
import threading
import os

class Down_Tianya(threading.Thread):
    """Download thread: fetches one page of the post."""
    def __init__(self, url, num, dt):
        threading.Thread.__init__(self)
        self.url = url
        self.num = num
        self.txt_dict = dt

    def run(self):
        print 'downloading from %s' % self.url
        self.down_text()

    def down_text(self):
        """Fetch the page at self.url and store its posts in the shared dict, keyed by page number."""
        html_content = urllib.urlopen(self.url).read()
        # The HTML tags inside this pattern were stripped when the article was
        # published; the tags below are a plausible reconstruction of Tianya's
        # markup and may need adjusting against the live page source.
        text_pattern = re.compile('<span>时间:(.*?)</span>.*?<div class="bbs-content.*?">\s*(.*?)</div>', re.DOTALL)
        text = text_pattern.findall(html_content)
        text_join = ['\r\n\r\n\r\n\r\n'.join(item) for item in text]
        self.txt_dict[self.num] = text_join

def page(url):
    """Fetch the first page and extract the total page count."""
    html_page = urllib.urlopen(url).read()
    # Reconstructed for the same reason as above: the pattern should match the
    # last page number immediately before the 下页 ("next page") link.
    page_pattern = re.compile(r'<a href="\S*?">(\d*)</a>\s*<a href="\S*?">下页</a>')
    page_result = page_pattern.search(html_page)
    if page_result:
        page_num = int(page_result.group(1))
        return page_num

def write_text(dict, fn):
    """Write the dict to a text file in page order; each value is the list of posts on that page."""
    tx_file = open(fn, 'w+')
    pn = len(dict)
    for i in range(1, pn+1):
        tx_list = dict[i]
        for tx in tx_list:
            # The literal tags here were also eaten by the page's HTML
            # rendering; restoring the obvious <br> and &nbsp; replacements.
            tx = tx.replace('<br>', '\r\n').replace('<br/>', '\r\n').replace('&nbsp;', '')
            tx_file.write(tx.strip() + '\r\n' * 4)
    tx_file.close()


def main():
    url = 'http://bbs.tianya.cn/post-16-996521-1.shtml'
    file_name = 'abc.txt'
    my_page = page(url)
    my_dict = {}

    print 'page num is : %s' % my_page

    threads = []

    # Build one URL per page and start a download thread for each.
    for num in range(1, my_page+1):
        myurl = '%s%s.shtml' % (url[:-7], num)
        downlist = Down_Tianya(myurl, num, my_dict)
        downlist.start()
        threads.append(downlist)

    # Wait for every download to finish before writing the file.
    for t in threads:
        t.join()

    write_text(my_dict, file_name)

    print 'All downloads finished. File saved in directory: %s' % os.getcwd()

if __name__ == '__main__':
    main()
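The per-page URLs are built by slicing off the trailing '1.shtml' (7 characters) from the first-page URL and appending each page number, as a short check shows:

```python
url = 'http://bbs.tianya.cn/post-16-996521-1.shtml'

# url[:-7] drops the trailing '1.shtml', leaving the common prefix,
# so appending '<n>.shtml' yields the URL of page n.
page_3 = '%s%s.shtml' % (url[:-7], 3)
```

This only works because Tianya page URLs differ solely in the final page number; a post with a 10-or-more-page count still works, since the slice removes a fixed suffix, not a fixed-width number.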

Save the script as down_tianya.py and run it under a Python 2 interpreter (urllib.urlopen and the print statements are Python 2 only).

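The same pattern — download each page into a dict keyed by page number, then emit in key order — translates directly to Python 3's concurrent.futures. A minimal sketch with a stubbed fetch function standing in for the network call and regex extraction (fetch_page is hypothetical, not part of the original script):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_page(num):
    # Stub standing in for urllib.request.urlopen(...).read() plus the
    # regex extraction; returns one fake post list per page.
    return ['post from page %d' % num]

def scrape(page_count, workers=5):
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Submit one task per page; remember which future maps to which page.
        futures = {pool.submit(fetch_page, n): n for n in range(1, page_count + 1)}
        for fut, num in futures.items():
            results[num] = fut.result()  # blocks until that page is done
    # Writing in key order preserves page order regardless of finish order.
    return [post for num in sorted(results) for post in results[num]]
```

The executor replaces the hand-rolled Thread subclass and join loop, and a bounded worker pool avoids spawning one thread per page on very long posts.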
