Scrapy + Spynner: scraping pages that load via JS/AJAX and extracting their information (WeChat official account article list example)

优采云 Published: 2021-09-09 19:03


  There are several kinds of web pages you may need to scrape:

  1. Static pages

  2. Dynamic pages (data loaded via JS/AJAX)

  3. Pages that require a simulated login before scraping

  4. Encrypted pages

  Solutions and ideas for 3 and 4 will be covered in later posts.

  For now, here are solutions and ideas for 1 and 2:

  I. Static pages

  There are many ways to fetch and parse static pages! Both Java and Python offer plenty of toolkits and frameworks: Java has HttpClient, HtmlUnit, Jsoup, HtmlParser, and so on; Python has urllib, urllib2, BeautifulSoup, Scrapy, and more. I won't go into detail here; there is plenty of material online.
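For simple static pages you don't even need a third-party toolkit; here is a minimal sketch using Python's built-in html.parser to pull link text and URLs out of already-fetched HTML (the markup below is invented for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (text, href) pairs from <a> tags -- a minimal static-page parser."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the <a> currently open, if any
        self._text = []     # text fragments seen inside that <a>

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# With a static page, the HTML can be fetched once and parsed offline:
html = '<div><a href="/post/1">First post</a><a href="/post/2">Second post</a></div>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)
```

For real pages you would fetch `html` with urllib first; BeautifulSoup or Jsoup make the same extraction shorter once the markup gets messy.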

  II. Dynamic pages

  For scraping purposes, a dynamic page is one whose data has to be fetched by JS and AJAX after the initial load. There are two ways to collect that data:

  1. Use a packet-capture tool to analyze the JS and AJAX requests, then replay those requests yourself to fetch the data directly.

  2. Drive a browser engine, take the source of the fully loaded page, and parse that.
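Approach 1 in practice: open the browser devtools, find the XHR request the page fires, and call that endpoint yourself. The payload is usually JSON, which is much easier to parse than rendered HTML. A sketch with an invented captured response (the endpoint and field names are hypothetical):

```python
import json

# Suppose packet capture shows the page fills its article list from an XHR
# such as GET /api/articles?page=1 (hypothetical) returning JSON like this:
captured = '{"items": [{"title": "Intro to Internet Protocols", "url": "/a/1"}]}'

# Parsing the JSON directly replaces all the HTML scraping for this page:
data = json.loads(captured)
titles = [item["title"] for item in data["items"]]
print(titles)
```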

  Anyone studying crawlers needs at least some JS knowledge. There is plenty of learning material online, so I won't list it here; I only mention it for the completeness of this article.

  There are also several toolkits for driving a browser engine, but they are not today's focus. Today's focus is the title of this post: combining the Scrapy framework with Spynner to scrape pages that load via JS/AJAX and extract their information (using a WeChat official account article list as the example).

  You need to set up the environment before using Scrapy and Spynner. I have been learning Python for a while; this took me half a day of fighting on my Mac, and it only worked when I was about to lose my mind, killing quite a few brain cells along the way.

  

  On Windows it is even worse! To sum it up: just install whatever you find missing as you go!

  Let's get started...

  1. Create the WeChat official account article-list scraping project (weixin for short)

  scrapy startproject weixin

  2. Create the spider file under the spiders directory

  vim weixinlist.py

  Write the following code:

from weixin.items import WeixinItem
import sys
sys.path.insert(0, '..')
import scrapy
from scrapy import Spider

class MySpider(Spider):
    name = 'weixinlist'
    allowed_domains = []
    start_urls = [
        '',
    ]
    download_delay = 1
    print('start init....')

    def parse(self, response):
        sel = scrapy.Selector(response)
        print('hello,world!')
        print(response)
        print(sel)
        # each article entry lives in an <h4> under a div with class "txt-box"
        hits = sel.xpath('//div[@class="txt-box"]/h4')
        items = []
        for single in hits:
            data = WeixinItem()
            title = single.xpath('a/text()').extract()
            link = single.xpath('a/@href').extract()
            data['title'] = title
            data['link'] = link
            if len(title) > 0:
                print(title[0].encode('utf-8'))
                print(link)
            items.append(data)  # collect the item (originally built but never collected)
        return items            # hand the items back to Scrapy
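The heart of the spider is the XPath '//div[@class="txt-box"]/h4': each article title and link sits inside an h4 under a div with class txt-box. That selection logic can be checked offline with the standard library against a simplified stand-in for the real page markup (the snippet below is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A simplified, well-formed stand-in for the WeChat article-list markup:
snippet = """
<root>
  <div class="txt-box"><h4><a href="/a/1">Title one</a></h4></div>
  <div class="txt-box"><h4><a href="/a/2">Title two</a></h4></div>
</root>
"""

tree = ET.fromstring(snippet)
# Same shape as the spider's XPath: every h4 under div.txt-box
rows = tree.findall(".//div[@class='txt-box']/h4")
pairs = [(h4.find("a").text, h4.find("a").get("href")) for h4 in rows]
print(pairs)
```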

  3. Add the WeixinItem class to items.py

import scrapy

class WeixinItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()

  4. Create a downloader middleware, downloadwebkit.py, in the same directory as items.py, and put the following code in it:

import spynner
import pyquery
from scrapy.http import HtmlResponse

class WebkitDownloaderTest(object):

    def process_request(self, request, spider):
        # if spider.name in settings.WEBKIT_DOWNLOADER:
        # if type(request) is not FormRequest:
        browser = spynner.Browser()
        browser.create_webview()
        browser.set_html_parser(pyquery.PyQuery)
        browser.load(request.url, 20)
        try:
            browser.wait_load(10)
        except Exception:
            pass  # render what we have even if the wait times out
        string = browser.html
        string = string.encode('utf-8')
        renderedBody = str(string)
        return HtmlResponse(request.url, body=renderedBody)

  This code drives the browser engine to grab the page source once the page has finished loading.
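The short-circuit this relies on: Scrapy calls process_request() on each enabled downloader middleware in priority order, and as soon as one returns a Response, the normal download is skipped and that Response is used instead. A framework-free sketch of that dispatch (all names below are hypothetical stand-ins, not the Scrapy API):

```python
class RenderingMiddleware:
    """Stand-in for a webkit middleware: renders JS-heavy requests itself."""
    def process_request(self, request):
        if request.get("needs_js"):
            # Returning a response short-circuits the rest of the chain.
            return {"url": request["url"], "body": "<html>rendered</html>"}
        return None  # fall through to the next middleware / real download

def download(request, middlewares):
    """Simplified dispatch: first middleware to answer wins."""
    for mw in middlewares:
        response = mw.process_request(request)
        if response is not None:
            return response
    return {"url": request["url"], "body": "<html>plain</html>"}

resp = download({"url": "http://example.com", "needs_js": True},
                [RenderingMiddleware()])
print(resp["body"])
```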

  5. Configure and declare the downloader middleware in settings.py

  Add the following at the bottom:

# which spiders should use WEBKIT
WEBKIT_DOWNLOADER = ['fenghuangblog']
DOWNLOADER_MIDDLEWARES = {
    'weixin.downloadwebkit.WebkitDownloaderTest': 543,
}
import os
os.environ["DISPLAY"] = ":0"
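One caveat on the settings above: WEBKIT_DOWNLOADER lists 'fenghuangblog' (apparently a leftover from another project) while this spider is named 'weixinlist', and the check that would consult the list is commented out in downloadwebkit.py, so every request currently goes through webkit. The gating that the setting intends is just a membership test, sketched here with the names from this post:

```python
# Spiders whose requests should be rendered through the webkit middleware:
WEBKIT_DOWNLOADER = ['weixinlist']

def should_render(spider_name):
    # The middleware would run this check before spinning up a browser.
    return spider_name in WEBKIT_DOWNLOADER

print(should_render('weixinlist'))
```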

  6. Run the program:

  Run the command:

  scrapy crawl weixinlist

  Result:

  kevinflynndeMacBook-Pro:spiders kevinflynn$ scrapy crawl weixinlist

start init....

2015-07-28 21:13:55 [scrapy] INFO: Scrapy 1.0.1 started (bot: weixin)

2015-07-28 21:13:55 [scrapy] INFO: Optional features available: ssl, http11

2015-07-28 21:13:55 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'weixin.spiders', 'SPIDER_MODULES': ['weixin.spiders'], 'BOT_NAME': 'weixin'}

2015-07-28 21:13:55 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named service_identity'. Please install it from and make sure all of its dependencies are satisfied. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.

2015-07-28 21:13:55 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState

2015-07-28 21:13:55 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, WebkitDownloaderTest, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats

2015-07-28 21:13:55 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware

2015-07-28 21:13:55 [scrapy] INFO: Enabled item pipelines:

2015-07-28 21:13:55 [scrapy] INFO: Spider opened

2015-07-28 21:13:55 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

2015-07-28 21:13:55 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023

QFont::setPixelSize: Pixel size

互联网协议入门

[u';mid=210032701&idx=1&sn=6b1fc2bc5d4eb0f87513751e4ccf610c&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

自己动手写贝叶斯分类器给图书分类

[u';mid=210013947&idx=1&sn=1f36ba5794e22d0fb94a9900230e74ca&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

不当免费技术支持的10种方法

[u';mid=209998175&idx=1&sn=216106034a3b4afea6e67f813ce1971f&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

以 Python 为实例,介绍贝叶斯理论

[u';mid=209998175&idx=2&sn=2f3dee873d7350dfe9546ab4a9323c05&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

我从腾讯那“偷了”3000万QQ用户数据,出了份很有趣的...

[u';mid=209980651&idx=1&sn=11fd40a2dee5132b0de8d4c79a97dac2&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

如何用 Spark 快速开发应用?

[u';mid=209820653&idx=2&sn=23712b78d82fb412e960c6aa1e361dd3&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

一起来写个简单的解释器(1)

[u';mid=209797651&idx=1&sn=15073e27080e6b637c8d24b6bb815417&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

那个直接在机器码中改 Bug 的家伙

[u';mid=209762756&idx=1&sn=04ae1bc3a366d358f474ac3e9a85fb60&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

把一个库开源,你该做些什么

[u';mid=209762756&idx=2&sn=0ac961ffd82ead6078a60f25fed3c2c4&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

程序员的困境

[u';mid=209696436&idx=1&sn=8cb55b03c8b95586ba4498c64fa54513&3rd=MzA3MDU4NTYzMw==&scene=6#rd']

2015-07-28 21:14:08 [scrapy] INFO: Closing spider (finished)

2015-07-28 21:14:08 [scrapy] INFO: Dumping Scrapy stats:

{'downloader/response_bytes': 131181,

'downloader/response_count': 1,

'downloader/response_status_count/200': 1,

'finish_reason': 'finished',

'finish_time': datetime.datetime(2015, 7, 28, 13, 14, 8, 958071),

'log_count/DEBUG': 2,

'log_count/INFO': 7,

'log_count/WARNING': 1,

'response_received_count': 1,

'scheduler/dequeued': 1,

'scheduler/dequeued/memory': 1,

'scheduler/enqueued': 1,

'scheduler/enqueued/memory': 1,

'start_time': datetime.datetime(2015, 7, 28, 13, 13, 55, 688111)}

2015-07-28 21:14:08 [scrapy] INFO: Spider closed (finished)

QThread: Destroyed while thread is still running

kevinflynndeMacBook-Pro:spiders kevinflynn$
