Scraping dynamic web pages: collecting pages whose data is loaded by js/ajax and extracting their information



The pages to be collected fall into several categories:

1. Static web pages

2. Dynamic web pages (data loaded dynamically via js/ajax)

3. Pages that require a simulated login before collection

4. Encrypted pages

Solutions and ideas for cases 3 and 4 will be covered in later posts.

For now, here are the solutions and ideas for cases 1 and 2:

I. Static web pages

For collecting and parsing static pages, Java and Python both provide plenty of toolkits and frameworks: in Java, HttpClient, HtmlUnit, jsoup, HtmlParser, and so on; in Python, urllib, urllib2, BeautifulSoup, Scrapy, etc. I won't go into detail here; there is plenty of material online.
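As a minimal illustration of the static case (the URL and the selector below are hypothetical placeholders, and the sketch uses the Python 2-era urllib2 that this post's toolchain assumes):

```python
# A minimal sketch of static-page collection: fetch the raw HTML, parse it.
# The URL and the 'h4 a' selector are hypothetical placeholders.
import urllib2
from bs4 import BeautifulSoup

html = urllib2.urlopen('http://example.com/news').read()
soup = BeautifulSoup(html, 'html.parser')
for a in soup.select('h4 a'):
    print('%s -> %s' % (a.get_text(), a.get('href')))
```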

II. Dynamic web pages

For dynamic pages, where the data is loaded dynamically by js/ajax, collection programs take one of two routes:

1. Work out the js/ajax requests the page makes, assemble and simulate them yourself, and fetch the data directly without rendering the page (see the sketch just below).

2. Drive a browser kernel, grab the page source after the page has loaded and the scripts have run, and then parse that source.
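A rough sketch of route 1. Everything here, the endpoint, the parameter, and the field names, is hypothetical; the real request has to be found in the browser's developer-tools network panel:

```python
# A sketch of route 1: call the ajax endpoint directly instead of
# rendering the page. URL, parameters and field names are hypothetical.
import urllib2
import json

req = urllib2.Request('http://example.com/api/articles?page=1',
                      headers={'User-Agent': 'Mozilla/5.0'})
data = json.loads(urllib2.urlopen(req).read())
for entry in data['list']:  # 'list' is a hypothetical field name
    print('%s %s' % (entry['title'], entry['url']))
```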

Anyone studying crawlers has to work through this material sooner or later, and there is plenty of it to learn from online, so I won't restate it; this post is written just to record the following walkthrough.

Java toolkits such as HtmlUnit can also drive a browser kernel, but that is not today's focus. Today's focus: collecting pages whose data is loaded dynamically by js/ajax and extracting their information, using the article list of a WeChat official account as the example.

Let's get started...

1. Create the article-list collection project (hereafter the weixin project):

```
scrapy startproject weixin
```
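`scrapy startproject` generates the standard project skeleton, which the following steps fill in:

```
weixin/
├── scrapy.cfg
└── weixin/
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py
```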

2. Create the spider file in the spiders directory:

```
vim weixinlist.py
```

and write the following into it:

```python
from weixin.items import WeixinItem
import sys
sys.path.insert(0, '..')
import scrapy
import time
from scrapy import Spider

class MySpider(Spider):
    name = 'weixinlist'
    allowed_domains = []
    start_urls = [
        'http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ',
    ]
    download_delay = 1
    print('start init....')

    def parse(self, response):
        sel = scrapy.Selector(response)
        print('hello,world!')
        print(response)
        print(sel)
        # Each article entry sits in an <h4> inside a div.txt-box.
        list = sel.xpath('//div[@class="txt-box"]/h4')
        items = []
        for single in list:
            data = WeixinItem()
            title = single.xpath('a/text()').extract()
            link = single.xpath('a/@href').extract()
            data['title'] = title
            data['link'] = link
            if len(title) > 0:
                print(title[0].encode('utf-8'))
                print(link)
            items.append(data)
        return items
```

3. Add the WeixinItem class to items.py.
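The original post does not show the item definition; a minimal sketch consistent with the two fields the spider uses would be:

```python
# items.py -- a minimal sketch; the original post omits this class.
import scrapy

class WeixinItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
```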

4. Create a downloader middleware, downloadwebkit.py, at the same level as items.py, and write the following code into it:

```python
import spynner
import pyquery
from scrapy.http import HtmlResponse

class WebkitDownloaderTest(object):
    def process_request(self, request, spider):
        # if spider.name in settings.WEBKIT_DOWNLOADER:
        #     if type(request) is not FormRequest:
        # Open an embedded WebKit browser and let it execute the page's js.
        browser = spynner.Browser()
        browser.create_webview()
        browser.set_html_parser(pyquery.PyQuery)
        browser.load(request.url, 20)
        try:
            # Give the js/ajax calls up to 10 more seconds to finish.
            browser.wait_load(10)
        except:
            pass
        string = browser.html
        string = string.encode('utf-8')
        renderedBody = str(string)
        browser.close()  # added: shut the embedded WebKit down cleanly
        return HtmlResponse(request.url, body=renderedBody)
```

This code drives a browser kernel and grabs the page source after the page has finished loading.
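Note that returning an HtmlResponse from process_request short-circuits Scrapy's normal download path: the rendered response is handed back through the middleware chain to the spider, so the spider parses the post-js source rather than the raw server response.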

5. Configure settings.py to declare the downloader middleware. Add the following code at the bottom:

```python
# which spider should use WEBKIT
WEBKIT_DOWNLOADER = ['weixinlist']
DOWNLOADER_MIDDLEWARES = {
    'weixin.downloadwebkit.WebkitDownloaderTest': 543,
}
import os
os.environ["DISPLAY"] = ":0"
```
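Two notes on this configuration: 543 is the middleware's priority and slots it in among Scrapy's built-in downloader middlewares (visible in the "Enabled downloader middlewares" line of the log below); and DISPLAY=":0" points spynner's embedded WebKit at an X server, which matters when running on a headless Linux box (e.g. under Xvfb). The WEBKIT_DOWNLOADER list only takes effect if you uncomment the spider-name check in the middleware.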

6. Run the program with the command:

```
scrapy crawl weixinlist
```

The result:

```
kevinflynndeMacBook-Pro:spiders kevinflynn$ scrapy crawl weixinlist
start init....
2015-07-28 21:13:55 [scrapy] INFO: Scrapy 1.0.1 started (bot: weixin)
2015-07-28 21:13:55 [scrapy] INFO: Optional features available: ssl, http11
2015-07-28 21:13:55 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'weixin.spiders', 'SPIDER_MODULES': ['weixin.spiders'], 'BOT_NAME': 'weixin'}
2015-07-28 21:13:55 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named service_identity'. Please install it from https://pypi.python.org/pypi/service_identity and make sure all of its dependencies are satisfied. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
2015-07-28 21:13:55 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-07-28 21:13:55 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, WebkitDownloaderTest, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-07-28 21:13:55 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-07-28 21:13:55 [scrapy] INFO: Enabled item pipelines:
2015-07-28 21:13:55 [scrapy] INFO: Spider opened
2015-07-28 21:13:55 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-07-28 21:13:55 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
QFont::setPixelSize: Pixel size <= 0 (0)
2015-07-28 21:14:08 [scrapy] DEBUG: Crawled (200) <GET http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ> (referer: None)
hello,world!
<200 http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ>
<Selector xpath=None data=u'<html><head><meta http-equiv="X-UA-Compa'>
互联网协议入门
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=210032701&idx=1&sn=6b1fc2bc5d4eb0f87513751e4ccf610c&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
自己动手写贝叶斯分类器给图书分类
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=210013947&idx=1&sn=1f36ba5794e22d0fb94a9900230e74ca&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
不当免费技术支持的10种方法
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209998175&idx=1&sn=216106034a3b4afea6e67f813ce1971f&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
以 Python 为实例,介绍贝叶斯理论
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209998175&idx=2&sn=2f3dee873d7350dfe9546ab4a9323c05&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
我从腾讯那“偷了”3000万QQ用户数据,出了份很有趣的...
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209980651&idx=1&sn=11fd40a2dee5132b0de8d4c79a97dac2&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
如何用 Spark 快速开发应用?
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209820653&idx=2&sn=23712b78d82fb412e960c6aa1e361dd3&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
一起来写个简单的解释器(1)
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209797651&idx=1&sn=15073e27080e6b637c8d24b6bb815417&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
那个直接在机器码中改 Bug 的家伙
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209762756&idx=1&sn=04ae1bc3a366d358f474ac3e9a85fb60&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
把一个库开源,你该做些什么
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209762756&idx=2&sn=0ac961ffd82ead6078a60f25fed3c2*敏*感*词*&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
程序员的困境
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209696436&idx=1&sn=8cb55b03c8b95586ba4498c64fa54513&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
2015-07-28 21:14:08 [scrapy] INFO: Closing spider (finished)
2015-07-28 21:14:08 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/response_bytes': 131181,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 7, 28, 13, 14, 8, 958071),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'log_count/WARNING': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2015, 7, 28, 13, 13, 55, 688111)}
2015-07-28 21:14:08 [scrapy] INFO: Spider closed (finished)
QThread: Destroyed while thread is still running
kevinflynndeMacBook-Pro:spiders kevinflynn$
```
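One last detail: the QThread warning at the end of the run comes from the embedded WebKit browser being torn down while its thread is still alive; closing the spynner Browser in the middleware (the browser.close() line added above) is meant to avoid exactly this.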
