Google Search Scraping Tool (Python tools for handling web pages + scraping Google search links with BeautifulSoup)


  (1) Scraping Google search links with urllib2 + BeautifulSoup

  Recently, a project I am involved in needs to process Google search results; before that, I had been learning Python tools for working with web pages. In practice, I use urllib2 and BeautifulSoup to crawl pages, but when crawling Google search results I found that processing the source of the results page directly yields a lot of "dirty" links.

  Take the search results for "titanic james" shown below:

  [screenshot: Google search results page]

  The links marked in red in the figure are unwanted; the ones marked in blue are the links to scrape.
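  For reference, here is a minimal sketch of that naive approach (assuming Python 2 with the old BeautifulSoup 3 module; the query and the User-Agent header are illustrative). It prints every <a href> found on the results page, which is exactly where the unwanted links come from:

import urllib, urllib2
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3; bs4 uses a different import

query = "titanic james"  # illustrative query
url = "http://www.google.com/search?q=" + urllib.quote(query)
# Google rejects the default urllib2 User-Agent, so supply a browser-like one
request = urllib2.Request(url, None, {"User-Agent": "Mozilla/5.0"})
html = urllib2.urlopen(request).read()

soup = BeautifulSoup(html)
for a in soup.findAll("a"):
    href = a.get("href")
    if href:
        print href  # result links come mixed with navigation, ad and cache links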

  Of course, these "dirty" links could be filtered out with hand-written rules, but that pushes up the program's complexity considerably. Just as I was frowning over the filtering rules, a classmate reminded me that Google should provide a relevant API, and it suddenly dawned on me.

  (2) Google Web Search API + multithreading

  The API documentation provides an example of searching from Python:

import urllib2
import simplejson

# The request also includes the userip parameter which provides the end
# user's IP address. Doing so will help distinguish this legitimate
# server-side traffic from traffic which doesn't come from an end-user.
url = ("https://ajax.googleapis.com/ajax/services/search/web"
       "?v=1.0&q=Paris%20Hilton&userip=USERS-IP-ADDRESS")

request = urllib2.Request(
    url, None, {"Referer": "http://www.example.com"})  # enter the URL of your site here
response = urllib2.urlopen(request)

# Process the JSON string.
results = simplejson.load(response)

# now have some fun with the results...
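  Concretely, the parsed object exposes the hits under responseData -> results, and each entry carries a url field (the same path the full script below relies on):

# continuing from the sample above: walk the parsed JSON
for hit in results["responseData"]["results"]:
    print hit["url"]  # a clean result link, no HTML scraping needed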

  In practice you may need to fetch many result pages from Google, so the crawling work should be divided among multiple threads. For a detailed reference on using the Google Web Search API, see the official documentation (it describes the standard URL parameters). Pay particular attention to the rsz parameter in the URL: it must be a value of 8 or less, and anything greater than 8 triggers an error.
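  As a small illustration of those parameters, the sketch below builds the paginated request URLs, with rsz capped at 8 and start advancing by rsz per page (the query string is illustrative):

import urllib

keywords = "titanic james"  # illustrative
rnum_perpage = 8  # rsz: 8 is the maximum the API accepts
for x in range(4):  # build URLs for 4 pages as an example
    start = x * rnum_perpage
    url = ("https://ajax.googleapis.com/ajax/services/search/web"
           "?v=1.0&q=%s&rsz=%s&start=%s") % (urllib.quote(keywords), rnum_perpage, start)
    print url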

  (3) Implementation

  The implementation still has problems: it runs, but its robustness is poor and it needs improvement. I hope the experts out there will point out my mistakes (I am a Python beginner); I would be very grateful.

#-*-coding:utf-8-*-
import urllib2, urllib
import simplejson
import os, time, threading
import common, html_filter  # the author's own helper modules (see the sketch below)

# read the search keywords
keywords = raw_input("Enter the keywords: ")

# results per page (rsz) and number of pages to fetch
rnum_perpage = 8
pages = 8

# worker thread: fetch one page of API results, then download each result URL
def thread_scratch(url, rnum_perpage, page):
    url_set = []
    try:
        request = urllib2.Request(url, None, {"Referer": "http://www.sina.com"})
        response = urllib2.urlopen(request)
        # Process the JSON string.
        results = simplejson.load(response)
        info = results["responseData"]["results"]
    except Exception, e:
        print "error occurred"
        print e
    else:
        for minfo in info:
            url_set.append(minfo["url"])
            print minfo["url"]
    # download each result link
    i = 0
    for u in url_set:
        try:
            request_url = urllib2.Request(u, None, {"Referer": "http://www.sina.com"})
            request_url.add_header(
                "User-agent",
                "CSC"
            )
            response_data = urllib2.urlopen(request_url).read()
            # optionally strip the HTML tags
            #content_data = html_filter.filter_tags(response_data)
            # write the page to a file
            filenum = i + page
            filename = dir_name + "/related_html_" + str(filenum)
            print " write start: related_html_" + str(filenum)
            f = open(filename, "w+", -1)
            f.write(response_data)
            #print content_data
            f.close()
            print " write down: related_html_" + str(filenum)
        except Exception, e:
            print "error occurred 2"
            print e
        i = i + 1
    return

# create the output directory (recreate it if it already exists)
dir_name = "related_html_" + urllib.quote(keywords)
if os.path.exists(dir_name):
    print "exists file"
    common.delete_dir_or_file(dir_name)
os.makedirs(dir_name)

# scrape the pages, one thread per page of results
print "start to scratch web pages:"
for x in range(pages):
    print "page:%s" % (x + 1)
    page = x * rnum_perpage
    url = ("https://ajax.googleapis.com/ajax/services/search/web"
           "?v=1.0&q=%s&rsz=%s&start=%s") % (urllib.quote(keywords), rnum_perpage, page)
    print url
    t = threading.Thread(target=thread_scratch, args=(url, rnum_perpage, page))
    t.start()

# the main thread waits for all worker threads to finish
main_thread = threading.currentThread()
for t in threading.enumerate():
    if t is main_thread:
        continue
    t.join()
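  The script imports two helper modules, common and html_filter, that the article does not include. As a rough stand-in (an assumption about what they do, inferred from how they are called), they might look like this:

# common.py -- hypothetical stand-in for the author's helper module
import os, shutil

def delete_dir_or_file(path):
    # remove a single file, or a whole directory tree, whichever the path points to
    if os.path.isdir(path):
        shutil.rmtree(path)
    elif os.path.exists(path):
        os.remove(path)

# html_filter.py -- hypothetical stand-in: crude tag stripping with a regex
import re

def filter_tags(html):
    # drop anything that looks like an HTML tag; the author's real filter may differ
    return re.sub(r"<[^>]+>", "", html)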
