Web crawler scraping Baidu images (Python3 hands-on introduction, database part: saving scraped data to a database)
# A simple web crawler
from urllib import request
import chardet

response = request.urlopen("http://www.jianshu.com/")
html = response.read()
charset = chardet.detect(html)  # e.g. {'language': '', 'encoding': 'utf-8', 'confidence': 0.99}
html = html.decode(str(charset["encoding"]))  # decode with the detected encoding
print(html)
Since the scraped HTML document is quite long, here is just a short excerpt:
.......... (a large chunk omitted)
That is a quick introduction to crawling with Python3. Simple, isn't it? I suggest typing it out a few times yourself.
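One thing worth knowing early: some sites reject requests that carry urllib's default User-Agent. Here is a minimal sketch of the same fetch with a browser-like header (the header value below is just an illustrative string, not something the original code uses):

# Same fetch as above, but sending a browser-like User-Agent header
from urllib import request
import chardet

req = request.Request(
    "http://www.jianshu.com/",
    headers={"User-Agent": "Mozilla/5.0"}  # illustrative value
)
response = request.urlopen(req)
html = response.read()
charset = chardet.detect(html)
html = html.decode(str(charset["encoding"]))
print(html[:200])  # print only the first 200 characters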
3. Using Python3 to scrape images from a web page and save them to a local folder
Goal: grab all the images from a Baidu Tieba post and save them to a local folder.
import re
import urllib.request

# Fetch the page HTML
def getHtml(url):
    page = urllib.request.urlopen(url)
    html = page.read()
    return html

html = getHtml("http://tieba.baidu.com/p/3205263090")
html = html.decode('UTF-8')
# Extract the image links
def getImg(html):
    # Match the image URLs in the page with a regular expression
    reg = r'src="(\S+\.jpg)" pic_ext="jpeg"'
    imgre = re.compile(reg)
    imglist = re.findall(imgre, html)
    return imglist
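# --- Aside (safe to delete): a quick sanity check of the regex above on a
# --- hypothetical HTML fragment, so you can see exactly what it captures.
_sample = '<img src="http://example.com/abc.jpg" pic_ext="jpeg" width="510">'
print(re.findall(r'src="(\S+\.jpg)" pic_ext="jpeg"', _sample))
# prints: ['http://example.com/abc.jpg']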
imgList = getImg(html)
imgCount = 0
# Download every matched image into a local "pic" folder;
# create the pic folder first before running this
for imgPath in imgList:
    with open("../pic/" + str(imgCount) + ".jpg", 'wb') as f:
        f.write(urllib.request.urlopen(imgPath).read())
    imgCount += 1
print("All images downloaded")
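If you would rather not create the pic folder by hand, here is a small variant of the loop (same paths as above) that creates the folder on the fly and skips any image that fails to download:

# Variant: create ../pic automatically and survive individual failures
import os
import urllib.request

os.makedirs("../pic", exist_ok=True)  # create the folder if it does not exist
imgCount = 0
for imgPath in imgList:
    try:
        data = urllib.request.urlopen(imgPath).read()
    except Exception:
        continue  # skip images that cannot be fetched
    with open("../pic/" + str(imgCount) + ".jpg", 'wb') as f:
        f.write(data)
    imgCount += 1
print("All images downloaded")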
I couldn't wait to see which pretty pictures had been scraped.
Grabbing 24 photos of girls turned out to be this easy. Simple, isn't it?
4. Using Python3 to scrape the article list from a news site
This one is slightly more involved, so let me walk you through it.
Inspecting the page, the information we want lives in the a tags and img tags inside a div, so the problem becomes how to extract it.
This is where the BeautifulSoup4 library we imported comes in. Here is the key code:
# Use html.parser as the parser
soup = BeautifulSoup(html, 'html.parser')
# Select every a node with class="hot-article-img"
allList = soup.select('.hot-article-img')
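If CSS-style selectors are new to you, here is a tiny self-contained demo of select() on a made-up fragment (the class name matches our page; the link and title are hypothetical):

# Minimal demo of CSS-selector matching with BeautifulSoup
from bs4 import BeautifulSoup

demo = '<div><a class="hot-article-img" href="/article/1.html" title="demo">x</a></div>'
demo_soup = BeautifulSoup(demo, 'html.parser')
for node in demo_soup.select('.hot-article-img'):
    print(node['href'], node['title'])  # -> /article/1.html demo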
The allList we get from this select() call is the news list we are after; on our page the raw result looks like this:
[
<a href="/article/211390.html" target="_blank">
</a>
,
<a href="/article/214982.html" target="_blank" title="TFBOYS成员各自飞,商业价值天花板已现?">
</a>
,
<a href="/article/213703.html" target="_blank" title="买手店江湖">
</a>
,
<a href="/article/214679.html" target="_blank" title="iPhone X正式告诉我们,手机和相机开始分道扬镳">
</a>
,
<a href="/article/214962.html" target="_blank" title="信用已被透支殆尽,乐视汽车或成贾跃亭弃子">
</a>
,
<a href="/article/214867.html" target="_blank" title="别小看“搞笑诺贝尔奖”,要向好奇心致敬">
</a>
,
<a href="/article/214954.html" target="_blank" title="10 年前改变世界的,可不止有 iPhone | 发车">
</a>
,
<a href="/article/214908.html" target="_blank" title="感谢微博替我做主">
</a>
,
<a href="/article/215001.html" target="_blank" title="苹果确认取消打赏抽成,但还有多少内容让你觉得值得掏腰包?">
</a>
,
<a href="/article/214969.html" target="_blank" title="中国音乐的“全面付费”时代即将到来?">
</a>
,
<a href="/article/214964.html" target="_blank" title="百丽退市启示录:“一代鞋王”如何与新生代消费者渐行渐远">
</a>
]
The data is captured, but it is messy and includes a lot we do not want; next we loop over it and pull out the useful fields.
# Loop over the list and extract the useful fields
for news in allList:
    aaa = news.select('a')
    # Only process non-empty results
    if len(aaa) > 0:
        # Article link
        try:  # an exception here means the field is missing
            href = url + aaa[0]['href']
        except Exception:
            href = ''
        # Article image URL
        try:
            imgUrl = aaa[0].select('img')[0]['src']
        except Exception:
            imgUrl = ""
        # News title
        try:
            title = aaa[0]['title']
        except Exception:
            title = "No title"
        print("Title:", title, "\nurl:", href, "\nImage URL:", imgUrl)
        print("==============================================================================================")
Exception handling is added here mainly because some news items may be missing a title, link, or image; without it, one missing field could abort the whole crawl.
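As an aside, BeautifulSoup tags also support dict-style get() and has_attr(), so the same null-safety can be written without try/except. A sketch of the loop body in that style:

# Equivalent extraction using has_attr()/get() instead of try/except
for news in allList:
    aaa = news.select('a')
    if len(aaa) > 0:
        a = aaa[0]
        href = url + a['href'] if a.has_attr('href') else ''
        imgs = a.select('img')
        imgUrl = imgs[0].get('src', '') if imgs else ''
        title = a.get('title', 'No title')
        print("Title:", title, "\nurl:", href, "\nImage URL:", imgUrl)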
The filtered, useful information:
Title: No title
url: https://www.huxiu.com/article/211390.html
Image URL: https://img.huxiucdn.com/article/cover/201708/22/173535862821.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: TFBOYS成员各自飞，商业价值天花板已现？
url: https://www.huxiu.com/article/214982.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/094856378420.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 买手店江湖
url: https://www.huxiu.com/article/213703.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/122655034450.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: iPhone X正式告诉我们，手机和相机开始分道扬镳
url: https://www.huxiu.com/article/214679.html
Image URL: https://img.huxiucdn.com/article/cover/201709/14/182151300292.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 信用已被透支殆尽，乐视汽车或成贾跃亭弃子
url: https://www.huxiu.com/article/214962.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/210518696352.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 别小看“搞笑诺贝尔奖”，要向好奇心致敬
url: https://www.huxiu.com/article/214867.html
Image URL: https://img.huxiucdn.com/article/cover/201709/15/180620783020.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 10 年前改变世界的，可不止有 iPhone | 发车
url: https://www.huxiu.com/article/214954.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/162049096015.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 感谢微博替我做主
url: https://www.huxiu.com/article/214908.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/010410913192.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 苹果确认取消打赏抽成，但还有多少内容让你觉得值得掏腰包？
url: https://www.huxiu.com/article/215001.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/154147105217.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 中国音乐的“全面付费”时代即将到来？
url: https://www.huxiu.com/article/214969.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/101218317953.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 百丽退市启示录：“一代鞋王”如何与新生代消费者渐行渐远
url: https://www.huxiu.com/article/214964.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/213400162818.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
That completes scraping the news info from the news site. The full code is below:
from bs4 import BeautifulSoup
from urllib import request
import chardet

url = "https://www.huxiu.com"
response = request.urlopen(url)
html = response.read()
charset = chardet.detect(html)
html = html.decode(str(charset["encoding"]))  # decode the fetched html with the detected encoding
# Use html.parser as the parser
soup = BeautifulSoup(html, 'html.parser')
# Select every a node with class="hot-article-img"
allList = soup.select('.hot-article-img')
# Loop over the list and extract the useful fields
for news in allList:
    aaa = news.select('a')
    # Only process non-empty results
    if len(aaa) > 0:
        # Article link
        try:  # an exception here means the field is missing
            href = url + aaa[0]['href']
        except Exception:
            href = ''
        # Article image URL
        try:
            imgUrl = aaa[0].select('img')[0]['src']
        except Exception:
            imgUrl = ""
        # News title
        try:
            title = aaa[0]['title']
        except Exception:
            title = "No title"
        print("Title:", title, "\nurl:", href, "\nImage URL:", imgUrl)
        print("==============================================================================================")
Once we have the data, we need to store it in a database. With the data sitting in our own database, we can do follow-up analysis and processing, or even use the scraped articles to back a news API for an app, but that is a topic for later. After I have taught myself database operations in Python, I will write a follow-up article, "Python3 Database Hands-On Introduction: Saving Scraped Data to a Database".
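As a small teaser for that follow-up, here is a minimal sketch of persisting the scraped fields with Python's built-in sqlite3 module (the file name, table name, and schema are my own choices; the future article may well use a different database):

# Minimal sketch: save the scraped news items into a local SQLite database
import sqlite3

# In practice, collect rows inside the extraction loop instead of printing,
# e.g. rows.append((title, href, imgUrl)); a placeholder row is used here.
rows = [("demo title", "https://www.huxiu.com/article/0.html", "")]

conn = sqlite3.connect("news.db")  # hypothetical file name
conn.execute("CREATE TABLE IF NOT EXISTS news (title TEXT, url TEXT, imgUrl TEXT)")
conn.executemany("INSERT INTO news (title, url, imgUrl) VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()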