Web News Scraping: Crawling a News Site's Article List with Python 3



# A simple web crawler
from urllib import request
import chardet

response = request.urlopen("http://www.jianshu.com/")
html = response.read()
# Detect the page encoding, e.g. {'language': '', 'encoding': 'utf-8', 'confidence': 0.99}
charset = chardet.detect(html)
html = html.decode(str(charset["encoding"]))  # decode with the detected encoding
print(html)

  Since the fetched HTML document is fairly long, only a short excerpt is shown here:

  

..........(a large chunk omitted below)

  That is a quick first look at a Python 3 crawler. Simple, isn't it? I suggest you type it out a few more times yourself.
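  One practical note: some sites reject requests that arrive with urllib's default User-Agent. Below is a minimal sketch of the same fetch that sends a browser-like User-Agent header; the header value is just a common example, not something this particular site is known to require.

from urllib import request
import chardet

# Any common browser User-Agent string will do here
req = request.Request("http://www.jianshu.com/", headers={"User-Agent": "Mozilla/5.0"})
html = request.urlopen(req).read()
charset = chardet.detect(html)  # detect the encoding, as before
print(html.decode(charset["encoding"] or "utf-8")[:200])  # preview the first 200 characters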

  3. Python 3: scrape the images on a web page and save them to a local folder

  The target page:

  import re
import urllib.request

# Fetch the page HTML
def getHtml(url):
    page = urllib.request.urlopen(url)
    html = page.read()
    return html

html = getHtml("http://tieba.baidu.com/p/3205263090")
html = html.decode('UTF-8')

# Extract the image links
def getImg(html):
    # Use a regular expression to match the image addresses in the page
    reg = r'src="(.+?\.jpg)" pic_ext="jpeg"'
    imgre = re.compile(reg)
    imglist = re.findall(imgre, html)
    return imglist

imgList = getImg(html)
imgCount = 0
# Download every image found into a local pic folder;
# create the pic folder before running this
for imgPath in imgList:
    f = open("../pic/" + str(imgCount) + ".jpg", 'wb')
    f.write(urllib.request.urlopen(imgPath).read())
    f.close()
    imgCount += 1
print("All images downloaded")

  I couldn't wait to see what lovely pictures we'd scraped:

  

  Just like that, we easily grabbed all 24 photos of the girls. Isn't that simple?
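  One caveat before moving on: the regular expression above is tied to Tieba's exact markup (the pic_ext="jpeg" attribute), so it breaks the moment the page template changes. As a rough sketch of a sturdier alternative, the same links can be collected with BeautifulSoup, which we will be using in the next section anyway; the URL and the .jpg filter simply mirror the example above.

from bs4 import BeautifulSoup
import urllib.request

html = urllib.request.urlopen("http://tieba.baidu.com/p/3205263090").read().decode('UTF-8')
soup = BeautifulSoup(html, 'html.parser')
# Keep the src of every img tag that points at a .jpg file
imglist = [img['src'] for img in soup.find_all('img') if img.get('src', '').endswith('.jpg')]
print(len(imglist), "image links found")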

  4. Python 3: scrape a news site's article list

  This one is a little more involved, so let's break it down step by step.

  

  Looking at the screenshot above, the information we want to capture sits in a and img tags inside a div, so the question is how to pull it out.

  Here we use the BeautifulSoup4 library we imported; the key code is:

  # Use html.parser as the parser
soup = BeautifulSoup(html, 'html.parser')
# Get every node with class=hot-article-img (the div wrapping each a tag)
allList = soup.select('.hot-article-img')

  The allList obtained by the code above is exactly the news list we want. The captured data looks like this:

  [
<div class="hot-article-img"><a href="/article/211390.html"><img src="https://img.huxiucdn.com/article/cover/201708/22/173535862821.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214982.html"><img src="https://img.huxiucdn.com/article/cover/201709/17/094856378420.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/213703.html"><img src="https://img.huxiucdn.com/article/cover/201709/17/122655034450.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214679.html"><img src="https://img.huxiucdn.com/article/cover/201709/14/182151300292.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214962.html"><img src="https://img.huxiucdn.com/article/cover/201709/16/210518696352.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214867.html"><img src="https://img.huxiucdn.com/article/cover/201709/15/180620783020.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214954.html"><img src="https://img.huxiucdn.com/article/cover/201709/16/162049096015.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214908.html"><img src="https://img.huxiucdn.com/article/cover/201709/16/010410913192.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/215001.html"><img src="https://img.huxiucdn.com/article/cover/201709/17/154147105217.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214969.html"><img src="https://img.huxiucdn.com/article/cover/201709/17/101218317953.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>,
<div class="hot-article-img"><a href="/article/214964.html"><img src="https://img.huxiucdn.com/article/cover/201709/16/213400162818.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg"></a></div>
]

  The data has been captured, but it is messy and full of things we don't need. Let's loop over the list and extract just the useful fields.

  Exception handling is added here because some articles may be missing a title, URL, or image; without it, the crawl could be interrupted partway through.

### The filtered, useful information

  Title: No title
url: https://www.huxiu.com/article/211390.html
Image URL: https://img.huxiucdn.com/article/cover/201708/22/173535862821.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: TFBOYS成员各自飞,商业价值天花板已现?
url: https://www.huxiu.com/article/214982.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/094856378420.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 买手店江湖
url: https://www.huxiu.com/article/213703.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/122655034450.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: iPhone X正式告诉我们,手机和相机开始分道扬镳
url: https://www.huxiu.com/article/214679.html
Image URL: https://img.huxiucdn.com/article/cover/201709/14/182151300292.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 信用已被透支殆尽,乐视汽车或成贾跃亭弃子
url: https://www.huxiu.com/article/214962.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/210518696352.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 别小看“搞笑诺贝尔奖”,要向好奇心致敬
url: https://www.huxiu.com/article/214867.html
Image URL: https://img.huxiucdn.com/article/cover/201709/15/180620783020.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 10 年前改变世界的,可不止有 iPhone | 发车
url: https://www.huxiu.com/article/214954.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/162049096015.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 感谢微博替我做主
url: https://www.huxiu.com/article/214908.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/010410913192.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 苹果确认取消打赏抽成,但还有多少内容让你觉得值得掏腰包?
url: https://www.huxiu.com/article/215001.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/154147105217.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 中国音乐的“全面付费”时代即将到来?
url: https://www.huxiu.com/article/214969.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/101218317953.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title: 百丽退市启示录:“一代鞋王”如何与新生代消费者渐行渐远
url: https://www.huxiu.com/article/214964.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/213400162818.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================

  And with that, scraping the news site's article list is done. The complete code is below.

  from bs4 import BeautifulSoup
from urllib import request
import chardet

url = "https://www.huxiu.com"
response = request.urlopen(url)
html = response.read()
charset = chardet.detect(html)
html = html.decode(str(charset["encoding"]))  # decode the fetched HTML with the detected encoding

# Use html.parser as the parser
soup = BeautifulSoup(html, 'html.parser')
# Get every node with class=hot-article-img (the div wrapping each a tag)
allList = soup.select('.hot-article-img')

# Iterate over the list and pull out the useful fields
for news in allList:
    aaa = news.select('a')
    # Only handle non-empty results
    if len(aaa) > 0:
        # Article link
        try:  # an exception here means the field is missing
            href = url + aaa[0]['href']
        except Exception:
            href = ''
        # Article image URL
        try:
            imgUrl = aaa[0].select('img')[0]['src']
        except Exception:
            imgUrl = ""
        # Article title
        try:
            title = aaa[0]['title']
        except Exception:
            title = "No title"
        print("Title:", title, "\nurl:", href, "\nImage URL:", imgUrl)
        print("==============================================================================================")

  Once the data is captured, we still need to save it to a database. With the data in a database we can move on to analysis and processing, and we could even use the crawled articles to back a news API for an app. That is all for later, though; once I have taught myself database operations in Python, I will write an article about it.
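  As a small preview of that step, here is a minimal sketch using Python's built-in sqlite3 module. The news table, its columns, and the save_news helper are placeholders of my own invention, not part of the code above; a MySQL version would follow the same pattern.

import sqlite3

def save_news(title, href, imgUrl, db_path="news.db"):
    # Create the table on first use; the UNIQUE url keeps re-runs from inserting duplicates
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS news (title TEXT, url TEXT UNIQUE, img_url TEXT)")
    conn.execute("INSERT OR IGNORE INTO news VALUES (?, ?, ?)", (title, href, imgUrl))
    conn.commit()
    conn.close()

# Inside the for loop above, after the print calls:
# save_news(title, href, imgUrl)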


  Previous posts

  Python 001: Installing and cracking the PyCharm development tool (Mac and Windows)

  Python 002: Getting started by creating your first Python project

  Python 012: Python 3 from zero, saving crawled data to a database and deduplicating with the database

  Python 010: Working with databases in Python 3, quickly connecting to and operating a MySQL database with PyCharm

  Python 020: Scraping job listings from 前程无忧 and storing them in a MySQL database
