How to Scrape News Articles with Python
优采云 · Published 2022-04-12 06:25
This article demonstrates how to scrape news articles with Python. The walkthrough is short and straightforward, and should clear up any questions you have about collecting news content with Python.
Preface
This is a simple Python scraping case study: it walks from the listing page to the detail pages, then saves each article as a .txt file. The target site's page structure is regular and easy to parse, which makes it a clean example of collecting and saving news content.
Libraries used
requests, time, re, fake_useragent (UserAgent), lxml (etree)
import requests,time,re
from fake_useragent import UserAgent
from lxml import etree
Listing page: extract the article links with XPath
href_list=req.xpath('//ul[@class="news-list"]/li/a/@href')
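As a self-contained sketch of this step, the snippet below runs the same XPath against a made-up HTML fragment (the markup is hypothetical, mirroring the ul.news-list structure the expression targets):

```python
from lxml import etree

# Hypothetical stand-in for the real listing page's HTML.
html = """
<ul class="news-list">
  <li><a href="/kyzx/jyxd/202207/t20220701_001.shtml">News 1</a></li>
  <li><a href="/kyzx/jyxd/202207/t20220702_002.shtml">News 2</a></li>
</ul>
"""
req = etree.HTML(html)
# Grab the href attribute of each article link in the list.
href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
print(href_list)
```

The links are relative, which is why the full spider prepends the site's domain before fetching each one.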
Detail page: extract the title, source, and body with XPath
h3=req.xpath('//div[@class="title-box"]/h3/text()')[0]
author=req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
details=req.xpath('//div[@class="content-l detail"]/p/text()')
Join the body paragraphs into one string:
detail='\n'.join(details)
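The same extraction can be tried offline against a small hypothetical fragment that mirrors the detail page's title-box and content divs:

```python
from lxml import etree

# Hypothetical detail-page fragment matching the structure the xpaths expect.
html = """
<div class="title-box">
  <h3>Sample headline</h3>
  <span class="news-from">Source: example</span>
</div>
<div class="content-l detail">
  <p>First paragraph.</p>
  <p>Second paragraph.</p>
</div>
"""
req = etree.HTML(html)
h3 = req.xpath('//div[@class="title-box"]/h3/text()')[0]
author = req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
details = req.xpath('//div[@class="content-l detail"]/p/text()')
detail = '\n'.join(details)  # one string, one paragraph per line
print(h3, '|', author)
print(detail)
```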
Sanitize the title by replacing characters that are illegal in filenames:
pattern = r"[\/\\\:\*\?\"\\|]"
new_title = re.sub(pattern, "_", title)  # replace with underscores
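Wrapped in a small function with a made-up title, the sanitization behaves like this (note the character class covers / \ : * ? " and |, but not < or >, which are also illegal on Windows):

```python
import re

def validate_title(title):
    # Strip characters that cannot appear in Windows filenames.
    pattern = r"[\/\\\:\*\?\"\\|]"
    return re.sub(pattern, "_", title)

print(validate_title('A/B:C*D?"E|F'))  # -> A_B_C_D__E_F
```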
Save the data to a .txt file:
def save(self, h3, author, detail):
    with open(f'{h3}.txt', 'w', encoding='utf-8') as f:
        f.write('%s%s%s%s%s' % (h3, '\n', detail, '\n', author))
    print(f"Saved {h3}.txt")
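A standalone version of the save step (with made-up sample data, writing into a scratch directory so nothing is left behind) looks like this:

```python
import os
import tempfile

def save(h3, author, detail):
    # Write title, body, then the source line into "<title>.txt".
    with open(f'{h3}.txt', 'w', encoding='utf-8') as f:
        f.write('%s%s%s%s%s' % (h3, '\n', detail, '\n', author))
    print(f"Saved {h3}.txt")

os.chdir(tempfile.mkdtemp())  # scratch directory for the demo file
save('demo_title', 'Source: example', 'Body text.')
with open('demo_title.txt', encoding='utf-8') as f:
    content = f.read()
print(content)
```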
Iterate over the scraped data, handled with yield:
def get_tasks(self):
    data_list = self.parse_home_list(self.url)
    for item in data_list:
        yield item
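Because both methods are generators, nothing is fetched until the caller iterates. The sketch below shows the same lazy pattern with a stand-in parser (the real one fetches pages; this hypothetical version just yields fake items):

```python
def parse_home_list(url):
    # Stand-in for the real parser: yields fake items instead of
    # fetching the listing page (url is unused in this sketch).
    for i in range(3):
        yield f"item-{i}"

def get_tasks(url):
    # Re-yield each item so callers can consume results lazily.
    for item in parse_home_list(url):
        yield item

tasks = list(get_tasks("https://yz.chsi.com.cn/kyzx/jyxd/"))
print(tasks)
```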
Program output (screenshot omitted)
Source code for reference:
# -*- coding: UTF-8 -*-
# Scraper for postgraduate-admissions news from yz.chsi.com.cn (研招网)
# 2020-07-10, by WeChat: huguo00289
import requests, time, re
from fake_useragent import UserAgent
from lxml import etree


class RandomHeaders(object):
    ua = UserAgent()

    @property
    def random_headers(self):
        return {
            'User-Agent': self.ua.random,
        }


class Spider(RandomHeaders):
    def __init__(self, url):
        self.url = url

    def parse_home_list(self, url):
        response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
        req = etree.HTML(response)
        href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
        print(href_list)
        for href in href_list:
            item = self.parse_detail(f'https://yz.chsi.com.cn{href}')
            yield item

    def parse_detail(self, url):
        print(f">> Scraping {url}")
        try:
            response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
            time.sleep(2)
        except Exception as e:
            print(e.args)
            return self.parse_detail(url)  # retry on request failure
        else:
            req = etree.HTML(response)
            try:
                h3 = req.xpath('//div[@class="title-box"]/h3/text()')[0]
                h3 = self.validate_title(h3)
                author = req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
                details = req.xpath('//div[@class="content-l detail"]/p/text()')
                detail = '\n'.join(details)
                print(h3, author, detail)
                self.save(h3, author, detail)
                return h3, author, detail
            except IndexError:
                print(">> Parse error; waiting 5s before retrying...")
                time.sleep(5)
                return self.parse_detail(url)  # retry after a delay

    @staticmethod
    def validate_title(title):
        pattern = r"[\/\\\:\*\?\"\\|]"
        new_title = re.sub(pattern, "_", title)  # replace illegal filename characters with underscores
        return new_title

    def save(self, h3, author, detail):
        with open(f'{h3}.txt', 'w', encoding='utf-8') as f:
            f.write('%s%s%s%s%s' % (h3, '\n', detail, '\n', author))
        print(f"Saved {h3}.txt")

    def get_tasks(self):
        data_list = self.parse_home_list(self.url)
        for item in data_list:
            yield item


if __name__ == "__main__":
    url = "https://yz.chsi.com.cn/kyzx/jyxd/"
    spider = Spider(url)
    for data in spider.get_tasks():
        print(data)
That wraps up this walkthrough of scraping news articles with Python. Thanks for reading, and I hope it helps.