Scraping web news (regex-based scraping of NetEase news with Python, analyzed in detail with a worked example)

优采云 Published: 2022-01-14 18:05


  This article introduces a way to fetch NetEase news comments on a schedule with Python, and analyzes in some detail the techniques and caveats of scraping them with regular expressions. Readers who need this can use it as a reference.

  The example below shows the approach; it is shared here for your reference. The details follow:

  I wrote a crawler for NetEase news and found that the comments shown on the page do not appear in the page's HTML source at all. So I used a packet-capture tool to find the hidden address the comments are actually loaded from (every major browser ships developer tools that can capture and analyze a site's requests).

  If you look carefully through the captured requests, one of them stands out; that is the one you want.

  Then open that link and you will find the comment content. (The original post showed a screenshot of the first page of comments here.)
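What that hidden address returns is not plain JSON: it is a JavaScript assignment (`var replyData=...` on the first page, `var newPostList=...` on later pages) whose right-hand side is the JSON payload. A minimal Python 3 sketch of stripping that wrapper, run here against a synthetic payload — the field names `hotPosts`, `1`, `f`, `b` follow the article's code, but the live endpoint's exact schema is an assumption:

```python
import json
import re

def strip_js_wrapper(payload):
    """Remove the leading 'var xxx=' assignment and any trailing ';'
    so the remaining body parses as JSON. The variable names
    ('replyData', 'newPostList') come from the article's code."""
    body = re.sub(r'^\s*var\s+\w+\s*=\s*', '', payload)
    return body.rstrip().rstrip(';')

# Synthetic payload shaped like the NetEase response described above.
sample = 'var replyData={"hotPosts": [{"1": {"f": "user", "b": "nice article"}}]};'
data = json.loads(strip_js_wrapper(sample))
print(data["hotPosts"][0]["1"]["b"])  # -> nice article
```

The same idea works for any JSONP-style endpoint: cut off the JavaScript framing first, then hand the rest to a real JSON parser instead of picking fields out with regexes.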

  The code follows (adapted from another author's version). Note that it is Python 2 (urllib2, print statements, str/unicode handling):

#coding=utf-8
import urllib2
import re
import json
import time

class WY():
    def __init__(self):
        self.headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.24 (KHTML, like '}
        self.url = 'http://comment.news.163.com/data/news3_bbs/df/B9IBDHEH000146BE_1.html'

    # Build the cache URL for comment pages 2 and up
    def getpage(self, page):
        full_url = 'http://comment.news.163.com/cache/newlist/news3_bbs/B9IBDHEH000146BE_' + str(page) + '.html'
        return full_url

    # Fetch a URL; return the body, or None on connection failure
    def gethtml(self, page):
        try:
            req = urllib2.Request(page, None, self.headers)
            response = urllib2.urlopen(req)
            html = response.read()
            return html
        except urllib2.URLError, e:
            if hasattr(e, 'reason'):
                print u"Connection failed:", e.reason
            return None

    # Strip the JavaScript wrapper and HTML noise so the payload parses as JSON
    def Process(self, data, page):
        if page == 1:
            data = data.replace('var replyData=', '')
        else:
            data = data.replace('var newPostList=', '')
        reg1 = re.compile(" \[<a href=''>")
        data = reg1.sub(' ', data)
        reg2 = re.compile('\]')
        data = reg2.sub('', data)
        # This pattern was mangled in the scraped page; '<br>' is the
        # usual target at this step (line breaks embedded in comments).
        reg3 = re.compile('<br>')
        data = reg3.sub('', data)
        return data

    # Parse the JSON and append the records to WY.txt
    def dealJSON(self):
        with open('WY.txt', 'a') as f:
            f.write('ID' + '|' + 'comment' + '|' + 'down' + '|' + 'up' + '\n')
        for i in range(1, 12):
            if i == 1:
                data = self.gethtml(self.url)
                data = self.Process(data, i)[:-1]
                value = json.loads(data)
                f = open('WY.txt', 'a')
                for item in value['hotPosts']:
                    try:
                        f.write(item['1']['f'].encode('utf-8') + '|')
                        f.write(item['1']['b'].encode('utf-8') + '|')
                        f.write(item['1']['a'].encode('utf-8') + '|')
                        f.write(item['1']['v'].encode('utf-8') + '\n')
                    except:
                        continue
                f.close()
                print '--Collecting %d/12--' % i
                time.sleep(5)
            else:
                page = self.getpage(i)
                data = self.gethtml(page)
                data = self.Process(data, i)[:-2]
                value = json.loads(data)
                f = open('WY.txt', 'a')
                for item in value['newPosts']:
                    try:
                        f.write(item['1']['f'].encode('utf-8') + '|')
                        f.write(item['1']['b'].encode('utf-8') + '|')
                        f.write(item['1']['a'].encode('utf-8') + '|')
                        f.write(item['1']['v'].encode('utf-8') + '\n')
                    except:
                        continue
                f.close()
                print '--Collecting %d/12--' % i
                time.sleep(5)

if __name__ == '__main__':
    WY().dealJSON()
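Independent of the Python 2 specifics, the two mechanical pieces of the script — building the per-page cache URLs and flattening one comment record into a pipe-delimited row — can be sketched in Python 3 like this. The article ID and URL pattern are copied from the code above; the empty-string fallback for missing keys is my own substitute for the bare `except: continue`:

```python
ARTICLE_ID = "B9IBDHEH000146BE"
URL_TMPL = ("http://comment.news.163.com/cache/newlist/"
            "news3_bbs/{aid}_{page}.html")

def page_urls(article_id, pages):
    # Pages 2..pages use the cache endpoint; page 1 has its own URL.
    return [URL_TMPL.format(aid=article_id, page=p) for p in range(2, pages + 1)]

def to_row(post, sep="|"):
    # Field keys 'f', 'b', 'a', 'v' mirror the original script;
    # a missing key becomes an empty string instead of raising.
    fields = post.get("1", {})
    return sep.join(str(fields.get(k, "")) for k in ("f", "b", "a", "v"))

print(page_urls(ARTICLE_ID, 3)[0])
print(to_row({"1": {"f": "42", "b": "comment text", "a": "0", "v": "5"}}))
```

Keeping the URL construction and the row formatting in small pure functions like this also makes them trivially testable without hitting the live site.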
