Scraping web page data (three scraping methods: re, bs4, lxml, using the previously built page-download function)

优采云 Published: 2022-01-01 01:18


  Three methods of data scraping:

  Regular expressions (the re library)

  BeautifulSoup (bs4)

  lxml

  *Using the page-download function built earlier to fetch the target page's HTML, we take https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/ as the example.

from get_html import download

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)
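The `get_html` module is not shown in this post. A minimal sketch of what its `download` function might look like, assuming it simply fetches the URL and returns the decoded HTML (the retry count and user-agent string are illustrative assumptions, not the author's actual code):

```python
# Hypothetical sketch of get_html.download -- the original module is not shown.
import urllib.request
import urllib.error

def download(url, num_retries=2, user_agent='wswp'):
    """Fetch url and return the decoded HTML as a string, or None on failure."""
    request = urllib.request.Request(url, headers={'User-Agent': user_agent})
    try:
        with urllib.request.urlopen(request) as resp:
            return resp.read().decode('utf-8', errors='replace')
    except urllib.error.URLError as e:
        # Retry only on 5xx server errors (HTTPError carries a .code attribute)
        if num_retries > 0 and hasattr(e, 'code') and 500 <= e.code < 600:
            return download(url, num_retries - 1, user_agent)
        return None
```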

  *Suppose we want to scrape the country name and overview from this page; we implement the scrape with each of the three methods in turn.

  1. Regular expressions (re)

from get_html import download
import re

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)

# Match the heading with class "h2dabiaoti" and the content block with
# id "wzneirong" (the same class/id used in the bs4 and lxml examples below)
country = re.findall('<h2 class="h2dabiaoti">(.*?)</h2>', page_content)  # note: returns a list
survey_data = re.findall('<div id="wzneirong">(.*?)</div>', page_content, re.S)
survey_info_list = re.findall('<p>  (.*?)</p>', survey_data[0], re.S)
survey_info = ''.join(survey_info_list)
print(country[0], survey_info)
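Since the live page can change, the same regex logic can be checked offline against a small HTML fragment that mimics the structure the patterns assume (the fragment below is made up for illustration):

```python
import re

# Made-up fragment mimicking the class/id structure the patterns target
html = ('<h2 class="h2dabiaoti">Afghanistan</h2>'
        '<div id="wzneirong"><p>  First paragraph.</p><p>  Second.</p></div>')

country = re.findall('class="h2dabiaoti">(.*?)<', html)            # non-greedy capture
survey_data = re.findall('id="wzneirong">(.*?)</div>', html, re.S)  # re.S: . matches newlines
survey_info = ''.join(re.findall('<p>  (.*?)</p>', survey_data[0], re.S))
print(country[0], survey_info)
```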

  2. BeautifulSoup (bs4)

from get_html import download
from bs4 import BeautifulSoup

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
html = download(url)

# create the BeautifulSoup object
soup = BeautifulSoup(html, "html.parser")

# search the parse tree
country = soup.find(attrs={'class': 'h2dabiaoti'}).text
survey_info = soup.find(attrs={'id': 'wzneirong'}).text
print(country, survey_info)
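BeautifulSoup delegates the actual parsing here to Python's built-in `html.parser`. If bs4 is not installed, the same two fields can be pulled out with the stdlib parser directly; a rough sketch, run against a made-up fragment with the same class/id structure:

```python
from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    """Collect text inside the class="h2dabiaoti" and id="wzneirong" elements."""
    def __init__(self):
        super().__init__()
        self.country, self.survey = [], []
        self._target = None   # list currently collecting text, if any
        self._depth = 0       # nesting depth inside the target element

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self._target is not None:
            self._depth += 1
        elif attrs.get('class') == 'h2dabiaoti':
            self._target, self._depth = self.country, 0
        elif attrs.get('id') == 'wzneirong':
            self._target, self._depth = self.survey, 0

    def handle_endtag(self, tag):
        if self._target is not None:
            if self._depth == 0:
                self._target = None   # left the target element
            else:
                self._depth -= 1

    def handle_data(self, data):
        if self._target is not None:
            self._target.append(data)

parser = FieldExtractor()
parser.feed('<h2 class="h2dabiaoti">Afghanistan</h2>'
            '<div id="wzneirong"><p>Overview text.</p></div>')
print(''.join(parser.country), ''.join(parser.survey))
```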

  3. lxml

from get_html import download
from lxml import etree  # builds a parse tree

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)

selector = etree.HTML(page_content)  # the tree can be queried with XPath
country_select = selector.xpath('//*[@id="main_content"]/h2')  # returns a list
for country in country_select:
    print(country.text)

survey_select = selector.xpath('//*[@id="wzneirong"]/p')
for survey_content in survey_select:
    print(survey_content.text, end='')
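lxml is a third-party package. On well-formed markup, the same two queries can be approximated with the stdlib `xml.etree.ElementTree`, which supports a limited XPath subset; the fragment below is made up for illustration:

```python
import xml.etree.ElementTree as ET

# Illustrative, well-formed fragment using the same ids as the real page
fragment = ('<html><body><div id="main_content"><h2>Afghanistan</h2></div>'
            '<div id="wzneirong"><p>Line one.</p><p>Line two.</p></div>'
            '</body></html>')

root = ET.fromstring(fragment)
# ElementTree's limited XPath subset handles attribute predicates like these
country_texts = [h2.text for h2 in root.findall(".//*[@id='main_content']/h2")]
survey_text = ''.join(p.text for p in root.findall(".//*[@id='wzneirong']/p"))
print(country_texts[0], survey_text)
```

Unlike lxml, ElementTree requires well-formed input, so for messy real-world HTML the lxml version above is the more robust choice.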

  Run result:

  (screenshot of the scraped output; image not preserved)

  Finally, quoting the performance comparison of the three methods from *Writing Web Crawlers in Python*:

  (comparison chart; image not preserved)
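The chart itself did not survive extraction, but a comparison like it can be reproduced with the stdlib `timeit` module. Only the regex method is timed here to stay dependency-free; timing the bs4 and lxml versions follows the same pattern (the fragment and repeat counts are arbitrary choices for illustration):

```python
import re
import timeit

# Repeat a small fragment to get a reasonably sized test document
html = ('<h2 class="h2dabiaoti">Afghanistan</h2>'
        '<div id="wzneirong"><p>Some overview text.</p></div>') * 100

def scrape_regex(page):
    country = re.findall('class="h2dabiaoti">(.*?)<', page)
    survey = re.findall('<p>(.*?)</p>', page, re.S)
    return country, survey

elapsed = timeit.timeit(lambda: scrape_regex(html), number=1000)
print(f'regex: {elapsed:.3f}s for 1000 runs')
```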

  For reference only.
