A Simple Python Crawler
Scraping second-hand housing listings from Lianjia
import csv
import re

import requests
from bs4 import BeautifulSoup

# Build the list of listing pages (page 1 has no "pg" suffix).
urls = ['https://cq.lianjia.com/ershoufang/']
for i in range(2, 101):
    urls.append('https://cq.lianjia.com/ershoufang/pg%s/' % i)

# Pretend to be a desktop Chrome browser.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/69.0.3497.100 Safari/537.36'}

# Open the output file once, with the target encoding, instead of
# re-opening it for every listing; newline='' is required by the csv module.
with open('/data/data.csv', 'a', newline='', encoding='gbk') as out:
    csv_write = csv.writer(out)
    for u in urls:
        r = requests.get(u, headers=headers)
        items = BeautifulSoup(r.text, 'lxml').find_all('li', class_='clear LOGCLICKDATA')
        for item in items:
            # positionInfo: "小區(qū)/板塊 - 區(qū)域"
            ns = item.select('div[class="positionInfo"]')[0].get_text()
            rem = ns.split('-')[0].replace(' ', '')
            region = ns.split('-')[1].replace(' ', '')
            # houseInfo: "小區(qū) | 戶型 | 面積 | 朝向 | 裝修 | ..."
            ns = item.select('div[class="houseInfo"]')[0].get_text()
            xiaoqu_name = ns.split('|')[0].replace(' ', '')
            huxing = ns.split('|')[1].replace(' ', '')
            pingfang = ns.split('|')[2].replace(' ', '')
            chaoxiang = ns.split('|')[3].replace(' ', '')
            zhuangxiu = ns.split('|')[4].replace(' ', '')
            # Unit price: keep only the digits.
            danjia = re.findall(r'\d+', item.select('div[class="unitPrice"]')[0].string)[0]
            zongjia = item.select('div[class="totalPrice"]')[0].get_text()
            csv_write.writerow([region, xiaoqu_name, rem, huxing, pingfang,
                                chaoxiang, zhuangxiu, danjia, zongjia])
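The field extraction above relies on splitting the plain text of each listing card. A minimal sketch of that parsing step on a made-up houseInfo string (the sample text below is illustrative, not taken from a live page):

```python
# Hypothetical houseInfo text as rendered on a listing card:
# community | layout | area | orientation | decoration
sample = '某小區(qū) | 3室2廳 | 89.5平米 | 南北 | 精裝'

# Split on '|' and strip spaces, mirroring the crawler's logic.
fields = [part.replace(' ', '') for part in sample.split('|')]
xiaoqu_name, huxing, pingfang, chaoxiang, zhuangxiu = fields
print(fields)
```

Because the parser indexes fixed positions, a card with a different number of `|`-separated fields would raise an `IndexError`; real code may want to guard against that.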
Results
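To inspect the collected rows, the CSV must be read back with the same GBK encoding it was written in. A minimal round-trip sketch (the file name and sample row here are illustrative):

```python
import csv

# Write one illustrative row the same way the crawler does, then read it back.
row = ['渝北', '某小區(qū)', '某板塊', '3室2廳', '89.5平米', '南北', '精裝', '15000', '135萬(wàn)']
with open('data.csv', 'w', newline='', encoding='gbk') as out:
    csv.writer(out).writerow(row)

with open('data.csv', newline='', encoding='gbk') as f:
    rows = list(csv.reader(f))
print(rows[0])
```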
Title: A Simple Python Crawler
Article link: http://www.ef60e0e.cn/article/gpdipd.html