A scrapy spider for Tencent job postings --- solved
Published: 2019-06-18


1. Create a new project named tencent with scrapy
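This is done with scrapy's standard startproject command, run in the working directory:

scrapy startproject tencent
cd tencent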

2. Define the fields to be scraped in items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class TencentItem(scrapy.Item):
    # define the fields for your item here like:
    # position name
    position_name = scrapy.Field()
    # detail link
    position_link = scrapy.Field()
    # position category
    position_type = scrapy.Field()
    # number of openings
    people_number = scrapy.Field()
    # work location
    work_location = scrapy.Field()
    # publish time
    publish_time = scrapy.Field()

 

3. In the project directory, create a spider named tencent_spider and specify the domain the crawl is restricted to
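This is done with the genspider command, passing the spider name and the allowed domain:

scrapy genspider tencent_spider "tencent.com"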

4. Open tencent_spider.py; the template has already been generated, so it only needs a few changes:

# -*- coding: utf-8 -*-
import scrapy
from tencent.items import TencentItem


class TencentSpiderSpider(scrapy.Spider):
    name = "tencent_spider"
    allowed_domains = ["tencent.com"]

    url = "http://hr.tencent.com/position.php?&start="
    offset = 0

    start_urls = [
        url + str(offset),
    ]

    def parse(self, response):
        for each in response.xpath("//tr[@class='even'] | //tr[@class='odd']"):
            # initialize an item for this table row
            # (the keys must match the fields defined in items.py)
            item = TencentItem()
            # position name
            item['position_name'] = each.xpath("./td[1]/a/text()").extract()[0]
            # detail link
            item['position_link'] = each.xpath("./td[1]/a/@href").extract()[0]
            # position category
            item['position_type'] = each.xpath("./td[2]/text()").extract()[0]
            # number of openings
            item['people_number'] = each.xpath("./td[3]/text()").extract()[0]
            # work location
            item['work_location'] = each.xpath("./td[4]/text()").extract()[0]
            # publish time
            item['publish_time'] = each.xpath("./td[5]/text()").extract()[0]

            yield item

        if self.offset < 1000:
            self.offset += 10

        # After each page is processed, increase the offset by 10, build the URL of the
        # next page, and send a new request with self.parse as the callback
        yield scrapy.Request(self.url + str(self.offset), callback=self.parse)

5. Write the items to a JSON file in pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json


class TencentPipeline(object):
    def open_spider(self, spider):
        self.filename = open("tencent.json", "w")

    def process_item(self, item, spider):
        text = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.filename.write(text.encode("utf-8")
        return item

    def close_spider(self, spider):
        self.filename.close()

 

6. In settings.py, configure the request headers and enable the pipeline
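The original post does not show the settings file, so here is a minimal sketch of what settings.py might contain for this project. The DOWNLOAD_DELAY value and the pipeline path match the startup log further below; the header values, in particular the User-Agent string, are only placeholder assumptions.

# settings.py (sketch)

BOT_NAME = 'tencent'

SPIDER_MODULES = ['tencent.spiders']
NEWSPIDER_MODULE = 'tencent.spiders'

# slow requests down a little (value taken from the "Overridden settings" line in the log)
DOWNLOAD_DELAY = 2

# default request headers; the User-Agent here is only an example value
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
}

# enable the pipeline defined in step 5
ITEM_PIPELINES = {
    'tencent.pipelines.TencentPipeline': 300,
}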

 

7. Run the spider from the command line
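Since the spider is named tencent_spider, the run command is:

scrapy crawl tencent_spider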

        Running it produced the following error... not yet solved at this point...

2017-10-03 20:03:20 [scrapy] INFO: Scrapy 1.1.1 started (bot: tencent)
2017-10-03 20:03:20 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tencent.spiders', 'SPIDER_MODULES': ['tencent.spiders'], 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'tencent'}
2017-10-03 20:03:20 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-10-03 20:03:20 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-03 20:03:20 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
Unhandled error in Deferred:
2017-10-03 20:03:20 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 72, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 97, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 69, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
exceptions.SyntaxError: invalid syntax (pipelines.py, line 17)
2017-10-03 20:03:20 [twisted] CRITICAL:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 72, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 97, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 69, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/home/python/Desktop/tencent/tencent/pipelines.py", line 17
    return item
              ^
SyntaxError: invalid syntax

 In the end it was just my own carelessness: pipelines.py from step 5 was missing a closing parenthesis (the corrected method is shown below)... Thanks to sun shine for the help.

    thank you !!!
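For reference, with the missing parenthesis added, the process_item method in pipelines.py becomes:

    def process_item(self, item, spider):
        # dump each item as one line of JSON and write it to the open file
        text = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.filename.write(text.encode("utf-8"))  # the closing parenthesis was the missing character
        return item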

Reposted from: https://www.cnblogs.com/cuzz/p/7623895.html
