V2EX  ›  Python

Why does a scheduled crawler using apscheduler and scrapy only crawl once?

  killerv · Nov 20, 2015 · 5579 views

    I'm using apscheduler for scheduled jobs and Scrapy as the crawler, but the spider only crawls once and never runs again afterwards. The aqi function itself does get executed each time. Here is the code:

    from apscheduler.schedulers.background import BackgroundScheduler
    from apscheduler.triggers.cron import CronTrigger
    import time
    from log.make_log import make_log_file
    
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    
    from spider.spiders.aqi import AqiSpider
    
    def aqi(crawler, spider):
        try:
            crawler.crawl(spider)
            crawler.start()
        except Exception as e:
            make_log_file(str(e), 'scrapy')
    
    if __name__ == '__main__':
        settings = get_project_settings()
        crawler = CrawlerProcess(settings)
        spider = AqiSpider()
        scheduler = BackgroundScheduler()
        scheduler.daemonic = False
        cron = CronTrigger(second='*/30')  # fire every 30 seconds
        scheduler.add_job(aqi, cron, args=[crawler, spider])
        scheduler.start()
        while True:
            time.sleep(1000)
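    No answer was posted, but the behaviour is consistent with a known Scrapy constraint: `CrawlerProcess.start()` starts the Twisted reactor, and a reactor can be started only once per process. The first scheduled run therefore works, while every later call to `aqi` fails to crawl (a `ReactorNotRestartable` error would be swallowed by the `try/except` into the log file). A common workaround is to launch each crawl in a fresh child process, so every run gets its own reactor. The sketch below illustrates that pattern; `run_spider` is a placeholder for the real `CrawlerProcess` setup so the sketch stays runnable without Scrapy installed:

    ```python
    import multiprocessing

    def run_spider():
        # Placeholder for the real crawl. In the actual job this body would
        # build a fresh CrawlerProcess, call crawler.crawl(AqiSpider), then
        # crawler.start(). Because it runs in a child process, the Twisted
        # reactor it starts dies with the process, and the next scheduled
        # run gets a brand-new one.
        pass

    def aqi():
        # Each scheduled invocation spawns a new process and waits for it,
        # instead of reusing one reactor-bound CrawlerProcess.
        p = multiprocessing.Process(target=run_spider)
        p.start()
        p.join()
        return p.exitcode  # 0 on a clean run

    if __name__ == '__main__':
        aqi()
    ```

    With this structure, scheduling `aqi` via the same `CronTrigger(second='*/30')` as in the original code runs a full crawl every 30 seconds, because the scheduler's own process never has to restart a reactor.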
    
    No Comments Yet