Questions about Scrapy: why can't I parse the whole page, but only the first record on the page?

I am new to Scrapy and tried to follow an example (http://mherman.org/blog/2012/11/08/recursively-scraping-web-pages-with-scrapy/#.VcFiAjBVhBc) to recursively scrape Craigslist.

However, every time I run my code I only get the first record on each page. The output from the attached code looks like this, with just one record per page:

link,title
/eby/npo/5155561393.html,Residential Administrator full time
/sfc/npo/5154403251.html,Sr. Director of Family Support Services
/eby/npo/5150280793.html,Veterans Program Internship
/eby/npo/5157174843.html,PROTECT OUR LIFE SAVING MEDICINE! $10-15/H
/eby/npo/5143949422.html,Program Supervisor - Multisystemic Therapy (MST)
/sby/npo/5145782515.html,Housing Specialist -- Santa Clara and Alameda Counties
/nby/npo/5148193893.html,Shipping Assistant for Non Profit
/sby/npo/5142160649.html,Companion for People with Developmental Disabilities
/sfc/npo/5139127862.html,Director of Vocational Services

I run the code with "scrapy crawl craig2 -o items_2.csv -t csv". Thanks in advance for your help.

The code is:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item, Field
from scrapy.http import Request
class CraigslistSampleItem(Item):
    title = Field()
    link = Field()

class MySpider(CrawlSpider):
    name = "craig2"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = ["http://sfbay.craigslist.org/search/"]
   # rules = (Rule (SgmlLinkExtractor(allow=("index'd00'.html", ),restrict_xpaths=('//p[@class="button next"]',))
   # , callback="parse_items", follow= True),
    #)

    def start_requests(self):
        for i in range(9):
            yield Request("http://sfbay.craigslist.org/search/npo?s=" + str(i) + "00", self.parse_items)

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//span[@class="pl"]')
        items = []
        for ii in titles:
            item = CraigslistSampleItem()
            item ["title"] = ii.select("a/text()").extract()
            item ["link"] = ii.select("a/@href").extract()
            items.append(item)
            return(items)  # <-- this return sits inside the for loop

The problem with your code is that you execute return(items) inside the for loop. That means you return right after the first title: even if there are 100 titles on a page, you only ever return the first one. Move return(items) one indentation level to the left and you are set:

def parse_items(self, response):
    hxs = HtmlXPathSelector(response)
    titles = hxs.select('//span[@class="pl"]')
    items = []
    for ii in titles:
        item = CraigslistSampleItem()
        item ["title"] = ii.select("a/text()").extract()
        item ["link"] = ii.select("a/@href").extract()
        items.append(item)
    return(items)

Note that return(items) is now at the same indentation level as the for loop rather than inside it. On my machine this produces 900 entries in the CSV output.
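
The same pitfall is easy to reproduce outside Scrapy. Here is a minimal sketch in plain Python (the function names are made up for illustration) showing how a return inside a loop exits after the first element:

def first_only(values):
    results = []
    for v in values:
        results.append(v)
        return results   # inside the loop: exits after the first element

def all_of(values):
    results = []
    for v in values:
        results.append(v)
    return results       # after the loop: runs once the loop finishes

print(first_only([1, 2, 3]))  # [1]
print(all_of([1, 2, 3]))      # [1, 2, 3]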

Ooorza's solution works too, but you don't need all of it. The simpler fix is to yield each item inside the loop. This turns parse_items into a generator function that hands each parsed item on for further processing, so you no longer need to append the current item to a list. The parse_items method then looks like this:

def parse_items(self, response):
    hxs = HtmlXPathSelector(response)
    titles = hxs.select('//span[@class="pl"]')
    for ii in titles:
        item = CraigslistSampleItem()
        item ["title"] = ii.select("a/text()").extract()
        item ["link"] = ii.select("a/@href").extract()
        yield item
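
If generators are unfamiliar, here is a minimal sketch in plain Python (hypothetical names) of the difference between collecting results in a list and yielding them one at a time; Scrapy simply iterates over whatever the callback returns, so both styles feed the same items onward:

def collect(n):
    out = []
    for i in range(n):
        out.append(i)
    return out            # builds the whole list, then returns it

def generate(n):
    for i in range(n):
        yield i           # produces one value at a time, on demand

print(collect(3))         # [0, 1, 2]
print(list(generate(3)))  # [0, 1, 2] -- same values, produced lazily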

Try the following code:

class MySpider(CrawlSpider):
    name = "craig2"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = ["http://sfbay.craigslist.org/search/npo?s=%s" % i for i in xrange(1, 9)]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//span[@class="pl"]')
        for ii in titles:
            item = CraigslistSampleItem()
            item ["title"] = ii.select("a/text()").extract()
            item ["link"] = ii.select("a/@href").extract()
            yield item
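
As a side note, the snippets above use APIs from old Scrapy releases (scrapy.contrib, HtmlXPathSelector, SgmlLinkExtractor) that have since been removed. On a current Scrapy version (roughly 1.8 or later, where Selector.get() is available) the same spider could be sketched like this; the class name is made up for illustration, and items are plain dicts:

import scrapy

class CraigSpider(scrapy.Spider):
    name = "craig2"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = ["http://sfbay.craigslist.org/search/npo?s=%d00" % i
                  for i in range(9)]

    def parse(self, response):
        # response.xpath replaces HtmlXPathSelector; yield one item per row
        for row in response.xpath('//span[@class="pl"]'):
            yield {
                "title": row.xpath("a/text()").get(),
                "link": row.xpath("a/@href").get(),
            }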