Selenium Click() not working with Scrapy spider

I'm trying to use a Scrapy crawl spider to follow links from a listing page to the product pages. The page shows the first 10 machines and has a "show all machines" button that calls some JavaScript. The JavaScript is fairly convoluted (i.e. I can't just look at the function and see the URL the button points to). I'm trying to use the Selenium webdriver to simulate a click on the button, but for some reason it isn't working. When I scrape the product links I only get the first 10, not the complete list.

Can anybody tell me why it isn't working?

The page I'm trying to scrape is http://www.ncservice.com/en/second-hand-milling-machines

The spider is:

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.http import Request, FormRequest
from scrapy import log
from scrapy.exceptions import DropItem
from scrapy import signals
from mtispider.items import MachineItem
import urlparse
import time
import MySQLdb
import unicodedata
import re
from mtispider import tools
from selenium import webdriver

class MachineSpider(CrawlSpider):
    name = 'nc-spider'
    allowed_domains = ['ncservice.com']

    def start_requests(self):
        requests = list(super(MachineSpider, self).start_requests())
        requests.append(Request('http://www.ncservice.com/en/second-hand-milling-machines', callback=self.parsencmilllist))
        return requests

    def parsencmilllist(self, response):
        hxs = HtmlXPathSelector(response)
        driver = webdriver.Firefox()
        driver.get(response.url)
        try:
            driver.FindElement(By.Id("mas-resultados-fresadoras")).Click()
        except:
            log.msg("Couldnt get all the machines", level=log.INFO)
        ncmachs = hxs.select('//div[@id="resultados"]//a/@href').extract()
        for ncmach in ncmachs:
            yield Request(ncmach,
                          meta={'type': 'Milling'},
                          callback=self.parsencmachine)
        driver.quit()

    def parsencmachine(self, response):
        # scrape the machine
        return item

Thanks!

The main problem is that you need to initialize the Selector from the webdriver's page_source, not from the response passed into the callback:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request
from scrapy import Selector
from selenium import webdriver

class MachineSpider(CrawlSpider):
    name = 'nc-spider'
    allowed_domains = ['ncservice.com']

    def start_requests(self):
        yield Request('http://www.ncservice.com/en/second-hand-milling-machines',
                      callback=self.parsencmilllist)

    def parsencmilllist(self, response):
        driver = webdriver.Firefox()
        driver.get(response.url)
        driver.find_element_by_id("mas-resultados-fresadoras").click()
        sel = Selector(text=driver.page_source)
        driver.quit()

        links = sel.xpath('//div[@id="resultados"]//a/@href').extract()
        for link in links:
            yield Request(link,
                          meta={'type': 'Milling'},
                          callback=self.parsencmachine)

    def parsencmachine(self, response):
        print response.url
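The core of the fix, parsing the HTML the browser sees *after* the click instead of the original response body, can be illustrated without a browser. Below is a minimal self-contained sketch (Python 3 standard library only): the `HTMLParser` subclass and the sample HTML are stand-ins for Scrapy's `Selector` and the rendered page, and the machine URLs are made up for illustration. It also joins the extracted hrefs against the page URL with `urljoin`, since links in a listing like this may well be relative:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags inside <div id="resultados">."""
    def __init__(self):
        super().__init__()
        self.inside = False   # are we inside the resultados div?
        self.depth = 0        # nesting depth of divs within it
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div":
            if self.inside:
                self.depth += 1
            elif attrs.get("id") == "resultados":
                self.inside = True
                self.depth = 0
        elif tag == "a" and self.inside and "href" in attrs:
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "div" and self.inside:
            if self.depth == 0:
                self.inside = False
            else:
                self.depth -= 1

# rendered_html stands in for driver.page_source after the click;
# the hrefs are hypothetical examples
rendered_html = """
<div id="resultados">
  <a href="/en/machine-1">Machine 1</a>
  <a href="/en/machine-2">Machine 2</a>
</div>
"""

parser = LinkExtractor()
parser.feed(rendered_html)

base = "http://www.ncservice.com/en/second-hand-milling-machines"
absolute = [urljoin(base, href) for href in parser.links]
print(absolute)
# → ['http://www.ncservice.com/en/machine-1', 'http://www.ncservice.com/en/machine-2']
```

In the real spider the same idea is `Selector(text=driver.page_source)`. One more caveat: since the button's JavaScript loads the extra machines asynchronously, in practice an explicit wait (e.g. Selenium's `WebDriverWait`) between the click and reading `page_source` may be needed before all links are present.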