rubygems Anemone



  • Use Anemone.crawl(url, options) to initialize the crawler
  • Use the on_every_page block to run code on every page visited
  • Use the .run method to start the crawl; no GET requests are issued until .run is called


url: the URL (including protocol) to be crawled
options: an optional hash, see all options here


  • The crawler will by default only visit links that are on the same domain as the starting URL. This is important to know when content lives on subdomains (for example an images or media subdomain), since links to those subdomains will be ignored when crawling
  • The crawler is HTTP/HTTPS aware and will by default stay on the initial protocol, so links using the other protocol are not visited even on the same domain
  • The page object in the on_every_page block above has a .doc method which returns the Nokogiri document for the HTML body of the page. This means you can use Nokogiri selectors inside the on_every_page block such as page.doc.css('div#id')
  • More information to get started can be found here
