skrape{it}
Http Client

Overview

Why does skrape{it} provide its own http client implementations?

Last updated 3 years ago

skrape{it} offers a unified, intuitive, DSL-controlled way to make parsing websites as comfortable as possible.

An HTTP request is as easy as the given example: just call the skrape function wherever you want in your code. It forces you to pass a fetcher and makes further request options available in the closure.

skrape(HttpFetcher) { // <-- pass any Fetcher, e.g. HttpFetcher, BrowserFetcher, ...
    request {
        // ... request options go here, e.g. the most basic would be the url
    }
    response {
        // do stuff with the response like parsing the response body ...
    }
}
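The shape of this DSL — a skrape entry point that takes a fetcher and exposes request and response blocks — can be modeled in a few lines of plain Kotlin. The sketch below is illustrative only: Fetcher, Request, Response, Scraper, and skrape are simplified stand-ins, not the real skrape{it} types, and the fake HttpFetcher merely echoes the URL instead of performing network I/O.

```kotlin
// Illustrative model of a fetcher-parameterized DSL.
// These are simplified stand-ins, NOT the real skrape{it} types.
class Request(var url: String = "")

data class Response(val status: Int, val body: String)

// A Fetcher turns a configured Request into a Response.
fun interface Fetcher {
    fun fetch(request: Request): Response
}

// Stand-in for HttpFetcher: echoes the URL instead of doing real I/O,
// so the sketch stays self-contained.
val HttpFetcher = Fetcher { req -> Response(200, "fetched ${req.url}") }

class Scraper(private val fetcher: Fetcher) {
    private val req = Request()

    // Only records settings on the request object.
    fun request(init: Request.() -> Unit) = req.init()

    // The fetch happens here, when response() is called.
    fun <T> response(handle: Response.() -> T): T = fetcher.fetch(req).handle()
}

fun <T> skrape(fetcher: Fetcher, init: Scraper.() -> T): T = Scraper(fetcher).init()
```

In this model, swapping in another fetcher (say, one backed by a headless browser) only changes the argument passed to skrape; the request and response blocks stay untouched, which is the ergonomics the fetcher abstraction is after.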

The HTTP request is only executed once the response function has been called. This behaviour also makes it possible to preconfigure the http client and reuse request settings for multiple calls.
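Deferred execution is what makes such reuse possible: because nothing is sent until response() is invoked, the same configured client can serve several calls. The following is again a plain-Kotlin sketch with made-up names (LazyClient, path, executed), not the real skrape{it} preconfiguration API:

```kotlin
// Plain-Kotlin sketch of deferred execution; names are invented and the
// "request" is just a string instead of real HTTP, to stay self-contained.
class LazyClient {
    var baseUrl: String = ""
    private var path: String = ""
    var executed = 0  // counts how often a "request" actually ran

    // Only records settings; nothing is sent here.
    fun request(init: LazyClient.() -> Unit) { init() }

    fun path(p: String) { path = p }

    // The call happens only now, so the settings above can be reused.
    fun <T> response(handle: (String) -> T): T {
        executed++
        return handle("GET $baseUrl$path")
    }
}

// Configure once ...
val client = LazyClient().apply { request { baseUrl = "https://example.org" } }
```

... then reuse it: setting path("/a") and later path("/b") and calling response each time hits two different endpoints with one shared configuration, and executed confirms that exactly one call per response() took place.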
