# Overview

Skrape{it} offers a unified, intuitive, DSL-controlled way to make parsing of websites as comfortable as possible.

* [x] [Http-Client DSL](https://docs.skrape.it/docs/1.0.x/http-client/parse-html-from-web) without verbosity and ceremony — make requests and set request options like headers, cookies etc. in a fluent-style interface.
* [x] [Pre-configure a client](https://docs.skrape.it/docs/1.0.x/http-client/pre-configure-client) once to either reuse it as-is or adjust only the things that differ between requests — especially handy when working with authentication flows or custom headers.
* [x] Can [handle client-side rendered web pages](https://docs.skrape.it/docs/1.0.x/http-client/browserfetcher) (e.g. pages built with frameworks like React.js, Angular, or Vue.js, or pages manipulated with jQuery or other JavaScript)

An HTTP request is as easy as the example below. Just call the `skrape` function wherever you want in your code. It forces you to pass a [fetcher](#the-different-fetchers) and makes further [request options](https://docs.skrape.it/docs/1.0.x/http-client/parse-html-from-web) available in the closure.

```kotlin
skrape(HttpFetcher) { // <-- pass any Fetcher, e.g. HttpFetcher, BrowserFetcher, ...
    // ... request options go here, e.g. the most basic one is the url
    url = "https://docs.skrape.it"
    
    expect {}
    extract {}
}
```

{% hint style="info" %}
The http-request is only executed after either the [**`extract`**](https://docs.skrape.it/docs/1.0.x/dsl/extracting-data-from-websites) or [**`expect`**](https://docs.skrape.it/docs/1.0.x/dsl/basic-test-scenario) function has been called. This behaviour also allows you to [preconfigure the http-client](https://docs.skrape.it/docs/1.0.x/http-client/pre-configure-client) for multiple calls. If you use both `expect` and `extract`, only one request is made.
{% endhint %}
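As a minimal sketch of what a filled-in `extract` block could look like, the snippet below pulls the page title out of the fetched document. It assumes the skrape{it} dependency is on the classpath; import paths are omitted because they vary between skrape{it} versions, and `titleText` is one of the parsed-document properties described in the extraction docs.

```kotlin
// Sketch, assuming the skrape{it} DSL shown above (imports omitted;
// they differ between skrape{it} versions).
val pageTitle: String = skrape(HttpFetcher) {
    url = "https://docs.skrape.it"

    extract {
        // the request fires here, once, when extract is invoked
        htmlDocument {
            titleText // text content of the page's <title> element
        }
    }
}
```

Because the request is deferred until `extract` runs, the same preconfigured client can be reused for multiple calls without firing redundant requests.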

### The Different Fetchers

Skrape{it} provides different types of fetchers (aka http-clients) that can be passed to its DSL. All of them execute http requests, but each of them handles a different use case.

#### You want to scrape a simple HTML page, as easily and as fast as possible, with JavaScript deactivated?

{% content-ref url="httpfetcher" %}
[httpfetcher](https://docs.skrape.it/docs/1.0.x/http-client/httpfetcher)
{% endcontent-ref %}

#### You want to scrape a complex website, maybe a SPA written with frameworks like React.js, Angular, or Vue.js, or one that relies heavily on JavaScript?

{% content-ref url="browserfetcher" %}
[browserfetcher](https://docs.skrape.it/docs/1.0.x/http-client/browserfetcher)
{% endcontent-ref %}

#### You want to scrape multiple HTML pages in parallel from inside a coroutine?

{% content-ref url="asyncfetcher" %}
[asyncfetcher](https://docs.skrape.it/docs/1.0.x/http-client/asyncfetcher)
{% endcontent-ref %}
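For the coroutine case, a hedged sketch: assuming the same DSL as in the example above, `AsyncFetcher` can be combined with `async`/`awaitAll` from kotlinx.coroutines to scrape several pages concurrently. The function name and URL list are illustrative, and skrape{it} imports are omitted because they vary between versions.

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope

// Sketch: scrape the titles of several pages in parallel.
// With AsyncFetcher the DSL suspends instead of blocking the thread,
// so the requests run concurrently inside the coroutine scope.
suspend fun scrapeTitles(urls: List<String>): List<String> = coroutineScope {
    urls.map { pageUrl ->
        async {
            skrape(AsyncFetcher) {
                url = pageUrl
                extract {
                    htmlDocument { titleText }
                }
            }
        }
    }.awaitAll()
}
```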
