|**Python client**|`Client` with flat methods like `smartscraper()`, `searchscraper()`| Same `Client`, but **`extract()`**, **`search()`**, **`scrape()`**, and **namespaced** APIs: `crawl.start()`, `monitor.create()`, … |
|**Parameters**| Many top-level flags (`stealth`, `headers`, `mock`, …) on each call | Shared **`FetchConfig`** and **`LlmConfig`** objects (Python: `fetch_config` / `llm_config`; JS: `fetchConfig` / `llmConfig`) |
|**Naming**|`website_url`, `user_prompt`, snake_case in Python |**`url`** and **`prompt`** for extract/search; JS uses **camelCase** everywhere |
|**Errors (JS)**| Often returned as `{ status: 'error', error }` | Methods **throw** on failure; success is `{ data, requestId }` |

## Method-by-method migration

Use this table to map old entry points to new ones. Details and examples follow.

| v1 | v2 | Notes |
|----|-----|------|
|`smartscraper` / `smartScraper`|**`extract`**| Same job: structured extraction from a URL. Rename params and pass extra fetch/LLM options via config objects. |
|`searchscraper` / `searchScraper`|**`search`**| Web search + extraction; use `query` (or a positional string in JS). |
|`markdownify`|**`scrape`** with `format="markdown"` (Python) or `format: "markdown"` (JS) | HTML → markdown and related “raw page” outputs live under **`scrape`**. |
|`crawl` (single start call) |**`crawl.start`**, then **`crawl.status`**, **`crawl.stop`**, **`crawl.resume`**| Crawl is explicitly async: you poll or track the job id. |
| Monitors (if you used them) |**`monitor.create`**, **`monitor.list`**, **`monitor.get`**, pause/resume/delete | Same product, namespaced API. |
|`sitemap`|**Removed from v2 SDKs**| Discover URLs with **`crawl.start`** and URL patterns, or call the REST sitemap endpoint if your integration still requires it; see [Sitemap](/services/sitemap) and the SDK release notes. |
|`agenticscraper`|**Removed**| Use **`extract`** with `FetchConfig` (e.g. `render_js`, `wait_ms`, `stealth`) for hard pages, or **`crawl.start`** for multi-page flows. |
|`healthz` / `checkHealth`, `feedback`, built-in mock helpers |**Removed or changed**| Use **`credits`**, **`history`**, and dashboard features; check the SDK migration guides for replacements. |

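The async crawl flow from the table above amounts to a polling loop. Here is a minimal sketch, assuming the v2 client exposes a `crawl.status(job_id)` method as in the table; the response field names (`status`, `"completed"`, `"failed"`) are illustrative assumptions, not confirmed API:

```python
import time

def wait_for_crawl(crawl_api, job_id, poll_seconds=2.0, timeout=300.0):
    """Poll an async crawl job until it reaches a terminal state.

    `crawl_api` is anything with a `status(job_id)` method, e.g. the
    v2 client's `crawl` namespace. The response field names used here
    ('status', 'completed', 'failed') are illustrative.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = crawl_api.status(job_id)
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"crawl {job_id} did not finish within {timeout}s")
```

With the real client this pairs with `crawl.start`: start the job, keep the returned id, then call `wait_for_crawl(client.crawl, job_id)`; `crawl.stop` and `crawl.resume` manage the same job id.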
## Code-level transition

### 1. SmartScraper → `extract`

**Before (v1):** `website_url` + `user_prompt`, optional flags on the same object.
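The parameter rename (`website_url` → `url`, `user_prompt` → `prompt`) can be sketched with a small helper; the helper itself is hypothetical and not part of either SDK:

```python
# Hypothetical migration helper, not part of the SDK: map v1
# smartscraper argument names onto the v2 extract() names.
V1_TO_V2 = {
    "website_url": "url",
    "user_prompt": "prompt",
}

def migrate_extract_params(v1_params: dict) -> dict:
    """Rename v1 keys to their v2 equivalents, leaving others untouched.

    Note: v1 top-level flags such as `stealth` do not stay top-level in
    v2; they belong in the shared fetch/LLM config objects instead.
    """
    return {V1_TO_V2.get(key, key): value for key, value in v1_params.items()}

print(migrate_extract_params(
    {"website_url": "https://example.com", "user_prompt": "Extract the title"}
))
# {'url': 'https://example.com', 'prompt': 'Extract the title'}
```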
If you call the API with `curl` or a generic HTTP client:

- Use the v2 host and path pattern: **`https://api.scrapegraphai.com/api/v2/<endpoint>`** (e.g. `/api/v2/extract`, `/api/v2/monitor`).
- Replace JSON fields to match v2 bodies (e.g. `url` and `prompt` instead of `website_url` and `user_prompt` on extract).
- Keep using the **`SGAI-APIKEY`** header unless the endpoint docs specify otherwise.

Exact paths and payloads are listed under each service (for example [Extract](/services/extract)) and in the [API reference](/api-reference/introduction).
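For HTTP clients, those three rules can be captured in a small request-builder sketch. This is illustrative only: the path pattern, body fields, and header come from the rules above, while the builder function and its output shape are assumptions; check the API reference for exact payloads.

```python
import json

API_BASE = "https://api.scrapegraphai.com/api/v2"

def build_extract_request(api_key: str, url: str, prompt: str) -> dict:
    """Assemble the pieces of a v2 extract call for any HTTP client.

    Sketch of the three migration rules: v2 path pattern, renamed
    body fields, unchanged SGAI-APIKEY header.
    """
    return {
        "url": f"{API_BASE}/extract",
        "headers": {
            "SGAI-APIKEY": api_key,  # same auth header as v1
            "Content-Type": "application/json",
        },
        # v2 uses `url` and `prompt` where v1 used website_url/user_prompt
        "body": json.dumps({"url": url, "prompt": prompt}),
    }

req = build_extract_request("sgai-xxx", "https://example.com", "Extract the title")
print(req["url"])  # https://api.scrapegraphai.com/api/v2/extract
```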
## What else changed in v2 (docs & product)

- Unified and clearer API documentation
- Updated service pages and endpoint organization
- New guides for MCP server and SDK usage

## Recommended path

1. Log in at [scrapegraphai.com/login](https://scrapegraphai.com/login)
2. Start from [Introduction](/introduction)
3. Follow [Installation](/install)
4. Upgrade packages: `pip install -U scrapegraph-py` / `npm i scrapegraph-js@latest` (Node **≥ 22** for JS v2)
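After upgrading, you can confirm which SDK version is actually installed without leaving Python. A minimal sketch using the standard library (the package name `scrapegraph-py` is from step 4; no SDK-specific API is assumed):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version of a pip package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Prints the version string if the package is installed, else None.
print(installed_version("scrapegraph-py"))
```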