AI Search
Hit /v1/search for multi-engine results, /v1/search/answer for
AI-summarized answers with citations, or any of our pre-built scraping endpoints. Clean JSON,
rotated proxies, JS rendering, and a cache that pays for itself.
# Pull every recent review for a Google Maps place
curl -X POST https://api.scrapenest.dev/v1/scrape/google-maps/reviews \
  -H "Authorization: Bearer sn_live_••••••••" \
  -H "Content-Type: application/json" \
  -d '{ "place_id": "ChIJN1t_tDeuEmsRUsoyG83frY4", "limit": 50, "sort": "newest" }'
"reviews": [
  {
    "author": "Sasha L.",
    "rating": 5,
    "posted_at": "2026-05-09T14:21:00Z",
    "text": "Honestly the best flat white in Surry Hills...",
    "owner_response": null
  }
  /* 49 more */
]
A single endpoint that aggregates Google, Bing, DuckDuckGo, Brave and Mojeek — through
residential proxies, past Cloudflare, in any of 60+ regions. Add /deep for the
top-N pages already fetched and extracted, or /answer for a RAG-style
AI summary with inline citations.
curl -X POST https://api.scrapenest.dev/v1/search/answer \
  -H "Authorization: Bearer sn_live_••••••••" \
  -H "Content-Type: application/json" \
  -d '{ "query": "what is the cheapest residential proxy provider in 2026?", "max_sources": 5 }'
"answer": "As of early 2026, the lowest residential bandwidth pricing is from Webshare at $1.75/GB on high-volume plans [1][3], with IPRoyal and BrightData coming in slightly higher at $2-2.50/GB [2][4]…",
"citations": [
  { "index": 1, "url": "webshare.io/pricing", … }
  /* 4 more */
],
"model": "meta-llama/llama-3.3-70b-instruct",
"credits_charged": 33
Google, Bing, DuckDuckGo, Brave, Mojeek — aggregated, deduped, re-ranked.
/search/deep returns the top-N results with their full extracted text inline. One round-trip instead of twenty.
/search/answer grounds an LLM on the freshest sources. Numbered [1][2] citations point at the URLs we used.
60+ region routing through residential IPs. FlareSolverr handles the challenge pages so your code never sees a CAPTCHA.
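The numbered [n] markers in the answer line up with the citations array in the response, so rendering a grounded answer with its sources takes only a few lines. A minimal sketch, assuming nothing beyond the answer / citations / index / url fields visible in the demo response:

```python
def format_answer(resp: dict) -> str:
    """Render a /v1/search/answer response as plain text: the grounded
    answer followed by its numbered sources, matching the [n] markers
    the endpoint embeds in the answer body."""
    lines = [resp["answer"], ""]
    for c in resp.get("citations", []):
        lines.append(f"[{c['index']}] {c['url']}")
    return "\n".join(lines)
```

Feed it the decoded JSON body and you get a ready-to-log (or ready-to-display) answer block.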
Same multi-engine results, half the lock-in. Plus /deep for one-call content extraction
and /answer for AI-grade summarization — neither of which ships in their fast-search SKU.
Rotating proxies. Headless browsers. Anti-bot fingerprints. Layout drift. PagerDuty at midnight when LinkedIn ships a new CDN. You shouldn't have to learn all of that just because Google Maps removed their public reviews API.
Sign up in 30 seconds, no card. You get 1,000 credits to play with and an sn_… bearer token.
Pick a pre-built scraper or use the generic /v1/scrape/url. Override proxy tier and cache TTL per request.
POST. Stable, versioned response schema. The X-Cache header tells you if you got a cache hit. Usage is metered per-key, per-endpoint.
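Those steps fit in a dozen lines of Python. A stdlib-only sketch using the documented /v1/scrape/url body fields (url, schema, render_js); the helper just assembles the request, so any HTTP client can send it:

```python
import json
from urllib import request as urlreq

ENDPOINT = "https://api.scrapenest.dev/v1/scrape/url"

def make_request(api_key: str, url: str, schema: dict,
                 render_js: bool = False) -> urlreq.Request:
    """Build the POST for /v1/scrape/url; field names follow the
    documented request body (url, schema, render_js)."""
    body = json.dumps(
        {"url": url, "schema": schema, "render_js": render_js}
    ).encode()
    return urlreq.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send: result = json.load(urlreq.urlopen(make_request(key, url, schema)))
```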
Each endpoint has its own response schema, cache TTL, and proxy tier — tuned to the source.
If yours isn't here yet, /v1/scrape/url covers the long tail.
Multi-engine web search (Google, Bing, DDG, Brave, Mojeek). Clean JSON, geotargeted, Cloudflare-clean. 3 credits / query.
POST /v1/search
Search + auto-fetch top N pages with extracted text. Skip the second round-trip for RAG ingestion. 3 + 5/page credits.
POST /v1/search/deep
RAG-style. Search + fetch + LLM summary with numbered citations. The dump-and-go endpoint for agent tool-use.
POST /v1/search/answer
Fetch any HTTP(S) page, apply a CSS schema, get JSON. Datacenter or residential proxy, JS rendering optional.
POST /v1/scrape/url
Recent reviews for any place_id — no Places API key needed. Author, rating, text, posted_at, profile photo. Sort newest / relevance / rating.
POST /v1/scrape/google-maps/reviews
Search Reddit like a browser, or pull one post + the full ranked comment tree. Filter by subreddit, sort, time range.
POST /v1/scrape/reddit/search
POST /v1/scrape/reddit/post
Timestamped transcript chunks for any Apple Podcasts or Overcast episode. Whisper-grade accuracy at $0.02/hr of audio. 30-day cache.
POST /v1/scrape/podcast/transcript
Title, price, currency, shipping, seller, rating, sold count, image gallery, full variant matrix.
POST /v1/scrape/aliexpress/product
Async path for long scrapes. Returns a job_id; poll until status: completed. Built on Redis-backed Arq.
GET /v1/jobs/{id}
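A polling loop for that job lifecycle might look like this — a sketch that takes the HTTP GET as an injected callable and assumes "failed" as the terminal error status (only status: completed is documented):

```python
import time

def poll_job(fetch, job_id: str, interval: float = 2.0,
             timeout: float = 300.0) -> dict:
    """Poll GET /v1/jobs/{id} until the job reports status: completed.

    `fetch` is any callable that GETs the given path and returns the
    decoded JSON dict -- injected so the loop stays transport-agnostic.
    "failed" is an assumed terminal status; everything else sleeps and
    retries until the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch(f"/v1/jobs/{job_id}")
        status = job.get("status")
        if status == "completed":
            return job
        if status == "failed":  # assumed terminal status name
            raise RuntimeError(f"job {job_id} failed: {job}")
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still pending after {timeout}s")
```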
We ship custom endpoints in 2–4 weeks for Growth and Scale customers. Tell us what you need.
Request endpoint →
Tab through to see how the same auth + cache machinery handles different sources.
curl -X POST https://api.scrapenest.dev/v1/scrape/url \
-H "Authorization: Bearer sn_live_••••••••" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com/product/abc",
"schema": { "title": "h1", "price": ".price" },
"render_js": true,
"wait_for": ".price",
"proxy_tier": "auto"
}'
{
"url": "https://example.com/product/abc",
"status_code": 200,
"cache_hit": false,
"proxy_tier_used": "datacenter",
"response_time_ms": 1843,
"title": "Example product",
"data": {
"title": "Example product",
"price": "$24.99"
}
}
curl -X POST https://api.scrapenest.dev/v1/scrape/reddit/search \
-H "Authorization: Bearer sn_live_••••••••" \
-H "Content-Type: application/json" \
-d '{
"query": "best wireless earbuds 2026",
"sort": "top",
"time_range": "month",
"num_results": 25
}'
{
"query": "best wireless earbuds 2026",
"results": [
{
"title": "What's the consensus on the AirPods Pro 3?",
"subreddit": "headphones",
"author": "audiophile_42",
"score": 1284,
"num_comments": 612,
"created_at": 1747923600,
"permalink": "https://reddit.com/r/headphones/comments/1ab2xyz/..."
}
/* 24 more */
]
}
curl -X POST https://api.scrapenest.dev/v1/scrape/google-maps/reviews \
-H "Authorization: Bearer sn_live_••••••••" \
-H "Content-Type: application/json" \
-d '{
"place_id": "ChIJN1t_tDeuEmsRUsoyG83frY4",
"limit": 50,
"sort": "newest"
}'
{
"place": "Single O — Surry Hills",
"rating": 4.7,
"review_count": 1842,
"reviews": [
{
"author": "Sasha L.",
"rating": 5,
"posted_at": "2026-05-09T14:21:00Z",
"text": "Honestly the best flat white in Surry Hills...",
"owner_response": null
}
/* 49 more */
]
}
curl -X POST https://api.scrapenest.dev/v1/scrape/podcast/transcript \
-H "Authorization: Bearer sn_live_••••••••" \
-H "Content-Type: application/json" \
-d '{
"url": "https://podcasts.apple.com/.../id1234567890?i=1000600000000"
}'
{
"title": "Ep. 412 — The state of small models",
"podcast": "Latent Space",
"host": "swyx & Alessio",
"duration_seconds": 4287,
"transcript": [
{ "t": 0, "speaker": "swyx", "text": "Welcome back to Latent Space..." },
{ "t": 12.4, "speaker": "Alessio", "text": "Today we're talking about..." }
/* hundreds more */
]
}
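Those timestamped chunks drop straight into a RAG pipeline. A sketch of a typical pre-embedding step, grouping the documented {t, speaker, text} objects into fixed-length windows (the window length is an arbitrary client-side choice, not an API parameter):

```python
def window_transcript(chunks: list[dict], window_s: float = 60.0) -> list[dict]:
    """Group timestamped transcript chunks (the {"t", "speaker", "text"}
    objects returned by /v1/scrape/podcast/transcript) into fixed-length
    windows -- a common pre-embedding step for RAG ingestion."""
    windows: dict[int, list[str]] = {}
    for c in chunks:
        bucket = int(c["t"] // window_s)
        windows.setdefault(bucket, []).append(f'{c["speaker"]}: {c["text"]}')
    return [
        {"start": b * window_s, "text": " ".join(parts)}
        for b, parts in sorted(windows.items())
    ]
```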
Subscribe and get a monthly credit bundle for the lowest per-credit rate, or go pay-as-you-go and only pay for what you use. All plans include every endpoint, proxy tier, and JS rendering — the higher you go, the cheaper each credit.
Kick the tires
Side projects
For shipping
For teams
Data teams
Need 25M+ credits or dedicated infrastructure? Talk to sales about a custom plan. All paid plans auto-renew monthly and cancel anytime — no contracts.
No subscription, no monthly commit. Pay only for the requests you actually make — priced by the infrastructure each one uses.
Pre-built scrapers (Google Maps, Reddit, AliExpress …) bill the same way — the per-call credit cost is shown on each endpoint's docs. A typical Maps-reviews call is 25 credits; a podcast transcript is 5.
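Estimating a monthly bill from those per-call costs is simple arithmetic. A sketch using the credit figures quoted on this page (3 per search, 3 + 5/page for deep search, 25 per Maps-reviews call, 5 per transcript) — treat the table as illustrative, since each endpoint's docs page is canonical:

```python
# Per-call credit costs quoted on this page; illustrative only --
# the canonical number for each endpoint lives on its docs page.
CREDITS = {
    "search": 3,
    "maps_reviews": 25,
    "podcast_transcript": 5,
}

def deep_search_credits(pages: int) -> int:
    """/v1/search/deep bills 3 credits for the search plus 5 per fetched page."""
    return 3 + 5 * pages

def monthly_estimate(calls: dict[str, int]) -> int:
    """Rough monthly credit burn for a mix of endpoint calls."""
    return sum(CREDITS[name] * n for name, n in calls.items())
```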
Every plan ships with a managed Playwright pool. Render SPAs, wait for selectors, capture screenshots — no Docker, no Chromium memory leaks at 3 AM.
60+ country geotargeting, datacenter and residential tiers, automatic per-domain routing. We pay the bandwidth, you don't manage IPs.
Pre-built endpoints return clean, stable schemas. The generic /v1/scrape/url takes CSS selectors and emits structured data, so you skip the BeautifulSoup ritual.
If our proxy chain fails or the target 5xxs, the credits go back to your balance automatically. Failures don't drain your plan.
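Because failed calls are refunded, a client can retry the documented 502 freely. A minimal retry wrapper with the transport injected as a callable returning (status_code, body):

```python
import time

def call_with_retry(send, attempts: int = 3, backoff: float = 1.0):
    """Retry a request on 502 -- the status returned when the proxy
    chain is exhausted. Credits for failed calls are refunded
    server-side, so retrying is billing-safe. `send` is any callable
    returning (status_code, body_dict)."""
    last = None
    for i in range(attempts):
        status, body = send()
        if status != 502:
            return status, body
        last = (status, body)
        time.sleep(backoff * (2 ** i))  # exponential backoff between tries
    return last
```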
Those are generic scraping APIs — you bring the URL and the selectors and you write the parsers. ScrapeNest is built around pre-built endpoints for specific high-value sources (Google Maps reviews, Reddit posts, podcast transcripts, AliExpress products). The response schema is stable, you don't write selectors, and the credit cost matches what the call actually used.
We also expose /v1/scrape/url for the long tail — same proxy router, same credit pricing as our generic peers.
No. Proxy cost is bundled into the per-request price. We route each request through the right tier automatically (datacenter for unprotected sources, residential for hostile ones). You can override per-request with proxy_tier.
If our proxy chain is exhausted and the target still refuses (CAPTCHA wall, geofence, hard-block), the request returns 502 and the credits are returned to your balance automatically. We don't bill failures.
Across paid traffic we target 60–65% blended cache hit rate. Cached hits bill at the cached-datacenter rate regardless of which tier originally fetched. To bypass caching for live data, set cache_ttl_seconds: 0 on the request.
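Client code can lean on both behaviors: force a live fetch with cache_ttl_seconds: 0, and read the X-Cache response header to see which rate a call billed at. A sketch — the HIT/MISS header values are an assumption, so verify them against the response reference:

```python
def with_cache_bypass(payload: dict) -> dict:
    """Return a copy of a request body with caching disabled via the
    documented cache_ttl_seconds: 0, for live-data calls."""
    return {**payload, "cache_ttl_seconds": 0}

def was_cached(headers: dict) -> bool:
    """Inspect the X-Cache response header. HIT/MISS values are an
    assumed vocabulary -- check the response docs for the real one."""
    return headers.get("X-Cache", "").upper().startswith("HIT")
```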
Yes — that's a primary use case. We have Growth/Scale customers running daily ingest jobs for fine-tuning corpora and RAG indexes. Talk to us about volume rates above 500k req/mo.
Primary region is Hetzner Helsinki (FIN). Worker pool in OVH Beauharnois (CAN). Public API edge fronted by Cloudflare. We can ship a dedicated VPC for Scale customers on request.
Async job polling is in beta. Dashboard is rolling out with v1.1. Webhooks for job completion ship in v1.2. The endpoint catalog grows ~2 per month — vote on the next ones here.
25,000 free requests. No card. The API key prints once — copy it, paste it, ship.