What is Nabz Search
Nabz Search is a new set of endpoints on the platform that let you search the web, scrape pages, discover site URLs, and extract structured data — all through our API. If you're building something that needs web data, this is for you.
The Endpoints
Search
Hit `/api/nabzsearch/search` with a query and get back structured results. You can also pass `scrape: true` to pull page content inline with your results — useful when you need more than just titles and URLs.
```
GET /api/nabzsearch/search?query=your+query&limit=5
```
It supports geo-targeting and source filtering (web, images, news), and you can cap content size with `max_content_length` so responses stay small — handy if you're feeding results into an LLM.
Scrape
Give it a URL, get back the page content as markdown, HTML, or other formats. Handles JS-rendered pages too if you set a `wait_for` delay.
```
POST /api/nabzsearch/scrape
{"url": "https://example.com", "format": "markdown"}
```
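A small sketch of building that request body, with the documented `wait_for` bounds enforced client-side (the validation limits mirror the parameter table below; the function name is ours):

```python
import json

def scrape_payload(url: str, fmt: str = "markdown", wait_for: int = 0) -> bytes:
    """Build the JSON body for POST /api/nabzsearch/scrape."""
    if not url.startswith(("http://", "https://")):
        raise ValueError("only HTTP(S) URLs are accepted")
    if not 0 <= wait_for <= 10000:
        raise ValueError("wait_for must be between 0 and 10000 ms")
    body: dict = {"url": url, "format": fmt}
    if wait_for:
        body["wait_for"] = wait_for  # give JS-rendered pages time to settle
    return json.dumps(body).encode()

# POST this body to /api/nabzsearch/scrape with your Bearer token
# and a Content-Type: application/json header.
payload = scrape_payload("https://example.com", wait_for=2000)
```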
Map
Discovers all URLs on a site. Point it at a domain, get back a list of pages. Good for crawling prep, SEO analysis, or finding what's on a site before you start scraping.
```
GET /api/nabzsearch/map?url=https://example.com&limit=100
```
Extract (AI) — VIP Only
This one's different. You send a prompt describing what data you want, and the AI pulls it out of the page for you. You can also pass a JSON schema so the output is always in the shape you expect.
Two ways to use it:
- From URLs — give it specific pages to extract from.
- Web Search mode — set `web_search: true` and just ask a question. The AI finds sources on its own and gives you a structured answer. No URLs needed.
```
// Extract from a specific page
POST /api/nabzsearch/extract
{"urls": ["https://example.com"], "prompt": "Get the company name and contact email"}

// Or let it search the web for you
POST /api/nabzsearch/extract
{"web_search": true, "prompt": "What is the current price of Bitcoin?"}
```
This endpoint requires a VIP plan and counts against your AI quota. The AI side is still in beta — results depend on the page and the prompt. If something doesn't come back right, try being more specific or add a schema.
How It Works Under the Hood
- Results are cached automatically (search: 5 min, scrape: 10 min, map: 15 min). Extract is not cached since every response is AI-generated.
- All URLs are validated before processing — localhost, private IPs, and non-HTTP schemes are blocked.
- Failed requests retry automatically. If the upstream is down, you'll get a clear error instead of hanging.
- Standard endpoints (search, scrape, map) use your normal plan quota. Extract uses your AI quota separately.
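The URL-blocking rule above is worth understanding before you batch-submit URLs. A client-side approximation of that check, using the standard `ipaddress` module (this is a sketch of the stated policy, not the platform's actual implementation — hostnames that resolve to private IPs are caught server-side):

```python
import ipaddress
from urllib.parse import urlparse

def is_blocked(url: str) -> bool:
    """Approximate the server-side check: reject non-HTTP schemes,
    localhost, and literal private/loopback/link-local IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return True
    host = parsed.hostname or ""
    if host == "localhost":
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # a regular domain name; DNS resolution is checked server-side
    return ip.is_private or ip.is_loopback or ip.is_link_local
```

Pre-filtering with something like this saves you a round trip and a 422 for URLs that were never going to be accepted.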
Full Reference
Search Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | Search query (max 500 chars) |
| limit | integer | No | Max results, 1-10 (default: 5) |
| format | string | No | markdown, html, rawHtml, links. Auto-enables scraping. |
| sources | string | No | web, images, or news |
| scrape | boolean | No | Fetch page content for each result |
| location | string | No | Geo-targeting (e.g., "Germany") |
| country | string | No | ISO country code (e.g., US, DE) |
| max_content_length | integer | No | Truncate scraped content to N characters (100-100k) |
Scrape Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to scrape (public only, max 2000 chars) |
| format | string | No | markdown (default), html, rawHtml, links, images, screenshot, summary |
| only_main_content | boolean | No | Strip nav/sidebars, return main content only |
| wait_for | integer | No | Wait time in ms for JS pages (0-10000) |
| country | string | No | ISO country code for geo-targeted scraping |
| max_content_length | integer | No | Truncate content to N characters (100-100k) |
Map Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | Website URL to map (public only, max 2000 chars) |
| limit | integer | No | Max URLs to find, 1-500 (default: 100) |
| search | string | No | Filter results by keyword |
| include_subdomains | boolean | No | Include subdomains |
Extract Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| urls | array | Conditional | URLs to extract from (max 10). Required unless web_search is true. |
| prompt | string | Yes | What to extract (5-2000 chars) |
| schema | object | No | JSON schema for structured output |
| web_search | boolean | No | Let the AI search the web instead of requiring URLs |
Example Responses
```
// Search
{
  "success": true,
  "query": "laravel sanctum",
  "result_count": 2,
  "results": [{
    "url": "https://laravel.com/docs/sanctum",
    "title": "Laravel Sanctum - Laravel Documentation",
    "description": "Laravel Sanctum provides a featherweight authentication system..."
  }]
}

// Extract with web search
{
  "success": true,
  "prompt": "What is the current price of Bitcoin in USD?",
  "results": {
    "price_usd": "$83,721.00",
    "source": "CoinMarketCap"
  }
}
```
Caching
| Endpoint | TTL |
|---|---|
| Search | 5 min |
| Scrape | 10 min |
| Map | 15 min |
| Extract | Not cached |
Cached responses include `"cached": true` in the body.
Errors
| Status | Meaning |
|---|---|
| 403 | VIP required (extract only) |
| 422 | Bad params, blocked URL, or no results |
| 502 | Upstream search engine error |
| 503 | Search service not configured |
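In client code, these statuses map cleanly to distinct recovery paths. A sketch (the action labels are ours, chosen to match the table above):

```python
def handle_status(status: int) -> str:
    """Map a Nabz Search HTTP status to a client-side action (sketch)."""
    if status == 200:
        return "ok"
    if status == 403:
        return "upgrade"      # extract requires a VIP plan
    if status == 422:
        return "fix_request"  # bad params, blocked URL, or no results; retrying won't help
    if status in (502, 503):
        return "retry_later"  # upstream or configuration problem on the platform side
    return "raise"            # anything else is unexpected
```

The useful distinction is 422 versus 502/503: a 422 means your request itself is the problem, while 502/503 are transient and worth retrying with backoff.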
Getting Started
Everything lives under /api/nabzsearch/. Use the same Bearer token you use for every other endpoint on the platform. Full docs with more examples are here.