Define scraping rules for any URL, schedule runs, store results in structured tables, and export to CSV or JSON. Your own Apify without paying per scrape.
Every operation in Rustler is available through a JSON REST API. No SDK required — use curl, fetch, or any HTTP client.
GET /api/scrape_jobs — list all scrape jobs; supports ?q=keyword for search and ?status=value for filtering
POST /api/scrape_jobs — create a new scrape job record; send JSON with at least name
GET /api/scrape_jobs/{id} — retrieve a single scrape job by ID
PUT /api/scrape_jobs/{id} — update fields on an existing scrape job
DELETE /api/scrape_jobs/{id} — remove a scrape job
GET /api/stats — aggregated statistics: total count and a breakdown by status
GET /api/health — health check endpoint; returns {"status":"ok"} for uptime monitoring

Download the binary and run it. Rustler starts serving immediately, with a dashboard at localhost and a REST API for automation. No cloud account, no API keys to provision, no monthly invoice. Your data lives in a SQLite file you can back up, move, or query directly.
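Assuming a default instance on localhost:8980, creating and then fetching a scrape job might look like the following sketch; the field values (URL, selector, schedule format) are illustrative, not documented defaults:

```shell
# Create a scrape job (URL, selector, and schedule values are illustrative)
curl -s -X POST http://localhost:8980/api/scrape_jobs \
  -H 'Content-Type: application/json' \
  -d '{"name":"Example headlines","url":"https://example.com","selector":"h1 a","schedule":"hourly"}'

# List all jobs, then fetch one by its ID
curl -s http://localhost:8980/api/scrape_jobs
curl -s http://localhost:8980/api/scrape_jobs/1
```

Any HTTP client works the same way; curl is shown only because it needs no setup.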
Every scrape job your team creates contains context that matters: name, url, selector, schedule, last result. When that data lives in a third-party service, you are one acquisition or policy change away from losing access. Rustler keeps it local.
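Because everything sits in one SQLite file, backup and ad-hoc inspection need nothing beyond standard tools. A minimal sketch, assuming the database file is named rustler.db and the table name mirrors the API resource (both are assumptions, not documented values):

```shell
# Safe online backup of the database (file path is an assumption; adjust to your data directory)
sqlite3 rustler.db ".backup rustler-backup.db"

# Ad-hoc query; the scrape_jobs table name is assumed to mirror the API resource
sqlite3 rustler.db "SELECT name, url, schedule FROM scrape_jobs LIMIT 5;"
```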
Start Rustler with a port and a data directory. It creates its SQLite database on first run and serves both the API and the dashboard on the same port. Create scrape jobs through the web interface or POST JSON to the API. Filter by url or selector, or search by keyword. Update records with PUT, delete with DELETE. The stats endpoint returns aggregate counts grouped by status for monitoring.
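The filter, update, and monitoring flows described above could be exercised with curl along these lines; the IDs, field values, and status values are illustrative assumptions:

```shell
# Search by keyword, or filter by status (status values are assumptions)
curl -s 'http://localhost:8980/api/scrape_jobs?q=headlines'
curl -s 'http://localhost:8980/api/scrape_jobs?status=active'

# Update one field on an existing job, then delete another job
curl -s -X PUT http://localhost:8980/api/scrape_jobs/1 \
  -H 'Content-Type: application/json' \
  -d '{"schedule":"daily"}'
curl -s -X DELETE http://localhost:8980/api/scrape_jobs/2

# Aggregate counts grouped by status
curl -s http://localhost:8980/api/stats
```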
Self-hosted web scraper and data collector. Runs on your own infrastructure; your data never leaves your server.
curl -fsSL https://stockyard.dev/install.sh | sh -s -- --tool rustler
PORT=8980 ./rustler
http://localhost:8980
Single binary. Embedded SQLite. No Docker. No external database server. No dependencies.
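Once the process is up, the health endpoint makes a quick smoke test:

```shell
# Should print {"status":"ok"} if the server is listening on the default port
curl -s http://localhost:8980/api/health
```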
Your license key arrives by email within 5 minutes of checkout. Set it as an environment variable and restart the binary.
export RUSTLER_LICENSE_KEY=stockyard_xxxxxxxxxxxxxxxxxxxx
./rustler
No cloud connectivity required. The binary validates the key offline with Ed25519 signatures.