eZintegrations – AI Workflows & AI Agents Automation Hub



Web Crawler API – Fast Website Scraping

Web Crawler API

The Web Crawler API enables efficient web scraping and data extraction from websites.
It supports Markdown, JSON, and cleaned HTML output formats, making it ideal for automation and data processing workflows.


Method

POST

Endpoint

{{base_url}}/webcrawl

Authentication

The following parameters and headers are required for authentication:

Required Params

  • client_id — Your API authentication ID.

Required Headers

  • client-secret — Your API authentication secret.
  • Content-Type: application/json
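The parameter and headers above can be sketched as a prepared request. The following is a minimal sketch using only the Python standard library; the base URL and credentials are illustrative placeholders (real values come from the My Profile section of your account), and the request is constructed but not sent.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.example.com"  # stand-in for the {{base_url}} placeholder
CLIENT_ID = "your-client-id"          # illustrative; use your real credentials
CLIENT_SECRET = "your-client-secret"

def build_webcrawl_request(payload: dict) -> urllib.request.Request:
    """Construct (without sending) an authenticated POST to /webcrawl."""
    # client_id goes in the query string; client-secret goes in a header.
    query = urllib.parse.urlencode({"client_id": CLIENT_ID})
    return urllib.request.Request(
        f"{BASE_URL}/webcrawl?{query}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "client-secret": CLIENT_SECRET,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_webcrawl_request(
    {"url": "https://example.com", "deep_crawl": "false", "max_pages": 10}
)
print(req.full_url)
```

Sending the request is then a matter of passing `req` to `urllib.request.urlopen` (or using any HTTP client with the same URL, headers, and body).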

Input Formats

The API accepts JSON input containing crawl configuration settings.

Basic Crawl (Markdown Output, Default)

{
  "url": "https://example.com",
  "deep_crawl": "false",
  "max_pages": 10
}

Deep Crawl

{
  "url": "https://example.com",
  "deep_crawl": "true",
  "max_pages": 10
}

Deep Crawl (All Pages)

{
  "url": "https://example.com",
  "deep_crawl": "true",
  "max_pages": "all"
}
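The three configurations above differ only in `deep_crawl` and `max_pages`. A small helper (hypothetical, not part of the API) can build and sanity-check the request body before sending it:

```python
def make_crawl_payload(url: str, deep_crawl: bool = False, max_pages=10) -> dict:
    """Build a /webcrawl request body matching the documented examples.

    max_pages may be a positive int or the string "all" (case-insensitive);
    "all" is only meaningful together with deep_crawl=True.
    """
    if isinstance(max_pages, str):
        if max_pages.lower() != "all":
            raise ValueError('max_pages must be a positive int or "all"')
        max_pages = "all"
    elif max_pages < 1:
        raise ValueError("max_pages must be >= 1")
    return {
        "url": url,
        # The documented examples pass deep_crawl as the strings "true"/"false".
        "deep_crawl": "true" if deep_crawl else "false",
        "max_pages": max_pages,
    }

print(make_crawl_payload("https://example.com"))
print(make_crawl_payload("https://example.com", deep_crawl=True, max_pages="ALL"))
```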

Features of Web Crawler API

  • Fast, efficient crawling.
  • Flexible output formats: JSON, cleaned HTML, Markdown.
  • Media extraction: Detects images, audio, and video tags.
  • Link extraction: Captures external and internal page links.
  • Metadata extraction: Retrieves structured metadata.
  • Multi-URL crawling capability for complex workflows.

Notes

  • Basic Crawl: Only fetches the provided URL (single page).
  • Deep Crawl: Crawls multiple pages up to the max_pages limit.
  • If max_pages is set to “all” (case-insensitive) and deep_crawl is “true”, the crawler retrieves all reachable pages.
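The rules above determine which crawl mode a given body triggers. This illustrative classifier (not part of the API) mirrors that logic, including the case-insensitive handling of "all":

```python
def crawl_mode(payload: dict) -> str:
    """Classify a /webcrawl request body per the notes above (illustrative only)."""
    deep = str(payload.get("deep_crawl", "false")).lower() == "true"
    if not deep:
        return "single-page"            # Basic Crawl: only the provided URL
    mp = payload.get("max_pages")
    if isinstance(mp, str) and mp.lower() == "all":
        return "all-pages"              # Deep Crawl over every reachable page
    return f"deep-up-to-{mp}-pages"     # Deep Crawl bounded by max_pages

print(crawl_mode({"url": "https://example.com", "deep_crawl": "false", "max_pages": 10}))
print(crawl_mode({"url": "https://example.com", "deep_crawl": "true", "max_pages": "ALL"}))
```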

Supported Output Formats

  • JSON
  • Markdown
  • Cleaned HTML

Authentication Instructions

To acquire your Base URL, Client ID, and Client Secret, please visit the My Profile section inside your eZintegrations account.

Updated on December 11, 2025

© Copyright 2025 Bizdata Inc. | All Rights Reserved | Terms of Use | Privacy Policy