eZintegrations – AI Workflows & AI Agents Automation Hub
Data Target- Goldfinch Analytics Datalake Ingestion

Overview

Goldfinch Analytics Datalake Ingestion as a Data Target allows you to transfer and store data directly into the Bizintel360 Datalake.

After configuring the Data Source and Data Operations within the Integration Bridge, you can proceed to configure the Data Target to ingest processed data into the Datalake.

This configuration ensures structured, controlled, and efficient data storage in the selected index or table.

When to Use

Use Goldfinch Analytics Datalake Ingestion when you need to persist processed integration data into the Bizintel360 Datalake. Typical use cases include:

  • Storing transformed integration data
  • Building analytics-ready datasets
  • Updating existing Datalake records
  • Managing structured enterprise data
  • Supporting reporting and analytics workflows

How It Works

The Data Target configuration defines how data is written into the Datalake.

Users select the target type, define the index or table, choose the required action, and configure ingestion behavior. The Integration Bridge then writes the processed records into the selected Datalake version.
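Conceptually, the settings gathered in the steps below form a single target configuration. The following sketch pictures that configuration as a plain Python dictionary; the field names are illustrative assumptions, not the platform's actual export format:

```python
# Illustrative sketch of a Datalake Data Target configuration.
# All field names and values here are hypothetical examples.
target_config = {
    "target_type": "Bizintel360 Data Lake Ingestion",  # Step 1
    "datalake_version": "v2",                          # Step 2 (example value)
    "index_name": "sales_orders",                      # Step 3 (example index)
    "action_type": "Upsert",                           # Step 4
    "primary_key": "order_id",                         # Step 5
    "ingestion_type": "Parallel Computing",            # Step 6
}

# Upsert, Update, and Delete must be able to locate existing
# records, so they require a Primary Key (see Step 5).
if target_config["action_type"] in {"Upsert", "Update", "Delete"}:
    assert target_config.get("primary_key"), "Primary Key is required"
```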

Steps to Configure

Step 1: Select Target Type

From the Target Type dropdown, select Bizintel360 Data Lake Ingestion.

Step 2: Select Data Lake Version

Select the required Datalake version from the available dropdown options.

Step 3: Add Index Name / Table Name

Enter the index or table name where the data should be ingested.

Step 4: Select Action Type

Choose the required action type based on how data should be handled.

  • Upsert – Combines update and insert functionality. Updates records if they exist or inserts them if they do not.
  • Update – Modifies existing records in the Datalake.
  • Delete – Removes specific data or records from the Datalake.
  • Create – Adds new records or entities to the Datalake.
  • Insert – Appends new records into an existing dataset.

Step 5: Insert Primary Key

Define the Primary Key of the index to reduce data duplication.

The Primary Key is required when using the following action types:

  • Upsert
  • Update
  • Delete
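To make the action-type semantics concrete, here is a minimal in-memory sketch (plain Python, not eZintegrations code) of how each action behaves against a store keyed by the Primary Key:

```python
def apply_action(store, record, action, pk="id"):
    """Apply one record to a dict keyed by the primary key.
    A simplified illustration of the Datalake action types."""
    key = record[pk]
    if action in ("Insert", "Create"):
        store[key] = record                # add a new record
    elif action == "Update":
        if key in store:
            store[key].update(record)      # modify existing records only
    elif action == "Upsert":
        if key in store:
            store[key].update(record)      # update if the record exists...
        else:
            store[key] = record            # ...insert it if it does not
    elif action == "Delete":
        store.pop(key, None)               # remove the matching record
    return store

store = {}
apply_action(store, {"id": 1, "qty": 5}, "Insert")
apply_action(store, {"id": 1, "qty": 9}, "Upsert")   # updates record 1
apply_action(store, {"id": 2, "qty": 3}, "Upsert")   # inserts record 2
```

Because every record is addressed by its Primary Key, re-running the same Upsert never creates a duplicate row, which is why Step 5 requires the key for Upsert, Update, and Delete.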

Step 6: Select Ingestion Type

Select the ingestion type according to your processing requirements.

  • Parallel Computing – Processes and ingests large volumes of data simultaneously using multiple computational resources for faster data intake.
  • Streaming Computing – Continuously processes data in real time as it flows into the system.
  • Bump Computing – Ingests data in batches based on source-defined batch segments.
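The three ingestion styles can be sketched in plain Python as an analogy (this is not the platform's implementation): parallel splits the dataset across workers, streaming consumes records one at a time as they arrive, and Bump groups records into fixed batch segments:

```python
from concurrent.futures import ThreadPoolExecutor

records = list(range(10))  # stand-in for processed integration records

# Parallel Computing: ingest chunks concurrently across workers.
def ingest(chunk):
    return len(chunk)  # stand-in for writing a chunk to the Datalake

chunks = [records[i:i + 5] for i in range(0, len(records), 5)]
with ThreadPoolExecutor(max_workers=2) as pool:
    parallel_counts = list(pool.map(ingest, chunks))

# Streaming Computing: process each record as it flows in.
def stream(source):
    for record in source:
        yield record  # one record at a time, real-time style

streamed = list(stream(records))

# Bump Computing: ingest in source-defined batch segments.
batch_size = 4
batches = [records[i:i + batch_size]
           for i in range(0, len(records), batch_size)]
```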

Action Type Summary

Action Type | Behavior
----------- | -----------------------------------------------------------
Upsert      | Updates existing records or inserts new ones if not found.
Update      | Modifies existing records only.
Delete      | Removes specific or entire record sets.
Create      | Creates new records.
Insert      | Appends new data to an existing dataset.

Ingestion Type Summary

Ingestion Type      | Description
------------------- | --------------------------------------------------------------
Parallel Computing  | Simultaneous ingestion across multiple computational resources.
Streaming Computing | Continuous, real-time ingestion and processing.
Bump Computing      | Batch-based ingestion using source-defined data segments.

Frequently Asked Questions

What is Goldfinch Analytics Datalake Ingestion?

It is a Data Target feature that allows processed integration data to be stored directly in the Bizintel360 Datalake.

When is a Primary Key required?

A Primary Key is required for the Upsert, Update, and Delete actions, since these operations must locate existing records; it also prevents data duplication during Upsert.

Which ingestion type should I choose?

Use Parallel Computing for high-volume data, Streaming Computing for real-time data, and Bump Computing when the source delivers data in predefined batch segments.

Can I insert data without a Primary Key?

Yes. Insert and Create actions do not require a Primary Key.

Can I change the action type after configuration?

Yes. The action type can be modified before executing or redeploying the Integration Bridge.

Benefits

  • Structured data ingestion
  • Flexible action management
  • Reduced duplication
  • Support for real-time and batch processing
  • Enterprise-ready scalability

Notes

  • Verify index names before ingestion.
  • Configure Primary Keys correctly.
  • Select ingestion types based on workload.
  • Test in staging before production deployment.
  • Document configurations for maintenance.
Updated on February 19, 2026

© Copyright 2026 Bizdata Inc. | All Rights Reserved | Terms of Use | Privacy Policy