Overview
Goldfinch Analytics Datalake Ingestion is a Data Target that lets you transfer and store data directly in the Bizintel360 Datalake.
After configuring the Data Source and Data Operations within the Integration Bridge, you can proceed to configure the Data Target to ingest processed data into the Datalake.
This configuration ensures structured, controlled, and efficient data storage in the selected index or table.
When to Use
Use Goldfinch Analytics Datalake Ingestion when you need to persist processed integration data into the Bizintel360 Datalake. Typical scenarios include:
- Storing transformed integration data
- Building analytics-ready datasets
- Updating existing Datalake records
- Managing structured enterprise data
- Supporting reporting and analytics workflows
How It Works
The Data Target configuration defines how data is written into the Datalake.
Users select the target type, define the index or table, choose the required action, and configure ingestion behavior. The Integration Bridge then writes the processed records into the selected Datalake version.
Steps to Configure
Step 1: Select Target Type
From the Target Type dropdown, select Bizintel360 Data Lake Ingestion.
Step 2: Select Data Lake Version
Select the required Datalake version from the available dropdown options.
Step 3: Add Index Name / Table Name
Enter the index or table name where the data should be ingested.
Step 4: Select Action Type
Choose the required action type based on how data should be handled.
- Upsert – Combines update and insert functionality. Updates records if they exist or inserts them if they do not.
- Update – Modifies existing records in the Datalake.
- Delete – Removes specific data or records from the Datalake.
- Create – Adds new records or entities to the Datalake.
- Insert – Appends new records into an existing dataset.
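The five action types differ mainly in how they treat records that already exist. A rough sketch of these semantics, assuming a keyed in-memory store standing in for the Datalake index (this is illustrative only, not the actual ingestion engine or its API):

```python
# Hypothetical sketch of the five action types against a keyed store.
# `store` is a dict keyed by primary key; names are illustrative.

def apply_action(store, action, key, record):
    """Apply one ingestion action to `store`."""
    if action == "upsert":
        store[key] = record                 # update if present, insert if not
    elif action == "update":
        if key in store:
            store[key] = record             # modify existing records only
    elif action == "delete":
        store.pop(key, None)                # remove the record if present
    elif action in ("create", "insert"):
        store.setdefault(key, record)       # add new records; never overwrite
    else:
        raise ValueError(f"unknown action type: {action}")
    return store
```

For example, an `update` on a key that does not exist is a no-op, whereas an `upsert` on the same key inserts the record.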
Step 5: Insert Primary Key
Define the Primary Key of the index to prevent duplicate records.
The Primary Key is required when using the following action types:
- Upsert
- Update
- Delete
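To see why these actions need a Primary Key, consider the same source record being delivered twice. A minimal sketch, assuming records arrive as dicts and the configured Primary Key is a field name (both assumptions are illustrative, not the product's internals):

```python
# Illustrative only: a primary key lets repeated deliveries collapse
# into one stored record instead of duplicating it.

def ingest(records, primary_key=None):
    """Ingest a list of dict records; dedupe on `primary_key` when given."""
    if primary_key is None:
        return list(records)            # plain append: every record lands
    by_key = {}
    for rec in records:
        by_key[rec[primary_key]] = rec  # upsert: last delivery wins
    return list(by_key.values())

batch = [{"id": 7, "amount": 10}, {"id": 7, "amount": 12}]
print(len(ingest(batch)))        # no key: 2 rows (duplicated)
print(len(ingest(batch, "id")))  # "id" as primary key: 1 row
```

Without a key, the duplicate delivery is stored twice; with `id` as the key, the second delivery simply overwrites the first.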
Step 6: Select Ingestion Type
Select the ingestion type according to your processing requirements.
- Parallel Computing – Processes and ingests large volumes of data simultaneously using multiple computational resources for faster data intake.
- Streaming Computing – Continuously processes data in real time as it flows into the system.
- Bump Computing – Ingests data in batches based on source-defined batch segments.
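Taken together, Steps 1 through 6 amount to a small set of settings. One way to picture the finished configuration is as a hypothetical key-value payload (field names and example values are illustrative, not the product's actual schema):

```python
# Hypothetical data-target configuration assembled from Steps 1-6.
# Field names and values are illustrative, not the Bizintel360 schema.
data_target = {
    "target_type": "Bizintel360 Data Lake Ingestion",  # Step 1
    "datalake_version": "v2",                          # Step 2 (example value)
    "index_name": "sales_orders",                      # Step 3 (example value)
    "action_type": "upsert",                           # Step 4
    "primary_key": "order_id",                         # Step 5 (required here)
    "ingestion_type": "parallel",                      # Step 6
}

# Upsert, update, and delete all require a primary key.
if data_target["action_type"] in ("upsert", "update", "delete"):
    assert data_target["primary_key"] is not None
```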
Action Type Summary
| Action Type | Behavior |
|---|---|
| Upsert | Updates existing records or inserts new ones if not found. |
| Update | Modifies existing records only. |
| Delete | Removes specific or entire record sets. |
| Create | Creates new records. |
| Insert | Appends new data to an existing dataset. |
Ingestion Type Summary
| Ingestion Type | Description |
|---|---|
| Parallel Computing | Simultaneous ingestion across multiple computational resources. |
| Streaming Computing | Continuous, real-time ingestion and processing. |
| Bump Computing | Batch ingestion driven by source-defined data segments. |
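Conceptually, Bump Computing is segment-by-segment batch ingestion. A minimal sketch of that segmentation, assuming the source defines a fixed segment size (the function name and size are illustrative):

```python
# Illustrative batch ("bump") segmentation: split a record stream into
# fixed-size segments before handing each segment to the ingestion step.

def segment(records, size):
    """Yield successive batches of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

batches = list(segment(list(range(10)), size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```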
Frequently Asked Questions
What is Goldfinch Analytics Datalake Ingestion?
It is a Data Target feature that allows processed integration data to be stored directly in the Bizintel360 Datalake.
When is a Primary Key required?
A Primary Key is required when using Upsert, Update, or Delete actions to prevent data duplication.
Which ingestion type should I choose?
Use Parallel Computing for high-volume workloads that benefit from concurrent processing, Streaming Computing for real-time data, and Bump Computing for batch-based ingestion.
Can I insert data without a Primary Key?
Yes. Insert and Create actions do not require a Primary Key.
Can I change the action type after configuration?
Yes. The action type can be modified before executing or redeploying the Integration Bridge.
Benefits
- Structured data ingestion
- Flexible action management
- Reduced duplication
- Support for real-time and batch processing
- Enterprise-ready scalability
Notes
- Verify index names before ingestion.
- Configure Primary Keys correctly.
- Select ingestion types based on workload.
- Test in staging before production deployment.
- Document configurations for maintenance.