Filter Operation Use Cases

In this guide, we discuss the rules and scenarios for using the Filter operation and how to add it to an integration bridge.

Rules for Using Filter Operations

1. When the Target is an API

Single Filter Condition:

If a single filter condition is used and only the filtered data is sent to the API as the request body (unfiltered data is dropped), the Data Aggregation operation is not required.

Multiple Filter Conditions:

If multiple filter conditions are used, and all the filtered data needs to be sent to the API as the request body:

1.  Use the Data Aggregation operation to aggregate the data.
2.  Follow this with the Singleline to Multiline operation.
3.  Send the data to the API.

2. When the Target is Datalake

General Rule:

Whether using one or more filter conditions, always:

1.  Aggregate the data using the Data Aggregation operation.
2.  Apply the Singleline to Multiline operation.
3.  Send the data to the Datalake.

Dropping Unfiltered Data:

After the filter operation, create a meta column to serve as a group-by key in the Data Aggregation operation. This ensures only filtered data is sent to the Datalake.

Retaining All Data:

Use a meta column (created before the filter operation in the Append operation) as the group-by key in the Data Aggregation operation. This ensures both filtered and unfiltered data are sent to the Datalake.
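
The practical difference between the two approaches is where the meta column (the group-by key) is created. Below is a minimal Python sketch of the idea, using plain dictionaries as records and a hypothetical aggregate_by helper standing in for Data Aggregation's group-by; it is not eZintegrations code.

from itertools import groupby

records = [
    {"id": 1, "name": "A"},
    {"id": 2, "name": "B"},
    {"id": 3, "name": "C"},
]

def aggregate_by(rows, key):
    # Keep only rows that actually carry the group-by key, then group them,
    # mimicking how Data Aggregation groups records on a meta column.
    keyed = sorted((r for r in rows if key in r), key=lambda r: r[key])
    return {k: list(g) for k, g in groupby(keyed, key=lambda r: r[key])}

# Dropping unfiltered data: the meta column is appended AFTER the filter,
# so only the filtered rows (id != 2) carry the key and survive aggregation.
after_filter = [dict(r, meta="grp") for r in records if r["id"] != 2]
untouched = [r for r in records if r["id"] == 2]
print(aggregate_by(after_filter + untouched, "meta"))   # groups ids 1 and 3 only

# Retaining all data: the meta column is appended BEFORE the filter,
# so every row carries the key and both filtered and unfiltered rows aggregate.
before_filter = [dict(r, meta="grp") for r in records]
print(aggregate_by(before_filter, "meta"))              # groups ids 1, 2 and 3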

3. When the Target is a Database

General Rule:

For one or more filter conditions, always:
1.  Aggregate the data using the Data Aggregation operation.
2.  Apply the Singleline to Tuple operation.
3.  Send the data to the Database.

Dropping Unfiltered Data:

After the filter operation, create a meta column to serve as a group-by key in the Data Aggregation operation. This ensures only filtered data is sent to the Database.

Retaining All Data:

Use a meta column (created before the filter operation in the Append operation) as the group-by key in the Data Aggregation operation. This ensures both filtered and unfiltered data are sent to the Database.

4. When Ending a Filter Condition

•  Always use the Filter Ends operation once the data transformation is completed.
•  The Filter Ends operation ensures that the filter condition is finalized and no further filtering is applied to the data.
These rules aim to provide clear guidance on how to handle different filter scenarios for various targets.
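
Putting the rules together, here is a minimal Python sketch of the sequence used in the scenarios below (Singleline to Multiline, Filter, Append, Filter Ends, Data Aggregation). The function names are illustrative stand-ins for the bridge operations, not eZintegrations APIs.

import json

# Single-line JSON coming from the source (one string holding all records).
source = '[{"id": 1, "v": 10}, {"id": 2, "v": 20}, {"id": 3, "v": 30}]'

def singleline_to_multiline(payload):
    # Split a single-line JSON array into individual records.
    return json.loads(payload)

def data_aggregation(rows, group_key):
    # Collect all rows sharing the same value of group_key.
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row)
    return groups

rows = singleline_to_multiline(source)

# Filter operation: keep records where id != 2 (as in Scenario 1 below).
filtered = [row for row in rows if row["id"] != 2]

# Append operation: add a status column; the filter condition then ends (Filter Ends).
filtered = [dict(row, Status="Success") for row in filtered]

# Data Aggregation on the status column; the grouped records then go to the target.
print(data_aggregation(filtered, "Status"))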

 

To illustrate these scenarios, a sample dataset of 12 records is used in multiple ways; each scenario below shows the relevant records from that dataset.

Scenario 1: With Target as Datalake and 1 filter condition

In this scenario, one filter condition is applied to the data coming from the source, and the filtered records are then sent to the Datalake.

 

Operations Used:

  • To send the records to the Datalake, we apply the Singleline to Multiline operation, since the data coming from the source is single-line JSON.
  • Next, the filter condition is applied as required (here the condition is ID not equal to 2, meaning records with ID 2 should not be sent to the target).
  • Once the filter operation is applied, create a new column using Append to represent the status of the records sent, end the filter condition using Filter Ends, and aggregate the data using Data Aggregation before sending it to the target.

Example: Data in Source

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
},
{
  "id": 3,
  "email": "emma.wong@reqres.in",
  "first_name": "Emma",
  "last_name": "Wong"
}

 Filter Condition: 

 

data['id'] != 2

Output in Datalake:

id | email                  | first_name | last_name | Status
1  | george.bluth@reqres.in | George     | Bluth     | Success
3  | emma.wong@reqres.in    | Emma       | Wong      | Success

Scenario 2: With Target as API and 1 filter condition

In this scenario, one filter condition is applied to the data coming from the source, and the filtered records are then sent to the target, which is an API.

Operations Used:

  • To send the records to the target (API), we apply the Singleline to Multiline operation, since the data coming from the source is single-line JSON.
  • Next, the filter condition is applied as required (here the condition is ID not equal to 3, meaning records with ID 3 should not be sent to the target).
  • Once the filter operation is applied, create a new column using Append to represent the status of the records sent, end the filter condition using Filter Ends, and then send the records to the target.

Note: Further operations can be used in the integration bridge depending on the Target and data type.
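
As a rough sketch of the final step for an API target, the snippet below posts the filtered records as a JSON request body using the requests library. The endpoint URL is a placeholder; authentication, headers, and any additional operations depend on the actual target API.

import requests

# Records that survived the filter (id != 3), mirroring the output shown below.
filtered_records = [
    {"id": 1, "email": "george.bluth@reqres.in",
     "first_name": "George", "last_name": "Bluth"},
    {"id": 2, "email": "janet.weaver@reqres.in",
     "first_name": "Janet", "last_name": "Weaver"},
]

# Placeholder endpoint; the real URL and credentials come from the Data Target configuration.
response = requests.post("https://api.example.com/users", json=filtered_records, timeout=30)
print(response.status_code)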

Example: Data in Source

 

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
},
{
  "id": 3,
  "email": "emma.wong@reqres.in",
  "first_name": "Emma",
  "last_name": "Wong"
}

Filter Condition: 

 

data['id'] != 3

Output in Target:

 

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
}

Scenario 3: With Target as Database and 1 filter condition

In this scenario, one filter condition is applied to the data coming from the source, and the filtered records are then sent to the Database.

Operations Used:

  • To send the records to the Database, we apply the Singleline to Multiline operation to the data coming from the source, since it is single-line JSON.
  • Next, the filter condition is applied as required (here the condition is ID not equal to 1, meaning records with ID 1 should not be sent to the target).
  • Once the filter operation is applied, create a new column using Append to represent the status of the records sent, and end the filter condition using Filter Ends.
  • To send the data to the Database, we need to create a tuple. Before creating the tuple, aggregate all the columns using Data Aggregation and store them in a tuple key.
  • Use Singleline to Tuple and send the records to the Database (a rough sketch of these last two steps follows below).
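
In this rough Python sketch of those last two steps, each filtered record is converted into an ordered tuple (the Singleline to Tuple idea) and inserted with a parameterized SQL statement. The standard-library sqlite3 module and the users table are used purely for illustration; a real bridge writes to whatever database is configured as the Data Target.

import sqlite3

# Filtered records (id != 1) with the appended Status column.
records = [
    {"id": 2, "email": "janet.weaver@reqres.in",
     "first_name": "Janet", "last_name": "Weaver", "Status": "Success"},
    {"id": 3, "email": "emma.wong@reqres.in",
     "first_name": "Emma", "last_name": "Wong", "Status": "Success"},
]

# Singleline to Tuple: turn each JSON record into an ordered tuple of column values.
tuples = [(r["id"], r["email"], r["first_name"], r["last_name"], r["Status"])
          for r in records]

# Illustrative in-memory database standing in for the configured Data Target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, first_name TEXT, "
             "last_name TEXT, status TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?, ?, ?)", tuples)
print(conn.execute("SELECT * FROM users").fetchall())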

Example: Data in Source

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
},
{
  "id": 3,
  "email": "emma.wong@reqres.in",
  "first_name": "Emma",
  "last_name": "Wong"
}

Filter Condition: 

 

data['id'] != 1

Output in Database:

id | email                  | first_name | last_name | Status
2  | janet.weaver@reqres.in | Janet      | Weaver    | Success
3  | emma.wong@reqres.in    | Emma       | Wong      | Success

 

Scenario 4: With Target as Datalake but more than 1 filter condition

In this scenario, more than one filter condition is applied to the data coming from the source, and the filtered records are then sent to the Datalake.

Operations Used:

  • To send the records to the Datalake, we use the Singleline to Multiline operation, since the records arrive as single-line JSON.
  • Next, using the Append operation, we add a dummy key-value pair; this helps aggregate all the records after the Filter operation has been used more than once.
  • The data will be aggregated on this dummy key.
  • Once the key is created, apply the filter conditions, each followed by an Append operation that creates a new column showing the status of the condition applied.
  • Finally, aggregate all the records using the dummy key created earlier and send the records to the target.

Example: Data in Source

 

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
}

Condition 1

  • Filter Condition:

data['api_status'] == 200

  • Append Operation:

"Status": "Success"

Condition 2

  • Filter Condition:

data['api_status'] != 200

  • Append Operation:

"Status": "Failure"
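
Before looking at the output, here is a minimal Python sketch of how the two conditions partition the records and how the dummy key brings both branches back together for a single Data Aggregation. The api_status values and the "k1": "v1" dummy pair are illustrative assumptions, not part of the dataset above.

# Records after Singleline to Multiline, each carrying an api_status from an earlier
# step plus the dummy key-value pair added with Append ("k1": "v1", illustrative).
records = [
    {"id": 1, "email": "george.bluth@reqres.in", "api_status": 200, "k1": "v1"},
    {"id": 2, "email": "janet.weaver@reqres.in", "api_status": 200, "k1": "v1"},
]

branched = []
for data in records:
    if data["api_status"] == 200:          # Condition 1
        branched.append(dict(data, Status="Success"))
    else:                                  # Condition 2
        branched.append(dict(data, Status="Failure"))

# Data Aggregation on the dummy key reunites every record, whichever branch it took,
# so one combined batch can be sent to the Datalake.
aggregated = {}
for row in branched:
    aggregated.setdefault(row["k1"], []).append(row)
print(aggregated["v1"])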

Output in Datalake:

id | email                  | first_name | last_name | Status
1  | george.bluth@reqres.in | George     | Bluth     | Success
2  | janet.weaver@reqres.in | Janet      | Weaver    | Success

Scenario 5: With Target as Database but more than 1 filter condition

In this scenario, more than one filter condition is applied to the data coming from the source, and the filtered records are then sent to the Database.

Operations Used:

  • To send the records to the Database, we use the Singleline to Multiline operation, since the records arrive as single-line JSON.
  • Next, using the Append operation, we add a dummy key-value pair; this helps aggregate all the records after the Filter operation has been used more than once.
  • The data will be aggregated on this dummy key.
  • Once the key is created, apply the filter conditions, each followed by an Append operation that creates a new column showing the status of the condition applied.
  • Now, aggregate all the records using the dummy key created earlier.
  • To send the records to the Database, create a tuple using Singleline to Tuple and send the records to the target.

Example: Data in Source

 

{
  "id": 4,
  "email": "eve.holt@reqres.in",
  "first_name": "Eve",
  "last_name": "Holt"
},
{
  "id": 5,
  "email": "charles.morris@reqres.in",
  "first_name": "Charles",
  "last_name": "Morris"
}

Condition 1

  • Filter Condition:

data['api_status'] == 200

  • Append Operation:

"Status": "Success"

Condition 2

  • Filter Condition:

data['api_status'] != 200

  • Append Operation:

"Status": "Failure"

Output in Database:

id | email                    | first_name | last_name | Status
4  | eve.holt@reqres.in       | Eve        | Holt      | Success
5  | charles.morris@reqres.in | Charles    | Morris    | Success

Scenario 6: With Target as API but more than 1 filter condition

In this scenario, more than one filter condition is applied to the data coming from the source, and the filtered records are then sent to the target (API).

Operations Used:

  • To send the records to the target (API), we use the Singleline to Multiline operation, since the records arrive as single-line JSON.
  • Next, using the Append operation, we add a dummy key-value pair; this helps aggregate all the records after the Filter operation has been used more than once.
  • The data will be aggregated on this dummy key.
  • Once the key is created, apply the filter conditions, each followed by an Append operation that creates a new column showing the status of the condition applied.
  • Finally, aggregate all the records using the dummy key created earlier and send the records to the target, which is an API.

Note: Other required operations can also be used in the integration process based on the target and data type.

Example: Data in Source

 

{
  "id": 7,
  "email": "michael.lawson@reqres.in",
  "first_name": "Michael",
  "last_name": "Lawson"
},
{
  "id": 8,
  "email": "lindsay.ferguson@reqres.in",
  "first_name": "Lindsay",
  "last_name": "Ferguson"
}

Condition 1

  • Filter Condition:

data['api_status'] == 200

  • Append Operation:

"Status": "Success"

Condition 2

  • Filter Condition:

data['api_status'] != 200

  • Append Operation:

"Status": "Failure"

Output in Target:

id | email                      | first_name | last_name | Status
7  | michael.lawson@reqres.in   | Michael    | Lawson    | Success
8  | lindsay.ferguson@reqres.in | Lindsay    | Ferguson  | Success

Scenario 7: With more than 1 filter condition and multiple Targets

Case A: Database and Datalake as Target

In this scenario, more than one filter condition is applied to the data coming from the source, and the filtered records are then sent to multiple targets: a Database and the Datalake.

Operations Used:

In this scenario, we have two targets, Database and Datalake, so we send the records to the Database first, followed by the Datalake.

  • Apply the filter conditions, transform the data as required, and add a meta column to the filtered data.
  • To send data to the Database target, filter the data on the dummy key-value pair condition, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Tuple operation, and insert the data into the Database.
  • After sending the data, use a Filter Ends operation.
  • To send data to the Datalake target, filter the data on the dummy key-value pair condition again, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Multiline operation, and ingest the data into the Datalake.

Note: Of the two keys created using the Append operation, the first key represents the filter condition, while the second key helps aggregate the records for the second filter condition.

Example: Data in Source

 

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
},
{
  "id": 3,
  "email": "emma.wong@reqres.in",
  "first_name": "Emma",
  "last_name": "Wong"
},
{
  "id": 4,
  "email": "eve.holt@reqres.in",
  "first_name": "Eve",
  "last_name": "Holt"
},
{
  "id": 5,
  "email": "charles.morris@reqres.in",
  "first_name": "Charles",
  "last_name": "Morris"
},
{
  "id": 6,
  "email": "tracey.ramos@reqres.in",
  "first_name": "Tracey",
  "last_name": "Ramos",
  "avatar": "https://reqres.in/img/faces/6-image.jpg"
},
{
  "id": 7,
  "email": "michael.lawson@reqres.in",
  "first_name": "Michael",
  "last_name": "Lawson"
},
{
  "id": 8,
  "email": "lindsay.ferguson@reqres.in",
  "first_name": "Lindsay",
  "last_name": "Ferguson"
}

Condition 1 (For Database)

  • Filter Condition:

data['id'] == 4 or data['id'] == 5

  • Append Operation:

"Status": "Success",
"filter_1": "yes"

Condition 2 (For Datalake)

  • Filter Condition:

data['id'] != 4 and data['id'] != 5

  • Append Operation:

"Status": "Failure",
"filter_2": "no"

  • Filter Operation to Aggregate Data:

data['k1'] == 'v1'

Note: The filter operation used to aggregate the data appears as the third and fourth Filter operations in the bridge.
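
A rough Python sketch of the routing: the filter_1/filter_2 keys mark which branch a record took, and the dummy pair checked by data['k1'] == 'v1' gathers every record back together before each branch is aggregated and sent to its own target. Delivery to the Database and Datalake is represented only by placeholder lists here.

# Ids 1..8, each carrying the dummy key-value pair added up front via Append.
records = [{"id": i, "k1": "v1"} for i in range(1, 9)]

for data in records:
    if data["id"] == 4 or data["id"] == 5:          # Condition 1 (for the Database)
        data.update(Status="Success", filter_1="yes")
    else:                                           # Condition 2 (for the Datalake)
        data.update(Status="Failure", filter_2="no")

# Filter on the dummy key (data['k1'] == 'v1') to gather all records again,
# then split by branch: one batch for the Database, one for the Datalake.
aggregated = [data for data in records if data["k1"] == "v1"]
to_database = [r for r in aggregated if r.get("filter_1") == "yes"]   # Singleline to Tuple next
to_datalake = [r for r in aggregated if r.get("filter_2") == "no"]    # Singleline to Multiline next

print([r["id"] for r in to_database])   # [4, 5]
print([r["id"] for r in to_datalake])   # [1, 2, 3, 6, 7, 8]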

Output in Target (Database):

id | email                    | first_name | last_name | Status
4  | eve.holt@reqres.in       | Eve        | Holt      | Success
5  | charles.morris@reqres.in | Charles    | Morris    | Success

 Output in Target (Datalake):

id | email                      | first_name | last_name | Status
1  | george.bluth@reqres.in     | George     | Bluth     | Failure
2  | janet.weaver@reqres.in     | Janet      | Weaver    | Failure
3  | emma.wong@reqres.in        | Emma       | Wong      | Failure
6  | tracey.ramos@reqres.in     | Tracey     | Ramos     | Failure
7  | michael.lawson@reqres.in   | Michael    | Lawson    | Failure
8  | lindsay.ferguson@reqres.in | Lindsay    | Ferguson  | Failure

 

Case B: Datalake and Database as Target

In this scenario, more than one filter condition is applied to the data coming from the source, and the filtered records are then sent to multiple targets: the Datalake and a Database.

Operations Used:

In this scenario, we have two targets, Datalake and Database, so here we send the records to the Datalake first, followed by the Database.

  • Apply the filter conditions, transform the data as required, and add a meta column to the filtered data.
  • To send data to the Datalake target, filter the data on the dummy key-value pair condition, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Multiline operation, and ingest the data into the Datalake.
  • After sending the data, use a Filter Ends operation.
  • To send data to the Database target, filter the data on the dummy key-value pair condition again, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Tuple operation, and insert the data into the Database.

Note: Of the two keys created using the Append operation, the first key represents the filter condition, while the second key helps aggregate the records for the second filter condition.

Example: Data in Source

 

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
},
{
  "id": 3,
  "email": "emma.wong@reqres.in",
  "first_name": "Emma",
  "last_name": "Wong"
},
{
  "id": 4,
  "email": "eve.holt@reqres.in",
  "first_name": "Eve",
  "last_name": "Holt"
},
{
  "id": 5,
  "email": "charles.morris@reqres.in",
  "first_name": "Charles",
  "last_name": "Morris"
},
{
  "id": 6,
  "email": "tracey.ramos@reqres.in",
  "first_name": "Tracey",
  "last_name": "Ramos",
  "avatar": "https://reqres.in/img/faces/6-image.jpg"
},
{
  "id": 7,
  "email": "michael.lawson@reqres.in",
  "first_name": "Michael",
  "last_name": "Lawson"
},
{
  "id": 8,
  "email": "lindsay.ferguson@reqres.in",
  "first_name": "Lindsay",
  "last_name": "Ferguson"
}

Condition 1 (For Datalake)

  • Filter Condition:

data['id'] == 3

  • Append Operation:

"Status": "Success",
"filter_1": "yes"

Condition 2 (For Database)

  • Filter Condition:

data['id'] != 3

  • Append Operation:

"Status": "Failure",
"filter_2": "no"

  • Filter Operation to Aggregate Data:

 

data['k1'] == 'v1'

Note: The filter operation used to aggregate the data appears as the third and fourth Filter operations in the bridge.

Output in Target (Datalake):

id | email               | first_name | last_name | Status
3  | emma.wong@reqres.in | Emma       | Wong      | Success

 Output in Target (Database):

id | email                      | first_name | last_name | Status
1  | george.bluth@reqres.in     | George     | Bluth     | Failure
2  | janet.weaver@reqres.in     | Janet      | Weaver    | Failure
4  | eve.holt@reqres.in         | Eve        | Holt      | Failure
5  | charles.morris@reqres.in   | Charles    | Morris    | Failure
6  | tracey.ramos@reqres.in     | Tracey     | Ramos     | Failure
7  | michael.lawson@reqres.in   | Michael    | Lawson    | Failure
8  | lindsay.ferguson@reqres.in | Lindsay    | Ferguson  | Failure

Case C: Database and API as Target

In this scenario, more than one filter condition is applied to the data coming from the source, and the filtered records are then sent to multiple targets: a Database and an API.

Operations Used:

In this scenario, we have two targets, Database and API, so here we send the records to the Database first, followed by the API.

  • Apply the filter conditions, transform the data as required, and add a meta column to the filtered data.
  • To send data to the Database target, filter the data on the dummy key-value pair condition, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Tuple operation, and insert the data into the Database.
  • After sending the data, use a Filter Ends operation.
  • To send data to the API target, filter the data on the dummy key-value pair condition again, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Multiline operation, and send the data as the request body to the API target.

Note: Of the two keys created using the Append operation, the first key represents the filter condition, while the second key helps aggregate the records for the second filter condition.

Example: Data in Source

 

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
},
{
  "id": 3,
  "email": "emma.wong@reqres.in",
  "first_name": "Emma",
  "last_name": "Wong"
},
{
  "id": 4,
  "email": "eve.holt@reqres.in",
  "first_name": "Eve",
  "last_name": "Holt"
},
{
  "id": 5,
  "email": "charles.morris@reqres.in",
  "first_name": "Charles",
  "last_name": "Morris"
},
{
  "id": 6,
  "email": "tracey.ramos@reqres.in",
  "first_name": "Tracey",
  "last_name": "Ramos",
  "avatar": "https://reqres.in/img/faces/6-image.jpg"
},
{
  "id": 7,
  "email": "michael.lawson@reqres.in",
  "first_name": "Michael",
  "last_name": "Lawson"
},
{
  "id": 8,
  "email": "lindsay.ferguson@reqres.in",
  "first_name": "Lindsay",
  "last_name": "Ferguson"
}

Condition 1 (For Database)

  • Filter Condition:

data['id'] == 3

  • Append Operation:

"Status": "Success",
"filter_1": "yes"

Condition 2 (For API)

  • Filter Condition:

data['id'] != 3

  • Append Operation:

"Status": "Failure",
"filter_2": "no"

  • Filter Operation to Aggregate Data:

 

data['k1'] == 'v1'

Note: The filter operation used to aggregate the data appears as the third and fourth Filter operations in the bridge.

Output in Target (Database):

id | email               | first_name | last_name | Status
3  | emma.wong@reqres.in | Emma       | Wong      | Success

Output in Target (API):

id | email                      | first_name | last_name | Status
1  | george.bluth@reqres.in     | George     | Bluth     | Failure
2  | janet.weaver@reqres.in     | Janet      | Weaver    | Failure
4  | eve.holt@reqres.in         | Eve        | Holt      | Failure
5  | charles.morris@reqres.in   | Charles    | Morris    | Failure
6  | tracey.ramos@reqres.in     | Tracey     | Ramos     | Failure
7  | michael.lawson@reqres.in   | Michael    | Lawson    | Failure
8  | lindsay.ferguson@reqres.in | Lindsay    | Ferguson  | Failure

Case D: API and Datalake as Target

In this scenario, more than one filter condition is applied to the data coming from the source, and the filtered records are then sent to multiple targets: an API and the Datalake.

Operations Used:

In this scenario, we have two targets, API and Datalake, so here we send the records to the API first, followed by the Datalake.

  • Apply the filter conditions, transform the data as required, and add a meta column to the filtered data.
  • To send data to the API target, filter the data on the dummy key-value pair condition, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Multiline operation, and send the data as the request body to the API target.
  • After sending the data, use a Filter Ends operation.
  • To send data to the Datalake target, filter the data on the dummy key-value pair condition again, aggregate it on the added meta column using the Data Aggregation operation, follow with the Singleline to Multiline operation, and ingest the data into the Datalake.

Note: Of the two keys created using the Append operation, the first key represents the filter condition, while the second key helps aggregate the records for the second filter condition.

Example: Data in Source

 

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth"
},
{
  "id": 2,
  "email": "janet.weaver@reqres.in",
  "first_name": "Janet",
  "last_name": "Weaver"
},
{
  "id": 3,
  "email": "emma.wong@reqres.in",
  "first_name": "Emma",
  "last_name": "Wong"
},
{
  "id": 4,
  "email": "eve.holt@reqres.in",
  "first_name": "Eve",
  "last_name": "Holt"
},
{
  "id": 5,
  "email": "charles.morris@reqres.in",
  "first_name": "Charles",
  "last_name": "Morris"
}

Condition 1 (For API)

  • Filter Condition:

data['id'] != 5

  • Append Operation:

"Status": "Success",
"filter_1": "yes"

Condition 2 (For Datalake)

  • Filter Condition:

data['id'] == 5

  • Append Operation:

"Status": "Failure",
"filter_2": "no"

  • Filter Operation to Aggregate Data:

 

data['k1'] == 'v1'

Note: The filter operation used to aggregate the data appears as the third and fourth Filter operations in the bridge.

Output in Target (API):

id | email                  | first_name | last_name | Status
1  | george.bluth@reqres.in | George     | Bluth     | Success
2  | janet.weaver@reqres.in | Janet      | Weaver    | Success
3  | emma.wong@reqres.in    | Emma       | Wong      | Success
4  | eve.holt@reqres.in     | Eve        | Holt      | Success

 Output in Target (Datalake):

id | email                    | first_name | last_name | Status
5  | charles.morris@reqres.in | Charles    | Morris    | Failure

 

Updated on December 29, 2025
