Data Pipeline Controls

Sleep Time

Description: 

The Sleep Time operation pauses the pipeline for a user-defined number of seconds.

Number of Parameters: 1

Parameter: Sleep Time

It specifies the sleep time duration in seconds.

Below is an example that sets the sleep duration to 60 seconds.

60
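
Conceptually, the operation simply blocks the pipeline for the configured duration and passes the data through unchanged, much like Python's time.sleep. A minimal sketch of that behavior (an illustration, not the platform's internal implementation):

import time

def sleep_time_operation(records, sleep_seconds=60):
    # Pause for the configured duration, then pass the data through unchanged.
    time.sleep(sleep_seconds)
    return records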

Python Operation

Description: 

The Python operation is for cases where a user has a particular need that no predefined operation covers: the user can develop a new operation that meets their unique requirements.

Python Operation Parameters to Know When Writing Python Code

When writing Python code for this operation, it is important for users to be aware of the following parameters:

  1. Responsedata = []
     Description: Always start your Python script for this operation by initializing Responsedata as an empty list. It serves as a placeholder for results: keeping Responsedata an empty list lets you append your results to it as needed.

  2. pycode_data
     Description: This variable holds the data flowing through the Integration Bridge. To change the existing data with a Python script, read and modify pycode_data in your script.

These parameters are essential for using the Python operation effectively, ensuring that your code interacts correctly with the data flow and result handling within the Integration Bridge.

Number of Parameters: 1

Parameter: Pycode
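
Before the full example below, here is a minimal skeleton of a Pycode script that follows the contract described above. The "processed" key is a hypothetical illustration, and pycode_data is assumed to hold a single JSON record (a dict):

Responsedata = []                  # always initialize the result placeholder

# pycode_data carries the record flowing through the Integration Bridge;
# modify it (or a copy) and append the result to Responsedata.
pycode_data["processed"] = True    # hypothetical key, for illustration only
Responsedata.append(pycode_data)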

Pycode example

Input Data with multiline JSON:

{
  "attributeid": 212077,
  "attributename": "item_no",
  "attributevalue": "999898",
  "attribute_groupid": 24315,
  "attribute_groupname": "Default",
  "Isvariant": "false"
}
{
  "attributeid": 212078,
  "attributename": "Product Name",
  "attributevalue": "Product ABC",
  "attribute_groupid": 24315,
  "attribute_groupname": "Default",
  "Isvariant": "false"
}

Below is the Pycode for this operation. The incoming data is stored in the pycode_data variable, so use this variable to make changes to the data in your script.

 

Responsedata = []

keyname = ["attributename"]
keyvalue = ["attributevalue"]

for i in keyname:
    attributename = pycode_data.pop(i, None)    # remove the key-name field and keep its value
for j in keyvalue:
    attributevalue = pycode_data.pop(j, None)   # remove the key-value field and keep its value

pycode_data[attributename] = attributevalue     # re-insert as a real key/value pair

In the example above, the script modifies the existing data, so it works through the pycode_data variable.

Output Data after applying the Python Pycode:

{
  "attributeid": 212077,
  "item_no": "999898",
  "attribute_groupid": 24315,
  "attribute_groupname": "Default",
  "Isvariant": "false"
}
{
  "attributeid": 212078,
  "Product Name": "Product ABC",
  "attribute_groupid": 24315,
  "attribute_groupname": "Default",
  "Isvariant": "false"
}
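
In the example above, each record is handled one at a time as a dict. If a bridge instead delivers the records together as a list, the same logic can be written as a loop; a hedged sketch assuming pycode_data is a list of dicts (the same pattern appears again in the Text to Operations examples further below):

Responsedata = []
for record in pycode_data:                      # assumes a list of dicts
    name = record.pop("attributename", None)    # remove and capture the key name
    value = record.pop("attributevalue", None)  # remove and capture the value
    if name is not None:
        record[name] = value                    # re-insert as a real key/value pair
    Responsedata.append(record)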


Filter Operation

Description:

The Filter operation is eZintegrations' if/else. It has one parameter: the Conditional Statement.

  1. Source holds the JSON data from the previous operation.
  2. Target holds the JSON data after the filter operation is applied.
  3. Conditional Statement is the parameter that combines logical operators (and, or, not), identity operators (is, is not), membership operators (in, not in), and comparison operators (==, !=, >, <, <=, >=).

Note: 'data' is fixed in the statement; the key name and value can be changed as required.

List of operators that can be used: [==, !=, <, >, <=, >=, or, and]

Example 1: Filtering data where Order Status is pending

data['Order Status'] == 'pending'

Example 2: Filtering data where the value of "id" is 5 or 6.

data['id'] == 5 or data['id'] == 6

Example 3: Filtering data where the order status is not failed and the order was placed before a specific date.

data['Order Status'] != 'failed' and data['Date'] < '6/03/2022'

Example 4: When the key ipAddress does not exist in the pipeline. You can copy the snippet below directly into the Filter operation.

'ipAddress' not in data

Example 5: When the key ipAddress exists in the pipeline. You can copy the snippet below directly into the Filter operation.

'ipAddress' in data
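
A filter like these can be pictured as evaluating the conditional statement once per record and keeping only the records for which it is true. A minimal sketch of that idea in plain Python (an analogy, not eZintegrations internals), using the condition from Example 2:

records = [{"id": 4}, {"id": 5}, {"id": 6}, {"id": 7}]

# Each record is bound to the name `data` and the condition is evaluated per record.
filtered = [data for data in records if data["id"] == 5 or data["id"] == 6]

print(filtered)   # [{'id': 5}, {'id': 6}]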

AI Operation 

Description: 

The AI Operation helps you modify and improve your data using simple instructions written in everyday English. 

Parameters  

Number of Parameters: 1  

Parameter: Instructions 

A natural language instruction that describes how the input data should be transformed.  

The AI Operation behaves as follows:

  1. Input Data: the data provided to the AI Operation.
  2. Instructions: a natural language instruction describing the required transformation.
  3. Output: a JSON object with the transformations applied as per the prompt.

The examples below use the Instructions parameter, for instance to add a new key.

Example 1: Use Case for Grouping Order Data into Relevant Categories

Input Data:

{
  "Row ID": 43,
  "Order ID": "CA-2016-101343",
  "Order Date": "17-07-2016",
  "Ship Date": "22-07-2016",
  "Ship Mode": "Standard Class",
  "Customer ID": "RA-19885",
  "Customer Name": "Ruben Ausman",
  "SEGMENT": "Corporate",
  "COUNTRY": "United States",
  "CITY": "Los Angeles",
  "STATE": "California",
  "Postal Code": 90049,
  "REGION": "West",
  "Product ID": "OFF-ST-10003479",
  "CATEGORY": "Office Supplies",
  "Sub-Category": "Storage",
  "Product Name": "Eldon Base for stackable storage shelf, platinum",
  "SALES": 77.88,
  "QUANTITY": 2,
  "DISCOUNT": 0,
  "PROFIT": 3.89,
  "Order Returned": "Yes"
}

Instruction:  

 

Group the given order data in {%bizdata_dataset_response%} into customer, product, shipping, and financial sections.

Output:

{
  "customer": {
    "Customer ID": "RA-19885",
    "Customer Name": "Ruben Ausman",
    "SEGMENT": "Corporate",
    "COUNTRY": "United States",
    "CITY": "Los Angeles",
    "STATE": "California",
    "Postal Code": 90049,
    "REGION": "West"
  },
  "product": {
    "Product ID": "OFF-ST-10003479",
    "CATEGORY": "Office Supplies",
    "Sub-Category": "Storage",
    "Product Name": "Eldon Base for stackable storage shelf, platinum",
    "QUANTITY": 2
  },
  "shipping": {
    "Order ID": "CA-2016-101343",
    "Order Date": "17-07-2016",
    "Ship Date": "22-07-2016",
    "Ship Mode": "Standard Class",
    "Order Returned": "Yes"
  },
  "financial": {
    "SALES": 77.88,
    "DISCOUNT": 0,
    "PROFIT": 3.89
  }
}

Example 2: Use Case for Generating a Full Name from First Name and Last Name

Input Data:

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth",
  "avatar": "https://reqres.in/img/faces/1-image.jpg"
}

Instruction:

Take first name {%first_name%} and last name {%last_name%} and generate the full name. Return a python dict with the key 'full_name' and the generated full name as its value.

Output:

{
  "id": 1,
  "email": "george.bluth@reqres.in",
  "first_name": "George",
  "last_name": "Bluth",
  "avatar": "https://reqres.in/img/faces/1-image.jpg",
  "full_name": "George Bluth"
}

Example 3: Use Case for Generating a Primary Key from Markdown Data

Input Data:

{
  "markdown_data": "{{markdowndata}}"
}

Instruction:

Check this markdown data '''{%neural_field%}''' and help me generate a unique, human-readable webpage name that will be used as a primary key; store the value in the key 'webpage_name'. The generated name must be between 100 and 120 characters in length. Return exactly one new key called 'webpage_name' with the generated name as its value inside a python dict.

Output:

{
  "markdown_data": "{{markdowndata}}",
  "webpage_name": "webpage_name"
}

Example 4: Use {%data%} – Format Location String

Input Data (Key: data):

{
  "data": {
    "city": "New York",
    "country": "USA"
  }
}

{%data%} refers to the entire object under the data key.

Instruction:

From the location {%data%}, create a new key 'full_location' by combining city and country as "{city}, {country}".

Output:

{
  "full_location": "New York, USA"
}
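
The {%...%} placeholders used in the instructions above act like template variables: each one is replaced with the matching value (or object) from the incoming record before the instruction is applied. A minimal sketch of that substitution, written as an assumption about the syntax rather than the platform's actual resolver:

import json
import re

def resolve_placeholders(instruction, record):
    # Replace each {%key%} with the matching value from `record` (assumed behavior).
    def substitute(match):
        value = record.get(match.group(1))
        # Objects and arrays are embedded as JSON text, scalars as plain strings.
        return json.dumps(value) if isinstance(value, (dict, list)) else str(value)
    return re.sub(r"\{%\s*(\w+)\s*%\}", substitute, instruction)

record = {"data": {"city": "New York", "country": "USA"}}
print(resolve_placeholders("From the location {%data%}, create 'full_location'.", record))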

Text to Operations

Description:

Text to Operations is an enhanced version of the Python operation. Users can either describe a task on their data in plain English or write Python code directly. Natural language instructions are automatically transformed into executable Python code, so users can focus on describing the desired outcome without writing code manually; users who prefer to code can supply their own Python. The operation supports natural language input, intelligent code generation, real-time execution, and direct code input.

Text to Operations converts natural language instructions or user-written code into runnable logic, built on Python and designed to support a wide range of operations moving forward.

How Parameters Are Handled in Text to Operations:

When users provide natural language instructions, they do not need to explicitly mention parameters. Instead, they can describe the desired changes in plain English, and the system will generate Python code in the backend that utilizes the following parameters to interact with the data. If users choose to write Python code directly, they can use these parameters as needed:

  1. Responsedata = []
     Description: The generated or user-written Python script typically initializes Responsedata as an empty list. This serves as a placeholder where results from the operation are appended. For natural language instructions, the system handles Responsedata automatically. For user-written code, users can manage Responsedata to collect results as needed.

  2. pycode_data
     Description: This variable holds the data flowing through the Integration Bridge. For natural language instructions, users describe modifications to the data, and the system generates code that uses pycode_data to apply those changes. For user-written code, users can directly reference pycode_data to manipulate the data.
These parameters are managed automatically by the system for natural language instructions, ensuring seamless integration with the data flow and result handling in the Integration Bridge. For user-written code, users have the flexibility to use these parameters as required.

Number of Parameters: 1

Parameter: Instruction (Plain English or Python code)

Users can choose to either provide a plain English instruction describing the desired data transformation or write Python code directly. The system will process the input accordingly, either generating code from the instruction or executing the provided code.
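
Whichever path is taken, the resulting script runs against the data flow with pycode_data injected and Responsedata collected afterwards. A minimal sketch of that execution contract, assuming a plain exec-style runner (the platform's actual engine may differ):

def run_text_to_operations(script, incoming_data):
    # Execute a Pycode-style script under the documented parameter contract (assumed).
    namespace = {"pycode_data": incoming_data}  # data flowing through the bridge
    exec(script, namespace)                     # the script fills Responsedata
    return namespace.get("Responsedata", incoming_data)

script = (
    "Responsedata = []\n"
    "for user in pycode_data['data']:\n"
    "    user['last_name'] = user['last_name'].lower()\n"
    "    Responsedata.append(user)\n"
)
print(run_text_to_operations(script, {"data": [{"last_name": "Bluth"}]}))
# -> [{'last_name': 'bluth'}]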

Examples:

Example 1: Converting last_name to Lowercase

Prompt:

From the source data, convert the value of the last_name key to lowercase.

Input Data:

{
  "bizdata_dataset_response": {
    "data": [
      {
        "id": 1,
        "email": "george.bluth@reqres.in",
        "first_name": "George",
        "last_name": "Bluth",
        "avatar": "https://reqres.in/img/faces/1-image.jpg"
      }
    ]
  }
}

Generated Code (From Plain English Instruction):

Responsedata = []
for user in pycode_data['bizdata_dataset_response']['data']:
    user['last_name'] = user['last_name'].lower()
    Responsedata.append(user)

Output Data:

[
  {
    "id": 1,
    "email": "george.bluth@reqres.in",
    "first_name": "George",
    "last_name": "bluth",
    "avatar": "https://reqres.in/img/faces/1-image.jpg",
    "python_code": "Responsedata = []\nfor user in pycode_data['bizdata_dataset_response']['data']:\n    user['last_name'] = user['last_name'].lower()\n    Responsedata.append(user)"
  }
]

Example 2: Converting last_name to Uppercase

Prompt:

 

From the source data, convert the value of the last_name key to uppercase.

Input Data:

{
  "bizdata_dataset_response": {
    "data": [
      {
        "id": 1,
        "email": "george.bluth@reqres.in",
        "first_name": "George",
        "last_name": "Bluth",
        "avatar": "https://reqres.in/img/faces/1-image.jpg"
      }
    ]
  }
}

Generated Code (From Plain English Instruction):

Responsedata = []
for user in pycode_data['bizdata_dataset_response']['data']:
    user['last_name'] = user['last_name'].upper()
    Responsedata.append(user)

Output Data:

[
  {
    "id": 1,
    "email": "george.bluth@reqres.in",
    "first_name": "George",
    "last_name": "BLUTH",
    "avatar": "https://reqres.in/img/faces/1-image.jpg",
    "python_code": "Responsedata = []\nfor user in pycode_data['bizdata_dataset_response']['data']:\n    user['last_name'] = user['last_name'].upper()\n    Responsedata.append(user)"
  }
]

Example 3: Previous Example from Python Operation

Prompt:

 

For each item in the source, create a new key using the value of 'attributename', set its value to the value of 'attributevalue', remove the 'attributename' and 'attributevalue' keys, and replace them with the newly created key and value.

Generated Code (From Plain English Instruction):

Responsedata = []
for item in pycode_data:
    new_item = item.copy()
    new_key = new_item.pop('attributename', None)
    new_value = new_item.pop('attributevalue', None)
    if new_key and new_value:
        new_item[new_key] = new_value
    Responsedata.append(new_item)

Input Data:

[
  {
    "attributeid": 212077,
    "attributename": "item_no",
    "attributevalue": "999898",
    "attribute_groupid": 24315,
    "attribute_groupname": "Default",
    "Isvariant": "false"
  },
  {
    "attributeid": 212078,
    "attributename": "Product Name",
    "attributevalue": "Product ABC",
    "attribute_groupid": 24315,
    "attribute_groupname": "Default",
    "Isvariant": "false"
  }
]

Output Data:

[
  {
    "attributeid": 212077,
    "item_no": "999898",
    "attribute_groupid": 24315,
    "attribute_groupname": "Default",
    "Isvariant": "false"
  },
  {
    "attributeid": 212078,
    "Product Name": "Product ABC",
    "attribute_groupid": 24315,
    "attribute_groupname": "Default",
    "Isvariant": "false"
  }
]

 

Forward Code is a toggle button in the UI. When enabled, the generated Python code can be carried forward to the next operation.
