Data Wrangling

Single Line to Multiline

Description :

The Single Line to Multiline operation is used to convert single-line JSON into multiline JSON.
The resulting JSON can be used to store or transmit data in a structured format.

Number of Parameters : 1

Parameter : Chopkey

Chopkey specifies the key path of the single-line data that should be converted ("chopped") into multiline JSON.

Below is an example where the Chopkey points to the data that should be converted.


['bizdata_dataset_response']['data']

Delimiter to JSON

Description :

The Delimiter to JSON operation is used to convert delimiter-separated data into JSON data.
The resulting JSON can be used to store or transmit data in a structured format.

Number of Parameters : 6

Parameter : Key Data

Key Data holds the delimiter data that we want to convert into JSON.

Below is an example where we are using DataTable as the key.


['DataTable']
Parameter : Delimiter

Delimiter is used to separate values in a list, record or file.

Below is an example where we are using , as the delimiter.


,
Parameter : Fields

Fields is used to specify the header names. It is optional: the user can specify the fields, or leave the parameter empty so that predefined field names are generated.

Below is an example where we are using Customer ID, Organization Name, Month and Item as the header names.


"Customer ID","Organization Name","Month","Item"
Parameter : Autodetect Column Names

Autodetect Column Names is set to false when the user defines Fields; otherwise it is set to true.

Below is an example where we use true, which generates the predefined field names.


true
Parameter : Skip Header

Skip Header is set to true when the user defines Fields; otherwise it is set to false.
Below is an example where we use false, which corresponds to predefined fields.


false
Parameter : Response Key

Response Key is the key in which the resulting JSON data will be stored.

Below is an example where the JSON data will be stored in the key datatable.


datatable
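
To make these parameters concrete, here is a minimal Python sketch of an equivalent transformation performed outside the platform. The payload shape, the sample rows, and the variable names are illustrative assumptions, not output from eZintegrations.

import csv
import io
import json

# Illustrative input: the key 'DataTable' holds delimiter-separated text (assumption).
payload = {"DataTable": "1001,Acme Corp,January,Widget\n1002,Globex,February,Gadget"}

delimiter = ","                                                  # Parameter: Delimiter
fields = ["Customer ID", "Organization Name", "Month", "Item"]   # Parameter: Fields
response_key = "datatable"                                       # Parameter: Response Key

# Parse the delimited text into a list of JSON objects using the supplied headers.
reader = csv.DictReader(io.StringIO(payload["DataTable"]), fieldnames=fields, delimiter=delimiter)
payload[response_key] = list(reader)

print(json.dumps(payload[response_key], indent=2))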


JSON to Delimiter

Description :

The JSON to Delimiter operation is used to convert JSON data into delimiter-separated data.
The resulting delimiter data can be used to store or transmit data in a structured format.

Number of Parameters : 6

Parameter : Key Data

Key Data holds the JSON data which needs to be converted into delimiter data.

Below is an example where you provide the key name ['items'] which holds your JSON data.


['items']
Parameter : Delimiter

Delimiter is used to separate values in a list, record, or file.

Below is an example where we are using \t as the delimiter.


\t
Parameter : Fields

Fields is used to specify the header names. It is optional: the user can specify the fields, or leave the parameter empty so that predefined field names are generated.

Below is an example where we are using Item1, Item2 and Item3 as the header names.


"Item1","Item2","Item3"
Parameter : Autodetect Column Names

Autodetect Column Names is set to false when the user defines Fields; otherwise it is set to true.

Below is an example where we use true, which generates the predefined field names.


true
Parameter : Skip Header

Skip Header is set to true when the user defines Fields; otherwise it is set to false.

Below is an example where we use false, which corresponds to predefined fields.


false
Parameter : Response Key

Response Key is the key in which your delimiter data will be stored.

Below is an example where the delimiter data will be stored in the key delimiter_data.


delimiter_data
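
For reference, a minimal Python sketch of the reverse transformation is shown below. The items key, the sample records, and the tab delimiter are assumptions chosen for illustration only.

import csv
import io

# Illustrative input: the key 'items' holds the JSON records to flatten (assumption).
payload = {"items": [
    {"Item1": "Pen", "Item2": "Pencil", "Item3": "Eraser"},
    {"Item1": "Book", "Item2": "Notebook", "Item3": "Folder"},
]}

delimiter = "\t"                      # Parameter: Delimiter
fields = ["Item1", "Item2", "Item3"]  # Parameter: Fields
response_key = "delimiter_data"       # Parameter: Response Key

# Write the records as delimiter-separated text, one row per JSON object.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fields, delimiter=delimiter)
writer.writeheader()
writer.writerows(payload["items"])
payload[response_key] = buffer.getvalue()

print(payload[response_key])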


Data Aggregation

Description: 

The Data Aggregation operation is used to process raw data: it groups, summarizes, and restructures the data to make it easier to understand and analyze.

Number of Parameters : 4


Parameter : Agg Data Key

Agg Data Key is left empty for multiline data and set to the data key path for single-line data.

Provide this parameter when you have single-line JSON data; otherwise leave it blank.

Below is an example where we are using ['bizdata_dataset_response']['items'] as the Agg Data Key for single-line data.


['bizdata_dataset_response']['items']

Parameter : Groupby Key

Groupby Key is the key name that the user wants to group by; it acts as a unique identifier in the dataset.

Below is an example where we are using Orders as the Groupby Key for a dataset of orders and order line items, where one order can have multiple items.


"Orders"
Parameter : Array Key

Array Key is the key under which the grouped (common) keys are collected. The user can provide any key name.

Below is an example where Order Lines holds the grouped keys.


"Order Lines"

Parameter : Array Key Nested Columns

Array Key Nested Columns is a comma-separated list of the keys that should be available inside the Array Key.

Below is an example where we are using id, name and year as the column names.


"id","name","year"

Unpivot

Description:

The Unpivot operation converts a single object into a list of objects, based on the Transposed Value and Transposed Key Name parameters.

Number of Parameters : 2

Parameter : Transposed Key Name

Transposed Key Name specifies the key names used for the transposed data.

Below is an example where we are using bucket_type and bucket_value as the key names.


"bucket_type","bucket_value"

Parameter : Transposed Value

Transposed Value specifies the values which need to be transposed.

Below is an example where the transposed values are on_hand, purchase_orders and goods_in_transit.


"on_hand", "purchase_orders","goods_in_transit"

Pivot

Description: 

In the Pivot operation, multiple dictionaries (objects) are combined into a single dictionary (object) based on the Transposed Value and the Transposed Key Name provided by the user.

Number of Parameters : 3

Parameter : Get Key

Get Key is passed as empty for multiline data and holds the data key in the case of single-line data.

Below is an example where we are using items as the Get Key because the data is single-line.


items
Parameter : Transposed Key Name

Transposed Key Name specifies the name of the key which needs to be transposed.

Below is an example where we are using Item Id and Item Name as the key names.


"Item Id","Item Name"

Parameter : Transposed Value

Transposed Value specifies which values need to be transposed.

Below is an example where particular order IDs, OrderID-1 and OrderID-2, are to be transposed.


"OrderID-1", "OrderID-2"

Example Scenario :

Consider the following input data representing a dataset with various attributes.


{"bizdata_dataset":{
        "id":123,
        "name":"sample",
        "lastname":"dataset",
        "attributes":[
            {
                "attributename":"item",
                "attributevalue":"27",
                "attribute_code":12234
            },
            {
                "attributename":"item2",
                "attributevalue":"47",
                "attribute_code":12334
            },
            {
                "attributename":"item1",
                "attributevalue":"37",
                "attribute_code":13234
            }]}}

Pivot Operation Parameters:

Parameter : Get Key


attributes

Parameter : Transposed Key Name


"attributename"

Parameter : Transposed Value

 


"attributevalue"


Sample Output :

After applying the Pivot Operation, the input data will be transformed as follows:


{"bizdata_dataset":{
    "id": 123,
    "name": "sample",
    "lastname": "dataset",
    "attributes":[
        {
            "attributename": "item",
            "attributevalue": "27",
            "attribute_code": 12234
        },
        {
            "attributename": "item2",
            "attributevalue": "47",
            "attribute_code": 12334
        },
        {
            "attributename": "item1",
            "attributevalue": "37",
            "attribute_code": 13234
        }],
        "item": "27",
        "item2": "47",
        "item1": "37"}}

Explanation of Output :

  • The original array of attributes remains unchanged.
  • The attributename values (“item”, “item2”, “item1”) have been transposed to the root level of the bizdata_dataset object as keys.
  • The corresponding attributevalue values (“27”, “47”, “37”) are now associated with the newly created keys (“item”, “item2”, “item1”).
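
For readers who want to trace the transformation step by step, the following Python sketch reproduces the example scenario above outside the platform. It abbreviates the input and is illustrative only, not the platform's internal implementation.

import json

# Abbreviated version of the input from the example scenario above (assumption: same structure).
data = {"bizdata_dataset": {
    "id": 123,
    "attributes": [
        {"attributename": "item",  "attributevalue": "27", "attribute_code": 12234},
        {"attributename": "item2", "attributevalue": "47", "attribute_code": 12334},
        {"attributename": "item1", "attributevalue": "37", "attribute_code": 13234},
    ]}}

get_key = "attributes"                 # Parameter: Get Key
transposed_key_name = "attributename"  # Parameter: Transposed Key Name
transposed_value = "attributevalue"    # Parameter: Transposed Value

# Promote each attributename/attributevalue pair to the parent object as a key/value pair.
parent = data["bizdata_dataset"]
for entry in parent[get_key]:
    parent[entry[transposed_key_name]] = entry[transposed_value]

print(json.dumps(data, indent=2))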


Single Line to Tuple

Description: 

The Single Line to Tuple operation is used to convert single-line data into tuples.

Number of Parameters : 3

Parameter : Singleline Key

Singleline Key is the key from which the single-line dataset is read.

Below is an example where we are using the key DataTable, which holds the single-line data.


['DataTable']
Parameter : Table Headers

Table Headers specifies the sequence of the converted tuple data.

Below is an example where we are using the key names Item, Customer and Month; the values of these keys will appear in the same sequence in the tuple data.


"Item","Customer","Month"
Parameter : Tuple Key

Tuple Key is the key which holds the tuple data.

Below is an example where we are using data as the Tuple Key.


data
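
A minimal Python sketch of the equivalent transformation is shown below; the sample records under DataTable are an assumption used only to illustrate the parameters.

# Illustrative single-line records held under the Singleline Key (assumption).
payload = {"DataTable": [
    {"Item": "Pen",  "Customer": "Acme Corp", "Month": "January"},
    {"Item": "Book", "Customer": "Globex",    "Month": "February"},
]}

table_headers = ["Item", "Customer", "Month"]  # Parameter: Table Headers
tuple_key = "data"                             # Parameter: Tuple Key

# Build one tuple per record, with values in the order given by Table Headers.
payload[tuple_key] = [tuple(record[h] for h in table_headers) for record in payload["DataTable"]]
print(payload[tuple_key])
# [('Pen', 'Acme Corp', 'January'), ('Book', 'Globex', 'February')]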

Tuple to Single line

Description: 

The Tuple to Single line operation is used to convert tuple data back into single-line records, using the supplied headers as key names.

Number of Parameters : 3

 

Parameter : Tuple Key

Tuple Key is used to read the user’s tuple data.

Below is an example where we are using Data, which holds the tuple data.


['Data']
Parameter : Headers

Headers are the key names of the new JSON records.

Below is an example where we are using the key names Item, Customer and Month; the values of these keys will appear in the same sequence in the single-line data.


"Item","Customer","Month"

Parameter : Singleline Key

Singleline Key is the key in which the converted single-line data is stored.

Below is an example where we are using datatable to store the single-line data.


datatable
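
A minimal Python sketch of the reverse transformation is shown below; the tuples under the Data key are an illustrative assumption.

# Illustrative tuples held under the Tuple Key (assumption).
payload = {"Data": [
    ("Pen",  "Acme Corp", "January"),
    ("Book", "Globex",    "February"),
]}

headers = ["Item", "Customer", "Month"]  # Parameter: Headers
singleline_key = "datatable"             # Parameter: Singleline Key

# Zip each tuple with the headers to rebuild keyed, single-line records.
payload[singleline_key] = [dict(zip(headers, row)) for row in payload["Data"]]
print(payload[singleline_key])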

 

Delimiter to Array

Number of Parameters : 2

Parameter : Dl Key

Dl Key is the key whose delimited value is converted into an array.

Below is an example of how to use the Dl Key.


"email"

Parameter : Delimiter

Delimiter is the character used to separate the delimited values. It can be any delimiter such as ",", "\t", "|", etc.

Below is an example of how to use the Delimiter.


","

Example :
Input = {"data": "a,b,c,d"}
Output = ["a", "b", "c", "d"]
In the above example, the Delimiter is "," and the Dl Key is "data".

How it works : As the example shows, this operation converts delimiter-separated values into an array.
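
The same result can be reproduced with a one-line Python split; this sketch assumes the surrounding payload shape from the example above.

# Illustrative input matching the example above (assumption on payload shape).
payload = {"data": "a,b,c,d"}

dl_key = "data"    # Parameter: Dl Key
delimiter = ","    # Parameter: Delimiter

# Split the delimited value of the Dl Key into an array.
payload[dl_key] = payload[dl_key].split(delimiter)
print(payload[dl_key])   # ['a', 'b', 'c', 'd']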

 

Zipfile in Base64

Number of Parameters : 5

Parameter:  Source Key

This key contains all the records.

In the example below, “items” serves as the Source Key.


"items"

Parameter: File Name Key

This key holds the file name.

In the given example, “FILE_NAME” will serve as the key for the File Name Key.


"FILE_NAME"

Parameter: File Extension Key

This key contains the file extension.

In the example below, “EXTENSION” will act as the key for the file extension.


"EXTENSION"

Parameter: File Data Key

This key contains the file’s data.

In the example below, “FILE_DATA” is designated as the key for the file’s data.


"FILE_DATA"

Parameter: Base64 Response Key

This key holds the final base64-encoded string of the zip file.

In the example below, “File_string” is designated as the key for the Base64 Response key.


"File_string"

Example:

Input = {"data": {"items": [{"FILE_NAME": "file_01", "EXTENSION": ".csv", "FILE_DATA": "bnIsdGVzdGluZyxvcHMNCjEsZmlsZTEsemlwb3BzDQoyLGZpbGUxLHppcG9wcw0KMyxmaWxlMSx6aXBvcHM="},
{"FILE_NAME": "file_02", "EXTENSION": ".tsv", "FILE_DATA": "bnIJdGVzdGluZwlvcHMNCjEJZmlsZTIJemlwb3BzDQoyCWZpbGUyCXppcG9wcw0KMwlmaWxlMgl6aXBvcHM="},
{"FILE_NAME": "file_03", "EXTENSION": ".psv", "FILE_DATA": "bnJ8dGVzdGluZ3xvcHMNCjF8ZmlsZTJ8emlwb3BzDQoyfGZpbGUyfHppcG9wcw0KM3xmaWxlMnx6aXBvcHM="}]}}

Output = {"File_string": "Encoded zip file string"}
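
To show how the five keys fit together, here is a minimal Python sketch of an equivalent zip-and-encode step. The abbreviated payload and the sample file content are assumptions; the sketch is illustrative, not the platform's implementation.

import base64
import io
import zipfile

# Abbreviated version of the example input above (assumption: FILE_DATA is base64 text).
payload = {"data": {"items": [
    {"FILE_NAME": "file_01", "EXTENSION": ".csv",
     "FILE_DATA": base64.b64encode(b"nr,testing,ops\n1,file1,zipops").decode()},
]}}

source_key = "items"           # Parameter: Source Key
file_name_key = "FILE_NAME"    # Parameter: File Name Key
file_ext_key = "EXTENSION"     # Parameter: File Extension Key
file_data_key = "FILE_DATA"    # Parameter: File Data Key
response_key = "File_string"   # Parameter: Base64 Response Key

# Decode each record's base64 file data, add it to an in-memory zip archive,
# then base64-encode the finished archive.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    for item in payload["data"][source_key]:
        archive.writestr(item[file_name_key] + item[file_ext_key],
                         base64.b64decode(item[file_data_key]))

payload[response_key] = base64.b64encode(buffer.getvalue()).decode()
print(payload[response_key][:60], "...")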

                                

Updated on December 29, 2025
