Data Transformation

Upper Case

Overview

Upper Case converts specified key values to uppercase while data is in-flight.

Number of Parameters

1

Parameter: Uppercase

Provide comma-separated keys in double quotes.

"first_name","last_name"

Lower Case

Overview

Lower Case converts specified key values to lowercase during data processing.

Number of Parameters

1

Parameter: Lowercase

"first_name","last_name"

Data Type

Overview

Data Type converts string values into Boolean, Float, Integer, or DateTime data types.

Number of Parameters

4

Boolean

"test_passed"

Float

"Amount"

Integer

"Quantity"

Date Time

"startweekdate1","%Y-%m-%d %H:%M:%S.%f%z","%Y-%m-%d %H:%M:%S", "startweekdate2","%Y-%m-%d %H:%M:%S","%Y-%m-%d"

Goldfinch Datalake Date Format:

%Y-%m-%dT%H:%M:%S.%f%z
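The conversions can be sketched in Python as below, assuming each Date Time triplet lists the key, the incoming format, and the desired output format (that reading of the parameters is an assumption):

```python
from datetime import datetime

def convert_types(record):
    """Illustrate the Data Type conversions using the example keys above."""
    record["test_passed"] = record["test_passed"].lower() == "true"   # Boolean
    record["Amount"] = float(record["Amount"])                        # Float
    record["Quantity"] = int(record["Quantity"])                      # Integer
    # DateTime: parse with the input format, re-emit with the output format
    parsed = datetime.strptime(record["startweekdate2"], "%Y-%m-%d %H:%M:%S")
    record["startweekdate2"] = parsed.strftime("%Y-%m-%d")
    return record

row = {"test_passed": "true", "Amount": "12.5", "Quantity": "3",
       "startweekdate2": "2024-01-15 10:30:00"}
convert_types(row)
# row is now {"test_passed": True, "Amount": 12.5, "Quantity": 3,
#             "startweekdate2": "2024-01-15"}
```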

Append

Overview

Append adds new keys with static or dynamic values during data flow.

Number of Parameters

1

Examples

"export_flag_y":"Y","export_flag_p":"P"

"concatenate_key_name":"{%ORDERNUMBER%}|{%ORDER_TYPE%}"

Title Case

Overview

Title Case converts specified key values into title case format.

"amount","first_name"

Data Extractor

Overview

Extracts specified keys and their values from JSON responses.

"['access_token']","['feedDocumentId']"

Trim

Overview

Trim removes leading and trailing whitespace from the values of specified keys.

"first_name"

JSON to String / String to JSON

Overview

JSON to String converts structured JSON data into a string; String to JSON parses a string back into structured JSON.

"key1","key2"

JSON to XML / XML to JSON

Overview

These operations convert data between JSON and XML formats.

product_data_response
data_response

Base64 Encoding / Decoding

Overview

Encodes the values of specified keys to Base64, or decodes Base64 values back.

"email"

Generate Array Sequence Number

key_name
DATA

Today Timestamp

Overview

Adds the current timestamp, formatted with the given pattern, under the specified key.

%Y-%m-%dT%H:%M:%S.%f%z
dl_insert_date

Calculator

Overview

Evaluates arithmetic expressions on numeric key values and stores the results under new keys.

"Amount1-Amount2","Amount1+Amount2"
"key1","key2"

Grok Pattern

Overview

Grok extracts structured data from unstructured text using predefined patterns.

This is endpoint url %{URI:endpoint_url} for mac add %{MAC:mac_address} and v4 %{IPV4:ip_address_v4} and V6 %{IPV6:ip_address_v6}
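A rough stdlib equivalent using named regex groups in place of real grok patterns (the GROK table below holds simplified stand-in patterns, not the library's actual definitions):

```python
import re

# Simplified stand-ins for the grok patterns used above; real grok
# definitions are stricter.
GROK = {
    "URI": r"\S+://\S+",
    "MAC": r"(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}",
    "IPV4": r"(?:\d{1,3}\.){3}\d{1,3}",
}

def grok_match(pattern, text):
    """Translate %{TYPE:name} tokens into named regex groups and match."""
    regex = re.sub(r"%\{(\w+):(\w+)\}",
                   lambda m: f"(?P<{m.group(2)}>{GROK[m.group(1)]})",
                   pattern)
    match = re.search(regex, text)
    return match.groupdict() if match else {}

grok_match("mac add %{MAC:mac_address} and v4 %{IPV4:ip_address_v4}",
           "mac add 00:1a:2b:3c:4d:5e and v4 192.168.0.1")
# returns {"mac_address": "00:1a:2b:3c:4d:5e", "ip_address_v4": "192.168.0.1"}
```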

PDF Extractor

Items
@xyz.grapgh.downloadUrl

Array Count

['bizdata_dataset_response']['data']

Raw Sentence Generator

"Name","Commands"
Response

Time Units

timestamp
"year","month","day","hour","minute","second","microsecond"

Data Chunking

['text']
chunks
1000
Data_Chunks

Extract to Array Operation

"data"
["id","content"]
["ids","documents"]

HTML Extractor

Description:

The HTML Extractor operation extracts textual and structured data from given HTML content.

Number of Parameters: 2

Parameter: Input HTML Key

Provide the key name that contains the raw HTML data you want to extract information from.

Below is an example where we provide the key containing HTML content.

bizdata_dataset_response

Parameter: Output Data Key

Specify the key name where you want to store the extracted structured data.

Below is an example where we aim to store the extracted data in the html_text_data key.

html_text_data

Various Use Cases for the Parameters:

Case 1:

When it’s necessary to extract plain text content from an HTML string.

Input HTML Key

bizdata_dataset_response

Output Data Key

html_text_data

Example for Case 1:

Input JSON

{
  "bizdata_dataset_response": "<html><body><h1>Hello World</h1><p>This is a paragraph.</p></body></html>"
}

Result

{
  "html_text_data": "Hello World This is a paragraph."
}
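A stdlib-only sketch of the Case 1 behaviour (the platform's extractor may handle far more HTML structure than this):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text nodes of an HTML document, mimicking the example above."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        if data.strip():
            self.parts.append(data.strip())

def html_to_text(record, input_key, output_key):
    parser = TextExtractor()
    parser.feed(record[input_key])
    record[output_key] = " ".join(parser.parts)
    return record

row = {"bizdata_dataset_response":
       "<html><body><h1>Hello World</h1><p>This is a paragraph.</p></body></html>"}
html_to_text(row, "bizdata_dataset_response", "html_text_data")
# row["html_text_data"] == "Hello World This is a paragraph."
```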

File Extractor

Description:

The File Extractor operation extracts textual data from various file formats such as TXT, DOC, PPT, and PDF, among many others.

Number of Parameters: 1

Parameter: File Data Key

Provide the key name that contains the bytes data of the file to be extracted.

Below is an example where we provide the key containing the file data.

bizdata_dataset_response

Various Use Cases for the Parameters:

Case 1:

When you have a document represented as bytes and want to pull out its text content.

File Data Key

bizdata_dataset_response

Example for Case 1:

Input JSON

{
  "bizdata_dataset_response": "b'%PDF-1.1\\n1 0 obj\\n<<>>\\nstream\\nBT (Hello , This is a File Extractor ops.) Tj ET\\nendstream\\nendobj\\n%%EOF'"
}

Result

{
  "extracted_text": "Hello , This is a File Extractor ops.",
  "extracted_Images": [],
  "extracted_tables": []
}

JSON to Avro

Description:

The JSON to AVRO operation converts your structured JSON data into the AVRO format using a specified valid schema.

Number of Parameters: 3

Parameter: JSON Data Key

Provide the key name that contains the JSON structured data which we aim to convert into the AVRO format.

Below is an example where we provide the data key containing the JSON data.

bizdata_dataset_response

Parameter: AVRO Schema

Provide the AVRO schema in JSON format that will be used to validate and parse the JSON data into AVRO bytes.

Below is an example where we specify the schema.

{"type": "record", "name": "User", "fields": [{"name": "name", "type": "string"}, {"name":
“age”, “type”: “int”}]}

Parameter: AVRO Data Key

Specify the key name where you want to store the converted AVRO byte data.

Below is an example where we aim to store the converted data in the avro_data_key.

avro_data_key

Various Use Cases for the Parameters:

Case 1:

When you have a simple JSON record and a matching AVRO schema and want to serialize it.

JSON Data Key

bizdata_dataset_response

AVRO Schema

{
  "type": "record",
  "name": "Customer",
  "fields": [
    {"name": "CREATEDDATE", "type": "string"},
    {"name": "CUSTOMERCITY", "type": "string"},
    {"name": "CUSTOMERCOUNTRY", "type": "string"},
    {"name": "CUSTOMEREMAIL", "type": "string"},
    {"name": "CUSTOMERNAME", "type": "string"},
    {"name": "CUSTOMERPHONE", "type": "string"},
    {"name": "CUSTOMERSTATE", "type": "string"},
    {"name": "CUSTOMERZIPCODE", "type": "string"},
    {"name": "ERPCUSTOMER", "type": "string"},
    {"name": "CUSTOMER_ID", "type": "string"},
    {"name": "ID", "type": "string"}
  ]
}

AVRO Data Key

avro_data_key

Example for Case 1:

Input JSON

{
  "bizdata_dataset_response": {
    "CREATEDDATE": "15-01-2024",
    "CUSTOMERCITY": "Bengaluru",
    "CUSTOMERCOUNTRY": "India",
    "CUSTOMEREMAIL": "john.doe@email.com",
    "CUSTOMERNAME": "John Doe",
    "CUSTOMERPHONE": "9876543210",
    "CUSTOMERSTATE": "Karnataka",
    "CUSTOMERZIPCODE": "560001",
    "ERPCUSTOMER": "ERP001",
    "CUSTOMER_ID": "CUST001",
    "ID": "1"
  }
}

Result

{
  "avro_data_key": "b'Obj\\x01\\x04\\x14avro.codec\\x08null\\x16avro.schema\\xa4\\x08{\"type\": \"record\", \"name\": \"Customer\", \"fields\": [{\"name\": \"CREATEDDATE\", \"type\": \"string\"}, {\"name\": \"CUSTOMERCITY\", \"type\": \"string\"}, {\"name\": \"CUSTOMERCOUNTRY\", \"type\": \"string\"}, {\"name\": \"CUSTOMEREMAIL\", \"type\": \"string\"}, {\"name\": \"CUSTOMERNAME\", \"type\": \"string\"}, {\"name\": \"CUSTOMERPHONE\", \"type\": \"string\"}, {\"name\": \"CUSTOMERSTATE\", \"type\": \"string\"}, {\"name\": \"CUSTOMERZIPCODE\", \"type\": \"string\"}, {\"name\": \"ERPCUSTOMER\", \"type\": \"string\"}, {\"name\": \"CUSTOMER_ID\", \"type\": \"string\"}, {\"name\": \"ID\", \"type\": \"string\"}]}\\x00\\xb7\\x0e\\xecd\\x1b\\x1bZ]\\xa8 \\xddC\\xa9\\xf9j\\x01\\x02\\xc8\\x01\\x1415-01-2024\\x12Bengaluru\\nIndia$john.doe@email.com\\x10John Doe\\x149876543210\\x12Karnataka\\x0c560001\\x0cERP001\\x0eCUST001\\x021\\xb7\\x0e\\xecd\\x1b\\x1bZ]\\xa8 \\xddC\\xa9\\xf9j\\x01'"
}
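Actual AVRO byte encoding is done by an Avro library; as an illustration, this sketch only checks a JSON record against a record schema before serialization (the type table is a simplified assumption, covering only primitive types):

```python
AVRO_TO_PY = {"string": str, "int": int, "long": int, "float": float,
              "double": float, "boolean": bool}

def validate_against_schema(record, schema):
    """Report mismatches between a JSON record and an Avro record schema."""
    errors = []
    for field in schema["fields"]:
        name, ftype = field["name"], field["type"]
        if name not in record:
            errors.append(f"missing field: {name}")
        elif not isinstance(record[name], AVRO_TO_PY[ftype]):
            errors.append(f"{name}: expected {ftype}")
    return errors

schema = {"type": "record", "name": "User",
          "fields": [{"name": "name", "type": "string"},
                     {"name": "age", "type": "int"}]}
validate_against_schema({"name": "John Doe", "age": 30}, schema)   # returns []
validate_against_schema({"name": "John Doe"}, schema)   # returns ["missing field: age"]
```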

Avro to JSON

Description:

The AVRO to JSON operation converts AVRO formatted byte data back into structured JSON data.

Number of Parameters: 2

Parameter: AVRO Data Key

Provide the key name that contains the AVRO byte data which we aim to parse and convert into JSON.

Below is an example where we provide the key containing AVRO data.

avro_data_key

Parameter: JSON Data Key

Specify the key name where you want to store the parsed and converted JSON data.

Below is an example where we aim to store the parsed data.

json_data_response

Various Use Cases for the Parameters:

Case 1:

When you have valid AVRO bytes and need to convert them into a readable JSON object.

AVRO Data Key

avro_data_key

JSON Data Key

json_data_response

Example for Case 1:

Input JSON

{
  "avro_data_key": "b'Obj\\x01\\x04\\x14avro.codec\\x08null\\x16avro.schema\\xa4\\x08{\"type\": \"record\", \"name\": \"Customer\", \"fields\": [{\"name\": \"CREATEDDATE\", \"type\": \"string\"}, {\"name\": \"CUSTOMERCITY\", \"type\": \"string\"}, {\"name\": \"CUSTOMERCOUNTRY\", \"type\": \"string\"}, {\"name\": \"CUSTOMEREMAIL\", \"type\": \"string\"}, {\"name\": \"CUSTOMERNAME\", \"type\": \"string\"}, {\"name\": \"CUSTOMERPHONE\", \"type\": \"string\"}, {\"name\": \"CUSTOMERSTATE\", \"type\": \"string\"}, {\"name\": \"CUSTOMERZIPCODE\", \"type\": \"string\"}, {\"name\": \"ERPCUSTOMER\", \"type\": \"string\"}, {\"name\": \"CUSTOMER_ID\", \"type\": \"string\"}, {\"name\": \"ID\", \"type\": \"string\"}]}\\x00\\xb7\\x0e\\xecd\\x1b\\x1bZ]\\xa8 \\xddC\\xa9\\xf9j\\x01\\x02\\xc8\\x01\\x1415-01-2024\\x12Bengaluru\\nIndia$john.doe@email.com\\x10John Doe\\x149876543210\\x12Karnataka\\x0c560001\\x0cERP001\\x0eCUST001\\x021\\xb7\\x0e\\xecd\\x1b\\x1bZ]\\xa8 \\xddC\\xa9\\xf9j\\x01'"
}

Result

{
  "json_data_response": {
    "CREATEDDATE": "15-01-2024",
    "CUSTOMERCITY": "Bengaluru",
    "CUSTOMERCOUNTRY": "India",
    "CUSTOMEREMAIL": "john.doe@email.com",
    "CUSTOMERNAME": "John Doe",
    "CUSTOMERPHONE": "9876543210",
    "CUSTOMERSTATE": "Karnataka",
    "CUSTOMERZIPCODE": "560001",
    "ERPCUSTOMER": "ERP001",
    "CUSTOMER_ID": "CUST001",
    "ID": "1"
  }
}

Filter Ends

Description:

This operation ends the filter for a given streaming pipeline.

Number of Parameters: 0

When Ending a Filter Condition:

Always use the Filter End operation once data transformation and data transfer are completed.

The Filter End operation finalizes the active filter condition. After Filter End, no further filtering is applied, and the original data flow is restored.

Important:
If more than one filter condition is used within the same flow, applying Filter End after each filter block is mandatory to avoid unintended data filtering and to ensure correct data processing.
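A conceptual Python model of the pairing rule (the step tuples and condition stack are illustrative, not the platform's engine):

```python
def run_pipeline(records, steps):
    """Model filter pairing: ("filter", cond) pushes a condition, ("filter_end",)
    pops it, and other steps apply only to records passing every active condition."""
    active = []                      # stack of open filter conditions
    for step in steps:
        if step[0] == "filter":
            active.append(step[1])
        elif step[0] == "filter_end":
            active.pop()             # restores the wider data flow
        else:                        # a transformation step
            fn = step[1]
            records = [fn(r) if all(c(r) for c in active) else r
                       for r in records]
    assert not active, "every filter must be closed with Filter End"
    return records

rows = [{"amount": 5}, {"amount": 50}]
run_pipeline(rows, [
    ("filter", lambda r: r["amount"] > 10),
    ("transform", lambda r: {**r, "flag": "BIG"}),
    ("filter_end",),
])
# only the second record gets "flag": "BIG"
```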

Frequently Asked Questions

What does the Upper Case operation do?

The Upper Case operation converts the values of specified keys into uppercase format while the data is in-flight within the pipeline.

What does the Lower Case operation do?

The Lower Case operation converts the values of selected keys into lowercase format during data processing.

How does the Data Type operation work?

The Data Type operation converts string values into their respective data types such as Boolean, Float, Integer, or DateTime based on the provided configuration.

When should I use the Append operation?

Use the Append operation when you need to add new keys with static values or dynamic values derived from existing pipeline data.

What is the purpose of Title Case?

Title Case converts the values of specified keys so that each word starts with an uppercase letter.

How does Data Extractor help?

Data Extractor retrieves specific keys and their corresponding values from a JSON response for further processing.

What is the difference between JSON to String and String to JSON?

JSON to String converts structured JSON data into a string format, while String to JSON parses a string and converts it into structured JSON.

When should JSON to XML or XML to JSON be used?

These operations are used when converting data between JSON and XML formats to meet integration or system requirements.

Notes

  • Validate key names before execution.
  • Ensure correct datatype formats during conversion.
  • Test transformations using sample datasets.

Updated on April 13, 2026

Table of Contents
  • Upper Case
    • Overview
    • Number of Parameters
    • Parameter: Uppercase
  • Lower Case
    • Overview
    • Number of Parameters
    • Parameter: Lowercase
  • Data Type
    • Overview
    • Number of Parameters
    • Boolean
    • Float
    • Integer
    • Date Time
  • Append
    • Overview
    • Number of Parameters
    • Examples
  • Title Case
    • Overview
  • Data Extractor
    • Overview
  • Trim
  • JSON to String / String to JSON
  • JSON to XML / XML to JSON
  • Base64 Encoding / Decoding
  • Generate Array Sequence Number
  • Today Timestamp
  • Calculator
  • Grok Pattern
    • Overview
  • PDF Extractor
  • ARRAY COUNT
  • RAW SENTENCE GENERATOR
  • TIME UNITS
  • Data Chunking
  • Extract to Array Operation
  • HTML Extractor
  • Hello World
    • File Extractor
    • JSON to Avro
    • Avro to JSON
    • Frequently Asked Questions
      • What does the Upper Case operation do?
      • What does the Lower Case operation do?
      • How does the Data Type operation work?
      • When should I use the Append operation?
      • What is the purpose of Title Case?
      • How does Data Extractor help?
      • What is the difference between JSON to String and String to JSON?
      • When should JSON to XML or XML to JSON be used?
    • Notes
© Copyright 2026 Bizdata Inc. | All Rights Reserved | Terms of Use | Privacy Policy