Overview
Forming a JSON object from an array of arrays is a common data transformation requirement in integration workflows. This process converts tabular-style data into structured JSON objects using column headers as keys.
In eZintegrations, a Python Operation can be used to transform array-of-arrays datasets into key-value based JSON objects for easier processing and analysis.
When to Use
Use this method when incoming data is structured as an array of arrays and needs to be converted into readable JSON records.
- Processing spreadsheet-style datasets
- Converting tabular API responses
- Preparing data for reporting and analytics
- Normalizing integration outputs
- Improving data readability
How It Works
The first row of the array is treated as column headers. Each subsequent row represents a data record.
The Python script maps each header to its corresponding value in every row and constructs JSON objects for each record.
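The mapping described above can be illustrated with a short standalone snippet (plain Python, not tied to the eZintegrations runtime): zipping the header row with each data row produces one dictionary per record.

```python
# Standalone illustration of header-to-value mapping (trimmed sample data)
values = [
    ["Order ID", "Country", "Quantity"],   # first row: column headers
    ["42420", "United States", "2"],       # data rows follow
    ["34637", "United States", "2"],
]

headers = values[0]
# Pair each header with the value at the same position in every row
records = [dict(zip(headers, row)) for row in values[1:]]

print(records[0])  # first record as a key-value object
```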
Input Data
The Python Operation receives a dataset containing values stored as an array of arrays.
{
  "bizdata_dataset": {
    "values": [
      [
        "Order ID",
        "Country",
        "City",
        "Customer Type",
        "Item Category",
        "Item",
        "Quantity",
        "Amount"
      ],
      [
        "42420",
        "United States",
        "Henderson",
        "Consumer",
        "Furniture",
        "Bookcases",
        "2",
        "261.96"
      ],
      [
        "34637",
        "United States",
        "Henderson",
        "Consumer",
        "Furniture",
        "Chairs",
        "2",
        "260"
      ],
      [
        "37892",
        "United States",
        "Henderson",
        "Consumer",
        "Furniture",
        "Bookcases",
        "2",
        "221.96"
      ]
    ]
  }
}
Python Operation Logic
The following Python script converts the array-based dataset into structured JSON objects using header mapping.
When running scripts in Python Operations, the incoming data is stored in the pycode_data variable. This variable is used to read and update the dataset.
new_data = pycode_data['values']

# Get the headers from the first row
headers = new_data[0]

# Get the data rows
data_rows = new_data[1:]

num_columns = len(headers)
result_rows = []

# Map each header to the value at the same position in every row
for row in data_rows:
    result_row = {}
    for i in range(num_columns):
        result_row[headers[i]] = row[i]
    result_rows.append(result_row)

# Write the transformed records back to the dataset
res = {"values": result_rows}
pycode_data.update(res)
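For reference, the same transformation can be written more compactly with zip; this variant behaves identically when every row matches the header length. Here pycode_data is simulated with trimmed sample data so the sketch runs outside the platform.

```python
# pycode_data is simulated here; in eZintegrations the platform supplies it
pycode_data = {
    "values": [
        ["Order ID", "Country"],
        ["42420", "United States"],
        ["34637", "United States"],
    ]
}

# Unpack the header row, then build one dict per data row
headers, *data_rows = pycode_data["values"]
pycode_data["values"] = [dict(zip(headers, row)) for row in data_rows]
```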
Output Data
After processing, the array-based dataset is converted into structured JSON objects with meaningful keys.
{
  "bizdata_dataset": {
    "values": [
      {
        "Order ID": "42420",
        "Country": "United States",
        "City": "Henderson",
        "Customer Type": "Consumer",
        "Item Category": "Furniture",
        "Item": "Bookcases",
        "Quantity": "2",
        "Amount": "261.96"
      },
      {
        "Order ID": "34637",
        "Country": "United States",
        "City": "Henderson",
        "Customer Type": "Consumer",
        "Item Category": "Furniture",
        "Item": "Chairs",
        "Quantity": "2",
        "Amount": "260"
      },
      {
        "Order ID": "37892",
        "Country": "United States",
        "City": "Henderson",
        "Customer Type": "Consumer",
        "Item Category": "Furniture",
        "Item": "Bookcases",
        "Quantity": "2",
        "Amount": "221.96"
      }
    ]
  }
}
How to Use
Follow these steps to convert array-based data into JSON objects.
- Configure the integration to receive array-of-array datasets.
- Open the Python Operation editor.
- Paste the transformation script.
- Ensure the values key contains header and data rows.
- Save and deploy the workflow.
- Test the operation with sample input.
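The final step can be rehearsed locally before deploying. The sketch below wraps the transformation in a helper function (transform_values is a name introduced here for illustration, not a platform API) and runs it against a trimmed copy of the sample input.

```python
def transform_values(dataset):
    """Convert an array-of-arrays 'values' payload into a list of dicts, in place."""
    headers = dataset["values"][0]
    rows = dataset["values"][1:]
    dataset["values"] = [
        {headers[i]: row[i] for i in range(len(headers))} for row in rows
    ]
    return dataset

# Trimmed sample input mirroring the structure shown above
sample = {
    "values": [
        ["Order ID", "City", "Amount"],
        ["42420", "Henderson", "261.96"],
        ["37892", "Henderson", "221.96"],
    ]
}

result = transform_values(sample)
for record in result["values"]:
    print(record)
```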
Use Case Example
This transformation is useful for converting spreadsheet-style data into API-friendly formats.
- Input Format: Tabular array
- Output Format: Key-value JSON objects
- Usage: Reporting, analytics, and data pipelines
Troubleshooting
- Ensure the first row contains valid column headers.
- Verify that all rows have the same number of columns.
- Check for missing or null values in rows.
- Confirm that pycode_data is correctly referenced.
- Review logs if output fields are missing.
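The column-count and null-value checks above can be automated with a short validation pass before the transformation runs (a sketch; report or raise according to your workflow's conventions).

```python
def validate_rows(values):
    """Check that every data row matches the header column count and has no null values."""
    headers = values[0]
    problems = []
    for idx, row in enumerate(values[1:], start=1):
        if len(row) != len(headers):
            problems.append(f"row {idx}: expected {len(headers)} columns, got {len(row)}")
        elif any(v is None for v in row):
            problems.append(f"row {idx}: contains null values")
    return problems

issues = validate_rows([
    ["Order ID", "Country"],
    ["42420", "United States"],
    ["34637"],  # short row: will be reported
])
print(issues)
```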
Frequently Asked Questions
What is the purpose of using the first row as headers?
The first row defines the field names used as keys in the generated JSON objects.
Can this script handle variable column counts?
No. The script assumes that all rows match the header column count.
What happens if a row has fewer values than headers?
A row with fewer values than the header row will raise an IndexError in the script above, producing a failed or incomplete transformation. Validate row lengths before running the script.
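If short rows are expected, one way to avoid indexing errors is itertools.zip_longest, which pads missing values with a chosen fill value (an empty string here; pick whatever suits your pipeline).

```python
from itertools import zip_longest

headers = ["Order ID", "Country", "Amount"]
short_row = ["42420", "United States"]  # missing the Amount value

# zip_longest pads the shorter sequence instead of raising an error
record = dict(zip_longest(headers, short_row, fillvalue=""))
print(record)  # {'Order ID': '42420', 'Country': 'United States', 'Amount': ''}
```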
Can this be used for large datasets?
Yes. The script makes a single pass over the rows, so processing time and memory grow linearly with the number of records; practical limits depend on the platform's memory and execution constraints.
Does this script modify the original data?
Yes. The script updates the values key with the transformed JSON objects.
Notes
- This method assumes a consistent and well-structured dataset.
- Validate input data before applying transformations.
- Test scripts in a staging environment.
- Maintain backup copies of original datasets.