Overview
Forming a JSON object based on existing keys and values within a dataset is a common data transformation requirement in integration workflows.
In eZintegrations, Python Operations can be used to extract key-value pairs from nested JSON arrays and convert them into structured objects for simplified processing.
When to Use
Use this method when attribute data is stored as an array of objects and needs to be transformed into a flat key-value structure.
- Normalizing attribute-based datasets
- Creating lookup objects from nested data
- Simplifying complex JSON structures
- Preparing data for downstream integrations
- Improving reporting and analytics workflows
How It Works
The Python script retrieves the attributes array from the input dataset.
Each element in the array is processed to extract the attributename as the key and attributevalue as the corresponding value. These pairs are stored in a new JSON object.
The newly created object is then added to the original dataset under the datalines key.
Input Data
The Python Operation receives a JSON object containing an attributes array.
{
  "id": 123,
  "name": "sample",
  "lastname": "dataset",
  "attributes": [
    {
      "attributename": "item",
      "attributevalue": "27",
      "attribute_code": 12234
    },
    {
      "attributename": "item2",
      "attributevalue": "47",
      "attribute_code": 12334
    },
    {
      "attributename": "item1",
      "attributevalue": "37",
      "attribute_code": 13234
    }
  ]
}
Python Operation Logic
When running scripts in Python Operations, incoming data is stored in the pycode_data variable. This variable is used to read and update the dataset.
The following script extracts attribute values and constructs a new JSON object dynamically.
# pycode_data holds the incoming dataset in a Python Operation.
new_data = pycode_data["attributes"]

# Build a flat key-value object from the attribute entries.
datalines = {}
for attribute in new_data:
    datalines[attribute["attributename"]] = attribute["attributevalue"]

# Add the new object to the original dataset.
pycode_data["datalines"] = datalines
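If the input might lack the attributes key, or individual entries might be missing attributename or attributevalue, a more defensive variant can skip malformed entries instead of raising a KeyError. This is a sketch: inside eZintegrations, pycode_data is supplied by the platform, so the stub below is for illustration only.

```python
# Stub standing in for the platform-provided pycode_data variable.
pycode_data = {
    "attributes": [
        {"attributename": "item", "attributevalue": "27"},
        {"attribute_code": 99999},  # malformed entry: no name/value keys
    ]
}

# Build datalines, skipping entries without an attributename.
datalines = {}
for attribute in pycode_data.get("attributes", []):
    name = attribute.get("attributename")
    if name is not None:
        datalines[name] = attribute.get("attributevalue")

pycode_data["datalines"] = datalines
print(pycode_data["datalines"])  # {'item': '27'}
```

The .get() calls return None instead of raising an exception when a key is absent, so one bad record does not abort the whole transformation.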
Output Data
After applying the Python script, a new datalines object is added to the original dataset.
{
  "id": 123,
  "name": "sample",
  "lastname": "dataset",
  "attributes": [
    {
      "attributename": "item",
      "attributevalue": "27",
      "attribute_code": 12234
    },
    {
      "attributename": "item2",
      "attributevalue": "47",
      "attribute_code": 12334
    },
    {
      "attributename": "item1",
      "attributevalue": "37",
      "attribute_code": 13234
    }
  ],
  "datalines": {
    "item": "27",
    "item2": "47",
    "item1": "37"
  }
}
How to Use
Follow these steps to generate key-value objects from attribute datasets.
- Configure the integration to receive JSON data with an attributes array.
- Open the Python Operation editor.
- Paste the transformation script.
- Ensure the attributes key exists in the input.
- Save and deploy the workflow.
- Test the transformation using sample data.
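The last step can be done locally before deploying. Outside eZintegrations the pycode_data variable does not exist, so this sketch defines it manually from the sample input and runs the same transformation:

```python
import json

# Simulate the platform's pycode_data variable with the sample input.
pycode_data = json.loads("""
{
  "id": 123,
  "attributes": [
    {"attributename": "item",  "attributevalue": "27", "attribute_code": 12234},
    {"attributename": "item2", "attributevalue": "47", "attribute_code": 12334},
    {"attributename": "item1", "attributevalue": "37", "attribute_code": 13234}
  ]
}
""")

# Same transformation as the Python Operation script.
datalines = {}
for attribute in pycode_data["attributes"]:
    datalines[attribute["attributename"]] = attribute["attributevalue"]
pycode_data["datalines"] = datalines

print(json.dumps(pycode_data["datalines"], indent=2))
```

If the printed datalines object matches the expected output above, the script is safe to paste into the Python Operation editor.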
Use Case Example
This transformation is useful for simplifying attribute-based records.
- Input Format: Array of attribute objects
- Output Format: Flat key-value object
- Usage: Reporting, validation, and data enrichment
Troubleshooting
- Ensure the attributes field is present in the input data.
- Verify that attributename and attributevalue keys exist.
- Check for empty attribute arrays.
- Confirm that pycode_data is properly initialized.
- Review logs if datalines is missing.
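The checks above can be automated with a small pre-flight helper run before the transformation. This is a sketch; the function name and the problem messages are illustrative, not part of the platform.

```python
def validate_input(pycode_data):
    """Return a list of problems found before running the transformation."""
    problems = []
    attributes = pycode_data.get("attributes")
    if attributes is None:
        problems.append("missing 'attributes' field")
    elif not attributes:
        problems.append("'attributes' array is empty")
    else:
        for i, attribute in enumerate(attributes):
            for key in ("attributename", "attributevalue"):
                if key not in attribute:
                    problems.append(f"entry {i} missing '{key}'")
    return problems

print(validate_input({"attributes": [{"attributevalue": "27"}]}))
# ["entry 0 missing 'attributename'"]
```

An empty list means the input is ready; otherwise the messages point at the exact field to fix, which is easier than reading a traceback in the workflow logs.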
Frequently Asked Questions
What is the purpose of the datalines object?
The datalines object provides a simplified key-value representation of attribute data for easier access.
Can duplicate attribute names be used?
Duplicate names are not recommended. If duplicate attributename values exist, later entries overwrite earlier ones, so only the last value is kept.
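The overwrite behavior follows directly from how Python dictionaries assign keys. A minimal demonstration with illustrative data:

```python
attributes = [
    {"attributename": "item", "attributevalue": "27"},
    {"attributename": "item", "attributevalue": "99"},  # duplicate name
]

datalines = {}
for attribute in attributes:
    # Assigning to an existing key replaces its value.
    datalines[attribute["attributename"]] = attribute["attributevalue"]

print(datalines)  # {'item': '99'} -- the later value wins
```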
Does this script modify the original attributes array?
No. The attributes array remains unchanged. A new datalines object is added.
Can numeric values be preserved?
This script preserves values as provided. Type conversion must be handled separately if required.
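If downstream systems need real numbers rather than strings, a conversion pass can be applied to the finished datalines object. This is a sketch; the to_number helper is illustrative and simply falls back to the original value when conversion fails.

```python
def to_number(value):
    """Convert numeric-looking strings to int/float; leave others unchanged."""
    try:
        return int(value)
    except (TypeError, ValueError):
        try:
            return float(value)
        except (TypeError, ValueError):
            return value

datalines = {"item": "27", "item2": "47.5", "note": "n/a"}
converted = {key: to_number(value) for key, value in datalines.items()}
print(converted)  # {'item': 27, 'item2': 47.5, 'note': 'n/a'}
```

Non-numeric values pass through untouched, so the conversion is safe to apply to mixed attribute sets.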
Is this method suitable for large datasets?
Yes. The loop runs once per attribute, so processing time grows linearly with the size of the array; very large payloads are bounded only by the platform's memory and execution limits.
Notes
- This method assumes a consistent attribute structure.
- Validate input data before transformation.
- Avoid duplicate attribute names.
- Test transformations in a staging environment.