I can simplify the question by providing a link to the source data:
```shell
curl -sX GET "https://platform-explorer.pshenmic.dev/dataContract/GWRSAVFMjXx8HpQFaNJMqBV7MBgMK4br5UESsB4S31Ec/documents?page=23&limit=4" | jq -r .resultSet
```

At the end of the command you can change the value of `limit`, which determines how many blocks are output. There are more than 1,000 blocks in total, and I need to extract the `label` and `timestamp` information for each block. Inside the result there are identically structured blocks that contain the raw data under the `data` key:
```json
{
  "identifier": "BVC9GYvsAwswBTfv4C2eziRZmZ6fUzrmMuvqv1VPdWG3",
  "dataContractIdentifier": "GWRSAVFMjXx8HpQFaNJMqBV7MBgMK4br5UESsB4S31Ec",
  "revision": 1,
  "txHash": "03CFE8DAB06ECCCDB1430F149041775263F85DF335046C0421449AA73A46B4DA",
  "deleted": false,
  "data": "{\"label\":\"BFWookie\",\"records\":{\"identity\":\"Ea8acQwhQbbdaTLytoTiedEkTA89A8dyFpn9jTivr4Ht\"},\"preorderSalt\":\"hwebwVCxW+MwBmIqhoYIibGd7s7GOihv+MU6ae8Jj70=\",\"subdomainRules\":{\"allowSubdomains\":false},\"normalizedLabel\":\"bfw00k1e\",\"parentDomainName\":\"dash\",\"normalizedParentDomainName\":\"dash\"}",
  "timestamp": "2024-10-23T05:40:07.923Z",
  "isSystem": false,
  "owner": "Ea8acQwhQbbdaTLytoTiedEkTA89A8dyFpn9jTivr4Ht"
}
```

I need to convert the value of the `data` key into the correct format:
```json
{
  "label": "BFWookie",
  "records": {
    "identity": "Ea8acQwhQbbdaTLytoTiedEkTA89A8dyFpn9jTivr4Ht"
  },
  "preorderSalt": "hwebwVCxW+MwBmIqhoYIibGd7s7GOihv+MU6ae8Jj70=",
  "subdomainRules": {
    "allowSubdomains": false
  },
  "normalizedLabel": "bfw00k1e",
  "parentDomainName": "dash",
  "normalizedParentDomainName": "dash"
}
```

I obtained this last result by processing the value of the `data` key with the following command:
```shell
echo "{\"label\":\"BFWookie\",\"records\":{\"identity\":\"Ea8acQwhQbbdaTLytoTiedEkTA89A8dyFpn9jTivr4Ht\"},\"preorderSalt\":\"hwebwVCxW+MwBmIqhoYIibGd7s7GOihv+MU6ae8Jj70=\",\"subdomainRules\":{\"allowSubdomains\":false},\"normalizedLabel\":\"bfw00k1e\",\"parentDomainName\":\"dash\",\"normalizedParentDomainName\":\"dash\"}" | jq
```

But how can this be done quickly during the first data-processing stage? Or is there another option?
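For what it's worth, jq has a built-in `fromjson` filter that parses a JSON-encoded string, so the embedded document could be decoded in the same pass that extracts the fields. A minimal sketch on a trimmed-down sample element (the real pipeline would iterate over the elements of `resultSet` returned by curl):

```shell
# Trimmed-down sample element: "data" holds JSON encoded as a string.
doc='{"data":"{\"label\":\"BFWookie\",\"normalizedLabel\":\"bfw00k1e\"}","timestamp":"2024-10-23T05:40:07.923Z"}'

# fromjson parses the embedded string, so label and timestamp
# come out together in a single jq pass.
echo "$doc" | jq -r '(.data | fromjson | .label) + "\t" + .timestamp'
# prints the label and the timestamp separated by a tab
```

Assuming `resultSet` is a JSON array (as the output above suggests), the same filter would run over the whole page as `jq -r '.resultSet[] | (.data | fromjson | .label) + "\t" + .timestamp'`, so no second processing stage is needed.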