Instruments: Uniformity in the instrument JSON structure is desired

Namaskara,

The instruments JSON structure for equity (first 4 fields) is as follows:

NSE EQUITY

{
  "segment": "NSE_EQ",
  "name": "SIEMENS LTD",
  "exchange": "NSE",
  "isin": "INE003A01024", …

whereas for F&O it is as follows:

{
  "weekly": false,
  "segment": "NSE_FO",
  "name": "NIFTY",
  "exchange": "NSE",

If we had a standardized structure, the first key would always be "segment".
It would then be super easy to determine the record type from the first key itself.

With the introduction of weekly as the first key in the F&O JSON object, we now have to check each and every record for weekly before we can process it. That is a huge overhead when I have to iterate over 47,000 records.

It would be great if the key "weekly" could be shifted to the bottom of the JSON object so that the first key is always "segment"; that way each record can be sorted quickly on the segment key.
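For illustration, a minimal Python sketch of the dispatch this would enable; the raw strings are hypothetical records modelled on the snippets above:

  import json

  eq_raw = '{"segment":"NSE_EQ","name":"SIEMENS LTD","exchange":"NSE"}'
  fo_raw = '{"weekly":false,"segment":"NSE_FO","name":"NIFTY","exchange":"NSE"}'

  def classify(raw):
      # Fast path: valid only if "segment" is guaranteed to be the first key.
      if raw.startswith('{"segment":"'):
          return raw[12:raw.index('"', 12)]
      # Slow path: a full parse, needed today because F&O records
      # lead with "weekly" instead of "segment".
      return json.loads(raw)["segment"]

  print(classify(eq_raw))  # NSE_EQ via the fast path
  print(classify(fo_raw))  # NSE_FO via the slow path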

Regards
Rathnadhar K V

I asked for the same thing, but instead of sorting the JSON keys once at generation time, they would rather have a million developers iterate over 47,000 records each to filter what they need.

I think they didn't give much thought to the placement of the keys.

The most critical key must be placed at the top…
Here, unfortunately, they have violated that rule.

Since it is JSON, moving the weekly key down makes no difference to those who work only with JSON-style objects.

But where performance is critical, they should think carefully when establishing standards for key placement. Shifting the position of the most critical key depending on the instrument type is not a professional outcome, in my opinion.

They can still move the weekly key down the index, thereby keeping index [0] reserved for segment… that way, any library that consumes this file can be highly optimized for speed.

Regards
Rathnadhar K V

Hi @RathnadharKV

Noted. This shouldn't have been this way. We shall fix it soon and update back here.

Thank you so much…looking forward to the fix.


The primary reason for transitioning from CSV to JSON was to gain the flexibility to add new keys at any position without breaking functionality. In the past, we faced issues due to the rigid indexing of CSV files.

Anyone working with the JSON file should always access values using their corresponding keys. The purpose of adopting JSON was to eliminate hardcoded index dependencies. Since the JSON file is generated dynamically, the order of keys is not guaranteed.

This design choice was intentional, with the understanding that values in JSON should always be accessed by their keys.

That said, after the file is generated, you can sort the JSON keys alphabetically.
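For reference, a short standard-library Python sketch of both points: key-based access, which is order-independent, and re-serialisation with sorted keys (note that an alphabetical sort would place "exchange" first, not "segment"):

  import json

  record = json.loads('{"weekly":false,"segment":"NSE_FO","name":"NIFTY","exchange":"NSE"}')

  # Key-based access: the position of the key in the file is irrelevant.
  print(record["segment"])  # NSE_FO

  # Deterministic ordering via sorted keys; "weekly" lands last,
  # but "segment" lands third, after "exchange" and "name".
  print(json.dumps(record, sort_keys=True))
  # {"exchange": "NSE", "name": "NIFTY", "segment": "NSE_FO", "weekly": false}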

Namaskara Pradeep,

Please, let us agree on this: you generate the JSON, so you can place the keys however you intend.
Weekly can therefore easily be placed last. It has minimal influence now that most F&O contracts on NSE and BSE have shifted to monthly expiries.

That said… the reason I asked for the earlier structure is that your JSON fields are not constant. They depend on the type of the instrument, equity or derivative, and I need to process the two JSON shapes differently.

Worse, the 48 MB file has been minified. This is the reason I requested NDJSON instead of plain JSON.

If the file were NDJSON, I could pick up one JSON object at a time and process it.
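Something like this minimal Python sketch is all the consumption NDJSON would require; the filename is hypothetical:

  import json

  def iter_ndjson(path):
      # One complete JSON object per line, so each line can be parsed
      # or skipped on its own, with no manual object assembly.
      with open(path, encoding="utf-8") as fh:
          for line in fh:
              line = line.strip()
              if line:
                  yield json.loads(line)

  # Hypothetical filename, for illustration only:
  # for record in iter_ndjson("instruments.ndjson"):
  #     if record.get("segment") == "NSE_EQ":
  #         ...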

Due to the (thoughtless) minification, I am forced to go character by character to rebuild each JSON object before I can process it.

There lies the issue… to know which JSON object structure I am looking at, I have to inspect the first 45 characters. The weekly key has messed this up. If the weekly key were last, or anywhere other than first, I could manage with just 3 characters of processing, iterated 45,000 times. I hope you see the computing difference.
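For context, the pre-JSON scanning a minified blob forces looks roughly like this simplified Python sketch, which splits top-level objects by tracking brace depth (it ignores braces inside string values, which a real scanner must also handle):

  def split_objects(blob):
      # Walk the minified text character by character, tracking brace
      # depth, to recover one raw object string at a time.
      depth, start, objects = 0, None, []
      for i, ch in enumerate(blob):
          if ch == "{":
              if depth == 0:
                  start = i
              depth += 1
          elif ch == "}":
              depth -= 1
              if depth == 0:
                  objects.append(blob[start:i + 1])
      return objects

  print(split_objects('[{"segment":"NSE_EQ"},{"weekly":false,"segment":"NSE_FO"}]'))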

It has nothing to do with JSON itself… it is pre-JSON processing.

It would be ideal if:

  1. The weekly key is moved to last.
  2. Segment is always fixed first.
  3. NDJSON is provided instead of plain JSON.
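Together, those three points would make every record look roughly like this (values taken from the snippets above; the exact field set is illustrative):

  {"segment":"NSE_EQ","name":"SIEMENS LTD","exchange":"NSE","isin":"INE003A01024"}
  {"segment":"NSE_FO","name":"NIFTY","exchange":"NSE","weekly":false}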

If the JSON template were constant throughout the combined JSON file, yes, I would agree with your conviction. But mixing the JSON formats and shifting the keys has led to a mess in upstream processing.

When I execute an IoT (embedded) project, I use CSV for efficiency in data generation and transmission.

But here, since we are using JSON, that efficiency cannot be achieved; the gain is in flexibility. Still, a JSON object has to be designed with its consumption in view, because consumption-side processing increases dramatically when the JSON is poorly designed.

The Upstox JSON design, I am sorry to say, has not lived up to that level of professionalism. Please follow my posts on JSON in this community to see what I am conveying.

Regards
Rathnadhar K V