IntrusionCM: "Without reading them in memory at once" - guess you mean reading the whole file into memory?
Usually you have low-level implementations that work on a stream that gets tokenized.
Tokenized as in "reading char by char", while using a stack or similar structure to keep only the current token alive and the necessary information for validation.
Modifying gets trickier - but not impossible. The tricky part is removing _nested_ structures, as you have to forward the input stream to skip to the end of the nested structure... All other content can be written from the input stream to e.g. a file, token by token.
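The "keep only depth and string state alive" idea above can be sketched roughly like this (a hypothetical helper, not from the thread; it assumes the relevant span of text is available and ignores chunk boundaries for brevity):

```javascript
// Sketch: scan JSON char by char, keeping only nesting depth and
// string state, so a nested structure can be skipped (e.g. to remove it)
// without parsing or holding the whole document as an object tree.
function skipValue(text, start) {
  // `start` is assumed to point at an opening '{' or '['.
  // Returns the index just past the matching closing bracket.
  let depth = 0, inString = false, escaped = false;
  for (let i = start; i < text.length; i++) {
    const c = text[i];
    if (inString) {
      // Inside a string, brackets don't count; honor backslash escapes.
      if (escaped) escaped = false;
      else if (c === '\\') escaped = true;
      else if (c === '"') inString = false;
    } else if (c === '"') {
      inString = true;
    } else if (c === '{' || c === '[') {
      depth++;
    } else if (c === '}' || c === ']') {
      depth--;
      if (depth === 0) return i + 1;
    }
  }
  throw new Error('unterminated value');
}
```

To delete a nested value, everything before `start` and everything from the returned index onward would be forwarded to the output stream untouched.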
JsonBoa: You were right when guessing that you would have to operate on the file in chunks that are not by themselves valid JSON.
However, some string manipulations could be implemented with a seek pointer, reading overlapping windows of several chars per chunk.
Let's say you want to replace the word "pony" with "bronco" in all keys that contain it.
You would need a sliding window of at least twice the length of the string "pony" (so, 8 chars) that slides by at most half the length of "pony" (so, 2 chars). You would also need a regex to detect that a given window contains a key declaration including the word "pony".
Finally, you would need to pipe the altered content to another file, advancing a pointer so the overlapping part of consecutive windows is not written twice.
It works especially well for NLP.
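A simplified sketch of that overlapping-window idea (hypothetical helper, not from the thread; it uses a plain substring instead of the key-matching regex, and carries the last |pattern|−1 chars between chunks so a match split across a boundary is still seen exactly once):

```javascript
// Replace `pattern` with `replacement` across chunk boundaries without
// ever holding the whole input in memory. The tail of each window that
// could begin a match spanning into the next chunk is held back.
function chunkedReplace(chunks, pattern, replacement) {
  const keep = pattern.length - 1; // max tail that can start a split match
  let carry = '';
  let out = '';
  for (const chunk of chunks) {
    const window = carry + chunk;  // overlap the previous tail
    let emitted = '';
    let pos = 0;
    // Every match that starts inside this window fits entirely in it,
    // because the next carry begins at most `keep` chars from the end.
    for (let i = window.indexOf(pattern); i !== -1; i = window.indexOf(pattern, pos)) {
      emitted += window.slice(pos, i) + replacement;
      pos = i + pattern.length;
    }
    const cut = Math.max(pos, window.length - keep);
    emitted += window.slice(pos, cut);
    carry = window.slice(cut);     // held back for the next window
    out += emitted;
  }
  return out + carry;              // tail can no longer match; flush it
}
```

In a real stream pipeline the `out +=` would instead be a write to the destination stream, so memory stays bounded by the window size.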
webketje: @IntrusionCM @JsonBoa thx for the detailed replies! I read the linked article. I learned something, but afaik this is all too low-level for a JS static site generator.
If I end up using any readable or writable stream, it will have to be for advantages other than a lower memory footprint.