I'd like to ask a question about JSON to get a feel for the priorities of the coders here.
-
I should add that I originally wrote it in C# and then ported it to C++. Why did I write it in C#? Because I didn't know about Newtonsoft's JSON library on the day I wrote it, and when I did find out about it, it turned out Newtonsoft's pull parser sucks and is slow. I'm glad I did. People are religious about never reinventing the wheel, but it's not always such a bad thing - it depends on the wheel.
Real programmers use butterflies
We use Newtonsoft with all of our Web APIs, etc., and have never had any noticeable issues with performance. I guess if you are parsing big JSON files then perhaps that's an issue, but we don't do that. So....
-
PIEBALDconsult wrote:
I assume that most would not implement either of those, but instead want to have the entire document, because why else would you be parsing the thing anyway?
In my JSON on Fire[^] article I present several cases where you only need a little data from a much larger dataset. Consider querying any MongoDB repository online. You don't need to parse everything you get back, because the data they return is very coarse-grained/chunky. You don't get fine-grained query results with it. You get kilobytes of data at least, and on an IoT device you may simply not have the room. The show information for Burn Notice from tmdb.com is almost 200kB. I know that because I'm using it as a test data set.
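To make that concrete, here's roughly the shape of a streaming pull over a document like that. This is a minimal sketch using Newtonsoft's JsonTextReader rather than my lib, and the file name and field are hypothetical:

```csharp
using System;
using System.IO;
using Newtonsoft.Json;

class PartialExtract
{
    static void Main()
    {
        // Stream the ~200kB show document instead of loading it all.
        // "burn_notice.json" and the "name" field are hypothetical.
        using var reader = new JsonTextReader(new StreamReader("burn_notice.json"));
        while (reader.Read())
        {
            if (reader.TokenType == JsonToken.PropertyName &&
                reader.Depth == 1 &&                 // top-level fields only
                (string)reader.Value == "name")
            {
                reader.Read();                       // advance to the value
                Console.WriteLine($"name = {reader.Value}");
                return;                              // done - skip the rest
            }
        }
    }
}
```

Only the tokens you actually ask for ever become objects; everything else streams past.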
Real programmers use butterflies
My needs are simple -- some other team sends us some number of JSON files and I need to load the data into SQL Server. In most cases, each JSON file contains one "table" of data, so loading it into a table is simple. At most I may want to filter out large binary values which are of no use to us.

And we trust the sender to have provided well-formed JSON -- if it isn't, we find out real fast and throw it back to them to fix. Well-formedness is one of those things you shouldn't be concerned about once you get your application to PROD.

At this time, I'm consuming two sets of files from third-party products which those products also have to be able to read -- they're the configuration files for those products. The only untrustworthy set of data I consume is one which is generated by a utility I wrote, so if it's broken it's my fault and I can fix it.
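For that one-file-one-table case, the whole pipeline can be sketched like this, assuming the built-in System.Text.Json is available; the flat array layout, string-typed columns, and all the names are assumptions:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Text.Json;

static class JsonToSql
{
    // Hypothetical: one file = one flat JSON array of uniform objects.
    public static void LoadFile(string path, string table, string connStr)
    {
        using var doc = JsonDocument.Parse(File.ReadAllText(path));
        var dt = new DataTable();
        foreach (var row in doc.RootElement.EnumerateArray())
        {
            if (dt.Columns.Count == 0)                   // columns from first row
                foreach (var p in row.EnumerateObject())
                    dt.Columns.Add(p.Name, typeof(string));
            var dr = dt.NewRow();
            foreach (var p in row.EnumerateObject())
                dr[p.Name] = p.Value.ToString();         // everything as text
            dt.Rows.Add(dr);
        }
        using var bulk = new SqlBulkCopy(connStr) { DestinationTableName = table };
        bulk.WriteToServer(dt);                          // one shot; fine for small files
    }
}
```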
-
We use Newtonsoft with all of our Web APIs, etc., and have never had any noticeable issues with performance. I guess if you are parsing big JSON files then perhaps that's an issue, but we don't do that. So....
If you ever find yourself bulk loading JSON dumps into a database, you can do better. Hell, you could use my tiny C# JSON lib, which is around here on CP somewhere.
Real programmers use butterflies
-
My needs are simple -- some other team sends us some number of JSON files and I need to load the data into SQL Server. In most cases, each JSON file contains one "table" of data, so loading it into a table is simple. At most I may want to filter out large binary values which are of no use to us.

And we trust the sender to have provided well-formed JSON -- if it isn't, we find out real fast and throw it back to them to fix. Well-formedness is one of those things you shouldn't be concerned about once you get your application to PROD.

At this time, I'm consuming two sets of files from third-party products which those products also have to be able to read -- they're the configuration files for those products. The only untrustworthy set of data I consume is one which is generated by a utility I wrote, so if it's broken it's my fault and I can fix it.
This lib I wrote was originally in C#, and I ported it. I originally designed it (the C# version) to do bulk loads of data - basically exactly what you're doing, but perhaps a lot more of it.
Real programmers use butterflies
-
I should add that I originally wrote it in C# and then ported it to C++. Why did I write it in C#? Because I didn't know about Newtonsoft's JSON library on the day I wrote it, and when I did find out about it, it turned out Newtonsoft's pull parser sucks and is slow. I'm glad I did. People are religious about never reinventing the wheel, but it's not always such a bad thing - it depends on the wheel.
Real programmers use butterflies
If people didn't constantly reinvent the wheel, we'd still be using wooden wheels several feet in diameter. :laugh: Use the right wheel for the right job. Don't try to adapt to an existing wheel if it just doesn't do the job.
-
If people didn't constantly reinvent the wheel, we'd still be using wooden wheels several feet in diameter. :laugh: Use the right wheel for the right job. Don't try to adapt to an existing wheel if it just doesn't do the job.
Agreed!
Real programmers use butterflies
-
I should add that I originally wrote it in C# and then ported it to C++. Why did I write it in C#? Because I didn't know about Newtonsoft's JSON library on the day I wrote it, and when I did find out about it, it turned out Newtonsoft's pull parser sucks and is slow. I'm glad I did. People are religious about never reinventing the wheel, but it's not always such a bad thing - it depends on the wheel.
Real programmers use butterflies
honey the codewitch wrote:
People are religious about never reinventing the wheel, but it's not always such a bad thing - it depends on the wheel.
:thumbsup::thumbsup::thumbsup:
M.D.V. ;) If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it? Help me to understand what I'm saying, and I'll explain it better to you. Rating helpful answers is nice, but saying thanks can be even nicer.
-
I barely ever use JSON and have never written (nor am I likely to) a parser, but what I do know is that I can't answer a question like this without knowing the context.
- Is it more important to be fast 100% of the time and permit errors 1% of the time, or to be 100% reliable at the cost of a few percentage points in speed? (i.e. how critical is the data, and how critical is speed? This is a pretty common trade-off.)
- Is the data coming from another system I/we have written, or a trusted partner, or from Joe Public? Is the data machine-generated or hand-crafted?
-
If you ever find yourself bulk loading JSON dumps into a database, you can do better. Hell, you could use my tiny C# JSON lib, which is around here on CP somewhere.
Real programmers use butterflies
Tell me when you make a parser for XML. I'm loading 80 GB into a database every week, and XML (or rather the built-in tools) seriously isn't made for that.
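The streaming XmlReader is about the only built-in thing that survives at that scale, and even then it's all hand-rolled plumbing. A rough sketch of the pattern, with the file and element names hypothetical:

```csharp
using System;
using System.Xml;

class StreamXml
{
    static void Main()
    {
        // Stream the file without ever building a DOM.
        using var reader = XmlReader.Create("dump.xml");
        long count = 0;
        reader.MoveToContent();
        while (!reader.EOF)
        {
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
            {
                string record = reader.ReadOuterXml(); // consumes the element and
                count++;                               // advances, so no Read() here
                // ...transform `record` and batch it to the database
            }
            else
            {
                reader.Read();
            }
        }
        Console.WriteLine($"{count} records");
    }
}
```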
Wrong is evil and must be defeated. - Jeff Ello Never stop dreaming - Freddie Kruger
-
Tell me when you make a parser for XML. I'm loading 80 GB into a database every week, and XML (or rather the built-in tools) seriously isn't made for that.
Wrong is evil and must be defeated. - Jeff Ello Never stop dreaming - Freddie Kruger
Will do!
Real programmers use butterflies
-
Fortunately I'm not allowed to use third-party add-ins. I am awaiting access to the JSON support built into .NET 4.7 and newer to see whether or not it can do what I require.
This one? What's next for System.Text.Json? | .NET Blog[^]
TTFN - Kent
-
This one? What's next for System.Text.Json? | .NET Blog[^]
TTFN - Kent
I think so, but until I see it, I can't tell.
-
Let's say you wanted to write a fast JSON parser. You could do a pull parser that does well-formedness checking. Or you could do one that's significantly faster but skips well-formedness checking during search/skip operations, which can lead to delayed error reporting or missed errors. You can't make an option to choose one or the other, but in the latter case you can avoid using the skip/search functions that do this. Which do you do? Are you a stomp-the-pedal type or a defensive driver? (Seriously, this is more about getting a read of the room than anything - I want a feel for priorities.)
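To make the trade-off concrete, here's a toy sketch of the two skip strategies - not my actual lib, just the shape of the idea:

```csharp
using System;

static class SkipStrategies
{
    // Stomp-the-pedal: fly past a string value without validating escapes.
    // On truncated input this dies later with an unhelpful
    // IndexOutOfRangeException instead of a proper parse error.
    public static int SkipStringFast(string json, int i)
    {
        for (i++; json[i] != '"'; i++)      // assumes json[i] started at '"'
            if (json[i] == '\\') i++;       // blindly trust the escape
        return i + 1;
    }

    // Defensive driver: same skip, but malformed escapes and truncation
    // are reported immediately, at the point of the error.
    public static int SkipStringChecked(string json, int i)
    {
        for (i++; ; i++)
        {
            if (i >= json.Length) throw new FormatException("unterminated string");
            if (json[i] == '"') return i + 1;
            if (json[i] == '\\')
            {
                i++;
                if (i >= json.Length || "\"\\/bfnrtu".IndexOf(json[i]) < 0)
                    throw new FormatException($"bad escape at {i}");
            }
        }
    }
}
```

The fast version does strictly less work per character, which is exactly where the speed comes from - and exactly where the missed errors come from.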
Real programmers use butterflies
I'm pretty trusting. When someone says they're going to give me JSON, I assume they'll give me JSON. So I'd go for it and worry about validation when the party that should be giving me JSON isn't giving me JSON. So far that has worked pretty well. In practice, these kinds of things rarely break. You either get JSON or no JSON at all, but rarely (if ever) badly formed JSON.
Best, Sander Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
-
I'm pretty trusting. When someone says they're going to give me JSON, I assume they'll give me JSON. So I'd go for it and worry about validation when the party that should be giving me JSON isn't giving me JSON. So far that has worked pretty well. In practice, these kinds of things rarely break. You either get JSON or no JSON at all, but rarely (if ever) badly formed JSON.
Best, Sander Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
I agree! :-D
Real programmers use butterflies
-
Tell me when you make a parser for XML. I'm loading 80 GB into a database every week, and XML (or rather the built-in tools) seriously isn't made for that.
Wrong is evil and must be defeated. - Jeff Ello Never stop dreaming - Freddie Kruger
I load 51GB of XML with what SSIS has built in; it takes about twelve minutes. I load 5GB of JSON with my own parser; it takes about eight minutes. I load 80GB of JSON with my own parser -- this dataset has tripled in size over the last month -- and it's now taking about five hours. These datasets are in no way comparable; I'm just comparing the size-on-disk of the files.

I will, of course, accept that my JSON loader is a likely bottleneck, but I have nothing else to compare it against. It seemed "good enough" two years ago when I had a year-end deadline to meet. I may also be able to configure my JSON loader to use BulkCopy, as I do for the 5GB dataset, but I seem to recall that the data wasn't suited to it. At any rate, I'm in need of an alternative, but it can't be third-party. Next year will be different.
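If I do revisit BulkCopy, the flush side would look roughly like this - a sketch only, with the batch size and names as placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

static class BulkFlush
{
    // Hypothetical flush step: push one batch of parsed rows to the server
    // so an 80GB load never holds more than one batch in memory.
    public static void Flush(DataTable batch, string connStr, string table)
    {
        using (var bulk = new SqlBulkCopy(connStr))
        {
            bulk.DestinationTableName = table;
            bulk.BatchSize = 10000;
            bulk.BulkCopyTimeout = 0;      // no timeout on a long load
            bulk.EnableStreaming = true;
            bulk.WriteToServer(batch);
        }
        batch.Clear();                     // reuse the DataTable for the next batch
    }
}
```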
-
So what I'm hearing is: if it wasn't well formed, you'd like to error out as soon as you catch it, even if it meant a slower parse.
Real programmers use butterflies
Yes.
Latest Articles:
Thread Safe Quantized Temporal Frame Ring Buffer
-
Why are you not using Newtonsoft? Not sure why you are reinventing the wheel here. :confused: NuGet Gallery | Newtonsoft.Json 12.0.3[^]
Some people have to work on air-gapped networks, where you cannot copy anything onto the network. It comes configured with a couple of approved things like the operating system, and whatever comes bundled with, say, Visual Studio 2015, and that's it. Nothing else gets in. With good reason, too - e.g. supply-chain poisoning like the recent SolarWinds incident.
-
Let's say you wanted to write a fast JSON parser. You could do a pull parser that does well-formedness checking. Or you could do one that's significantly faster but skips well-formedness checking during search/skip operations, which can lead to delayed error reporting or missed errors. You can't make an option to choose one or the other, but in the latter case you can avoid using the skip/search functions that do this. Which do you do? Are you a stomp-the-pedal type or a defensive driver? (Seriously, this is more about getting a read of the room than anything - I want a feel for priorities.)
Real programmers use butterflies
Since JSON is such a well-defined construct, simple parsers are very easy to write. I have a few. The nub, of course, is in 'a few'. It really comes down to the usage case. If you know the data, a quick regex parser will do. Regex parsers are fundamentally flawed, though, and tend to fail on large data sets containing mixed characters (locale is a pain). So, well-formedness is largely there already. Two-dimensional arrays only require a few lines of code; multi-dimensional arrays just a few more. Large unknown datasets across languages? Use someone else's library and save yourself time.
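As a taste of the 'if you know the data' end of that spectrum, a two-dimensional numeric array really is only a few lines. A quick-and-dirty sketch with zero validation:

```csharp
using System;
using System.Linq;

class TwoDee
{
    // Known-shape numeric 2D JSON array, no validation at all.
    // Note double.Parse is culture-sensitive - the locale pain
    // mentioned above - so pin the culture in real code.
    static double[][] Parse2D(string json) =>
        json.Trim().TrimStart('[').TrimEnd(']')
            .Split(new[] { "],[" }, StringSplitOptions.None)
            .Select(row => row.Split(',')
                              .Select(double.Parse)
                              .ToArray())
            .ToArray();

    static void Main()
    {
        var m = Parse2D("[[1,2],[3,4.5]]");
        Console.WriteLine(m[1][1]);  // 4.5
    }
}
```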
-
Let's say you wanted to write a fast JSON parser. You could do a pull parser that does well-formedness checking. Or you could do one that's significantly faster but skips well-formedness checking during search/skip operations, which can lead to delayed error reporting or missed errors. You can't make an option to choose one or the other, but in the latter case you can avoid using the skip/search functions that do this. Which do you do? Are you a stomp-the-pedal type or a defensive driver? (Seriously, this is more about getting a read of the room than anything - I want a feel for priorities.)
Real programmers use butterflies
Quote:
Or you could do one that's significantly faster but skips well-formedness checking during search/skip operations, which can lead to delayed error reporting or missed errors
As with all input to your program, you validate on reception. All the code that uses that input after that can then assume it's valid, and you can choose whatever shortcuts you want on that assumption. It doesn't matter if the input is JSON, XML, key/value pairs from .ini files, or tokens; you only validate it once, on reception.
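As a sketch of that discipline - the strict parse here is just a stand-in (System.Text.Json assumed), any validator will do:

```csharp
using System;
using System.Text.Json;

static class Boundary
{
    // Validate once at the boundary; everything downstream may assume
    // well-formed input and take whatever unchecked shortcuts it likes.
    public static string Receive(string raw)
    {
        using (JsonDocument.Parse(raw)) { } // throws JsonException if malformed
        return raw;                         // certified well-formed from here on
    }

    static void Main()
    {
        string trusted = Receive("{\"id\": 42}");
        // ...hand `trusted` to the fast skip/search code, no re-checking
        Console.WriteLine(trusted);
    }
}
```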
-
Fortunately I'm not allowed to use third-party add-ins. I am awaiting access to the JSON support built into .NET 4.7 and newer to see whether or not it can do what I require.