Code Project
I'd like to ask a question about JSON to get a feel for priorities of coders here

The Lounge
Tags: question, json, help
54 Posts 24 Posters 0 Views 1 Watching
This topic has been deleted. Only users with topic management privileges can see it.
  • S Slacker007

Why are you not using Newtonsoft? Not sure why you are reinventing the wheel here. :confused: NuGet Gallery | Newtonsoft.Json 12.0.3

John Stewien (#32) wrote:

Some people have to work on air-gapped networks, where you cannot copy anything onto the network. The network comes configured with a couple of approved things - the operating system, and whatever comes bundled with, say, Visual Studio 2015 - and that's it. Nothing else gets in. With good reason, too: see supply-chain poisoning like the recent SolarWinds incident.

    • H honey the codewitch

Let's say you wanted to write a fast JSON parser. You could do a pull parser that does well-formedness checking. Or you could do one that's significantly faster but skips well-formedness checking during search/skip operations, which can lead to later error reporting or missed errors. You can't make an option to choose one or the other, but you can avoid using the skip/search functions that do this in the latter case. Which do you do? Are you a stomp-the-pedal type or a defensive driver? (Seriously, this is more about getting a read of the room than anything - I want a feel for priorities.)

      Real programmers use butterflies
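To make the trade-off concrete, here is a minimal, hypothetical sketch (not taken from any library mentioned in this thread) of a skip function with both behaviours: checked mode verifies that bracket kinds actually pair up, while unchecked mode only counts nesting depth, so a mismatched document like `[1,2}` sails through silently.

```cpp
#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: skip one JSON value starting at s[i].
// checked == true  -> bracket kinds must pair up ({ with }, [ with ])
// checked == false -> only nesting depth is counted (faster), so a
//                     mismatched document like "[1,2}" is accepted
bool skip_value(const std::string& s, std::size_t& i, bool checked) {
    std::vector<char> opens;   // open brackets seen (checked mode only)
    int depth = 0;
    bool in_str = false;
    for (; i < s.size(); ++i) {
        char c = s[i];
        if (in_str) {
            if (c == '\\') ++i;                 // skip escaped character
            else if (c == '"') in_str = false;
        } else if (c == '"') {
            in_str = true;
        } else if (c == '{' || c == '[') {
            ++depth;
            if (checked) opens.push_back(c);
        } else if (c == '}' || c == ']') {
            if (checked) {
                char want = (c == '}') ? '{' : '[';
                if (opens.empty() || opens.back() != want) return false;
                opens.pop_back();
            }
            if (--depth <= 0) { ++i; return depth == 0; }
        } else if (depth == 0 &&
                   (c == ',' || std::isspace(static_cast<unsigned char>(c)))) {
            return true;                        // end of a bare scalar
        }
    }
    return depth == 0 && !in_str;               // truncated input fails
}
```

The unchecked path avoids the per-closer stack lookup, which is exactly the kind of saving the question is about - at the cost of accepting `[1,2}` as a complete value.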

Alexander Munro (#33) wrote:

Since JSON is such a well-defined construct, simple parsers are very easy to write. I have a few. The nub is, of course, in 'a few'. It really falls into the case-usage arena. If you know the data, a quick regex parser will do. Regex parsers are fundamentally flawed, though, and tend to fail on large data sets containing mixed characters (locale is a pain). So, well-formedness is largely there already. Two-dimensional arrays only require a few lines of code; multi-dimensional arrays just a few more. Large unknown datasets across languages? Use someone else's library and save yourself time.
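A quick illustration of that regex caveat (the function and pattern here are made up for this post): a non-recursive pattern can pull a flat object out of known-shape JSON, but it has no way to express nesting, so on a nested document it latches onto the innermost object instead of the outer one.

```cpp
#include <regex>
#include <string>

// Hypothetical demo: find the "first JSON object" with a regex.
// [^{}]* cannot match a brace, so the pattern only ever matches a
// flat object -- on nested input it returns the innermost one.
std::string first_object(const std::string& s) {
    static const std::regex obj(R"(\{[^{}]*\})");
    std::smatch m;
    return std::regex_search(s, m, obj) ? m.str() : std::string();
}
```

On `{"a":1}` it returns the whole document, but on `{"a":{"b":2}}` it returns `{"b":2}` - a quiet wrong answer rather than an error, which is what makes regex parsing risky on unknown data.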

• H honey the codewitch

User 13269747 (#34) wrote:

        Quote:

        Or you could do one that's significantly faster but skips well formedness checking during search/skip operations, which can lead to later error reporting or missed errors

As with all input to your program, you validate on reception. All the other code that uses that input can then assume valid input, and you can choose whatever shortcuts you want on the assumption of valid input. It doesn't matter whether the input is JSON, XML, key/value pairs from .ini files, or tokens - you only validate it once, on reception.
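A sketch of that validate-on-reception split (the helper names are hypothetical, and the "validation" here is only structural): one strict pass when the document arrives, after which the hot-path code can scan without re-checking anything.

```cpp
#include <cstddef>
#include <string>

// Strict pass, run once on reception: strings honoured, bracket
// depth never negative, depth back to zero at the end. (A real
// validator checks far more than this.)
bool validate(const std::string& s) {
    int depth = 0;
    bool in_str = false;
    for (std::size_t i = 0; i < s.size(); ++i) {
        char c = s[i];
        if (in_str) { if (c == '\\') ++i; else if (c == '"') in_str = false; }
        else if (c == '"') in_str = true;
        else if (c == '{' || c == '[') ++depth;
        else if (c == '}' || c == ']') { if (--depth < 0) return false; }
    }
    return depth == 0 && !in_str;
}

// Fast pass: counts elements of a top-level array, trusting the
// input completely -- it would loop happily over garbage, so call
// validate() once on reception before ever using it.
// (Assumes a non-empty array; a real scanner would handle [] too.)
int count_elements(const std::string& s) {
    int depth = 0, count = 0;
    bool in_str = false;
    for (std::size_t i = 0; i < s.size(); ++i) {
        char c = s[i];
        if (in_str) { if (c == '\\') ++i; else if (c == '"') in_str = false; }
        else if (c == '"') in_str = true;
        else if (c == '{' || c == '[') ++depth;
        else if (c == '}' || c == ']') --depth;
        else if (c == ',' && depth == 1) ++count;
    }
    return count + 1;
}
```

The intended use: call validate() once when the payload arrives; if it passes, every later call to count_elements() and its siblings can skip the checks.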

        • P PIEBALDconsult

Fortunately I'm not allowed to use third-party add-ins. I am awaiting access to the JSON support built into .NET 4.7 and newer to see whether or not it can do what I require.

Reelix (#35) wrote:

If you're allowed to upgrade to .NET 5, they effectively implemented Newtonsoft's library natively, with pretty much identical syntax. It works really well, and you're not using third-party add-ins.

          -= Reelix =-

• H honey the codewitch

Mehdi Gholam (#36) wrote:

The spec is pretty clear, so correctness and errors are well defined. Being fast is another matter; see fastJSON - Smallest, Fastest Polymorphic JSON Serializer and GitHub - simdjson/simdjson: Parsing gigabytes of JSON per second.

            Exception up = new Exception("Something is really wrong."); throw up;

• H honey the codewitch

obeobe (#37) wrote:

A key question is what this parser will be used for. Is it for a hobby project or a production system? What would be the benefits of the higher performance? Will it be perceivable for human users? Will it save money by requiring less hardware? How much money? Is there an impact on the development effort? What is the impact on the resulting code in terms of maintainability? What would be the cost of choosing one option now and updating to the other option later? (Is it a full rewrite? Would it be simpler to go from A to B, or from B to A? Etc.) What would be the cost of implementing both options and letting the user (well, caller) decide which one to use? There are many things to factor into this decision. Maybe different developers will give different weights to these considerations, and inexperienced developers will overlook some or all of them, but I believe that for most developers the answer would (and should) be "it depends on the details of the situation".

              • H honey the codewitch

First of all, this is a hypothetical. Second, hosting the .NET CLI in C++ just to use a .NET package from C++ to parse a little JSON seems heavy-handed and horribly inefficient. Plus, C# won't run on Arduinos.

                Real programmers use butterflies

Stuart Dootson (#38) wrote:

                honey the codewitch wrote:

                hosting the .NET CLI in C++ just to use a .NET package from C++ to parse a little JSON seems heavy handed and horribly inefficient.

                If you're using C++, why not use a C++ JSON library such as [Modern JSON](https://github.com/nlohmann/json), [RapidJSON](https://rapidjson.org/) or [simdjson](https://simdjson.org/)? Or if you do develop your own library, you might be interested to look at [simdjson's 'On Demand' parsing approach...](https://github.com/simdjson/simdjson/blob/master/doc/ondemand.md)

                Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p

• S Stuart Dootson

honey the codewitch (#39) wrote:

They use too much memory and can't target IoT. Of them, simdjson shows the most potential, but it still can't do an episodes query off of a tmdb.com show data dump in about 71 bytes.

                  Real programmers use butterflies

• H honey the codewitch

User 14060113 (#40) wrote:

                    Stability over performance!

• H honey the codewitch

MGuerrieri (#41) wrote:

                      I take a function-first approach. You won't be able to parse the JSON if it's not well formed, so I would do that check first. If performance is poor, then I'd do a trace to find the bottlenecks and address them if possible. I wouldn't want to spend my time unnecessarily tracking down import errors.

• M MGuerrieri

honey the codewitch (#42) wrote:

I look at it this way - and keep in mind this is purely hypothetical: let's say you're bulk uploading parts of some JSON out of a huge dataset. Almost always that JSON is machine generated, because who writes huge JSON by hand? Scanning through it quickly is important. If at some point you get a bad data dump, might it be better to roll back that update and then run a validator over the bad document that one time out of 1,000 when it fails, rather than paying for that validation the other 999 times?
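A sketch of that rarely-pay-for-validation flow, with stand-in functions (none of this is from a real loader): the fast path does only trivial checking, and the expensive diagnostic pass runs solely when a load fails - so the 999 good dumps stay cheap and the one bad dump still gets a useful error.

```cpp
#include <cstddef>
#include <string>

struct LoadResult { bool ok; std::string error; };

// Stand-in for the fast bulk loader: here it only requires the
// document to open and close with the same kind of bracket.
bool fast_load(const std::string& doc) {
    if (doc.empty()) return false;
    char open = doc.front(), close = doc.back();
    return (open == '[' && close == ']') || (open == '{' && close == '}');
}

// Stand-in for the slow validator that produces a real diagnostic.
std::string diagnose(const std::string& doc) {
    int depth = 0;
    for (std::size_t i = 0; i < doc.size(); ++i) {
        char c = doc[i];
        if (c == '{' || c == '[') ++depth;
        else if (c == '}' || c == ']') --depth;
        if (depth < 0) return "stray closer at offset " + std::to_string(i);
    }
    if (depth != 0) return "unterminated document";
    return "no structural error found";
}

LoadResult load(const std::string& doc) {
    if (fast_load(doc)) return {true, ""};
    // Roll back the update here, then spend the time on diagnostics.
    return {false, diagnose(doc)};
}
```

The key design point: diagnose() can be arbitrarily slow and thorough, because its cost is only paid on the failure path.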

                        Real programmers use butterflies

• H honey the codewitch

Mark Meuer (#43) wrote:

                          As a general rule, I try to follow these steps in order: 1. Make the program run right. 2. Make the program run right. 3. Make the program run right. 4. If I really need to, make it faster.

• M Mark Meuer

honey the codewitch (#44) wrote:

That works to a point, but certain design decisions for performance must be made up front - for example, deciding to use a pull parser as the primary means of navigation rather than an in-memory tree.

                            Real programmers use butterflies

• R Reelix

PIEBALDconsult (#45) wrote:

Yup, looking forward to it. Not holding my breath. It doesn't help that my boss read a blog that said Microsoft is abandoning .NET ( :sigh: ). Middle managers will believe anything if it's in a blog. I countered with a link to Microsoft's roadmap for the future of .NET, but the damage was already done.

• M Mark Meuer

PIEBALDconsult (#46) wrote:

                                4a. You always need to make it faster.

• H honey the codewitch

davecasdf (#47) wrote:

Case 1: input good, output good - answer faster. Case 2: input bad, error message, but delayed (is it clear?). Case 3: input bad, program sticks its tongue out and dies. Case 4: input bad, quietly wrong result. So: who has to deal with case 3, and can case 4 happen (with malicious input?). (They ARE out to get you.)

                                  • P PIEBALDconsult

                                    I load 51GB of XML with what SSIS has built-in. It takes about twelve minutes. I load 5GB of JSON with my own parser. It takes about eight minutes. I load 80GB of JSON with my own parser -- this dataset has tripled in size over the last month. It's now taking about five hours. These datasets are in no way comparable, I'm just comparing the size-on-disk of the files. I will, of course, accept that my JSON loader is a likely bottleneck, but I have nothing else to compare it against. It seemed "good enough" two years ago when I had a year-end deadline to meet. I may also be able to configure my JSON Loader to use BulkCopy, as I do for the 5GB dataset, but I seem to recall that the data wasn't suited to it. At any rate, I'm in need of an alternative, but it can't be third-party. Next year will be different.

Jorgen Andersson (#48) wrote:

                                    PIEBALDconsult wrote:

                                    I load 51GB of XML with what SSIS has built-in. It takes about twelve minutes.

How much memory do you have? Early tests of mine ran out of memory - or have I done something wrong? Mine takes an hour for 85GB of XML, but that uses BulkCopy. Early versions without BulkCopy indicated that it would indeed take 5-6 hours.

                                    Wrong is evil and must be defeated. - Jeff Ello Never stop dreaming - Freddie Kruger

• P PIEBALDconsult

honey the codewitch (#49) wrote:

If you can run C++ binaries on the server, this might give you better performance, especially if you're only loading part of the data: JSON on Fire: JSON (C++) is a Blazing JSON Library that can Run on Low Memory Devices

                                      Real programmers use butterflies

• J Jorgen Andersson

PIEBALDconsult (#50) wrote:

I don't know what SSIS does internally, but I doubt it loads the entire XML document into memory all at once. I don't know how much RAM or how many processors the servers have. I ran the XML load on my laptop (16GB of RAM), and usage increased by only four percent.

• P PIEBALDconsult

Jorgen Andersson (#51) wrote:

OK, then I had some other problem. I might take another look at SSIS, then.

                                          Wrong is evil and must be defeated. - Jeff Ello Never stop dreaming - Freddie Kruger
