
Who here knows what a pull parser or pull parsing is?

The Lounge
Tags: c#, xml, json, question
  • J Jorgen Andersson

    I've learned quite a lot from your musings in the lounge, but I've only skimmed through your technical articles on parsing, as they are way too specific for my needs. Which means my knowledge of parsing is still fairly superficial, so any reasonably easy-to-read breakdown of the principles (not just push vs pull) would be appreciated.

    Wrong is evil and must be defeated. - Jeff Ello Never stop dreaming - Freddie Kruger

    honey the codewitch
    #10

    Thanks. Here's a comment I just posted to RickZeeland, which should hopefully serve as a quick explanation. I included code in it just so you could see it in all its ugliness. :) The Lounge - Pull Parsing[^]

    Real programmers use butterflies

    • K Keith Barrow

      I'm pretty much with Randor on this: you should use it. People should have the wherewithal to look it up, or if it's an article aimed at beginners, a brief description (plus more in-depth links) of the terms might be appropriate. It's a technical article, so technical language is fine, and you'll help spread the terms. To answer your direct question - no, I hadn't heard of push/pull parsers, but I mostly worked it out from the context.

      KeithBarrow.net[^] - It might not be very good, but at least it is free!

      honey the codewitch
      #11

      Thank you. That's helpful.

      Real programmers use butterflies

      • H honey the codewitch

        It's usually used in reference to XML parsers, but it's a generic parsing model that can apply to parsing anything. Contrast .NET's XmlTextReader (a pull parser) with a SAX XML parser (a push parser). The reason I ask is that I use the term a lot in my articles lately, and I'm trying to figure out if it might be worth it to write an article about the concept. I don't want to waste time on it if it's something most people have heard of before. It's hard for me to know because I deep-dived into parsing for a year and everything is familiar to me now.
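
        A minimal sketch of the contrast (the pull half uses the real System.Xml API; the push half is a hypothetical SAX-style handler interface, since .NET ships no SAX parser):

        using System;
        using System.IO;
        using System.Xml;

        class PullVsPush
        {
            static void Main()
            {
                // Pull: the caller drives the parse; each Read() yields one node.
                using (var reader = XmlReader.Create(new StringReader("<a><b>hi</b></a>")))
                {
                    while (reader.Read())
                        if (reader.NodeType == XmlNodeType.Text)
                            Console.WriteLine("pulled: " + reader.Value);
                }
            }
        }

        // Push (SAX style): the parser drives, firing callbacks as it scans.
        // Hypothetical interface -- the BCL has no built-in SAX parser.
        interface ISaxHandler
        {
            void StartElement(string name);
            void Characters(string text);
            void EndElement(string name);
        }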

        Real programmers use butterflies

        PIEBALDconsult
        #12

        Write a Wikipedia article. That'll make it true. I can't imagine any kind of reader/parser which doesn't tokenize by pulling.

        • P PIEBALDconsult


          honey the codewitch
          #13

          That's not exactly what a pull parser is. A pull parser parses one small step at a time before returning control to the caller.

          while(reader.read()) {...}

          You call it like that, and inside the loop you check the nodeType() and the value() and such to get information about the node at the current location. Microsoft built one for XML in .NET called the XmlReader - you've probably used a derivative of it before, if not directly, then indirectly by way of another XML facility like XPath or the DOM. Newtonsoft has one for JSON, but I don't like it, personally.
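
          For instance, a self-contained sketch of that loop against .NET's XmlReader (these are the real System.Xml names, Read()/NodeType/Value, rather than the lowercase style above):

          using System;
          using System.IO;
          using System.Xml;

          class PullLoop
          {
              static void Main()
              {
                  using var reader = XmlReader.Create(
                      new StringReader("<catalog><book>Parsing 101</book></catalog>"));

                  // The caller owns the loop; each Read() advances exactly one node.
                  while (reader.Read())
                  {
                      switch (reader.NodeType)
                      {
                          case XmlNodeType.Element: Console.WriteLine("element: " + reader.Name); break;
                          case XmlNodeType.Text: Console.WriteLine("text: " + reader.Value); break;
                          case XmlNodeType.EndElement: Console.WriteLine("end: " + reader.Name); break;
                      }
                  }
              }
          }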

          Real programmers use butterflies

          • H honey the codewitch


            PIEBALDconsult
            #14

            Yeah, I do that, but it's at a higher level. So -- for instance -- when my loader finds an array of Widgets, it iterates all the Widgets in that array, loading each into the database.
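
            That pattern might look roughly like this with Newtonsoft's JsonTextReader (a pull parser); the Widget type and the JSON shape are invented for illustration:

            using System;
            using System.IO;
            using Newtonsoft.Json;

            class Widget { public string Name; }

            class Loader
            {
                static void Main()
                {
                    var json = "{\"Widgets\":[{\"Name\":\"a\"},{\"Name\":\"b\"}]}";
                    using var reader = new JsonTextReader(new StringReader(json));
                    var serializer = new JsonSerializer();

                    while (reader.Read())
                    {
                        // Iterate tokens until we hit the array we care about.
                        if (reader.TokenType == JsonToken.PropertyName &&
                            (string)reader.Value == "Widgets")
                        {
                            reader.Read(); // move onto StartArray
                            while (reader.Read() && reader.TokenType != JsonToken.EndArray)
                            {
                                var w = serializer.Deserialize<Widget>(reader);
                                Console.WriteLine("load into db: " + w.Name); // stand-in for the database insert
                            }
                        }
                    }
                }
            }

            Each Widget is materialized only while its element of the array is being read, so the file streams through in a single pass.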

            • P PIEBALDconsult


              honey the codewitch
              #15

              Yeah, I build that kind of stuff on top of the pull parser. In my Diet JSON and a Coke article I go into that - constructing queries out of navigation and data-extraction elements. You basically build queries and then feed them to the reader, and they drive the reader for you (in fact, it's more efficient than reading by calling read() yourself).
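
              The article's actual API isn't reproduced here, but the shape of the idea might look like this hypothetical sketch, where a prebuilt list of query steps drives the reader instead of a hand-written read() loop:

              using System;
              using System.Collections.Generic;
              using System.IO;
              using Newtonsoft.Json;

              // Hypothetical query elements: navigate to a field, or extract the current value.
              abstract class QueryStep { public abstract void Run(JsonTextReader r, List<object> results); }

              class ToField : QueryStep
              {
                  readonly string _name;
                  public ToField(string name) { _name = name; }
                  public override void Run(JsonTextReader r, List<object> results)
                  {
                      while (r.Read())
                          if (r.TokenType == JsonToken.PropertyName && (string)r.Value == _name)
                              return; // positioned on the field; the next token is its value
                  }
              }

              class Extract : QueryStep
              {
                  public override void Run(JsonTextReader r, List<object> results)
                  {
                      r.Read(); // advance to the field's value
                      results.Add(r.Value);
                  }
              }

              class Demo
              {
                  static void Main()
                  {
                      var query = new QueryStep[] { new ToField("name"), new Extract() };
                      using var r = new JsonTextReader(new StringReader("{\"id\":1,\"name\":\"widget\"}"));
                      var results = new List<object>();
                      foreach (var step in query) step.Run(r, results); // the query drives the reader
                      Console.WriteLine(results[0]); // widget
                  }
              }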

              Real programmers use butterflies

              • H honey the codewitch


                PIEBALDconsult
                #16

                I don't query or search, I simply iterate tokens until I reach the start of an array of objects I'm interested in. Then I iterate those objects. That way, I read each file only once. For the most part, each of the files I'm reading is just one array of objects and I load the whole thing into one database table. Only the most recent files I'm working with contain multiple arrays containing different types of objects -- and each type of object gets thrown at a different database table.

                • P PIEBALDconsult


                  honey the codewitch
                  #17

                  I made my parser with selective bulk loading of machine-generated JSON in mind, which means that when you search, it does partial parsing and no normalization, allowing it to find what you're after FAST at the expense of some of the well-formedness checking (but like I said, it's geared for machine-generated dumps). Not that it matters in a .NET environment, but my parser also will not use memory to hold anything you didn't explicitly request, which means you only need enough bytes to scan the file and then store your results. I often do queries with about 256 bytes of RAM to work with. It doesn't even compare field names or undecorate strings in memory - it does it right off the input source (usually a disk, a socket, or a string). My latest codebase I'm working on will even allow you to stream value elements (field values and array members) so you can read massive BLOB values in the document. Gigabytes.
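
                  A rough, hypothetical illustration of that character-at-a-time comparison (not the parser's actual code): the target field name is checked directly against the input source, so a non-matching name is never materialized in memory:

                  using System;
                  using System.IO;

                  class StreamMatch
                  {
                      // Compare the next field name on the reader against `target`, one
                      // character at a time, without ever building a string for the name.
                      // Assumes the reader is positioned just past the name's opening quote.
                      static bool MatchesField(TextReader src, string target)
                      {
                          int i = 0, c;
                          while ((c = src.Read()) != -1 && c != '"')
                          {
                              if (i >= target.Length || c != target[i])
                              {
                                  // Mismatch: drain the rest of the name without storing it.
                                  while ((c = src.Read()) != -1 && c != '"') { }
                                  return false;
                              }
                              i++;
                          }
                          return i == target.Length;
                      }

                      static void Main()
                      {
                          var src = new StringReader("name\":\"widget\"");
                          Console.WriteLine(MatchesField(src, "name")); // True
                      }
                  }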

                  Real programmers use butterflies

                  • H honey the codewitch


                    PIEBALDconsult
                    #18

                    honey the codewitch wrote:

                    selective bulk loading

                    Yup.

                    honey the codewitch wrote:

                    machine generated JSON

                    Yup.

                    honey the codewitch wrote:

                    partial parsing

                    Supported.

                    honey the codewitch wrote:

                    no normalization

                    That's up to a higher level to determine.

                    honey the codewitch wrote:

                    at the expense of some of the well formedness checking

                    Basically none.

                    honey the codewitch wrote:

                    It doesn't even compare field names

                    Why would it? That's up to a higher level to determine.

                    honey the codewitch wrote:

                    undecorate strings in memory

                    Unquote? Unescape? I do that as late as possible, not until I know I want the value. Bear in mind also that the underlying reader/tokenizer (?) is not used only for JSON, but for CSV as well.

                    _____________________________________
                                   Loader
                    _____________________________________
                     JSONenumerator       CSVenumerator
                     JSONtokenizer        CSVtokenizer    <- unquoting and unescaping happen here, as appropriate
                    _____________________________________
                            STREAMtokenizer (base)
                                  TextReader
                    =====================================
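
                    A hypothetical C# outline of that layering (the class names come from the diagram; the bodies are guesses, not the real code):

                    using System.Collections.Generic;
                    using System.IO;

                    // Base: pulls raw characters from a TextReader.
                    abstract class StreamTokenizer
                    {
                        protected readonly TextReader Source;
                        protected StreamTokenizer(TextReader source) { Source = source; }
                        public abstract string NextToken(); // null at end of input
                    }

                    // Format-specific tokenizers; unquoting/unescaping would happen here.
                    class JsonTokenizer : StreamTokenizer
                    {
                        public JsonTokenizer(TextReader s) : base(s) { }
                        public override string NextToken() { /* scan one JSON token */ return null; }
                    }

                    class CsvTokenizer : StreamTokenizer
                    {
                        public CsvTokenizer(TextReader s) : base(s) { }
                        public override string NextToken() { /* scan one CSV field */ return null; }
                    }

                    // Enumerators turn tokens into name/value pairs for the loader to consume.
                    class JsonEnumerator
                    {
                        readonly JsonTokenizer _t;
                        public JsonEnumerator(JsonTokenizer t) { _t = t; }
                        public IEnumerable<(string Name, string Value)> Pairs() { yield break; }
                    }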
                    • P PIEBALDconsult

                      honey the codewitch
                      #19

                      Everything you're talking about - because of your abstraction, I can tell that you're loading strings into memory and operating on them in memory. Because your higher level determines these things, it's only operating on the strings after the fact. I am not. Now, for .NET that doesn't matter. For an 8kB Arduino it does. Point is, our parsers are fundamentally different in that respect. Also, when you said normalization is for a higher level to determine, you misunderstood me. I parse no numbers, no strings, nothing, unless you actually request it. That's what I mean by no normalization. Based on what you're telling me of your architecture, you are normalizing unconditionally at the parser level, I suspect - am almost certain. I do not parse every field or value I encounter. I skip over most of them; they never get turned into anything in value space. Literally, most of the time I'm advancing like this:

                      while (m_source.currentChar() != stop /* some context-sensitive stopping point */) { m_source.advance(); /* moves one char */ }

                      Real programmers use butterflies

                      • H honey the codewitch


                        PIEBALDconsult
                        #20

                        honey the codewitch wrote:

                        our parsers are fundamentally different

                        Yes. I suppose the biggest conceptual difference between ours is that I needed to write a fairly general loader utility which could read a "script" and perform the tasks, not write several purpose-built utilities -- one for each file to be loaded. The ability to have it support CSV (and XML) as well as JSON was an afterthought.

                        honey the codewitch wrote:

                        I parse no numbers, no strings, nothing, unless you actually request it.

                        Well, mine too. It does have to tokenize so it knows when it finds something you want it to parse, but nothing more than that until it finds a requested array. If the script being run says, "if you find the start of an array named 'Widgets', then do this with it", then the parser has to know "I just found an array named 'Widgets'".

                        honey the codewitch wrote:

                        you are normalizing unconditionally at the parser level

                        Well, I suppose so, insofar as I make values (or names) out of every token, but at that point they're just strings -- name/value pairs with a type -- they're not parsed. I throw only those strings which we want at the SQL Server, and it handles any conversions to numeric or other types; the loader has no say in that. The loader has no say in data normalization either; it's just passing values as SQL parameters. Again, I want nearly every value in the file to go to the database, so of course I wind up with every value and throw them all at SQL Server. It may be a misunderstanding of terms, but in my opinion, no actual "parsing" is done until the (string) values arrive at SQL Server -- that's where the determination of which name/value pairs go where, what SQL datatype they should be, etc. happens. The loader utility has no knowledge of any of that.
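
                        That hand-off might look roughly like this sketch (using Microsoft.Data.SqlClient; the table and column names are invented):

                        using Microsoft.Data.SqlClient;

                        class LoaderSql
                        {
                            // Pass raw string values straight through as parameters;
                            // SQL Server performs any conversion to numeric or other types.
                            static void InsertPair(SqlConnection conn, string name, string value)
                            {
                                using var cmd = new SqlCommand(
                                    "INSERT INTO Widgets (Name, Value) VALUES (@name, @value)", conn);
                                cmd.Parameters.AddWithValue("@name", name);
                                cmd.Parameters.AddWithValue("@value", value); // still just a string here
                                cmd.ExecuteNonQuery();
                            }
                        }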

                        • P PIEBALDconsult


                          honey the codewitch
                          #21

                          I'm using parsing in the traditional CS sense of imposing structure on a lexical stream based on patterns in said stream.

                          Real programmers use butterflies

                          • P PIEBALDconsult


                            honey the codewitch
                            #22

                            PIEBALDconsult wrote:

                            Well, mine too. It does have to tokenize so it knows when it finds something you want it to parse,

                            I have other ways of finding something. I switch to a fast matching algorithm where I basically look for a quote as if the document were a flat stream of characters and not a hierarchical, ordered structure of logical JSON elements. That's what I mean by partial parsing, and part of what I mean by denormalized searching/scanning. It ignores swaths of the document until it finds what you want. For example:

                            reader.skipToField("name",JsonReader::Forward);

                            This performs the type of flat match that I'm talking about.

                            reader.skipToField("name",JsonReader::Siblings);

                            This performs a partially flat and partially structured match, looking for name on this level of the object hierarchy.

                            reader.skipToField("name",JsonReader::Descendants);

                            This does a nearly flat match, but basically counts '{' and '}' so it knows when to stop searching. I've simplified the explanation of what I've done, but that's the gist. I also don't load strings into memory at all when comparing them. I compare one character at a time straight off the "disk" so I never know the whole field name unless it's the one I'm after.
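
                            A crude sketch of what the Descendants case might do (hypothetical code, not the library's; among other simplifications it ignores braces and quotes inside string values):

                            using System;
                            using System.IO;

                            class DescendantScan
                            {
                                // Scan flat for the quoted field name, but count '{' and '}' so
                                // the search stops when the object we started inside of closes.
                                static bool SkipToFieldInDescendants(TextReader src, string name)
                                {
                                    var quoted = "\"" + name + "\"";
                                    int depth = 0, matched = 0, c;
                                    while ((c = src.Read()) != -1)
                                    {
                                        if (c == '{') depth++;
                                        else if (c == '}' && --depth < 0) return false; // left the start object

                                        // Naive incremental match of the quoted name (not full
                                        // KMP; good enough for a sketch).
                                        matched = (c == quoted[matched]) ? matched + 1
                                                : (c == quoted[0]) ? 1 : 0;
                                        if (matched == quoted.Length) return true; // positioned just past the name
                                    }
                                    return false;
                                }

                                static void Main()
                                {
                                    // Pretend we're already inside an object.
                                    var json = "\"a\":1,\"kid\":{\"name\":\"x\"}}";
                                    Console.WriteLine(SkipToFieldInDescendants(new StringReader(json), "name")); // True
                                }
                            }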

                            Real programmers use butterflies

                            • H honey the codewitch


                              Lost User
                              #23

                              I actually didn't have a clue what it was, but I'm a total noob so I don't count anyway ;)
