Interprocess communication

petst wrote (#1):

Hi all, I am trying to get on top of interprocess communication. Most, if not all, of the articles around handle IPC either through a mailbox mechanism or through memory-mapped file wrappers. The latter is probably the best and by far the fastest, but I get the feeling everybody serializes and deserializes objects to do it. In my project I have to exchange an awful lot of short messages between processes. There is no need for serialization; the message type and structure are known (e.g. 4 message-length bytes, 4 transaction bytes, the message payload, etc.). The question: would it be possible NOT to serialize, and free the CPU of that burden, but instead write bytes directly into shared memory in one process and read that same memory in another process? Somehow? ;P Thanks in advance, Peter Stevens
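
(For readers wondering what writing raw bytes between processes might look like in practice, here is a minimal sketch. It assumes .NET's System.IO.MemoryMappedFiles and a named EventWaitHandle for signaling, both Windows-oriented; the object names, buffer size, and frame offsets are illustrative only, not something from this thread.)

```csharp
// Minimal sketch of raw-byte IPC over a named memory-mapped file.
// Frame layout from the post: 4-byte length, 4-byte transaction number, payload.
using System;
using System.IO.MemoryMappedFiles;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

static class RawIpcSketch
{
    const string BufferName = @"Local\MsgBuffer";   // illustrative names
    const string SignalName = @"Local\MsgReady";

    // Producer side: pack the frame by hand, no serializer involved.
    static void Produce(byte[] payload, int transaction)
    {
        using var mmf = MemoryMappedFile.CreateOrOpen(BufferName, 4096);
        using var ready = new EventWaitHandle(false, EventResetMode.AutoReset, SignalName);
        using var view = mmf.CreateViewAccessor();

        view.Write(0, payload.Length);                  // 4 bytes: message length
        view.Write(4, transaction);                     // 4 bytes: transaction number
        view.WriteArray(8, payload, 0, payload.Length); // payload bytes
        ready.Set();                                    // wake the consumer
    }

    // Consumer side: read the same bytes back, again no deserialization.
    static void Consume()
    {
        using var mmf = MemoryMappedFile.CreateOrOpen(BufferName, 4096);
        using var ready = new EventWaitHandle(false, EventResetMode.AutoReset, SignalName);
        using var view = mmf.CreateViewAccessor();

        ready.WaitOne();
        int length = view.ReadInt32(0);
        int transaction = view.ReadInt32(4);
        var payload = new byte[length];
        view.ReadArray(8, payload, 0, length);
        Console.WriteLine($"tx {transaction}: {Encoding.ASCII.GetString(payload)}");
    }

    static void Main()
    {
        // Both ends run in one process here for brevity; in practice each side is its
        // own process. Holding the named handles open in Main keeps the kernel objects
        // alive while the two sides come and go.
        using var mmf = MemoryMappedFile.CreateOrOpen(BufferName, 4096);
        using var ready = new EventWaitHandle(false, EventResetMode.AutoReset, SignalName);

        var consumer = Task.Run(Consume);
        Produce(Encoding.ASCII.GetBytes("hello"), transaction: 42);
        consumer.Wait();
    }
}
```

A real solution would loop and manage a ring of slots rather than a single fixed buffer, but the sketch shows the core idea: the producer writes the header and payload bytes straight into the shared view, and the consumer reads them back without any serializer in between.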


CerebralKungFu wrote (#2):

Just in case you haven't seen it, there's a really good article posted here called "A C# Framework for Interprocess Synchronization and Communication" by Christoph Ruegg (an Aug 2004 prize winner) that covers most of the guts work you would need for such a thing. For what it's worth, I'd recommend considering the long-term maintenance and growth of your solution before moving away from binary serialization. If you use a custom method to marshal your data structure, you are coupling your IPC code to your message structure, and changes in the message structure will likely require changes in the IPC code, which means higher long-term maintenance and testing costs. If, however, you stick with binary serialization, changes in your message structure may not require changes to your infrastructure. Write a loop that shoots 100,000 messages across both IPC mechanisms and you will discover for certain whether you can afford to stay with serialization. But of course I concede that performance easily trumps maintenance in some projects, so if that is the case for you, just disregard my rambling. I hope the article has what you need, CKF
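
(A rough sketch of the 100,000-message loop CKF suggests, comparing binary serialization against hand-packed bytes. The Message type, field names, and sizes are made up for illustration; BinaryFormatter is the era-appropriate serializer but is obsolete and disabled by default in modern .NET.)

```csharp
// Micro-benchmark sketch: round-trip 100,000 small messages through
// (a) binary serialization and (b) manual byte packing, and compare the cost.
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Message
{
    public int Length;
    public int Transaction;
    public byte[] Payload;
}

static class PackingBenchmark
{
    static void Main()
    {
        var msg = new Message { Transaction = 7, Payload = new byte[50] };
        msg.Length = msg.Payload.Length;
        const int iterations = 100_000;

        // (a) Binary serialization round trip.
        var formatter = new BinaryFormatter();
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            using var ms = new MemoryStream();
            formatter.Serialize(ms, msg);
            ms.Position = 0;
            var copy = (Message)formatter.Deserialize(ms);
        }
        Console.WriteLine($"serializer:  {sw.ElapsedMilliseconds} ms");

        // (b) Manual packing into a fixed header plus payload, no serializer.
        var buffer = new byte[8 + msg.Payload.Length];
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            BitConverter.GetBytes(msg.Length).CopyTo(buffer, 0);      // 4-byte length
            BitConverter.GetBytes(msg.Transaction).CopyTo(buffer, 4); // 4-byte transaction
            Buffer.BlockCopy(msg.Payload, 0, buffer, 8, msg.Payload.Length);

            int len = BitConverter.ToInt32(buffer, 0);
            int tx = BitConverter.ToInt32(buffer, 4);
            var payload = new byte[len];
            Buffer.BlockCopy(buffer, 8, payload, 0, len);
        }
        Console.WriteLine($"manual pack: {sw.ElapsedMilliseconds} ms");
    }
}
```

This only measures the packing cost, not the IPC transport itself, but it gives a quick feel for how much CPU the serializer actually eats per message before deciding either way.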


petst wrote (#3):

Many thanks for your reply. I know the risks of moving away from serialization (and standards), but I desperately need the speed. The type of solution is a protocol front-end handling over 100K messages per second on a quad-Xeon machine. I was surprised, when we were testing how far we could go in opening the message payload for inspection purposes, at the performance price: a single extra string operation (such as UrlDecode) costs about 3,000 messages per second of throughput... Imagine. I've been looking at Christoph's framework; it's a great thing, but it becomes unstable when moving more than 12 Mbit/s (which is around 10,000 msg/sec). The single-process architecture, as it exists today, behaves predictably and inspects and routes up to 180 Mbit/s of message data between components. Also, we've seen some exceptions when either the producer or the consumer disappears from the process list under load conditions above 5 Mbit/s. Maybe it's synchronization-related, or the custom queue implementation; I'm not sure. Anyway, thanks again for thinking along... Peter.


CerebralKungFu wrote (#4):

NP on the reply, nothing else to do today. Okay, so you're still in the same situation, needing better IPC throughput. The simple answer to your question is yes... but I need more info; I think I've got stuff lying around that can help. What is your anticipated/desired producer-consumer configuration? Is it one producer to many consumers, many-to-many, one-to-one, etc.? Are all processes expected to be on the same machine (it sounds like it)? Also, what are the minimum, typical, and maximum sizes of a message? CKF


petst wrote (#5):

The desired producer/consumer configuration is one-to-one. The scope of this application (which is part of a larger solution) is a single machine. We envision several Windows Services that make up the machine solution. The first one uses sockets to listen for incoming connections and parses the streams into atomic messages according to a protocol specification. A second service handles decryption, a third handles inspection, and a fourth is a router that forwards plain-text messages to several back-end machines. Messages always have a minimum of a 4-byte length field (mapped to an Int32), a 4-byte transaction number, and 10 bytes of flag fields, followed by the message itself. So theoretically the maximum length of a message is 4+4+10 = 18 bytes of header plus Int32.MaxValue bytes of payload, which would be 2,147,483,665 bytes in total. In real life this will not be the case; messages will never exceed 150 bytes, including the header. The typical size will be around 50 bytes. There is also no need to queue messages within the memory-mapped file mechanism, as that is already covered by our own queueing mechanism, which uses I/O completion port threading in the background. Thanks, Peter
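
(A small sketch of the frame layout Peter describes: a 4-byte length, a 4-byte transaction number, 10 flag bytes, then the payload. The class, method, and parameter names are illustrative only, not from the thread.)

```csharp
// Hand-packed framing for the described message format: 18-byte header + payload.
using System;

static class Frame
{
    public const int HeaderSize = 4 + 4 + 10; // length + transaction + flags = 18 bytes

    // Pack one message into a caller-supplied buffer; returns the total frame size.
    public static int Write(byte[] buffer, int transaction, byte[] flags, byte[] payload)
    {
        if (flags.Length != 10) throw new ArgumentException("expected 10 flag bytes", nameof(flags));

        BitConverter.GetBytes(payload.Length).CopyTo(buffer, 0); // 4-byte message length
        BitConverter.GetBytes(transaction).CopyTo(buffer, 4);    // 4-byte transaction number
        Buffer.BlockCopy(flags, 0, buffer, 8, 10);               // 10 flag bytes
        Buffer.BlockCopy(payload, 0, buffer, HeaderSize, payload.Length);
        return HeaderSize + payload.Length;
    }

    // Unpack a frame that starts at offset 0 of the buffer.
    public static (int transaction, byte[] flags, byte[] payload) Read(byte[] buffer)
    {
        int length = BitConverter.ToInt32(buffer, 0);
        int transaction = BitConverter.ToInt32(buffer, 4);
        var flags = new byte[10];
        Buffer.BlockCopy(buffer, 8, flags, 0, 10);
        var payload = new byte[length];
        Buffer.BlockCopy(buffer, HeaderSize, payload, 0, length);
        return (transaction, flags, payload);
    }
}
```

With frames this small (around 50 bytes typical, 150 bytes max), the same Write/Read logic could target a memory-mapped view accessor instead of a plain byte[], which is essentially the no-serialization path the first post asks about.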
