Code Project
Async Sockets

.NET (Core and Framework)
Tags: help, csharp, sysadmin, performance, announcement
4 Posts, 2 Posters
Lee Humphries
#1

    I've just recently ported a sockets client from .NET 2.0 to .NET 3.5 ... and now it runs into recurring problems, which I suspect may be related to subtle differences between the frameworks that I'm not allowing for. Specifically, the receive callback function is being invoked, but the socket itself is no longer connected (and probably disposed). This is in spite of the fact that data is expected to be received.

    private void ReceiveCallback(IAsyncResult ar)
    {
        SocketError sErr = SocketError.Success;
        int bytesRead = 0;
        lock (this)
        {
            try
            {
                // Retrieve the client socket from the asynchronous state object.
                Socket socket = (Socket)ar.AsyncState;

                sErr = new SocketError();

                // Read data from the remote device.
                // However, this line frequently throws an exception.
                bytesRead = socket.EndReceive(ar, out sErr);
                ...
    

    And the essentials of the exception are:

    ReceiveCallback threw an error. System.ObjectDisposedException: Cannot access a disposed object.
    Object name: 'System.Net.Sockets.Socket'.
    at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult, SocketError& errorCode)

    The associated socket error code is 10038, whose description isn't very helpful either. To add to this, the previous version of this client is still running and still connecting to the same server without issue. My request is that someone point me in the right direction as to what I'm missing or doing wrong. You can reasonably assume that I've googled every permutation of the above in my quest. Here are some possibilities:

    - There's some subtle distinction in how v2.0 and v3.5 sockets connect (e.g. defaults for various socket options) that the server I'm connecting to doesn't like / understand.
    - It's a speed issue. The v2.0 code has a whole heap of throttling tricks that the v3.5-based version does not. I don't think this one's likely (I've already been playing around in this area).
    - It's a threading and/or locks problem on the socket itself.
    - I'm missing something important in using sockets. Not likely, but this excuse is here for completeness, just in case you have some "traps for young players" type of insight.
    - Or something else.
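    For what it's worth, the exception in question can be reproduced, and guarded against, by expecting ObjectDisposedException in the callback whenever the socket might be closed from another thread. Below is a minimal, self-contained sketch (the DemoReceiver name and the loopback socket pair are mine, not from the code above; the private lock object is used instead of lock (this)):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class DemoReceiver
{
    private readonly object sync = new object();   // private lock object, not 'this'
    private readonly ManualResetEvent callbackRan = new ManualResetEvent(false);
    public bool SawDisposed { get; private set; }

    public void ReceiveCallback(IAsyncResult ar)
    {
        Socket socket = (Socket)ar.AsyncState;
        lock (sync)
        {
            try
            {
                SocketError sErr;
                int bytesRead = socket.EndReceive(ar, out sErr);
                // ... process bytesRead bytes here ...
            }
            catch (ObjectDisposedException)
            {
                // The socket was closed/disposed elsewhere while the
                // receive was pending - treat it as a normal shutdown
                // instead of letting the callback thread die.
                SawDisposed = true;
            }
        }
        callbackRan.Set();
    }

    public bool Run()
    {
        // Loopback socket pair so the example needs no external server.
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        listener.Listen(1);

        var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        client.Connect((IPEndPoint)listener.LocalEndPoint);

        var buffer = new byte[256];
        client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ReceiveCallback, client);
        client.Close();                 // dispose while the receive is pending
        bool finished = callbackRan.WaitOne(5000);
        listener.Close();
        return finished;
    }
}
```

    Closing the socket while BeginReceive is pending drives the callback into the ObjectDisposedException path, which is exactly the stack trace quoted above, so the interesting question is who is closing it.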

    I just love Koalas - they go great with Bacon.

      Nicholas Butler
      #2 (in reply to Lee Humphries)

      It sounds like your Socket object is being disposed! I would expect that putting a reference to it in your AsyncState would stop the GC collecting it, so are you disposing of it somewhere else? I don't have a simple answer, but you could try this: derive from Socket and override Dispose, then put a breakpoint in there to find out when it is being disposed and by whom.

      Also, the code you posted has a couple of no-no's:

      1. Locking on this is considered bad practice - use a private object to lock on.
      2. You don't need to initialise sErr, as it is passed as an out parameter.

      Nick
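      That diagnostic subclass could look something like this (a sketch, and the DebugSocket name is mine; it assumes you can substitute your own type wherever the socket is constructed):

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;

// Hypothetical diagnostic subclass: logs a stack trace whenever
// Dispose is called, so you can see who is disposing the socket.
class DebugSocket : Socket
{
    public bool Disposed { get; private set; }

    public DebugSocket()
        : base(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp) { }

    protected override void Dispose(bool disposing)
    {
        Disposed = true;
        // disposing == false means we got here from the finalizer.
        Console.WriteLine("Socket disposed (disposing={0}):\n{1}",
                          disposing, new StackTrace(true));
        base.Dispose(disposing);
    }
}
```

      Swap `new Socket(...)` for `new DebugSocket()` (or set a breakpoint inside the override) and the stack trace will name the culprit.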

      ---------------------------------- Be excellent to each other :)

        Lee Humphries
        #3 (in reply to Nicholas Butler)

        Thanks for those suggestions, Nick. As to the bad practices, they originally came from sample code that I was trying as part of resolving this problem. I left them in there in case they triggered someone's memory (and they had better suggestions). I've already looked at the actual .NET sockets code, so I understand some of the mechanisms involved - in particular the m_IntCleanedUp member and the code related to its handling. But it hasn't really told me what I need to know to resolve the above.

        I just love Koalas - they go great with Bacon.

          Lee Humphries
          #4 (in reply to Nicholas Butler)

          Thanks to Nick's suggestion:

          Nick Butler wrote:

          Derive from Socket and override Dispose.

          I managed to find the fault when I overrode Disconnect and had it dump a stack trace. The previous version of this service used WSE; the new version uses WCF. In the previous version I had put a call to Disconnect in the destructor to ensure that the socket got cleaned up properly. Under WCF, the destructor was being invoked at an inopportune time, resulting in the socket being disconnected (but not actually disposed). There were various aspects to the solution, but the two main ones were:

          1. Remove the 'cleanup' call to Disconnect from the destructor, i.e. if you're using my service then you call Disconnect yourself.
          2. Apply the attribute [OperationBehavior(TransactionScopeRequired = true)] to each method exposed by the WCF service.
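          The first fix could be sketched like this (a hypothetical SocketClient wrapper of mine, not the poster's actual class): move the cleanup out of the destructor into an explicit Dispose, because finalizers run on the GC thread at an unpredictable time and shouldn't touch other managed objects such as the Socket.

```csharp
using System;
using System.Net.Sockets;

// Hypothetical wrapper illustrating the fix: no socket cleanup in the
// destructor/finalizer - callers disconnect explicitly via Dispose.
class SocketClient : IDisposable
{
    private Socket socket = new Socket(AddressFamily.InterNetwork,
                                       SocketType.Stream, ProtocolType.Tcp);

    // Deliberately no finalizer here: a ~SocketClient() that called
    // socket.Disconnect(...) would run on the GC thread at an
    // unpredictable moment - exactly the "inopportune time" above.

    public void Dispose()
    {
        if (socket != null)
        {
            if (socket.Connected)
                socket.Shutdown(SocketShutdown.Both);   // graceful close first
            socket.Close();
            socket = null;
        }
    }
}
```

          With this shape, the socket's lifetime is entirely in the caller's hands, and the GC never disconnects it behind your back.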

          I just love Koalas - they go great with Bacon.
