Code Project

Memory leak trouble with Linq to Sql and multiple threads

Forum: C# · Tags: database, help, csharp, linq, windows-admin · 37 Posts · 4 Posters
Lost User wrote:

    I've used Entity Framework and never yet found it necessary to use:

    db.Connection.Open();

    Sounds like this reference may apply to you: Entity Framework Connection Management[^]

    Quote:

    Behavior in EF6 and future versions For EF6 and future versions we have taken the approach that if the calling code chooses to open the connection by calling context.Database.Connection.Open() then it has a good reason for doing so and the framework will assume that it wants control over opening and closing of the connection and will no longer close the connection automatically. Note: This can potentially lead to connections which are open for a long time so use with care.

JD86 wrote (#11):

Anytime I'm doing multiple queries I manually open the connection. From my understanding, each query (without manually opening the connection) is in essence doing this: Open Connection -> Query -> Close Connection -> Open Connection -> Query -> Close Connection. So by calling db.Database.Connection.Open() it should instead be doing: Open Connection -> Query -> Query -> Close Connection.
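The lifecycle JD86 describes can be sketched with a hypothetical stub (the `StubConnection` class below is illustrative only, not a real DataContext or ADO.NET connection); it just counts Open calls so the two lifecycles can be compared:

```csharp
using System;

// Hypothetical stub standing in for a pooled DB connection; it only
// counts how often Open is called so the two lifecycles can be compared.
class StubConnection
{
    public int Opens;
    private bool open;

    public void Open()  { if (!open) { open = true; Opens++; } }
    public void Close() { open = false; }

    // Mimics auto-management: open if needed, run the query, and close
    // only if this call was the one that opened the connection.
    public void Query()
    {
        bool openedHere = !open;
        if (openedHere) Open();
        /* ... run the query ... */
        if (openedHere) Close();
    }
}

class ConnectionLifecycleDemo
{
    static void Main()
    {
        // Auto-managed: every query opens and closes.
        var auto = new StubConnection();
        auto.Query();
        auto.Query();
        Console.WriteLine(auto.Opens);   // 2

        // Explicitly opened: both queries reuse one open connection.
        var held = new StubConnection();
        held.Open();
        held.Query();
        held.Query();
        held.Close();
        Console.WriteLine(held.Opens);   // 1
    }
}
```

Note that with real connection pooling the "cost" of the auto-managed pattern is much lower than it looks, since Close typically returns the connection to the pool rather than tearing it down.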

Dave Kreskowiak wrote:

      Take out the GC crap. It's really not helping that much and, by the description you gave, will actually hide the problem, not fix it. From what you've said, it sounds like there's some other object you have to Dispose, but haven't, before it goes out of scope or you're constantly adding to a collection. There really isn't that much code here. I don't know where that logger is going to but it may also be the problem. Get a memory profiler[^]. It'll tell you what type the objects are that are taking up all that memory and give you a clue as to what to look for.

      A guide to posting questions on CodeProject

      Click this: Asking questions is a skill. Seriously, do it.
      Dave Kreskowiak

JD86 wrote (#12):

I will look into the memory profiler. The PowerShell commands I'm using are the same code I'm using in a web application, and I haven't noticed any problems there. I thought it was the timers, but I think it's more LINQ to SQL, since I swapped out the timers for the Quartz library. The GC.Collect didn't actually help: when it ran I could see the memory go up and down, yet after a couple of days I'm at 1.5GB. So it is possible that one timer is causing all this. The logging is log4net and it logs to a text file.

JD86 wrote:

I'm disposing of "db" which should close the connection, shouldn't it? I can try using, but I usually steer away from it because I can't really catch exceptions without putting another try/catch inside the finally, which in effect makes it a nested try/catch, doesn't it?

Luc Pattyn wrote (#13):

        Quote:

        I'm disposing of "db" which should close the connection shouldn't it?

        I don't know as I'm not familiar with CloudPanelDbContext. And I wouldn't rely on it unless (a) it is stated explicitly in trustworthy documentation and (b) I put in a comment repeating such fact.

        Quote:

        I can try using but ...

Yes, you can nest try/catch/finally blocks; however it doesn't look good, and most often there is no need to. Actually you often don't need an explicit finally block at all if every disposable object is created with a using construct; that is the beauty of it: it is hard to get wrong, as the code is much simpler (the compiler automatically translates it to something similar to what you have, but readability is much better, so error probability goes down). Your method, without corrections for the possible connection issues, would be simplified to:

public void Execute(IJobExecutionContext context) {
    int processedCount = 0;
    try {
        using (CloudPanelDbContext db = new CloudPanelDbContext(Config.ServiceSettings.SqlConnectionString)) {
            db.Connection.Open();
            using (ExchActions powershell = new ExchActions()) {
                var mailboxDatabases = powershell.Get_MailboxDatabaseSizes();
                db.StatMailboxDatabaseSizes.InsertAllOnSubmit(mailboxDatabases);
                db.SubmitChanges();
                processedCount = mailboxDatabases.Count;
                mailboxDatabases = null;
            }
        }
    } catch (Exception exc) {
        logger.ErrorFormat("Failed to ...");
    }
    logger.InfoFormat("Processed a total of {0} mailbox databases", processedCount);
}

I'm also not familiar with Entity Framework; however, assuming your CloudPanelDbContext derives from System.Data.Entity.DbContext, a quick Google session told me such a context knows how to deal with open connections; it seems you can open a connection and keep it associated with your context, see e.g. some of the constructors here[^]. May I suggest you read up on that topic. PS: the same webpage also describes Dispose() with "The connection to the database
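The "compiler automatically translates it" point above can be made concrete. A minimal sketch (the `Resource` class is hypothetical, used only to observe Dispose calls) showing a using statement next to roughly what the compiler emits for it:

```csharp
using System;

// Hypothetical disposable used only to count Dispose calls.
class Resource : IDisposable
{
    public static int Disposed;
    public void Dispose() => Disposed++;
}

class UsingExpansionDemo
{
    static void Main()
    {
        // Form 1: the using statement.
        using (Resource r = new Resource())
        {
            // ... work with r ...
        }

        // Form 2: roughly what the compiler emits for form 1.
        Resource r2 = new Resource();
        try
        {
            // ... work with r2 ...
        }
        finally
        {
            if (r2 != null) ((IDisposable)r2).Dispose();
        }

        // Both forms disposed their resource exactly once.
        Console.WriteLine(Resource.Disposed);  // 2
    }
}
```

Because form 1 is just sugar for form 2, the explicit finally block in the original method becomes redundant once every disposable is wrapped in a using.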

JD86 wrote (#14), replying to Luc Pattyn:

CloudPanelDbContext actually derives from DataContext, since this is LINQ to SQL and not actually Entity Framework (System.Data.Linq.DataContext). As for the using statement: I see you are putting it inside the try/catch, so if what is INSIDE the using statement throws an exception, will it be caught in the catch area so I can see what the exception was? It was my understanding that you couldn't catch an error thrown in a using statement unless you put the try/catch INSIDE the using statement. But then again, I could have understood that wrong.

Dave Kreskowiak wrote (#15), replying to JD86:

Could it be the timer itself? Nope. But the problem will most likely be in the code the timer executes.


Lost User wrote (#16), replying to JD86:

              Here is the remarks section for your "Connection" property:

              Quote:

              You can use this property to interoperate with relational ADO.NET code. The returned connection will be closed unless it has been explicitly opened by the user.

              (Which appears to also reflect what is in the EF docs). DataContext.Connection Property (System.Data.Linq)[^]

Luc Pattyn wrote:

Hmm. Applying explicit GC operations is not supposed to make much of a difference. And actually the context is the one item you are not disposing; probably the caller of Execute() should take care of that, not Execute() itself. BTW: rather than finally, I prefer the using pattern for disposable objects; example:

using (ExchActions powershell = new ExchActions()) {
    ...
}

No matter how the using block is exited (normally, return, exception), the newly created object will always be disposed of correctly; and you can nest them as much as needed. I see you open a DB connection but never close it. There too you should be able to apply a using construct. DB connections get pooled, i.e. recycled, very well, assuming you close or dispose of them in time. Otherwise the system may exhaust the default pool and try to create large numbers of connections... :)

                Luc Pattyn [My Articles] Nil Volentibus Arduum

Lost User wrote (#17):

                Hey Luc, welcome back.

Luc Pattyn wrote (#18), replying to Lost User:

                  Hi Richard, this is just a sporadic visit, nothing permanent. :)


Luc Pattyn wrote (#19), replying to JD86:

1. Yes, a try/catch/finally construct guards everything the thread does inside the try block, no matter how many using blocks, method calls, etc. it performs (up to a nested try/catch, of course). If any using constructs appear, the compiler transforms them into its own set of try/finally blocks (there is no USING instruction in IL!), which would then sit inside my try block; so any exception will be caught by my catch block. 2. I found this[^] stating your connections are not disposed of at all by disposing the db object. :)

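Point 1 above is easy to verify empirically. A small sketch (the `Tracker` class is hypothetical) demonstrating that an exception thrown inside a using block is seen by the enclosing catch, and that Dispose still runs on the way out:

```csharp
using System;

// Hypothetical disposable that records whether Dispose ran.
class Tracker : IDisposable
{
    public bool Disposed;
    public void Dispose() => Disposed = true;
}

class CatchThroughUsingDemo
{
    // Throws inside a using block that sits inside a try/catch.
    public static string Run()
    {
        var t = new Tracker();
        bool caught = false;
        try
        {
            using (t)
            {
                throw new InvalidOperationException("boom");
            }
        }
        catch (InvalidOperationException)
        {
            caught = true;   // the outer catch sees the exception
        }
        // Dispose ran (via the using's hidden finally) before the catch.
        return $"disposed:{t.Disposed} caught:{caught}";
    }

    static void Main() => Console.WriteLine(Run());
    // prints: disposed:True caught:True
}
```

So JD86's worry is unfounded: the try/catch can stay outside the using and still observe the exception.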

                    1 Reply Last reply
                    0
                    • L Lost User

                      Here is the remarks section for your "Connection" property:

                      Quote:

                      You can use this property to interoperate with relational ADO.NET code. The returned connection will be closed unless it has been explicitly opened by the user.

                      (Which appears to also reflect what is in the EF docs). DataContext.Connection Property (System.Data.Linq)[^]

                      J Offline
                      J Offline
                      JD86
                      wrote on last edited by
                      #20

                      Ok as a test I took out all of the following:

                      db.Connection.Open();

I kept my try/catch/finally, which calls Dispose on the DataContext, for testing. Since I am not manually opening connections, it should open and close around each query/update and handle the connections by itself. I'm going to keep an eye on memory today. If this doesn't work, then I will try the using statement. I still want to handle the connection myself eventually, because I don't want it opening and closing all the time, since that causes a delay. Once I narrow down what it is, I'll call Close() before disposing of the DataContext. I will let you all know once I have an update.

JD86 wrote (#21):

So I thought it was looking promising, but the memory maxed out again. I could see the memory going up and down, but over time it just steadily climbed. I posted the code here: KnowMoreIT/CloudPanel-Service · GitHub[^]

Luc Pattyn wrote (#22), replying to JD86:

Hi again, that's a lot of code, but I like your coding style (except for too many vars) so that kept me going; I have been browsing some of the code, and have a few questions already:
1) Are you running this in 32-bit or in 64-bit mode? And how much RAM does your PC have?
2) What is the estimated count (order of magnitude) of your lists (users, messages received, messages sent, ...)?
3) What is the frequency of the Get_MessageTrackingLogsTask? (while hoping that is where the problem is). In the end I will probably be curious to see all the intervals as collected by GetIntervalsFromConfig().
4) Please give a quantitative description instead of "the memory going up and down but over time it just steadily climbed". E.g. is it like a sawtooth? How much does it go down when it does? How much does it go up from one tooth to the next? What is the periodicity of it going down? At what rate does it go up on average? What is the final value? And how long does it run fine?
5) When it fails, what is the exact message you are getting? Is this repetitive? How many runs did you perform with the same code (and pretty much the same data)? Do you know where in the code it comes to a sudden end?
6) Do you know of anything in the application domain that (essentially without changing the code) dramatically changes the up-time? E.g. can you make it fail much more rapidly by altering some parameters?
7) Suggestion: if acceptable, could you also provide a log of a failing run that fits the code currently on GitHub? :)


JD86 wrote (#23), replying to Luc Pattyn:

Hey Luc! Glad to hear you like my coding style... it's been completely self-taught (with help from random people on the web) so I never really knew, LOL.
1) I am running a 32-bit process. Memory-wise, it is a virtual machine running 2008 R2 with 16GB of memory (4GB is used normally).
2) Here is the breakdown: Users: 4333 rows; Message Tracking Logs: 1432284 rows; Mailbox Sizes: 2209537 rows; Database Sizes: 10520 rows; History Stats table: 69806 rows.
3) Get_MessageTrackingLogsTask normally would run every 720 minutes. To speed up the testing phase I changed it to every 60 minutes (along with the rest of the tasks).
4) The service starts out at 5MB of memory. From 6pm to 9pm tonight it has climbed to 314MB. I usually end up stopping the service around 2.8GB used. I've let it just go once, and it crashed when it hit the max memory for a 32-bit process. Here are the intervals: GetDisabledUsers has run 180 times (every 10 min) [Users table]; GetLockedUsers has run 180 times (every 10 min) [Users table]; GetMailboxSizes has run 3 times (every 60 min) [Mailbox sizes table]; GetMailboxDatabaseSizes has run 6 times (every 30 min) [Database sizes table]; GetMessageTrackingLogs has run 3 times (every 60 min) [Message Tracking table]; FindMissingData has run 9 times (every 20 min) [Users table]; UpdateDatabaseHistory has run 3 times (every 60 min) [History Stats table].
5) I've only let it crash once, so I don't know where it crashed. I don't see any errors, and it continues to function fine till it runs out of memory. This does happen every time I run it. I've tried removing the db.Connection.Open() and currently (right now) I'm trying the "using" statement.
6) I can make the memory climb faster by increasing the task frequency. So one of my tasks is causing the issue, but examining the code over and over I can't find where it possibly is doing it :-(
7) It probably would take about 2 days for it to fully climb and crash. I would set up something to capture a memory dump when it crashes, but it would take a few days. One thing I can try is to increase the interval for all tasks to like 999999 minutes except for one (set at 10 min). Then I can slowly include each task until I see the memory problem. But right now it has gone from 5MB to 314MB in 3 hours. It should just be querying from Exchange/AD, dumping the data to SQL, and disposing of all of it... so technically the memory should go back down, right?

Luc Pattyn wrote (#24), replying to JD86:

Hi, thanks for all that info. Here are 3 remarks:
1) Changing from explicit Dispose() calls to using constructs will not solve the problem by itself; it merely results in simpler code that is less likely to contain errors. So it should not be a priority.
2) I have an experiment I want to suggest to you with high priority: your code contains some 16 instances of

List<T> somename = new List<T>();

I want you to replace those by:

List<T> somename = new List<T>(someconstant);

i.e. specify an initial capacity for each and every list, where someconstant is a small factor larger than your current number of items; also preferably do not use more than 2 different values. So I'd suggest 0x00400000 for Message Tracking Logs and Mailbox Sizes, and 0x00040000 for everything else; yes, those are hex numbers for a reason, and they exceed 4 million and a quarter of a million respectively. The idea is to have the lists of sufficient size right away (this will save a lot of data copying while populating and growing the lists), and to improve the probability that the same memory chunks will be reused for newer generations of those lists. This may seem mysterious; the rationale is: a List internally has an array holding the items; an array is an object; objects larger than around 80KB are kept in what is known as the "Large Object Heap". That heap used to have very poor memory management in the early days of .NET (say .NET 2.0); the issue was kept quiet by MS as long as possible, but it was (maybe or probably still is) a weakness of the .NET memory management. It has been said it got improved; however, I don't know whether it has, how, and by how much. (BTW: I expect behavior to be better in a 64-bit application.) If my suggested experiment behaves much better, it more or less proves the issue still exists and is relevant to your application. Warning: the memory usage may initially be higher, however the "climbing steadily" should disappear completely.
3) I discovered one thing that is wasteful, and fixing that would then probably be the final solution I'd suggest: in your scenario you first build a list of all messages (sent, received), then repetitively use LINQ to reduce that list to a smaller list, and then basically are interested in the count of that smaller list. With a little effort, you could write a method that doesn't build the big list; it could create a list of lists (one for each user) hopefully resulting in many lists e
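The effect of point 2 above can be observed directly: a List<T> created without a capacity reallocates its backing array repeatedly as it grows, while a presized list allocates once. A minimal sketch (the `CapacityDemo` class is hypothetical; only standard List<T> behavior is used):

```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    // Counts how many times the backing array is reallocated while
    // adding n items to a list created with the given initial capacity.
    public static int CountReallocations(int n, int initialCapacity)
    {
        var list = new List<int>(initialCapacity);
        int reallocations = 0;
        int lastCapacity = list.Capacity;
        for (int i = 0; i < n; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                reallocations++;            // backing array was replaced
                lastCapacity = list.Capacity;
            }
        }
        return reallocations;
    }

    static void Main()
    {
        int n = 1_000_000;
        // Default capacity: the array doubles repeatedly; each large
        // intermediate array is garbage destined for the Large Object Heap.
        Console.WriteLine(CountReallocations(n, 0));
        // Presized: one allocation up front, no growth at all.
        Console.WriteLine(CountReallocations(n, n));  // 0
    }
}
```

For items of reference type, each of those intermediate arrays beyond roughly 10,000 elements (85,000 bytes) lands on the Large Object Heap, which is exactly the fragmentation scenario Luc describes.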

JD86 wrote (#25), replying to Luc Pattyn:

Luc, thank you for assisting me with this. I'm going to update the code and push it out tomorrow morning with the pre-allocated list sizes. Also, just to let you know, the process is at 386MB of memory right now. I know you said don't worry about the using statements, but when I started this at 6pm I had already put them in place. The code on GitHub is still showing the try/catch/finally, but I can update it with what I changed if you think it is necessary. Again, I really appreciate you helping!

                                L 1 Reply Last reply
                                0
                                • J JD86

                                  Luc, Thank you for assisting me with this. So I'm going to update the code and push it out tomorrow morning with the pre-allocated list sizes. Also just to let you know the process is at 386MB of memory right now. I know you said don't worry about the using statement but when I started this at 6pm I already put them in place. The code on github is still showing the try/catch/finally but I can update it with what I changed if you think it is necessary. Again, I really appreciate you helping!

L Offline
                                  Luc Pattyn
                                  wrote on last edited by
                                  #26

OK, the change to usings is no problem; once it works it should be fine. Yes, if anything is still the matter, I will want access to the code whose results you will tell us later on. This will be my last message for today. CU tomorrow. :)

                                  Luc Pattyn [My Articles] Nil Volentibus Arduum

                                  J 3 Replies Last reply
                                  0
                                  • L Luc Pattyn

OK, the change to usings is no problem; once it works it should be fine. Yes, if anything is still the matter, I will want access to the code whose results you will tell us later on. This will be my last message for today. CU tomorrow. :)

                                    Luc Pattyn [My Articles] Nil Volentibus Arduum

J Offline
                                    JD86
                                    wrote on last edited by
                                    #27

Luc, Sorry for the delay. Yesterday I discovered another issue in the code: the Quartz library I'm using would let the job fire multiple times concurrently. It didn't stop the timer while the job was executing and restart it after the job was done. I discovered afterwards that the memory leak still exists, and I'm trying your suggestions now. HOWEVER, the memory leak is MUCH slower now because the jobs are not occurring so often. From yesterday morning to now the memory has grown to 1.5GB.
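For reference, the overlapping-firings problem can be fixed with a simple reentrancy guard so a new firing skips the run while a previous one is still executing. This standalone sketch is illustrative, not from the CloudPanel code; in Quartz.NET itself, decorating the job class with the [DisallowConcurrentExecution] attribute gives the same behavior.

```csharp
using System;
using System.Threading;

// A reentrancy guard: overlapping timer firings skip the run instead of
// piling up work (and memory) while a previous run is still in progress.
public static class JobGuard {
    static int running; // 0 = idle, 1 = a run is in progress

    // Runs work only if no other run is in progress; returns false if the
    // firing was skipped because a previous run has not finished yet.
    public static bool TryRun(Action work) {
        if (Interlocked.CompareExchange(ref running, 1, 0) != 0)
            return false;
        try { work(); return true; }
        finally { Interlocked.Exchange(ref running, 0); }
    }
}
```

Interlocked.CompareExchange makes the check-and-set atomic, so the guard also holds when two timer threads fire at the same instant.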

                                    1 Reply Last reply
                                    0
                                    • L Luc Pattyn

OK, the change to usings is no problem; once it works it should be fine. Yes, if anything is still the matter, I will want access to the code whose results you will tell us later on. This will be my last message for today. CU tomorrow. :)

                                      Luc Pattyn [My Articles] Nil Volentibus Arduum

J Offline
                                      JD86
                                      wrote on last edited by
                                      #28

Ok, the code is updated and running now. I changed the triggers that were every 60 minutes to every 30 minutes to hopefully reproduce the leak quicker. Updated code is here: KnowMoreIT/CloudPanel-Service · GitHub[^]

                                      1 Reply Last reply
                                      0
                                      • L Luc Pattyn

OK, the change to usings is no problem; once it works it should be fine. Yes, if anything is still the matter, I will want access to the code whose results you will tell us later on. This will be my last message for today. CU tomorrow. :)

                                        Luc Pattyn [My Articles] Nil Volentibus Arduum

J Offline
                                        JD86
                                        wrote on last edited by
                                        #29

Ok, it is running with that new code and is currently at 677MB. Just a few minutes ago it went up to 1.5GB and back down... however it is slowly climbing.

                                        L 1 Reply Last reply
                                        0
                                        • J JD86

Ok, it is running with that new code and is currently at 677MB. Just a few minutes ago it went up to 1.5GB and back down... however it is slowly climbing.

L Offline
                                          Luc Pattyn
                                          wrote on last edited by
                                          #30

Hi,

1. Once again I must ask you to be more specific; "slowly climbing" doesn't offer much info.

2. Changing the code and changing run-time parameters too much makes it extremely hard, if not impossible, to understand what sporadic measurement results actually mean.

3. I won't be in tomorrow.

4. Today I have been, and still am, researching the behavior of the Large Object Heap, and will probably write a CP article about it; if so, that will take a few weeks though. In summary, while there may be a way to make it perform as required (with a lot of conditions), I am still inclined to avoid large objects as much as possible. And hence:

5. I developed a little class that would keep your huge message lists in the regular heap, hence avoiding the LOH fragmentation you are currently bound to get. It does not offer an initial capacity; I don't think that would be very useful. It has been tested with foreach and with LINQ. Here it is:

                                          using System;
                                          using System.Collections;
                                          using System.Collections.Generic;
                                          using System.Text;

                                          namespace LargeObjectsHeapTester {
                                          //
                                          // This class collects a sequence of items of type T (value or reference type)
                                          // and keeps them in the order they get added.
                                          // A list-of-lists technique is used to keep everything in the regular heap,
                                          // i.e. this class should not challenge the large object heap unless
                                          // the number of items collected gets really big (above 100 million).
                                          // To control what is inside the collection:
                                          // - use Add to add an item at the end;
                                          // - use Clear to clear the collection.
                                          // To obtain information about its content:
                                          // - use the Count property;
// - use the IEnumerable<T> interface (or the old-fashioned IEnumerable if you must)
// (both foreach and LINQ rely on the IEnumerable<T> interface when present)
                                          //
                                          // Note: value types such as structs should be small (max 8B) otherwise
                                          // the level1 lists may not fit in the regular heap.
public class ListOfLists<T> : IEnumerable<T>, IEnumerable {
public const int MAX_ITEMS_IN_LEVEL1=10000;
//public static ILog Logger;
private List<List<T>> level0;
private List<T> currentLevel1;

	// constructor
	public ListOfLists() {
		Clear();
	}

	// logging utility
	private void log(string msg) {
		//if(Logger!=null) Logger.log(msg);
	}

	// empty the collection
	public void Clear() {
		level0=new List<List<T>>();
		currentLevel1=new List<T>();
		level0.Add(currentLevel1);
	}

	// add an item at the end of the collection
	public void Add(T item) {
		if(currentLevel1.Count>=MAX_ITEMS_IN_LEVEL1) {
			currentLevel1=new List<T>();
			level0.Add(currentLevel1);
		}
		currentLevel1.Add(item);
	}

	// total number of items in the collection
	public int Count {
		get {
			int count=0;
			foreach(List<T> level1 in level0) count+=level1.Count;
			return count;
		}
	}

	// generic enumerator; both foreach and LINQ use this one
	public IEnumerator<T> GetEnumerator() {
		foreach(List<T> level1 in level0)
			foreach(T item in level1)
				yield return item;
	}

	// non-generic enumerator, required by the old IEnumerable interface
	IEnumerator IEnumerable.GetEnumerator() {
		return GetEnumerator();
	}
}
}
                                          
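A self-contained usage sketch of the list-of-lists idea above: a trimmed-down copy of the class (with a tiny chunk size so the chunking is visible, and without the logging hook) plus how it plugs into foreach and LINQ via IEnumerable<T>.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Trimmed-down demo version of the ListOfLists<T> technique:
// items are stored in many small level-1 lists, kept in a level-0 list,
// so no single backing array grows large enough for the Large Object Heap.
public class MiniListOfLists<T> : IEnumerable<T> {
    const int MAX_ITEMS_IN_LEVEL1 = 4; // tiny on purpose, for the demo
    readonly List<List<T>> level0 = new List<List<T>>();
    List<T> current;

    public MiniListOfLists() { current = new List<T>(); level0.Add(current); }

    // start a fresh level-1 list whenever the current one is full
    public void Add(T item) {
        if (current.Count >= MAX_ITEMS_IN_LEVEL1) {
            current = new List<T>();
            level0.Add(current);
        }
        current.Add(item);
    }

    // total item count across all level-1 lists
    public int Count { get { int n = 0; foreach (var l in level0) n += l.Count; return n; } }

    // generic enumerator: walks the chunks in insertion order
    public IEnumerator<T> GetEnumerator() {
        foreach (var l in level0) foreach (var item in l) yield return item;
    }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class Demo {
    static void Main() {
        var lol = new MiniListOfLists<int>();
        for (int i = 0; i < 10; i++) lol.Add(i);
        Console.WriteLine(lol.Count);                  // prints 10
        Console.WriteLine(lol.Count(x => x % 2 == 0)); // LINQ extension: prints 5
    }
}
```

Note that `lol.Count(predicate)` resolves to the LINQ extension method while bare `lol.Count` uses the property, so both coexist without conflict.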
                                          J 1 Reply Last reply
                                          0