Some thoughts on the .NET CLR/CLI
-
Super Lloyd wrote:
If your code is using other precious system resources, like a window handle, or a brush handle, or a socket... you might be out of luck. You might run out of those without the system realizing a GC is needed.
As it happens, this is just what I meant. If you use things like this, the GC won't necessarily keep up with your creation of objects. You can run out of GDI handles (though maybe not window handles, for other reasons) pretty quickly and Finalize won't help you; at least, that has happened to me. One of the reasons I won't use GDI handles and such in serving web pages, at least not directly, is their unpredictability. What if the connection gets broken and ASP.NET or whatever halts your thread? Sure, your finalizers will still run, but when? Will you have enough handles left to serve the next request? (At least if you call Dispose faithfully your odds are better, but still.)
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
OIC.. you went from "finalizers are not good enough" to "it's a flat-out waste of time implementing them"... Sure enough, finalizers won't help with those scarce handles on a web server. And hey, it's really easy to dispose of things in a web service application, usually. On the other hand, you will run out of handles much more slowly in a user desktop application. Also, some objects can be very hard to track in a desktop application, making finalizers really useful, and finalizers will run in a timely fashion there. And utility classes are not always static. For example String, Cursor, Bitmap, Regex, etc... (many of the 21,000+ classes in the .NET Framework BCL) are instantiable utility classes! ;P And I also happen to love writing my own. In fact I shared my DiceSet class with you, as a free custom example! ;)
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
-
Super Lloyd wrote:
If your code is using other precious system resources, like a window handle, or a brush handle, or a socket... you might be out of luck. You might run out of those without the system realizing a GC is needed.
As it happens, this is just what I meant. If you use things like this, the GC won't necessarily keep up with your creation of objects. You can run out of GDI handles (though maybe not window handles, for other reasons) pretty quickly and Finalize won't help you; at least, that has happened to me. One of the reasons I won't use GDI handles and such in serving web pages, at least not directly, is their unpredictability. What if the connection gets broken and ASP.NET or whatever halts your thread? Sure, your finalizers will still run, but when? Will you have enough handles left to serve the next request? (At least if you call Dispose faithfully your odds are better, but still.)
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
using (var x = ....)
is your friend! :)
try {} finally {}
too.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
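For the record, the using block is just compiler shorthand for a try/finally that calls Dispose; a minimal sketch of the equivalence (Bitmap stands in here for any IDisposable holding a scarce handle):

using System.Drawing;

class UsingDemo
{
    static void Main()
    {
        // The using block disposes deterministically, even if an exception is thrown:
        using (var bmp = new Bitmap(100, 100))
        {
            // ... draw on bmp ...
        } // bmp.Dispose() runs here, releasing the GDI handle right away

        // It expands to roughly this try/finally:
        Bitmap bmp2 = new Bitmap(100, 100);
        try
        {
            // ... draw on bmp2 ...
        }
        finally
        {
            if (bmp2 != null) bmp2.Dispose();
        }
    }
}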
-
using (var x = ....)
is your friend! :)
try {} finally {}
too.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
can i get an amen over here?
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
OIC.. you went from "finalizers are not good enough" to "it's a flat-out waste of time implementing them"... Sure enough, finalizers won't help with those scarce handles on a web server. And hey, it's really easy to dispose of things in a web service application, usually. On the other hand, you will run out of handles much more slowly in a user desktop application. Also, some objects can be very hard to track in a desktop application, making finalizers really useful, and finalizers will run in a timely fashion there. And utility classes are not always static. For example String, Cursor, Bitmap, Regex, etc... (many of the 21,000+ classes in the .NET Framework BCL) are instantiable utility classes! ;P And I also happen to love writing my own. In fact I shared my DiceSet class with you, as a free custom example! ;)
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
I just meant i think we define utility differently. Mine is narrow when it comes to C#, and .NET has but a few - it's just a convention i use in my own personal style. i've just been using it so long that it impacts how i understand the word, if that makes sense. I'm not saying you're wrong. I responded a bit to your dice thread. i think we can get you from theory to code if you just explain the "meaning" of the dice syntax. I don't do tabletop gaming. i have friends that are into that stuff but i never was.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
I just meant i think we define utility differently. Mine is narrow when it comes to C#, and .NET has but a few - it's just a convention i use in my own personal style. i've just been using it so long that it impacts how i understand the word, if that makes sense. I'm not saying you're wrong. I responded a bit to your dice thread. i think we can get you from theory to code if you just explain the "meaning" of the dice syntax. I don't do tabletop gaming. i have friends that are into that stuff but i never was.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Nice! :D I was planning to look at it tonight.. I don't have the code here, it's personal stuff! ;) The meaning is, you can often read rolls like "3d6+2", and I try to create an object that can roll that, i.e. a dice collection with 3 x D6 (a Dice class that rolls between 1 and 6) that sums them all up and adds 2. Or maybe "D10+D4+1", which would be a roll of Dice(10) (between 1 and 10), plus a roll of Dice(4) (between 1 and 4), plus 1.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
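A minimal sketch of what such an object model might look like (this is not Super Lloyd's actual code, which wasn't posted; Dice, DiceSet, AllDice and Modifier are illustrative names):

using System;
using System.Collections.Generic;
using System.Linq;

// "3d6+2" would become a DiceSet holding three 6-sided Dice and a modifier of 2.
struct Dice
{
    public readonly int Sides;
    public Dice(int sides) { Sides = sides; }
    public int Roll(Random rng) { return rng.Next(1, Sides + 1); } // 1..Sides inclusive
    public override string ToString() { return "D" + Sides; }
}

class DiceSet
{
    public List<Dice> AllDice = new List<Dice>();
    public int Modifier;
    static readonly Random Rng = new Random();

    public int Roll() { return AllDice.Sum(d => d.Roll(Rng)) + Modifier; }
    public override string ToString()
    {
        return string.Join("+", AllDice) + (Modifier != 0 ? "+" + Modifier : "");
    }
}

A set with three D6 and Modifier = 2 would print as "D6+D6+D6+2", and Roll() would return something between 5 and 20.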
-
can i get an amen over here?
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
yes! :laugh:
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
-
Nice! :D I was planning to look at it tonight.. I don't have the code here, it's personal stuff! ;) The meaning is, you can often read rolls like "3d6+2", and I try to create an object that can roll that, i.e. a dice collection with 3 x D6 (a Dice class that rolls between 1 and 6) that sums them all up and adds 2. Or maybe "D10+D4+1", which would be a roll of Dice(10) (between 1 and 10), plus a roll of Dice(4) (between 1 and 4), plus 1.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
That's an expression evaluator! just look at the sample in my stuff. Oh, there's a bug in the parser runtimes; it both is and isn't serious, but it's an 8-character fix and it still works atm =). I can reupload and wait for reapproval, but i'll do that tomorrow. The question is: can you just roll while you parse, or do you NEED an object model? Because if you need an object model, parsing is a two-step process. (Like, do you need Dice and DiceSets, or can you just pass an expression to an Eval function and get your answer out? Because if that's good enough, your code just got cut by more than half.)
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
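To illustrate the "roll while you parse" option: a sketch of a one-pass Eval that goes straight from the expression to a number, with no Dice/DiceSet objects at all (assuming the grammar is just d-terms and constants joined by +):

using System;

class DiceEvalDemo
{
    static readonly Random Rng = new Random();

    static void Main()
    {
        Console.WriteLine(Eval("3d6+2"));    // e.g. 13
        Console.WriteLine(Eval("D10+D4+1")); // e.g. 9
    }

    // One pass: split on '+', roll each dice term as it is parsed.
    static int Eval(string expr)
    {
        int total = 0;
        foreach (string term in expr.ToLowerInvariant().Split('+'))
        {
            string t = term.Trim();
            int d = t.IndexOf('d');
            if (d < 0)
            {
                total += int.Parse(t); // plain modifier, e.g. "+2"
            }
            else
            {
                int count = d == 0 ? 1 : int.Parse(t.Substring(0, d)); // "d10" means one die
                int sides = int.Parse(t.Substring(d + 1));
                for (int i = 0; i < count; i++)
                    total += Rng.Next(1, sides + 1); // roll 1..sides
            }
        }
        return total;
    }
}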
-
That's an expression evaluator! just look at the sample in my stuff. Oh, there's a bug in the parser runtimes; it both is and isn't serious, but it's an 8-character fix and it still works atm =). I can reupload and wait for reapproval, but i'll do that tomorrow. The question is: can you just roll while you parse, or do you NEED an object model? Because if you need an object model, parsing is a two-step process. (Like, do you need Dice and DiceSets, or can you just pass an expression to an Eval function and get your answer out? Because if that's good enough, your code just got cut by more than half.)
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
The parsing doesn't return a number, it returns a DiceSet object, which is a collection of Dice structures. Both DiceSet and Dice have a Roll() method and a nice ToString() implementation. It might be an evaluator, but it evaluates to an object, not a simple number.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
-
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background. What I'd have liked to see, at the very least, is a segregated heap that wasn't collected, and an ability to suspend background collection. You can kind of hack something like it in using the large object heap, and also using .NET 4+'s ability to reserve heap and suspend GC, but it's non-optimal.

See, I'd really like to write VST plugins in C#, for example, and while there are offerings to do so, they are not realtime. They are kinda realtime. Not good enough for live music performance. Instead I'm forced to do it in something like C++ or *gasp* Delphi, which is costlier/more time consuming to write solid code with. I'd be okay with C# code blocks (similar to unsafe) where realtime code could run, but apparently that's too much to ask.

Also, I love garbage collection. Don't get me wrong. I even used it in C++ ISAPI server apps (using the Boehm collector) for my strings, in order to avoid heap fragmentation - in the right areas it can even improve performance.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
The folks at Unity have been working a lot on a performance-oriented subset of C#. I don't know exactly how tailored their solution is to real-time applications, but since a game engine has to draw 60 frames a second without skipping too many of them, you may find something interesting there.
-
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background. What I'd have liked to see, at the very least, is a segregated heap that wasn't collected, and an ability to suspend background collection. You can kind of hack something like it in using the large object heap, and also using .NET 4+'s ability to reserve heap and suspend GC, but it's non-optimal.

See, I'd really like to write VST plugins in C#, for example, and while there are offerings to do so, they are not realtime. They are kinda realtime. Not good enough for live music performance. Instead I'm forced to do it in something like C++ or *gasp* Delphi, which is costlier/more time consuming to write solid code with. I'd be okay with C# code blocks (similar to unsafe) where realtime code could run, but apparently that's too much to ask.

Also, I love garbage collection. Don't get me wrong. I even used it in C++ ISAPI server apps (using the Boehm collector) for my strings, in order to avoid heap fragmentation - in the right areas it can even improve performance.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
codewitch honey crisis wrote:
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background.
Windows is not a real-time OS.
codewitch honey crisis wrote:
*gasp* Delphi, which is costlier/more time consuming to write solid code with.
I find .NET more verbose than Delphi. Want realtime? Check out QNX.
Bastard Programmer from Hell :suss: If you can't read my code, try converting it here[^] "If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
-
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background. What I'd have liked to see, at the very least, is a segregated heap that wasn't collected, and an ability to suspend background collection. You can kind of hack something like it in using the large object heap, and also using .NET 4+'s ability to reserve heap and suspend GC, but it's non-optimal.

See, I'd really like to write VST plugins in C#, for example, and while there are offerings to do so, they are not realtime. They are kinda realtime. Not good enough for live music performance. Instead I'm forced to do it in something like C++ or *gasp* Delphi, which is costlier/more time consuming to write solid code with. I'd be okay with C# code blocks (similar to unsafe) where realtime code could run, but apparently that's too much to ask.

Also, I love garbage collection. Don't get me wrong. I even used it in C++ ISAPI server apps (using the Boehm collector) for my strings, in order to avoid heap fragmentation - in the right areas it can even improve performance.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
You will never have true realtime. There are always unexpected delays, from cache misses to virtual memory paging operations. The systems getting closest to RT are the old ones: the first machine I laid my hands on was a 1976-design 16-bit mini (i.e. the class of the PDP-11). It had 16 hardware interrupt levels - that is, 16 complete sets of hardware registers - so when an interrupt occurred at a higher level than the one currently executing, the register bank was switched in and the first instruction executed 900 ns after the interrupt signal was received. That was certainly impressive in 1976, but the CPU was hard logic throughout, with no nasty pipelines to be emptied, no microcode loops that had to terminate, and usually no paging or virtual memory. That got you close to RT. You won't see anything that compares today, even without GC.

In those days, people got shivers from the thought of writing any system software in any high level language. Lots of people shook their heads in disbelief over Unix being written in K&R C: it could never work, could never give good enough performance. I have saved a printout of a long discussion from around 1995, when NetNews was The Social Media: this one guy insisted, at great length, that high level languages were nothing but a fad that would soon go away; they would never give high enough performance. (In 1995, he didn't get much support from the community, but he never gave in to the pressure.)

Using a high level language takes you one step away from RT. Using a machine with virtual memory is another step (even if your program is fixed in memory - the OS may be busy managing memory for other processes, with interrupts disabled). Dependency on pipelines and cache hit rates is yet another step. If you require really hard RT, you must stay away from such facilities - probably from any modern CISC CPU.

You probably do not have hard RT requirements; you can live with a cache miss, or the OS updating page tables for other processes. You should design your code to be able to handle as large random delays as possible. Networking guys know the techniques, like window mechanisms and elastic buffers. If you do things the right way, I am not willing to believe that a modern multi-GHz six-core CPU's ability to run a VST plugin sufficiently close to RT is ruined by the CLR GC!

I grew up with high level languages, but knowing that RT "had to" be done in assembly. But soon I also learned how smart optimizing compilers can be, and gave up my belief in assembly.
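The "elastic buffer" point is the one most relevant to the VST case: keep the producer a few blocks ahead of the audio callback so short stalls are absorbed. A minimal single-producer/single-consumer ring buffer sketch (not taken from any real host; names are illustrative):

using System.Threading;

class SampleRing
{
    readonly float[] _buf;
    int _read, _write, _count; // _read touched only by the consumer, _write only by the producer

    public SampleRing(int capacity) { _buf = new float[capacity]; }

    public bool TryWrite(float sample) // producer (synth) thread
    {
        if (Volatile.Read(ref _count) == _buf.Length) return false; // full: producer backs off
        _buf[_write] = sample;
        _write = (_write + 1) % _buf.Length;
        Interlocked.Increment(ref _count); // full fence: publishes the sample
        return true;
    }

    public bool TryRead(out float sample) // audio callback
    {
        sample = 0f; // silence on underrun
        if (Volatile.Read(ref _count) == 0) return false;
        sample = _buf[_read];
        _read = (_read + 1) % _buf.Length;
        Interlocked.Decrement(ref _count);
        return true;
    }
}

The deeper the buffer, the larger the random delay it can ride out - at the cost of latency, which is exactly the trade-off that live performance makes painful.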
-
You will never have true realtime. There are always unexpected delays, from cache misses to virtual memory paging operations. The systems getting closest to RT are the old ones: the first machine I laid my hands on was a 1976-design 16-bit mini (i.e. the class of the PDP-11). It had 16 hardware interrupt levels - that is, 16 complete sets of hardware registers - so when an interrupt occurred at a higher level than the one currently executing, the register bank was switched in and the first instruction executed 900 ns after the interrupt signal was received. That was certainly impressive in 1976, but the CPU was hard logic throughout, with no nasty pipelines to be emptied, no microcode loops that had to terminate, and usually no paging or virtual memory. That got you close to RT. You won't see anything that compares today, even without GC.

In those days, people got shivers from the thought of writing any system software in any high level language. Lots of people shook their heads in disbelief over Unix being written in K&R C: it could never work, could never give good enough performance. I have saved a printout of a long discussion from around 1995, when NetNews was The Social Media: this one guy insisted, at great length, that high level languages were nothing but a fad that would soon go away; they would never give high enough performance. (In 1995, he didn't get much support from the community, but he never gave in to the pressure.)

Using a high level language takes you one step away from RT. Using a machine with virtual memory is another step (even if your program is fixed in memory - the OS may be busy managing memory for other processes, with interrupts disabled). Dependency on pipelines and cache hit rates is yet another step. If you require really hard RT, you must stay away from such facilities - probably from any modern CISC CPU.

You probably do not have hard RT requirements; you can live with a cache miss, or the OS updating page tables for other processes. You should design your code to be able to handle as large random delays as possible. Networking guys know the techniques, like window mechanisms and elastic buffers. If you do things the right way, I am not willing to believe that a modern multi-GHz six-core CPU's ability to run a VST plugin sufficiently close to RT is ruined by the CLR GC!

I grew up with high level languages, but knowing that RT "had to" be done in assembly. But soon I also learned how smart optimizing compilers can be, and gave up my belief in assembly.
See my other reply, where i said that in this case i don't care about it being technically an RTOS; I care about being able to play live music with it.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
codewitch honey crisis wrote:
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background.
Windows is not a real-time OS.
codewitch honey crisis wrote:
*gasp* Delphi, which is costlier/more time consuming to write solid code with.
I find .NET more verbose than Delphi. Want realtime? Check out QNX.
Bastard Programmer from Hell :suss: If you can't read my code, try converting it here[^] "If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
i should have qualified that. everyone is getting on me about that. I don't care about RTOS stuff. I care about being able to play live music. Realtime enough for that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
The parsing doesn't return a number, it returns a DiceSet object, which is a collection of Dice structures. Both DiceSet and Dice have a Roll() method and a nice ToString() implementation. It might be an evaluator, but it evaluates to an object, not a simple number.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
Okay, that's fine, it just means the Eval method is more like a BuildDice method
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
See my other reply, where i said that in this case i don't care about it being technically an RTOS; I care about being able to play live music with it.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
All I am saying is: If you can't play your music, it is not the fault of the GC. I am not willing to believe that.
LOL yeah dude, it is. i've tested it. and i've tested it again by using the critical region feature of .NET 4 to suspend the GC, so you can believe whatever you want. I'll believe my tests, thanks.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
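For reference, the API being argued about looks roughly like this (a sketch; GC.TryStartNoGCRegion actually arrived in .NET Framework 4.6, and the 16 MB budget here is an arbitrary example):

using System;
using System.Runtime;

class NoGcRegionDemo
{
    static void Main()
    {
        // Ask the runtime to pre-reserve heap so the critical section runs
        // without any collections; this throws if the budget is too large.
        if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
        {
            try
            {
                RenderAudioBlock(); // must allocate less than the budget
            }
            finally
            {
                // Only end the region if we are still in it (a collection
                // already ends it if the budget was exceeded).
                if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                    GC.EndNoGCRegion();
            }
        }
    }

    static void RenderAudioBlock() { /* hypothetical realtime work */ }
}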
-
Show me where finalized objects get collected before proc exit in a real-world scenario. Except server apps - but if you're using unmanaged resources directly from a web server, i hate you. In application code, the GC calls the finalizer right before proc exit. Show me where it doesn't. Contrive a scenario, even. It won't slow down dramatically until reboot: the kernel keeps an SLIST of kernel handles per process around, and Win32 does indeed clean them up when the process exits. Your HBITMAP will be around until proc exit, not until reboot. And *it would anyway* - at least in my tests, because Finalize doesn't get called until proc exit anyway.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
What are you talking about? The "call Dispose for sure" pattern recommends writing a destructor, so if your object doesn't get disposed it will free its resources at least in the finalizer. If Dispose is called, the code in the finalizer is obsolete and suppressed. You do this for base classes only, and for derived classes you use the simple Dispose pattern. I work a lot with hardware and unmanaged resources; my finalizers are called all the time. No one exits the application to free memory and system resources, but coders forget to dispose (mostly implicitly, by not using a using-block)... And I'm talking here about backend and frontend. And: many resources will be held by the OS until you reboot... So I can understand why, in your experience, it "doesn't matter": I can only guess you write a very specific type of software. If memory is not your problem I'm fine with that, but don't recommend that ignorance of memory management in .NET to others...
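The pattern being described, as a minimal sketch (AcquireHandle/ReleaseHandle are hypothetical stand-ins for real unmanaged calls):

using System;

class NativeResourceHolder : IDisposable
{
    IntPtr _handle = AcquireHandle();
    bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // disposed properly: no finalizer needed
    }

    ~NativeResourceHolder() // backstop for callers who forget Dispose
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // also dispose managed IDisposable members here
        }
        ReleaseHandle(_handle); // the unmanaged resource is freed either way
        _handle = IntPtr.Zero;
        _disposed = true;
    }

    static IntPtr AcquireHandle() { return new IntPtr(1); } // stand-in
    static void ReleaseHandle(IntPtr h) { }                 // stand-in
}

And the finalizer really can be observed long before process exit: create one without disposing it, then call GC.Collect() followed by GC.WaitForPendingFinalizers().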
-
LOL yeah dude, it is. i've tested it. and i've tested it again by using the critical region feature of .NET 4 to suspend the GC, so you can believe whatever you want. I'll believe my tests, thanks.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
I'm not into making VSTs - but my wife uses them a lot in Cubase ;) - so you say projects like [GitHub - obiwanjacobi/vst.net: Virtual Studio Technology (VST) for .NET. Plugins and Host support.](https://github.com/obiwanjacobi/vst.net) don't work because of the GC? I don't know anything about this project, I was just curious if you know it? Btw: you know you can handle memory yourself in the latest .NET (some missing features were added)?
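On that last point, the newer manual-memory facilities look roughly like this (a sketch assuming .NET 6+ with unsafe blocks enabled in the project):

using System;
using System.Runtime.InteropServices;

class ManualMemoryDemo
{
    static unsafe void Main()
    {
        // Stack allocation: the GC is never involved (Span form, C# 7.2+).
        Span<float> scratch = stackalloc float[256];
        scratch[0] = 1f;

        // Unmanaged heap allocation (.NET 6+): invisible to the GC, so it
        // can neither trigger a collection nor be moved by one.
        float* samples = (float*)NativeMemory.Alloc((nuint)(4096 * sizeof(float)));
        try
        {
            samples[0] = 1f;
        }
        finally
        {
            NativeMemory.Free(samples); // manual lifetime: you must free it
        }
    }
}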
-
I'm not into making VSTs - but my wife uses them a lot in Cubase ;) - so you say projects like [GitHub - obiwanjacobi/vst.net: Virtual Studio Technology (VST) for .NET. Plugins and Host support.](https://github.com/obiwanjacobi/vst.net) don't work because of the GC? I don't know anything about this project, I was just curious if you know it? Btw: you know you can handle memory yourself in the latest .NET (some missing features were added)?
Yes. As a matter of fact, I'm saying that. It's one of the reasons I abandoned it and went back to writing them in C++. Maybe if you have an 8-core monster the lag from the GC won't kill it, but on my little i5-2700 it certainly does.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
What are you talking about? The "call Dispose for sure" pattern recommends writing a destructor, so if your object doesn't get disposed it will free its resources at least in the finalizer. If Dispose is called, the code in the finalizer is obsolete and suppressed. You do this for base classes only, and for derived classes you use the simple Dispose pattern. I work a lot with hardware and unmanaged resources; my finalizers are called all the time. No one exits the application to free memory and system resources, but coders forget to dispose (mostly implicitly, by not using a using-block)... And I'm talking here about backend and frontend. And: many resources will be held by the OS until you reboot... So I can understand why, in your experience, it "doesn't matter": I can only guess you write a very specific type of software. If memory is not your problem I'm fine with that, but don't recommend that ignorance of memory management in .NET to others...
It looks like the behavior in newer .NET is different than back when I tested this (.NET 2), so I stand corrected. As I told Super Lloyd, his tests do indeed show the finalizer being called, so my mind has already been changed on the matter. I don't use unmanaged resources directly in .NET, and haven't since about 2008 or so, so it hasn't been a problem for me, and I hadn't really updated my information on the matter.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.