Some thoughts on the .NET CLR/CLI
-
That's an expression evaluator! Just look at the sample in my stuff. Oh, there's a bug in the parser runtimes; it both is and isn't serious, but it's an 8-character fix and it still works at the moment =). I can reupload and wait for reapproval, but I'll do that tomorrow. The question is: can you just roll while you parse, or do you NEED an object model? Because if you need an object model, parsing is a two-step process. (Like, do you need Dice and DiceSets, or can you just pass an expression to an Eval function and get your answer out? Because if that's good enough, your code just got cut by more than half.)
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
the parsing doesn't return a number, it returns a DiceSet object which is a collection of Dice structures. Both DiceSet and Dice have a Roll() method and a nice ToString() implementation. It might be an evaluator, but it evaluates to an object, not a simple number.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
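For what it's worth, here is a rough C# sketch (not the article's actual parser; the names and the simplified "NdS" grammar are illustrative only) contrasting the two shapes discussed above: parsing into a Dice object model you can Roll() later, versus a one-shot Eval that rolls while it parses.

```csharp
using System;

class Dice
{
    static readonly Random Rng = new Random();
    public int Count { get; }
    public int Sides { get; }
    public Dice(int count, int sides) { Count = count; Sides = sides; }

    public int Roll()
    {
        int total = 0;
        for (int i = 0; i < Count; i++) total += Rng.Next(1, Sides + 1);
        return total;
    }

    public override string ToString() => Count + "d" + Sides;
}

static class DiceEval
{
    // Two-step: parse into an object model, roll whenever you like.
    public static Dice Parse(string expression)
    {
        var parts = expression.Split('d');          // e.g. "3d6"
        return new Dice(int.Parse(parts[0]), int.Parse(parts[1]));
    }

    // One-step: roll while you parse, no object model kept around.
    public static int Eval(string expression) => Parse(expression).Roll();
}
```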
-
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background. What I'd have liked to see, at the very least, is a segregated heap that wasn't collected, and an ability to suspend background collection. You can kind of hack something like it into there using the large object heap and also using .NET 4+'s ability to reserve heap and suspend GC but it's non-optimal. See, I'd really like to write VST plugins in C# for example, and while there are offerings to do so, they are not realtime. They are kinda realtime. Not good enough for live music performance. Instead I'm forced to do it in something like C++ or *gasp* Delphi, which is costlier/more time consuming to write solid code with. I'd be okay with C# code blocks (similar to unsafe) where realtime code could run but apparently that's too much to ask. Also, I love garbage collection. Don't get me wrong. I even used it in C++ ISAPI server apps (using Boehm collector) for my strings in order to avoid heap fragmentation - in the right areas it can even improve performance.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
The folks at Unity have been working a lot on a performance-oriented subset of C#. I don't know exactly how tailored their solution is to real-time applications, but since a game engine has to draw 60 frames a second without skipping too many of them, you may find something interesting there.
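As a side note, the ".NET 4+ ability to reserve heap and suspend GC" mentioned in the quoted post maps to GC.TryStartNoGCRegion / GC.EndNoGCRegion (available from .NET Framework 4.6 and .NET Core). A minimal sketch, with a placeholder budget and callback:

```csharp
using System;
using System.Runtime;

static class RealtimeBlock
{
    public static void RenderBuffer(float[] buffer)
    {
        // Pre-allocate enough heap so no collection can happen inside the region;
        // this throws if the runtime cannot reserve the requested budget.
        bool entered = GC.TryStartNoGCRegion(16 * 1024 * 1024);
        try
        {
            // ... fill the audio buffer without risking a GC pause ...
        }
        finally
        {
            // Only end the region if we are still in it (an induced or
            // budget-exceeding collection can end it for us).
            if (entered && GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                GC.EndNoGCRegion();
        }
    }
}
```

GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency is the softer alternative when a hard no-GC window can't be guaranteed.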
-
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background. What I'd have liked to see, at the very least, is a segregated heap that wasn't collected, and an ability to suspend background collection. You can kind of hack something like it into there using the large object heap and also using .NET 4+'s ability to reserve heap and suspend GC but it's non-optimal. See, I'd really like to write VST plugins in C# for example, and while there are offerings to do so, they are not realtime. They are kinda realtime. Not good enough for live music performance. Instead I'm forced to do it in something like C++ or *gasp* Delphi, which is costlier/more time consuming to write solid code with. I'd be okay with C# code blocks (similar to unsafe) where realtime code could run but apparently that's too much to ask. Also, I love garbage collection. Don't get me wrong. I even used it in C++ ISAPI server apps (using Boehm collector) for my strings in order to avoid heap fragmentation - in the right areas it can even improve performance.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
codewitch honey crisis wrote:
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background.
Windows is not a real-time OS.
codewitch honey crisis wrote:
*gasp* Delphi, which is costlier/more time consuming to write solid code with.
I find .NET more verbose than Delphi. Want realtime, check out QNX.
Bastard Programmer from Hell :suss: If you can't read my code, try converting it here[^] "If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
-
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background. What I'd have liked to see, at the very least, is a segregated heap that wasn't collected, and an ability to suspend background collection. You can kind of hack something like it into there using the large object heap and also using .NET 4+'s ability to reserve heap and suspend GC but it's non-optimal. See, I'd really like to write VST plugins in C# for example, and while there are offerings to do so, they are not realtime. They are kinda realtime. Not good enough for live music performance. Instead I'm forced to do it in something like C++ or *gasp* Delphi, which is costlier/more time consuming to write solid code with. I'd be okay with C# code blocks (similar to unsafe) where realtime code could run but apparently that's too much to ask. Also, I love garbage collection. Don't get me wrong. I even used it in C++ ISAPI server apps (using Boehm collector) for my strings in order to avoid heap fragmentation - in the right areas it can even improve performance.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
You will never have true realtime. There are always unexpected delays, from cache misses to virtual memory paging operations. The systems getting closest to RT are the old ones: the first machine I laid my hands on was a 1976-design 16-bit mini (i.e. the class of the PDP-11). It had 16 hardware interrupt levels - that is, 16 complete sets of hardware registers - so when an interrupt occurred at a higher level than the one currently executing, the register bank was switched in and the first instruction executed 900 ns after the interrupt signal was received. That was certainly impressive in 1976, but the CPU was hard logic throughout, with no nasty pipelines to be emptied, no microcode loops that had to terminate, and usually no paging or virtual memory. That got you close to RT. You won't see anything that compares today, even without GC.

In those days, people got shivers from the thought of writing any system software in a high level language. Lots of people shook their heads in disbelief over Unix written in K&R C: it could never work, could never give good enough performance. I have saved a printout of a long discussion from around 1995, when NetNews was The Social Media: this one guy insisted, at great length, that high level languages were nothing but a fad that would soon go away; they would never give high enough performance. (In 1995, he didn't get much support from the community, but he never gave in to the pressure.)

Using a high level language takes you one step away from RT. Using a machine with virtual memory is another step (even if your program is fixed in memory, the OS may be busy managing memory for other processes, with interrupts disabled). Dependency on pipelines and cache hit rates is yet another step. If you require really hard RT, you must stay away from such facilities - probably from any modern CISC CPU.

You probably do not have hard RT requirements; you can live with a cache miss, or the OS updating page tables for other processes. You should design your code to be able to handle as large random delays as possible. Networking guys know the techniques, like window mechanisms and elastic buffers. If you do things the right way, I am not willing to believe that a modern multi-GHz six-core CPU's ability to run a VST plugin sufficiently close to RT is ruined by the CLR GC!

I grew up with high level languages, but knowing that RT "had to" be done in assembly. But soon I also learned how smart optimizing compilers can be, and gave up my belief in assembly.
-
You will never have true realtime. There are always unexpected delays, from cache misses to virtual memory paging operations. The systems getting closest to RT are the old ones: the first machine I laid my hands on was a 1976-design 16-bit mini (i.e. the class of the PDP-11). It had 16 hardware interrupt levels - that is, 16 complete sets of hardware registers - so when an interrupt occurred at a higher level than the one currently executing, the register bank was switched in and the first instruction executed 900 ns after the interrupt signal was received. That was certainly impressive in 1976, but the CPU was hard logic throughout, with no nasty pipelines to be emptied, no microcode loops that had to terminate, and usually no paging or virtual memory. That got you close to RT. You won't see anything that compares today, even without GC.

In those days, people got shivers from the thought of writing any system software in a high level language. Lots of people shook their heads in disbelief over Unix written in K&R C: it could never work, could never give good enough performance. I have saved a printout of a long discussion from around 1995, when NetNews was The Social Media: this one guy insisted, at great length, that high level languages were nothing but a fad that would soon go away; they would never give high enough performance. (In 1995, he didn't get much support from the community, but he never gave in to the pressure.)

Using a high level language takes you one step away from RT. Using a machine with virtual memory is another step (even if your program is fixed in memory, the OS may be busy managing memory for other processes, with interrupts disabled). Dependency on pipelines and cache hit rates is yet another step. If you require really hard RT, you must stay away from such facilities - probably from any modern CISC CPU.

You probably do not have hard RT requirements; you can live with a cache miss, or the OS updating page tables for other processes. You should design your code to be able to handle as large random delays as possible. Networking guys know the techniques, like window mechanisms and elastic buffers. If you do things the right way, I am not willing to believe that a modern multi-GHz six-core CPU's ability to run a VST plugin sufficiently close to RT is ruined by the CLR GC!

I grew up with high level languages, but knowing that RT "had to" be done in assembly. But soon I also learned how smart optimizing compilers can be, and gave up my belief in assembly.
See my other reply where I said that in this case I don't care about it being technically an RTOS; I care about being able to play live music with it.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
codewitch honey crisis wrote:
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background.
Windows is not a real-time OS.
codewitch honey crisis wrote:
*gasp* Delphi, which is costlier/more time consuming to write solid code with.
I find .NET more verbose than Delphi. Want realtime, check out QNX.
Bastard Programmer from Hell :suss: If you can't read my code, try converting it here[^] "If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
I should have qualified that; everyone is getting on me about it. I don't care about RTOS stuff. I care about being able to play live music. Realtime enough for that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
the parsing doesn't return a number, it returns a DiceSet object which is a collection of Dice structures. Both DiceSet and Dice have a Roll() method and a nice ToString() implementation. It might be an evaluator, but it evaluates to an object, not a simple number.
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
Okay, that's fine, it just means the Eval method is more like a BuildDice method
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
See my other reply where I said that in this case I don't care about it being technically an RTOS; I care about being able to play live music with it.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
All I am saying is: If you can't play your music, it is not the fault of the GC. I am not willing to believe that.
LOL yeah dude, it is. I've tested it, and I've tested it again using the critical region feature of .NET 4 to suspend the GC, so you can believe whatever you want. I'll believe my tests, thanks.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
Show me where finalized objects get collected before proc exit in a real-world scenario. Except server apps - but if you're using unmanaged resources directly from a webserver, I hate you. In application code, the GC calls the finalizer just before proc exit. Show me where it doesn't. Contrive a scenario, even. It won't slow down dramatically until reboot. The kernel keeps a list of kernel handles per process around; Win32 does indeed clean them up when the process exits. Your HBITMAP will be around until proc exit, not until reboot. And *it would be anyway* - at least in my tests - because Finalize doesn't get called until proc exit anyway.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
What are you talking about? The "call Dispose for sure" pattern recommends writing a destructor, so if your object isn't disposed it will at least free its resources in the finalizer. If Dispose is called, the code in the finalizer is redundant and suppressed. You do this for base classes only; for derived classes you use the simple Dispose pattern. I work a lot with hardware and unmanaged resources, and my finalizers are called all the time; no one exits the application just to free memory and system resources, but coders forget to dispose (mostly implicitly, by not using a using-block)... And I'm talking here about backend and frontend. And: many resources will be held by the OS until you reboot... So, can I understand why in your experience it "doesn't matter"? I can only guess you write a very specific type of software; if memory is not your problem I'm fine with that, but don't recommend that ignorance of memory management in .NET to others...
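For readers who haven't seen it, this is the standard base-class Dispose pattern the post describes; the handle field here is just a stand-in for a real unmanaged resource:

```csharp
using System;

class HardwareHandle : IDisposable
{
    IntPtr _handle;      // placeholder for an unmanaged resource
    bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // finalizer work is now redundant
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release the unmanaged resource on both paths
        _handle = IntPtr.Zero;
        _disposed = true;
    }

    // Safety net: only runs if a caller never called Dispose.
    ~HardwareHandle() { Dispose(false); }
}
```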
-
LOL yeah dude, it is. I've tested it, and I've tested it again using the critical region feature of .NET 4 to suspend the GC, so you can believe whatever you want. I'll believe my tests, thanks.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
I'm not into making VSTs - but my wife uses them a lot in Cubase ;) - so you're saying projects like [GitHub - obiwanjacobi/vst.net: Virtual Studio Technology (VST) for .NET. Plugins and Host support.](https://github.com/obiwanjacobi/vst.net) don't work because of the GC? I don't know anything about this project, I was just curious whether you know it. Btw: you know you can handle memory yourself in the latest .NET (some missing features were added).
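On the "handle memory yourself in the latest .NET" point, a small sketch of the allocation-avoiding features that were added (Span&lt;T&gt;, stackalloc, ArrayPool&lt;T&gt;), which help keep an audio callback from producing garbage; the buffer sizes here are arbitrary:

```csharp
using System;
using System.Buffers;

static class BufferDemo
{
    public static void Process()
    {
        // Small scratch space on the stack: never touches the GC heap.
        Span<float> scratch = stackalloc float[256];
        scratch.Fill(0f);

        // Larger buffers rented from a shared pool and returned, so
        // steady-state processing allocates nothing.
        float[] block = ArrayPool<float>.Shared.Rent(4096);
        try
        {
            // ... fill and consume block ...
        }
        finally
        {
            ArrayPool<float>.Shared.Return(block);
        }
    }
}
```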
-
I'm not into making VSTs - but my wife uses them a lot in Cubase ;) - so you're saying projects like [GitHub - obiwanjacobi/vst.net: Virtual Studio Technology (VST) for .NET. Plugins and Host support.](https://github.com/obiwanjacobi/vst.net) don't work because of the GC? I don't know anything about this project, I was just curious whether you know it. Btw: you know you can handle memory yourself in the latest .NET (some missing features were added).
Yes. As a matter of fact, I'm saying that. It's one of the reasons I abandoned it and went back to writing them in C++. Maybe if you have an 8-core monster the lag from the GC won't kill it, but on my little i5-2700 it certainly does.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
What are you talking about? The "call Dispose for sure" pattern recommends writing a destructor, so if your object isn't disposed it will at least free its resources in the finalizer. If Dispose is called, the code in the finalizer is redundant and suppressed. You do this for base classes only; for derived classes you use the simple Dispose pattern. I work a lot with hardware and unmanaged resources, and my finalizers are called all the time; no one exits the application just to free memory and system resources, but coders forget to dispose (mostly implicitly, by not using a using-block)... And I'm talking here about backend and frontend. And: many resources will be held by the OS until you reboot... So, can I understand why in your experience it "doesn't matter"? I can only guess you write a very specific type of software; if memory is not your problem I'm fine with that, but don't recommend that ignorance of memory management in .NET to others...
It looks like the behavior in newer .NET is different than back when I tested this (.NET 2), so I stand corrected. As I told Super Lloyd, his tests do indeed show the finalizer being called, so my mind has already been changed on the matter. I don't use unmanaged resources directly in .NET and haven't since about 2008 or so, so it hasn't been a problem for me, and I hadn't really updated my information on the matter.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
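For reference, a tiny console repro of the kind of test being discussed: once the instance is unreachable, a forced collection runs its finalizer well before process exit.

```csharp
using System;

class Finalizable
{
    ~Finalizable() { Console.WriteLine("finalizer ran"); }
}

class Program
{
    // The instance becomes unreachable when this method returns.
    static void MakeGarbage() { var x = new Finalizable(); }

    static void Main()
    {
        MakeGarbage();
        GC.Collect();                     // collect the unreachable instance
        GC.WaitForPendingFinalizers();    // block until its finalizer has run
        Console.WriteLine("still running, long before exit");
    }
}
```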
-
I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background. What I'd have liked to see, at the very least, is a segregated heap that wasn't collected, and an ability to suspend background collection. You can kind of hack something like it into there using the large object heap and also using .NET 4+'s ability to reserve heap and suspend GC but it's non-optimal. See, I'd really like to write VST plugins in C# for example, and while there are offerings to do so, they are not realtime. They are kinda realtime. Not good enough for live music performance. Instead I'm forced to do it in something like C++ or *gasp* Delphi, which is costlier/more time consuming to write solid code with. I'd be okay with C# code blocks (similar to unsafe) where realtime code could run but apparently that's too much to ask. Also, I love garbage collection. Don't get me wrong. I even used it in C++ ISAPI server apps (using Boehm collector) for my strings in order to avoid heap fragmentation - in the right areas it can even improve performance.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Hi! Let me share with you the techniques we have used to get around some of the drawbacks you describe. Our solution consumes a LARGE amount of UDP traffic (MPEG2 streams) - over 200 Mbps of data. It also performs some heavy DSP processing and taxes the garbage collector heavily. Being UDP, with no retransmission protocol available, we are very sensitive to reading the data on time if we want to keep the quality of the video streams high. Reading the UDP traffic with managed code was a disaster. Basically, we take advantage of the fact that an unmanaged thread is NOT suspended during garbage collection.

1. We used C++/CLI to write the multicast UDP reader code inside a non-managed class. We ensure that CLR support is disabled for the .cpp file implementing the code that we don't want interrupted during garbage collection. This makes sure the code is compiled to native code; otherwise it may get compiled to IL and the unmanaged thread may be blocked once it transitions into IL space. Even with the unmanaged pragma, the compiler creates managed thunks around the unmanaged code, and we want to avoid IL completely for this generated code. The unmanaged thread that reads the UDP traffic (using RIO for higher performance) runs inside this class. Note that unfortunately we can't take advantage of the .NET Framework in this class, so we rely on Boost.

2. We keep the unmanaged code as simple as possible: basically we loop reading UDP packets and enqueue them. However, we make sure to use a lockless queue for this purpose (we use a Boost lockless queue). This is vital because there will be a managed thread consuming the queue, bridging the data into the managed world. This consumer thread will be suspended during GC activity, and we don't want it suspended while holding a lock on the queue (otherwise the unmanaged thread may block contending for the lock). Another plus is that by using a lockless queue we become immune to thread priority inversion, so we can boost the producer thread's priority to the highest level possible.

3. Using C++/CLI we produce a .NET-friendly class. This class owns and instantiates the unmanaged class, and also implements the managed queue-consumer thread. It can seamlessly consume the Boost lockless queue and expose the unmanaged memory (the UDP packets) in a managed-friendly way. Now, no matter if the managed world is suspended, the unmanaged thread will keep filling the lockless queue.
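A hedged sketch of the managed-consumer half of such a bridge, using P/Invoke into a hypothetical native reader DLL rather than the C++/CLI mixed-mode approach described above; the DLL name and its exports are made up for illustration:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

static class UdpBridge
{
    // Assumed exports of a native reader DLL -- names are illustrative only.
    [DllImport("NativeUdpReader.dll")] static extern void StartReader(int port);
    [DllImport("NativeUdpReader.dll")] static extern int TryDequeuePacket(byte[] buffer, int capacity);

    public static void Run(int port)
    {
        StartReader(port);                          // native thread keeps reading even during GC
        var buffer = new byte[64 * 1024];           // reused to avoid per-packet allocations
        var consumer = new Thread(() =>
        {
            while (true)
            {
                int len = TryDequeuePacket(buffer, buffer.Length);
                if (len > 0)
                    ProcessPacket(buffer, len);     // hand off to managed DSP code
                else
                    Thread.Sleep(0);                // queue momentarily empty
            }
        });
        consumer.IsBackground = true;
        consumer.Priority = ThreadPriority.AboveNormal;
        consumer.Start();
    }

    static void ProcessPacket(byte[] data, int length) { /* managed processing */ }
}
```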
-
I wasn't going to tell him. Anyone who uses finalizers needs to be dragged into the street and summarily shot, with witnesses, so nobody *ever* makes the same mistake. There's a special hell where they keep the guy who designed them. It's below Building 8 on the Microsoft campus.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
LOL! I couldn't agree more. In my mind they should have gone all the way with 100% automated reference counting on reference assignment (à la Visual Basic), with a lightweight garbage collector relegated to detecting and breaking circular references (something Visual Basic couldn't do) and, of course, compacting memory. This would have offered automatic, truly deterministic destruction while preventing fragmentation. But most importantly, it would have avoided the dreaded Dispose pattern (along with the related finalizer travesty) entirely. Somewhere I remember reading they avoided this route mainly for performance reasons - what a costly decision in retrospect.
-
Hi! Let me share with you the techniques we have used to get around some of the drawbacks you describe. Our solution consumes a LARGE amount of UDP traffic (MPEG2 streams) - over 200 Mbps of data. It also performs some heavy DSP processing and taxes the garbage collector heavily. Being UDP, with no retransmission protocol available, we are very sensitive to reading the data on time if we want to keep the quality of the video streams high. Reading the UDP traffic with managed code was a disaster. Basically, we take advantage of the fact that an unmanaged thread is NOT suspended during garbage collection.

1. We used C++/CLI to write the multicast UDP reader code inside a non-managed class. We ensure that CLR support is disabled for the .cpp file implementing the code that we don't want interrupted during garbage collection. This makes sure the code is compiled to native code; otherwise it may get compiled to IL and the unmanaged thread may be blocked once it transitions into IL space. Even with the unmanaged pragma, the compiler creates managed thunks around the unmanaged code, and we want to avoid IL completely for this generated code. The unmanaged thread that reads the UDP traffic (using RIO for higher performance) runs inside this class. Note that unfortunately we can't take advantage of the .NET Framework in this class, so we rely on Boost.

2. We keep the unmanaged code as simple as possible: basically we loop reading UDP packets and enqueue them. However, we make sure to use a lockless queue for this purpose (we use a Boost lockless queue). This is vital because there will be a managed thread consuming the queue, bridging the data into the managed world. This consumer thread will be suspended during GC activity, and we don't want it suspended while holding a lock on the queue (otherwise the unmanaged thread may block contending for the lock). Another plus is that by using a lockless queue we become immune to thread priority inversion, so we can boost the producer thread's priority to the highest level possible.

3. Using C++/CLI we produce a .NET-friendly class. This class owns and instantiates the unmanaged class, and also implements the managed queue-consumer thread. It can seamlessly consume the Boost lockless queue and expose the unmanaged memory (the UDP packets) in a managed-friendly way. Now, no matter if the managed world is suspended, the unmanaged thread will keep filling the lockless queue.
Yeah, that's the Microsoft-recommended way, I think - either that, or using classic unmanaged C++ and just calling into it, but marshalling can be a problem depending on performance needs, so mixed-mode/managed may be the way to go. The other option would probably be to write a custom marshaller for those methods, if you really don't want to mix managed and unmanaged code in the same assembly. I'd share it, because people may not be aware of the technique. As far as the UDP goes, I'm curious whether you use any sort of QoS method on your UDP streaming? uTP does, though to the opposite ends and goals as yours, but using something like it might smooth playback during network traffic spikes.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
LOL! I couldn't agree more. In my mind they should have gone all the way with 100% automated reference counting on reference assignment (à la Visual Basic), with a lightweight garbage collector relegated to detecting and breaking circular references (something Visual Basic couldn't do) and, of course, compacting memory. This would have offered automatic, truly deterministic destruction while preventing fragmentation. But most importantly, it would have avoided the dreaded Dispose pattern (along with the related finalizer travesty) entirely. Somewhere I remember reading they avoided this route mainly for performance reasons - what a costly decision in retrospect.
Totally agree. Machines aren't what they were. Take the hit; the code is already managed. Besides, I'd rather have something slower and regular than something faster that spikes here and there, even if I needed raw performance. Consistency in streaming data is usually a bit more important than raw throughput, but YMMV depending on the scenario and all, of course; simply my opinion. I think it applies to running code as well.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
Nothing in Windows is truly realtime - and I never used heaps in realtime apps because of the fragmentation problem. When you are designing an inkjet printer that should just run for years on a conveyor belt with only the ink cartridge needing changing, you can't risk fragmentation or you'll miss product. Windows was never a choice for that! :laugh:
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
The only thing that allows Windows to do anything in "realtime" is that machines are far faster than the many minicomputers once used for realtime work (Xerox Sigma series, Systems Engineering Labs, Modcomp, Harris, Interdata, etc.). Those machines had a variety of hardware features that supported quick context switching (think multiple register sets, multiple memory maps, bit-level instructions, etc.), and huge numbers of priority interrupt levels to facilitate realtime processing (at least one machine had 127 levels of priority interrupt). Usually it was a simple matter to use an interrupt to trigger a process to go do something quickly in response to it. Once the PC came out, these machines quickly died, as eventually did the VAX, and the market for realtime work was relegated to embedded processors and realtime kernels. In their infinite wisdom the NT designers didn't allow interrupts to do ANY significant processing; they handed that processing off to something called the Dispatch Priority Level, a sort of netherworld between the interrupts and the OS.
-
I just don't have a good enough justification to use them, because they don't prevent resource leaks - Win32 does that, as I said. I've never seen .NET even call finalizers, 99% of the time, until just before process exit. Which means your VB developer who isn't using anything like using(var brush = ...) is still creating a ton of handles that will remain uncollected for the lifetime of the app. And the GC has no way of knowing when GDI is out of handles, so it just lets them get eaten, even with your finalizers, until your proc exits - at which point Win32 cleans up anyway. Now tell me I'm wrong about any of this?
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
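To illustrate the using-block point, the deterministic version looks like this; with the using removed, the native GDI handle behind the brush lives on until a finalizer (or process exit) reclaims it:

```csharp
using System.Drawing;

static class PaintDemo
{
    public static void Draw(Graphics g)
    {
        using (var brush = new SolidBrush(Color.CornflowerBlue))
        {
            g.FillRectangle(brush, 10, 10, 100, 50);
        }   // brush.Dispose() runs here, freeing the native handle immediately
    }
}
```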