Not Much to 'C' in C#
-
I am learning C# right now, and I like some aspects of the language, especially delegates and event handling. Of course, C++ has function objects, but delegates are (probably) easier to use. However, there are several things that I dislike in C#:

1. Lack of support for generic programming. We may agree or not that template syntax is far from perfect, but generic programming is so powerful that there is no excuse for omitting it from the very beginning. And promises about including generics in some future version :rolleyes: can only motivate programmers to refrain from using C# until the next version.

2. Object lifetime control. I think that garbage collectors in general are a very bad idea. For those who tend to forget to free objects from memory, reference counting may be a better solution. Of course, this is just my opinion. :-D

3. Lack of multiple inheritance. Yes, I know the arguments from the Java world, but multiple inheritance is still one of the important features of OO programming, and it should not be left out.

4. The "Java philosophy" of the language. Java (and now C#) designers seem to think that programmers are generally stupid people who should not be allowed to make mistakes, so the most powerful aspects of C++ are simply left out. While I agree that some programmers (perhaps too many of them) tend to misuse the power of C++, I still think that this should be solved by better project management, not by cutting off the best features of C++.

I vote pro drink X|
I'd like to discuss a few of these:

>> 1. Lack of support for generic programming. We may agree or not that template syntax is far from perfect, but generic programming is so powerful that there is no excuse for omitting it from the very beginning.

There was just no way to get generics into the current version. They will likely show up in a future version, though not necessarily the next version. Since you can use the "object" type as a "poor man's generics" in C# and get a similar effect, generics aren't as critical in C# as in C++. You do lose a few things, however: 1) You have to write casts from object to the type that you want, though this occurs less than you'd think due to the use of foreach. 2) You get run-time type checking on these casts rather than compile-time type checking. 3) You box value types in collection classes. I don't find these to be tremendously important issues. #1 and #2 are easy to get used to, and #3 isn't usually a problem; you can get around it with some effort by writing a type-specific collection, though that's a pain.

>> 2. Object lifetime control. I think that garbage collectors in general are a very bad idea. For those who tend to forget to free objects from memory, reference counting may be a better solution. Of course, this is just my opinion. :-D

I think most C++ programmers have this attitude towards GC; they hate any loss of control. I know I did when I first started working on C#. But over time, I've realized that for the code I write, I don't care that much about exactly how memory is used, and the GC saves me tons of time and gives me programs that are more likely to be correct. Reference counting can't detect cycles, among other things. There are still some issues dealing with non-memory resources that require more thought than we'd like, but overall it's a very easy model to work in.

>> 3. Lack of multiple inheritance. Yes, I know the arguments from the Java world, but multiple inheritance is still one of the important features of OO programming, and it should not be left out.

I have mixed feelings about this; from a language perspective, I'm not sure the advantages of MI are worth the increase in complexity (of the language, compilers, debugging task, etc.). Interfaces cover some of the things you'd want to do with MI. That aside, the runtime team couldn't come up with a way to do MI without penalizing the SI case. Even if they could, it's tough to decide what MI means, since different languages have different d
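[Editor's note] A minimal sketch of the "poor man's generics" approach described in the reply above (not taken from the original posts): ArrayList stores everything as object, so value types are boxed on the way in and casts are needed on the way out, with any type error surfacing only at run time.

    using System;
    using System.Collections;

    class PoorMansGenerics
    {
        static void Main()
        {
            // ArrayList stores object references, so it can hold anything...
            ArrayList numbers = new ArrayList();
            numbers.Add(42);           // the int 42 is boxed into an object
            numbers.Add(7);

            // ...but the type check happens at run time, not compile time.
            int sum = 0;
            foreach (int n in numbers) // foreach inserts the cast/unbox for us
                sum += n;

            // Without foreach the cast is explicit; a wrong type throws
            // InvalidCastException at run time instead of failing to compile.
            int first = (int)numbers[0];

            Console.WriteLine("sum = {0}, first = {1}", sum, first);
        }
    }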
-
C# was a language that was supposed to be a mixture of VB and VC++, but to me it looks like VB, because of the loss of control. C# is not a very powerful language; you can't even write an OS in it.

Visit Ltpb.8m.com
Surf the web faster than ever: http://www.404Browser.com
>> C# is not a very powerful language; you can't even write an OS in it.

You're correct, you can't write an OS in it. C# is designed to be a language that targets the .NET platform. If you don't like managed environments, then C# isn't for you. I understand your point about power; sometimes you really need it, but it's not the be-all and end-all of a computer language.
-
I wonder if WinXP was written in C#. ;)

Visit Ltpb.8m.com
Surf the web faster than ever: http://www.404Browser.com
-
It is true that C# is nothing like VB. However, it's also true that it aims to be an alternative to VB rather than to C++. Remember Microsoft's slogan: "Ease of use of VB, with raw power of C++"? Of course, for big and complex projects, VB is hardly easier to use than C++, but that's another story.

I vote pro drink X|
"Of course, for big and complex projects, VB is hardly easier to use than C++, but that's another story. " I second that!!! Anything more than 1 week's work would be better if done in C++...
-
>> Reference counting can't detect cycles, among other things.

This is a very poor argument against ref counting. I have been programming in C/C++ for more than 6 years, and the only occasions I've run into cyclic structures have been hand-made lists and the like, which you can now forget about thanks to the STL containers. Be honest: have you ever had a memory leak due to a pair of mutually referencing structs?

The problem with GC is not the issue of memory management itself (in that regard I see it as good an option as ref counting). The problem is that GC forces you to abandon RAII, which, in my opinion, is the single idiom that can prevent most resource leaks among the tools a programmer can count on in C++.

Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
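[Editor's note] For readers who haven't met the trade-off: where C++ RAII ties cleanup to scope exit, a garbage-collected language like C# pushes you toward explicit try/finally release. A minimal sketch, assuming a hypothetical ScarceResource class standing in for anything with a Close-style method:

    using System;

    // Hypothetical wrapper around some non-memory resource (file lock,
    // database connection, GDI handle, ...).
    class ScarceResource
    {
        public void Use()   { Console.WriteLine("using the resource"); }
        public void Close() { Console.WriteLine("resource released"); }
    }

    class RaiiVersusFinally
    {
        static void Main()
        {
            ScarceResource r = new ScarceResource();
            try
            {
                r.Use();       // may throw
            }
            finally
            {
                r.Close();     // runs whether or not Use() threw; in C++, a
                               // destructor would do this automatically when
                               // r leaves scope.
            }
        }
    }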
-
Yes, anyone who's written complex object frameworks has had to deal with cyclic references. It's a real issue, not an academic argument. Many GC frameworks have started out using reference counts to track object lifetimes, very quickly run into cases where cyclic constructs caused severe memory leaks, and so moved to more advanced algorithms. Regardless, as I pointed out earlier, reference counting *IS* a form of garbage collection. So if you ever use reference counting but have a problem with the concept of GC, you really need to think carefully about why.

Now, the real complaint you have is stated in this post: "The problem is that GC forces you to abandon RAII, which, in my opinion, is the single idiom that can prevent most resource leaks among the tools a programmer can count on in C++." However, this argument is not the same thing. It is quite possible for a language to support both heap based construction with lifetime managed by a GC engine and stack based construction with the lifetime managed by stack unwinding. In fact, MC++ provides this very capability. The real complaint you have with C# and other such GCed languages is that they don't support this concept and instead rely on finally blocks. GC is far superior for memory management, while RAII is superior for other resources that must be reclaimed in a deterministic manner.

William E. Kempf
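[Editor's note] To make the cycle problem from the first paragraph concrete, here is a toy manual reference-counting scheme (the Counted class and AddRef/Release names are hypothetical, not any real framework): two objects that point at each other never see their counts reach zero, so a pure ref-counting collector would never reclaim them, while a tracing GC would.

    using System;

    // Toy manual reference counting, just to show why cycles defeat it.
    class Counted
    {
        public int RefCount = 1;      // owned by whoever created it
        public Counted Other;         // reference that can form a cycle

        public void AddRef()  { RefCount++; }
        public void Release()
        {
            RefCount--;
            if (RefCount == 0)
                Console.WriteLine("object reclaimed");
        }
    }

    class CycleDemo
    {
        static void Main()
        {
            Counted a = new Counted();
            Counted b = new Counted();

            a.Other = b; b.AddRef();   // a keeps b alive
            b.Other = a; a.AddRef();   // b keeps a alive: a cycle

            // The "program" drops its own references...
            a.Release();
            b.Release();

            // ...but both counts are still 1 because of the cycle. Once the
            // locals a and b go away, nothing reachable points at either
            // object, yet a pure ref-counting collector would never free
            // them; a tracing GC, which walks from the roots, would.
            Console.WriteLine("a: {0}, b: {1}", a.RefCount, b.RefCount);
        }
    }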
-
For an in-depth discussion of all the issues, take a look at: http://discuss.develop.com/archives/wa.exe?A2=ind0010A&L=DOTNET&P=R28572
-
Ref counting has an advantage over GC (provided there are no cycles): it is faster, or at least its load is distributed over time, in contrast with GC, which pops up from time to time and momentarily stops the VM. Maybe this is not an issue for many, but it is for, say, game programmers who want sprites moving smoothly all the time. Anyway, I'm no ref counting fanatic :)

>> It is quite possible for a language to support both heap based construction with lifetime managed by a GC engine and stack based construction with the lifetime managed by stack unwinding.

Yes it is, but in such a situation all the magic about GC vanishes. Suppose the keyword "stack" has been defined for allocating objects on the stack. Have a look at this pseudocode:

    class A { ... }

    class B
    {
        A a;
    }

    void f()
    {
        stack A a;    // a is allocated on the stack
        g(a);
    }

    void g(A a)
    {
        B b = new B;  // b is not stack based
        b.a = a;      // a reference to the stack object escapes into the GC heap
        ...
    }

As soon as f returns, b holds an invalid reference to the deallocated object a, and the GC will crash. This is not to say that GC cannot coexist with a stack based allocation scheme, but you have to be careful not to mix both worlds, and the purported aim of GC, not having to worry about memory management, gets lost.

Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
-
First, reference counting *IS* a form of GC, and to distinguish between the two the way you're trying to is wrong. As I've stated, many GC systems are built entirely on ref-counting, resulting in identical performance. Other GC systems mix ref-counting with more complex algorithms, such as mark and sweep, which ensures items are collected immediately unless there's a cycle, and which reduces the overhead of the mark/sweep drastically. Further, you claim problems where the collector "stops the VM". This is obviously a mistake of mixing the GC concept up with the GC implementation in Java. First, there need not be a VM ;). More importantly, sophisticated collectors ensure that a process is not stopped for any length of time, using various techniques including running the collection algorithms in a separate thread (this means the only time other threads may be blocked at all is when they attempt to allocate memory while the collection algorithm is running, and there are even ways to reduce the overhead of this blocking operation). In fact, actual benchmark studies show that sophisticated GC systems often outperform manual or ref-counted systems.

Now, for your argument about mixing types: for languages that support this it's a non-issue. Either you can't get a "pointer" to a stack based object, or the type of such a pointer and the type of a pointer to a heap object are different, and so assignment of the differing types results in a compile time error. There are definite issues with hybrid stack/GC systems, but the problems are generally not as severe as you think and are worth the effort of programmer discipline to avoid them. The gc_ptr<> on this site illustrates a fairly safe way to bring GC to C++ and retain its stack based allocations as well. Since this solution is solely a library solution it does require a little runtime checking in addition to the compile time checks, making it less appealing than a language based solution, but it works.

William E. Kempf
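[Editor's note] A toy illustration of the mark-and-sweep idea mentioned above (a from-scratch sketch with hypothetical Node/Mark names, not how the .NET collector is actually implemented): reachability is computed from a root set, so an unreferenced cycle is collected even though its members point at each other.

    using System;
    using System.Collections;

    // Minimal mark-and-sweep over a toy object graph.
    class Node
    {
        public bool Marked;
        public ArrayList Refs = new ArrayList();   // outgoing references
    }

    class MarkSweepDemo
    {
        static void Mark(Node n)
        {
            if (n == null || n.Marked) return;
            n.Marked = true;
            foreach (Node child in n.Refs)
                Mark(child);
        }

        static void Main()
        {
            ArrayList heap = new ArrayList();        // every allocated node
            Node root = new Node(); heap.Add(root);  // reachable from a root

            Node c1 = new Node(); heap.Add(c1);      // c1 and c2 form a cycle
            Node c2 = new Node(); heap.Add(c2);
            c1.Refs.Add(c2);
            c2.Refs.Add(c1);

            // Mark phase: everything reachable from the roots gets marked.
            Mark(root);

            // Sweep phase: anything unmarked is garbage, cycles included.
            int freed = 0;
            foreach (Node n in heap)
                if (!n.Marked) freed++;
            Console.WriteLine("swept {0} unreachable node(s)", freed);  // 2
        }
    }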
-
Please excuse the looseness of my terminology. To fix the vocabulary, and for the sake of this conversation, understand GC as "a non-RAII-compatible garbage collector". What this all started from is the issue of C#'s GC not allowing for use of the RAII discipline. This is beyond all argument, I think.

Your ideas about adding static type checking support for the coexistence of stack based and GC managed objects (I guess along the lines of C++'s const-ness) are very interesting, and indeed I think they could lead to an efficient language supporting both garbage collection and RAII on a per-object basis, at the programmer's choice. But this is not C#. C# offers you a take-it-or-leave-it non-deterministic GC with which it is impossible to apply RAII. And this is, IMHO, a very bad design decision that will drive novice programmers into more resource leaks than it prevents. The same goes for Java, for that matter.

Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
-
The JIT doesn't have to be done with each invocation. It can be done at installation time. This approach is better than doing it at compile time, since the JIT can perform system optimizations not available to the compiler and can optimize the code for the specific CPU instruction set. Also, not all applications need this amount of optimization in any event. With fast CPUs, a JIT done at first invocation is not likely to be noticeable at run time even for speed-critical applications. If the JIT occurred with every invocation you'd have more of an argument, but that's not what happens.

William E. Kempf
Your argument has merit, but I would re-emphasize two points from my original post:

1) For processor-intensive applications, it's good to optimize at the source-code level for the specific processor. The JIT can't go in and retroactively change the preprocessor #ifdefs. The whole point of something like FFTW or the ATLAS BLAS generators is that they analyze the system and write optimized FORTRAN or C, which can have a much greater effect on speed than optimization of generic FORTRAN or C at either the AST or the code-generation level.

2) It is not uncommon for my applications to take many hours to build with optimizations. My complaint is that if we were looking for the same level of optimization by the JIT, you could imagine an installation taking an hour or two, which would certainly be noticeable.

He was allying himself to science, for what was science but the absence of prejudice backed by the presence of money? --- Henry James, The Golden Bowl
-
The thread started out solely about C#, but it has woven in and out of discussion of .NET and GC in general as well. There was no way for me to determine you were strictly speaking about C# here... and I made it fairly clear that I personally wasn't going to give up C++ for C# because of this (at least for some things).

BTW, your final assertion is simply wrong about C#. Lack of RAII isn't going to lead to more resource leaks than it prevents. There aren't going to be any resource leaks, because objects have finalizers to ensure this. The only issue is with deterministic reclamation. If I create an object that locks a file, for instance, I know that eventually that file is going to be unlocked, because the finalizer ensures that. Unfortunately, I don't know when it will do so, and in fact it may retain the lock much longer than it needs to and cause all kinds of performance problems as other objects try to create their own lock. Or if an object obtains a resource for which there are a limited number, such as window handles, the program may exhaust these resources and not have a way to "automagically" reclaim any not being used any more.

The nice thing about the design of .NET, though, is that MS thought about these issues and gave us some help. There's an IDispose interface, for example, that gives us a common routine for releasing external resources. In C++ this allows an auto_ptr<>-like concept (which is provided, but I don't recall the name) to make use of RAII to ensure the external resources are released in a deterministic manner. This does require some programmer discipline, in that you can't use an object that's been disposed of even if you retain references to it elsewhere, but this is a small issue handled with exceptions and programmer discipline. If you read the article posted elsewhere in this thread (http://discuss.develop.com/archives/wa.exe?A2=ind0010A&L=DOTNET&P=R28572) you'll see that they are even considering language support of this style for C#.

William E. Kempf
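[Editor's note] A minimal sketch of that idea in C#, assuming a hypothetical FileLock class (System.IDisposable is the shipped name of the interface referred to above): the caller gets deterministic release by calling Dispose in a finally block, while the finalizer remains as a non-deterministic safety net.

    using System;

    // Hypothetical wrapper around a file lock (or any external resource).
    class FileLock : IDisposable
    {
        private bool disposed = false;

        public void Dispose()
        {
            if (!disposed)
            {
                Console.WriteLine("lock released deterministically");
                disposed = true;
                GC.SuppressFinalize(this);  // finalizer no longer needed
            }
        }

        ~FileLock()
        {
            // Safety net: runs only when the GC gets around to it.
            if (!disposed)
                Console.WriteLine("lock released by finalizer (late)");
        }
    }

    class DisposeDemo
    {
        static void Main()
        {
            FileLock l = new FileLock();
            try
            {
                // ... work with the locked file ...
            }
            finally
            {
                l.Dispose();    // deterministic, RAII-style cleanup by hand
            }
        }
    }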
-
1) You don't optimize for a specific processor; you optimize the IL ASM code. The JIT then optimizes this ASM code for the specific processor on which it's run. This ensures that a single compile results in optimum performance for all processors, which is something you can't get from traditional compilation systems. There are a few higher level optimizations you might miss this way (those that have to be hand coded... the compiler isn't going to be able to do them), but you gain some other optimizations not possible with static compilation. There's a trade-off, and the performance differences aren't going to be easily quantifiable; they have to be determined on a case by case basis. In the end, though, I'm willing to bet that the VAST majority of applications can and will run at very good speeds under .NET.

2) Most of the optimizations you're speaking of occur from static generation of code produced by templates. The exact same thing can (and will) occur in .NET when it has generics. Such optimizations will still happen at compile time, not run time.

William E. Kempf
-
>> BTW, your final assertion is simply wrong about C#. Lack of RAII isn't going to lead to more resource leaks than it prevents. There aren't going to be any resource leaks, because objects have finalizers to ensure this.

My final assertion is just right, my dear William. Object finalizers are called only when the object is reclaimed by the GC. So imagine the scenario in which you have run out of some sort of resource (say Windows DCs) but there's still plenty of memory. The DCs won't be returned, because the GC sees no need to act, and the program will likely terminate. In fact, one could find the funny situation that the more memory a system has, the more likely a program that doesn't bother with finally clauses is to run out of other types of resources! This I call resource leaking.

Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
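[Editor's note] A sketch of the scenario being described (the DeviceContext class and its handle counter are hypothetical, only there to make the timing visible): objects that rely solely on a finalizer keep their scarce handles until a collection happens, which may be a long time if memory pressure stays low.

    using System;

    // Hypothetical scarce resource released only by a finalizer.
    class DeviceContext
    {
        public static int LiveHandles = 0;       // pretend OS-wide limit

        public DeviceContext()  { LiveHandles++; }
        ~DeviceContext()        { LiveHandles--; }   // runs only at GC time
    }

    class FinalizerOnlyDemo
    {
        static void Main()
        {
            for (int i = 0; i < 1000; i++)
            {
                DeviceContext dc = new DeviceContext();
                // dc becomes garbage here, but its finalizer hasn't run yet,
                // so the "handle" is still counted as in use.
            }

            // With plenty of free memory the GC may not have run at all,
            // so most of the 1000 handles can still be outstanding.
            Console.WriteLine("handles still held: {0}",
                              DeviceContext.LiveHandles);
        }
    }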
-
Actually, in .NET the GC *IS* called in this situation, and your program won't simply terminate. There's even infrastructure in place to help you ensure this is the case for all limited resources. In any event, there's no leak here, because the resource will be reclaimed.

William E. Kempf
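[Editor's note] A sketch of the kind of fallback being described (TryAllocateHandle/AllocateHandle are hypothetical names; GC.Collect and GC.WaitForPendingFinalizers are real framework calls): when a scarce-resource allocation fails, force a collection so finalizers release abandoned handles, then retry once.

    using System;

    class HandleAllocator
    {
        // Hypothetical low-level allocation that fails when the OS (or a
        // quota) runs out of handles; returns IntPtr.Zero on failure.
        static IntPtr TryAllocateHandle()
        {
            return IntPtr.Zero;   // pretend we're out of handles
        }

        // Retry strategy: if allocation fails, collect and run finalizers so
        // that unreachable objects give their handles back, then try again.
        public static IntPtr AllocateHandle()
        {
            IntPtr h = TryAllocateHandle();
            if (h == IntPtr.Zero)
            {
                GC.Collect();                   // find unreachable objects
                GC.WaitForPendingFinalizers();  // let their finalizers run
                h = TryAllocateHandle();        // second (and last) attempt
            }
            return h;
        }

        static void Main()
        {
            IntPtr h = AllocateHandle();
            Console.WriteLine(h == IntPtr.Zero
                ? "still out of handles"
                : "got a handle");
        }
    }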