Gnome and .NET
-
Miguel's response: http://mail.gnome.org/archives/gnome-devel-list/2002-February/msg00042.html Interesting reading. Michael :-)
-
Thanks for the link, fascinating stuff (even though some of it went way over my head. Bonobo? WTF is Bonobo?). These paragraphs really caught my attention:
There is a point in your life when you realize that you have written enough destructors, have spent enough time tracking down a memory leak, have spent enough time tracking down memory corruption, have spent enough time using low-level insecure functions, and have implemented way too many linked lists. The .NET Framework is really about productivity: even if Microsoft pushes these technologies for creating Web Services, the major benefit is increased programmer productivity. Evolution took us two years to develop and at its peak had 17 engineers working on the project. I want to be able to deliver four times as many free software applications with the same resources, and I believe that this is achievable with these new technologies. My experience so far has been positive, and I have first-hand experience of the productivity benefits that these technologies bring to the table. For instance, our C# compiler is written in C#. A beautiful piece of code.
If you, a C++ coder, don't get the point then, sorry to say it, I don't think you ever will, and I won't keep arguing that letting the language handle things like memory management is not a bad thing. It won't make you a poorer programmer. What Miguel wrote there is exactly my sentiment. I think I will put it in my sig and on an FAQ page somewhere on the net and just refer to it when people argue with me :-D regards, Paul Watson Bluegrass Cape Town, South Africa "The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge Sonork ID: 100.9903 Stormfront
-
"I am not the GNOME foundation or control GNOME like Linus controls his kernel, I am just its founder and a contributor" He sounds bitter for some reason :-) Nish Sonork ID 100.9786 voidmain www.busterboy.org If you don't find me on CP, I'll be at Bob's HungOut
-
Miguel wrote: "My only intention is to write applications using the CLI as a development platform, which is really not very exciting for a newspaper to report: 'Programmer to use new compiler, new garbage collector, news at 11'." Which pretty much sums it all up when you think about it. Hehe, whoever thought we'd reach a stage where you could have sensationalist tech headlines? Keep it up Miguel!!!! Err, if he reads CP, that is :-) Regards Senkwe Just another wannabe code junky
-
Nish [BusterBoy] wrote: He sounds bitter for some reason He seems to be hitting a wrong note and turning others against him. He owns a company that has a vision of doing .NET on Linux, but his statements with regard to GNOME seem to be really about the CLI, which he is failing to state clearly. Building CLI support into the OS, or better still the desktop manager, does not make it .NET. Best regards, Paul. Paul Selormey, Bsc (Elect Eng), MSc (Mobile Communication) is currently a Windows open-source developer in Japan.
-
Paul Watson wrote: There is a point in your life when you realize that you have written enough destructors, and have spent enough time tracking down a memory leak, and memory corruption, and have spent enough time using low-level insecure functions, and have implemented way too many linked lists
I'd agree with these statements, except:
1) Instead of destructors, in C# we have try/catch/finally. Is this that much of an improvement?
2) If you're still dealing with low-level insecure functions and doing your own memory management in C++, you deserve all the hell you get. Modern C++ is vectors, strings, lists, and shared_ptrs -- not new[], delete[], and old-style pointers. Old-school C++ is asking for trouble, no doubt about it. But why are developers basing their case for memory management on C and C++ coding practices from the '70s and '80s?
3) The .NET API is where I think a lot of people are seeing increased productivity. Well, the fact is that ANY class library that raises coding to a higher level of abstraction is going to help productivity. I sure as heck don't want to write networking libraries when I can use ACE. I don't want to develop my own graph structures, threading libraries, or tokenizers when I can use Boost. This really doesn't have anything to do with C# or .NET -- it has to do with raising levels of abstraction.
Yes, I agree with one point -- you must do a little more thinking before you embark on a C++ project. You might have to research which class libraries suit your needs. You might have to set coding standards that prevent people from using low-level APIs or C functions. You might have to use some discipline. This is what good software engineers do. CodeGuy The WTL newsgroup: over 1100 members! Be a part of it. http://groups.yahoo.com/group/wtl
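The "modern C++" claim in point 2) can be sketched concretely. This is an illustrative fragment in today's dialect (std::make_shared where the thread's era would have said boost::shared_ptr); total_chars is an invented example function, not code from the thread:

```cpp
#include <memory>
#include <string>
#include <vector>

// Total characters across all names: no new[]/delete[] anywhere.
// The vector owns its elements and frees them when it goes out of
// scope; the shared_ptr releases its object when the last owner dies.
inline int total_chars(const std::vector<std::string>& names)
{
    auto total = std::make_shared<int>(0);  // heap object, never deleted by hand
    for (const auto& n : names)
        *total += static_cast<int>(n.size());
    return *total;                          // shared_ptr cleans up here
}
```

The point of the sketch is simply that no destructor, delete, or manual cleanup appears in user code: ownership is handled entirely by the container and the smart pointer.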
-
:confused: CodeGuy wrote: 1) Instead of destructors, in C# we have try/catch/finally. Is this that much of an improvement? What exactly has instance termination to do with exception handling?!... CodeGuy wrote: 3) The .NET API is where I think a lot of people are seeing increased productivity. Well, the fact is that ANY class library that raises coding to a higher level of abstraction is going to help productivity. I sure as heck don't want to write networking libraries when I can use ACE. I don't want to develop my own graph structures, threading libraries or tokenizers when I can use Boost. This really doesn't have anything to do with C# or .NET -- it has to do with raising levels of abstraction. Yes, but the abstraction layer in the C# language itself is levels higher than C++; there is, e.g., built-in support for the event pattern. And you simply cannot compare class encapsulation of the destruction of internal instance pointers to garbage collection: even though you have all the handles to heap objects collected in one place, you still have to initiate destruction using delete. Jan "It could have been worse, it could have been ME!"
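Jan's remark about built-in event support can be made concrete by sketching what the C++ side has to hand-roll. A minimal sketch only: the Button and Handler names are invented for illustration, and std::function is a post-2002 facility standing in for the function-object machinery of the thread's era:

```cpp
#include <functional>
#include <string>
#include <vector>

// A minimal hand-rolled "event". In C# the event/delegate pair is
// built into the language; in C++ you wire the pattern up yourself.
class Button
{
public:
    using Handler = std::function<void(const std::string&)>;

    // Subscribe a handler (roughly C#'s "+=" on an event).
    void on_click(Handler h) { handlers_.push_back(std::move(h)); }

    // Raise the event: call every subscribed handler in order.
    void click(const std::string& arg)
    {
        for (auto& h : handlers_)
            h(arg);
    }

private:
    std::vector<Handler> handlers_;
};
```

The C++ version works, but the subscription list, the raise loop, and the handler signature are all user code, whereas in C# the compiler generates them from a one-line event declaration.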
-
Hi Jan -- I was referring to two things with the memory management comment. One is that using smart pointers eliminates the need for the new[]/delete[] pair in modern C++. Number two is that the need for destructors is not totally eliminated in C# -- you still have to have some way of closing file handles, database connections, etc. deterministically. From what I've seen, this is accomplished with the try/catch/finally clauses (specifically the finally clause) in C#. I happen to think this is a misuse of exception handling and that the use of destructors in C++ is much clearer. Your point on the event pattern is well taken. However, I'm not sure that I can criticize C++ too strongly on this since it was never designed specifically for Windows programming. I'm not sure I understand your last point. :confused: CodeGuy The WTL newsgroup: over 1100 members! Be a part of it. http://groups.yahoo.com/group/wtl
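The destructor-versus-finally contrast drawn above can be sketched with an RAII wrapper: cleanup runs deterministically at scope exit, including during stack unwinding, with no finally clause in sight. A minimal sketch in today's C++; File and process are invented names, not anything from the thread:

```cpp
#include <cstdio>
#include <stdexcept>

// RAII wrapper: the destructor closes the handle deterministically at
// scope exit, even while an exception is propagating -- the C++
// counterpart of C#'s finally block.
class File
{
public:
    File(const char* path, const char* mode)
        : fp_(std::fopen(path, mode)) {}
    ~File() { if (fp_) std::fclose(fp_); }

    File(const File&) = delete;             // the handle is uniquely owned
    File& operator=(const File&) = delete;

    bool is_open() const { return fp_ != nullptr; }

private:
    std::FILE* fp_;
};

bool process(const char* path)
{
    File f(path, "r");
    if (!f.is_open())
        throw std::runtime_error("cannot open file");
    // ... work with the file; if anything here throws, ~File still
    // runs and the handle is closed -- no try/finally needed.
    return true;
}
```

Every caller of process gets the cleanup for free; the C# Dispose pattern discussed below achieves the same effect, but each call site has to opt in with using or try/finally.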
-
The reason try/catch/finally is used is so that you can close the handle when you're done with it: if your code throws an exception in the try block, the finally will still execute. This is no different from C++: if your object was on the heap instead of the stack, you'd have to use a catch block to close your database handle, since the destructor wouldn't run. This pattern is also used as a means of disposing of resources when you're done with them. To get around the need for destructors (btw, the Finalize method gets called when an object is consumed by the GC), MS has come up with the Dispose pattern.
class myClass : IDisposable
{
    public FileStream file;

    public myClass()
    {
        file = new FileStream(@"C:\myfile.txt", FileMode.Open);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing && file != null)
        {
            file.Close();
            file = null;
        }
    }
}

If you have a COM background you can sorta think of Dispose as Release, minus the ref counting; though if you needed ref counting you could build it in yourself :) There is also the using statement, which handles Dispose for you:
using( myClass mc = new myClass() )
{
    // use the file
}

This compiles down to something very similar to (I can't find my reference for this):
myClass mc = new myClass();
try
{
    // use the file
}
finally
{
    if (mc != null)
        ((IDisposable)mc).Dispose();
}

I'm gonna stop now while there is still a chance someone might actually read this far :-O James Sonork ID: 100.11138 - Hasaki "Not to be confused with 'The VD Project'. Which would be a very bad pr0n flick. :-D" - Michael P Butler Jan. 18, 2002
-
Hi James, Thanks for the response. What I wanted to get across is that it's very hard to make the argument that C# is more productive than modern C++ once you throw finalization into the equation. Having Dispose methods, try/catch/finally, and all the SuppressFinalize calls in your code introduces a significant layer of complexity itself, and it is not at all clear to me that this is significantly more robust than the C++ approach*. Also, since we mentioned exception handling, it's clear to me that only the C++ community has a good grasp on dealing with exceptions. (I'm referring to Sutter's work in Exceptional C++ and More Exceptional C++.) It would certainly be a good idea for the Java and C# developers of the world to look at these two books for correct approaches to exception handling, as I think this is one of the worst-handled aspects of both languages. CodeGuy The WTL newsgroup: over 1100 members! Be a part of it. http://groups.yahoo.com/group/wtl * Except for handling exceptions propagating from COM methods. This is where the C#/CLR approach shines.
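The exception-handling discipline attributed to Sutter's books centers on guarantees such as the strong guarantee: an operation either completes or leaves the object untouched. A copy-and-swap sketch illustrates the idea; the Roster class is invented for illustration, not taken from the thread or the books:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Copy-and-swap: do everything that can throw on a copy, then commit
// with a non-throwing swap, so the object is either fully updated or
// left exactly as it was.
class Roster
{
public:
    void add_checked(const std::string& name)
    {
        if (name.empty())
            throw std::invalid_argument("empty name");
        std::vector<std::string> tmp(names_);  // may throw; original intact
        tmp.push_back(name);                   // may throw; original intact
        names_.swap(tmp);                      // no-throw commit
    }

    std::size_t size() const { return names_.size(); }

private:
    std::vector<std::string> names_;
};
```

If either the copy or the push_back throws, names_ has not been touched, so callers that catch the exception can carry on with a consistent object; the same discipline is just as applicable to Java and C# code.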