Am I wrong?
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better, as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at a low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that low-level picture, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
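(As an aside on that last complaint: the "slowness" of C strings is just that strlen() has to scan the whole buffer looking for the terminating '\0', whereas a length-prefixed string - roughly what Delphi's classic ShortString is - can report its length in constant time. A minimal, purely illustrative C sketch with made-up names:)

    #include <stdio.h>
    #include <string.h>

    /* C string: no stored length, so strlen() must scan for '\0' -- O(n).  */
    /* Length-prefixed string: the length is stored up front -- O(1).       */
    struct pstring {
        unsigned char len;   /* like a classic ShortString: first byte is the length */
        char data[255];
    };

    int main(void) {
        const char *cstr = "hello";
        struct pstring p = { 5, "hello" };
        printf("%zu %d\n", strlen(cstr), p.len);   /* prints: 5 5 */
        return 0;
    }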
C is lower-level (closer to the hardware) than Java and C#. I don't know anything about Delphi so I can't comment on that. For someone learning programming there are two schools of thought. The first is to start with the higher-level concepts using a higher-level language, and gradually delve deeper into how they are implemented and what is going on at the machine level, if necessary. These days, the deeper delving is really not all that necessary unless you are charged with writing very high-performance code. The second school of thought is to start at the low level and learn upwards, gradually abstracting away the lower-level concepts with higher-level ones. This more closely traces the evolution of computers and languages, and if you're really serious, it probably provides the best understanding of the whole ecosystem, but it is a much steeper learning curve. As for my opinion: if I were to recommend a path to someone, I would probably choose the first method, learning the high-level concepts first (probably with a dynamic language like JavaScript) and delving deeper where one is interested. It really depends on how serious the person is about learning computer languages. If they're dead serious, learning from the bottom up will provide the best understanding, but if they're not sure, starting at the top is the best way to discover whether they have a passion for programming or not.
Sad but true: 4/3 of Americans have difficulty with simple fractions. There are 10 types of people in this world: those who understand binary and those who don't. {o,o}.oO( Check out my blog! ) |)"") http://pihole.org/ -"-"-
-
This thread again? Here's how this thread goes: some agree, some disagree, everyone mentions their pet language, and the thread recurses. Your friend is an idiot, as is anyone who says "I don't need to learn [some fundamental concept]".
The concept of "fundamental concepts" varies with time. When I learned Basic, Fortran, Pascal, Cobol and the assembly languages of four different architectures plus MIX (ref. Donald Knuth), "fundamental concepts" included how to handle one's complement vs. two's complement; the order of bits, octets, halfwords and words (like some PDP-11 OS structures with the high-order halfword first but the high byte in each halfword last ... or was it the other way around?); and the advantages and disadvantages of a hidden upper bit in the mantissa of floating-point formats... Kids of today could (or couldn't) care less about normalized mantissas, BCD nibbles and the question of when minus zero is equal to plus zero. And, I must admit, today I don't care that much myself. I do remember that such understanding used to be essential, but it isn't anymore.

Nowadays, I handle integer values without worrying about their binary representation (if I do, it is because I am using them for something other than integer numerical values, which is bad practice in any case!). I handle sets of objects without concern for next-pointers: I add objects to the set, remove objects, traverse the set and so on, without ever seeing a next-pointer. What I do see is whether the set is ordered, whether objects are accessible by keys, and so on.

Sure, knowing what goes on one level below the one you work at is essential. In the days of one's-complement machines it could help you understand why sometimes 0 != 0. Today, when a foreach loop terminated abruptly, my old familiarity with next-pointers was a great help in pinpointing the problem: one object in a DOM structure had been replaced with a new version, and the replacement was done in a code snippet that didn't know the old version was the current object in a foreach iteration, so it was replaced with one whose next-pointer was null. That is an implementation anomaly, just like 0 != 0 is an implementation anomaly. Two's complement fixed the latter; a list implementation that maintains the list through a separate link structure, rather than embedding the next link in the object itself, would have fixed the former. Just as one's complement died out over time, object-embedded next-pointers might die out over time. Then understanding next-pointers might become as irrelevant as understanding the difference between one's and two's complement.

Sometimes I am frustrated by our younger programmers and their lack of understanding of fundamental concepts. And then, when I think it over, I more and more conclude: "Actually, they do not need it for anything at all, given the tools we work with today."
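(To make the next-pointer part concrete: a minimal C sketch - purely illustrative, not the actual DOM code - of how an object-embedded next link can silently truncate a traversal when an object is replaced by a new version whose link was never set:)

    #include <stdio.h>

    /* Intrusive list: the "next" link is embedded in the object itself. */
    struct node {
        int value;
        struct node *next;
    };

    int main(void) {
        struct node c = { 3, NULL };
        struct node b = { 2, &c };
        struct node a = { 1, &b };

        /* Replace b with a new version; the replacing code knows nothing
           about any ongoing iteration, so the new object's link is never set. */
        struct node b_new = { 2, NULL };
        a.next = &b_new;

        /* The traversal now ends early: prints 1 and 2, never reaches 3. */
        for (struct node *p = &a; p != NULL; p = p->next)
            printf("%d\n", p->value);
        return 0;
    }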
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better, as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at a low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that low-level picture, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
You're wrong for thinking that there is one language that's better to learn than any other. But for a first-timer I would choose C++, C# or Java.
CEO at: - Rafaga Systems - Para Facturas - Modern Components for the moment...
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better, as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at a low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that low-level picture, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
Cody227 wrote:
I was always convinced that you need to know how a computer works at a low level to be able to write decent code...
Incorrect. Most developers write code for business applications. Most of the code they write is specific to solving business needs. So understanding the business and the application is how one writes "decent code". It might help to know some aspects of the low-level hardware, but that is not only less relevant than it was in the past, it is often not even possible, depending on what "low level" really means. For example, if you are running your newest server on a cloud host you certainly need to know what '16 GB of memory' means, but it is absolutely useless to know the kind of memory. And if one ends up writing code for Windows, Macintosh, iPhone, Android and Linux, with even some old mainframe adapter code, then attempting to learn everything is impossible.
Cody227 wrote:
But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
Business application performance problems almost never come down to low-level language issues. They are almost always architecture and design problems, even more so in less structured team environments (which are the norm, not the exception).
Cody227 wrote:
And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings.
And are you an expert in every possible technological API that you might reasonably encounter in the modern world? How are your iPhone skills? Done much real-time programming for embedded software on a RAID driver card? What about interfacing to the cash drawer on a PC? Or really creating an XML/XSD that actually supports international data rather than just claiming to? What about optimizing an Oracle database and a MS SQL Server database? And how does one set up a geographically redundant data center (and what are the trade-offs between hosting it yourself and the various cloud possibilities)? The vast, vast array of technologies means it is impossible to be an expert in more than a few. And one is likely to do more damage to one's career by attempting to span several of them than by sticking with a few (for example, embedded real-time drivers versus standard web business applications).
-
The concept of "fundamental concepts" varies with time. When I learned Basic, Fortran, Pascal, Cobol and the assembly languages of four different architectures plus MIX (ref. Donald Knuth), "fundamental concepts" included how to handle one's complement vs. two's complement; the order of bits, octets, halfwords and words (like some PDP-11 OS structures with the high-order halfword first but the high byte in each halfword last ... or was it the other way around?); and the advantages and disadvantages of a hidden upper bit in the mantissa of floating-point formats... Kids of today could (or couldn't) care less about normalized mantissas, BCD nibbles and the question of when minus zero is equal to plus zero. And, I must admit, today I don't care that much myself. I do remember that such understanding used to be essential, but it isn't anymore.

Nowadays, I handle integer values without worrying about their binary representation (if I do, it is because I am using them for something other than integer numerical values, which is bad practice in any case!). I handle sets of objects without concern for next-pointers: I add objects to the set, remove objects, traverse the set and so on, without ever seeing a next-pointer. What I do see is whether the set is ordered, whether objects are accessible by keys, and so on.

Sure, knowing what goes on one level below the one you work at is essential. In the days of one's-complement machines it could help you understand why sometimes 0 != 0. Today, when a foreach loop terminated abruptly, my old familiarity with next-pointers was a great help in pinpointing the problem: one object in a DOM structure had been replaced with a new version, and the replacement was done in a code snippet that didn't know the old version was the current object in a foreach iteration, so it was replaced with one whose next-pointer was null. That is an implementation anomaly, just like 0 != 0 is an implementation anomaly. Two's complement fixed the latter; a list implementation that maintains the list through a separate link structure, rather than embedding the next link in the object itself, would have fixed the former. Just as one's complement died out over time, object-embedded next-pointers might die out over time. Then understanding next-pointers might become as irrelevant as understanding the difference between one's and two's complement.

Sometimes I am frustrated by our younger programmers and their lack of understanding of fundamental concepts. And then, when I think it over, I more and more conclude: "Actually, they do not need it for anything at all, given the tools we work with today."
Good statement. Context is key. Of course, it's better to know these things, but the question of usefulness comes up. If I don't know what a normalized mantissa is, does that mean I am stupid? That I can't code? OTOH, when you know these things, it can sometimes save the day. Is that a reason to stop learning how to program for the phone and pick up assembly? I don't think so.
Where there's smoke, there's a Blue Screen of death.
-
The concept of "fundamental concepts" varies with time. When I learned Basic, Fortran, Pascal, Cobol and the assembly languages of four different architectures plus MIX (ref. Donald Knuth), "fundamental concepts" included how to handle one's complement vs. two's complement; the order of bits, octets, halfwords and words (like some PDP-11 OS structures with the high-order halfword first but the high byte in each halfword last ... or was it the other way around?); and the advantages and disadvantages of a hidden upper bit in the mantissa of floating-point formats... Kids of today could (or couldn't) care less about normalized mantissas, BCD nibbles and the question of when minus zero is equal to plus zero. And, I must admit, today I don't care that much myself. I do remember that such understanding used to be essential, but it isn't anymore.

Nowadays, I handle integer values without worrying about their binary representation (if I do, it is because I am using them for something other than integer numerical values, which is bad practice in any case!). I handle sets of objects without concern for next-pointers: I add objects to the set, remove objects, traverse the set and so on, without ever seeing a next-pointer. What I do see is whether the set is ordered, whether objects are accessible by keys, and so on.

Sure, knowing what goes on one level below the one you work at is essential. In the days of one's-complement machines it could help you understand why sometimes 0 != 0. Today, when a foreach loop terminated abruptly, my old familiarity with next-pointers was a great help in pinpointing the problem: one object in a DOM structure had been replaced with a new version, and the replacement was done in a code snippet that didn't know the old version was the current object in a foreach iteration, so it was replaced with one whose next-pointer was null. That is an implementation anomaly, just like 0 != 0 is an implementation anomaly. Two's complement fixed the latter; a list implementation that maintains the list through a separate link structure, rather than embedding the next link in the object itself, would have fixed the former. Just as one's complement died out over time, object-embedded next-pointers might die out over time. Then understanding next-pointers might become as irrelevant as understanding the difference between one's and two's complement.

Sometimes I am frustrated by our younger programmers and their lack of understanding of fundamental concepts. And then, when I think it over, I more and more conclude: "Actually, they do not need it for anything at all, given the tools we work with today."
Hello, this is my first post here. Looks like a great forum. I really like 7989122's reply, for a few reasons:
1: I think sometimes it's easy to be too reliant on hindsight and deploy it without regard for the environment and its inhabitants. E.g. when I learnt Latin it was helpful and gave me a more fundamental understanding of English. But I learnt to make basic baby sounds first, then picked up English, then Latin. So I learnt some language origins and building blocks in reverse order, because that's what was the go at the time and all I was capable of.
2: The reply doesn't use the term "idiot". I can't see any positive disposition created by using this word. I suspect its source could be self-indulgence. But I don't know; I choose to ignore its intent.
3: As a trainee Citizen Developer who began learning a year ago using Small Basic and has now just started with C#, if I find that I need to, or that it's conducive to achieving my goals, then I'll learn some C. Learning with Small Basic has been great fun, and we have the opportunity to make our own controls, optimise our code, consider efficient resource use and devise crafty workarounds. And once again, it's fun.
Whilst this question and discussion occur often, I think it's helpful for those learning. Thanks for the post.
-
BillWoodruff wrote:
For some people thinking recursively is very natural (they feel at home in LISP).
I learned LISP when I was taking graduate courses in artificial intelligence back in the late '80s. My best description of the experience was removing the top of your skull, rotating your brain counter-clockwise 90°, and reattaching your skull.
Software Zen:
delete this;
:) Amen, Brother Wheeler! I went through a phase of LISP-mania. At one point I spent ten days figuring out how to write a three-line method that took two ints as parameters and created a 2D array in memory. I forget, now, whether it was more than doubly recursive. After successfully deprogramming myself from the "cult of 'car and 'cdr," in sesshins at the Berkeley Zen Center, and by binging on cheap Chinese take-out with extra MSG, and Jolt Cola, until bulimic ... I realized that, in the future, it would take me as much time to revivify my understanding of that three-line solution as it took me to develop it ... and I moved on to ... PostScript, which ... few people ever appreciate this ... is a very LISP-like language with a post-fix notation "front-end" and explicit stacks, wired up to a monster-great graphics model/rendering engine. I do believe that a period of total immersion in an "alternate programming universe," like LISP, Prolog, or even PostScript, can be a valuable part of a programmer's education ... if they have a strong base in a strongly-typed language to begin with. But I am very influenced by the work of the anthropologist of education George Spindler, at Stanford, on the utility of "discontinuities" in education and socialization as catalysts for cognitive development and acculturation. I think frequently getting your own mental chassis torn down to the point where you become all too aware of which nuts go with which bolts, and then reassembled, is downright salubrious :) Merry, Merry, Bill
“I'm an artist: it's self evident that word implies looking for something all the time without ever finding it in full. It is the opposite of saying : 'I know all about it. I've already found it.' As far as I'm concerned, the word means: 'I am looking. I am hunting for it. I am deeply involved.'” Vincent Van Gogh
-
:) Amen, Brother Wheeler! I went through a phase of LISP-mania. At one point I spent ten days figuring out how to write a three-line method that took two ints as parameters and created a 2D array in memory. I forget, now, whether it was more than doubly recursive. After successfully deprogramming myself from the "cult of 'car and 'cdr," in sesshins at the Berkeley Zen Center, and by binging on cheap Chinese take-out with extra MSG, and Jolt Cola, until bulimic ... I realized that, in the future, it would take me as much time to revivify my understanding of that three-line solution as it took me to develop it ... and I moved on to ... PostScript, which ... few people ever appreciate this ... is a very LISP-like language with a post-fix notation "front-end" and explicit stacks, wired up to a monster-great graphics model/rendering engine. I do believe that a period of total immersion in an "alternate programming universe," like LISP, Prolog, or even PostScript, can be a valuable part of a programmer's education ... if they have a strong base in a strongly-typed language to begin with. But I am very influenced by the work of the anthropologist of education George Spindler, at Stanford, on the utility of "discontinuities" in education and socialization as catalysts for cognitive development and acculturation. I think frequently getting your own mental chassis torn down to the point where you become all too aware of which nuts go with which bolts, and then reassembled, is downright salubrious :) Merry, Merry, Bill
“I'm an artist: it's self evident that word implies looking for something all the time without ever finding it in full. It is the opposite of saying : 'I know all about it. I've already found it.' As far as I'm concerned, the word means: 'I am looking. I am hunting for it. I am deeply involved.'” Vincent Van Gogh
That matches my experience. While I don't remember much of the LISP I learned at the time (it was 25 years ago), I do remember how the experience seemed to broaden my approach to things in more 'traditional' languages. I actually used some of the AI techniques later on. My employer never knew, but I had a rule-based parser in 'C' that would find U.S., Canadian, and U.K. Royal Mail postal information in free-form text and create the appropriate bar code.
Software Zen:
delete this;
-
Cody227 wrote:
I was always convinced that you need to know how a computer works at a low level to be able to write decent code...
Incorrect. Most developers write code for business applications. Most of the code they write is specific to solving business needs. So understanding the business and the application is how one writes "decent code". It might help to know some aspects of the low-level hardware, but that is not only less relevant than it was in the past, it is often not even possible, depending on what "low level" really means. For example, if you are running your newest server on a cloud host you certainly need to know what '16 GB of memory' means, but it is absolutely useless to know the kind of memory. And if one ends up writing code for Windows, Macintosh, iPhone, Android and Linux, with even some old mainframe adapter code, then attempting to learn everything is impossible.
Cody227 wrote:
But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
Business application performance problems almost never come down to low-level language issues. They are almost always architecture and design problems, even more so in less structured team environments (which are the norm, not the exception).
Cody227 wrote:
And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings.
And are you an expert in every possible technological API that you might reasonably encounter in the modern world? How are your iPhone skills? Done much real-time programming for embedded software on a RAID driver card? What about interfacing to the cash drawer on a PC? Or really creating an XML/XSD that actually supports international data rather than just claiming to? What about optimizing an Oracle database and a MS SQL Server database? And how does one set up a geographically redundant data center (and what are the trade-offs between hosting it yourself and the various cloud possibilities)? The vast, vast array of technologies means it is impossible to be an expert in more than a few. And one is likely to do more damage to one's career by attempting to span several of them than by sticking with a few (for example, embedded real-time drivers versus standard web business applications).
jschell wrote:
And are you an expert in every possible technological API that you might reasonably encounter in the modern world?
I never claimed to be an expert or even intermediate. Besides, that was not a problem with the API; the problem here is that beginners who learn a very high-level language first do not know what data really is. (It wasn't even explained in various C/C++ books.) In fact, everything in memory is made of the same binary code and you cannot tell what it is. It could be a picture, a string, a number or even executable code. Type checking is only a way to help us remember what we want to do with a given piece of memory. For an experienced coder like you that might be obvious, but for a beginner it's not. IMHO, knowing that is very important no matter what language you choose (not HTML though xD), because it allows you to bypass type checking if you need to. In my example you could use this knowledge to split the pixel array into pieces, cast them to a Pascal string so you can send them with WinSock, and rejoin them after receiving. Of course, you would also need to know that the first byte of a Pascal string holds its length, and that this wouldn't work with null-terminated strings (which is kind of important too, because many WinAPI functions use null-terminated wchar strings). A basic knowledge of the stack, the heap, stack frames, pointers and things like that can be very useful as well (for example, when recursive functions cause a stack overflow or unsafe functions like gets() cause unexplainable behaviour). It also explains why you shouldn't put very big data on the stack and why you should pass by reference when calling functions that need that data.
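(A minimal C sketch of the "memory has no type" point - the pixel value and names are purely illustrative: the same four bytes can be read back as raw bytes no matter what type they were declared as, which is exactly what casting a pixel buffer to a string relies on. The byte order you see depends on the machine, which is one more thing to keep in mind before shoving such bytes through a socket:)

    #include <stdio.h>

    int main(void) {
        /* One 32-bit "pixel"; to the hardware it is just four bytes. */
        unsigned int pixel = 0x00FF8040u;

        /* Viewing any object's storage as raw bytes is always allowed in C. */
        const unsigned char *bytes = (const unsigned char *)&pixel;
        for (size_t i = 0; i < sizeof pixel; ++i)
            printf("%02X ", (unsigned)bytes[i]);   /* on little-endian x86: 40 80 FF 00 */
        printf("\n");
        return 0;
    }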
-
jschell wrote:
And are you an expert in every possible technological API that you might reasonably encounter in the modern world?
I never claimed to be an expert or even intermediate. Besides, that was not a problem with the API; the problem here is that beginners who learn a very high-level language first do not know what data really is. (It wasn't even explained in various C/C++ books.) In fact, everything in memory is made of the same binary code and you cannot tell what it is. It could be a picture, a string, a number or even executable code. Type checking is only a way to help us remember what we want to do with a given piece of memory. For an experienced coder like you that might be obvious, but for a beginner it's not. IMHO, knowing that is very important no matter what language you choose (not HTML though xD), because it allows you to bypass type checking if you need to. In my example you could use this knowledge to split the pixel array into pieces, cast them to a Pascal string so you can send them with WinSock, and rejoin them after receiving. Of course, you would also need to know that the first byte of a Pascal string holds its length, and that this wouldn't work with null-terminated strings (which is kind of important too, because many WinAPI functions use null-terminated wchar strings). A basic knowledge of the stack, the heap, stack frames, pointers and things like that can be very useful as well (for example, when recursive functions cause a stack overflow or unsafe functions like gets() cause unexplainable behaviour). It also explains why you shouldn't put very big data on the stack and why you should pass by reference when calling functions that need that data.
Cody227 wrote:
I never claimed to be an expert or even intermediate.
However, the point is that your example is one single API, and there are a vast number of them. Even as a general concept it will still fail to help with a vast number of APIs/technologies.
Cody227 wrote:
In fact, everything in memory is made of the same binary code and you cannot tell what it is. It could be a picture, a string, a number or even executable code...
"In fact" ...I have been coding for 40 years and started before Java/C# were even ideas and first encountered C++ before there was even the idea of a standard for it. And I have written assembly, created some drivers and written code that wrote/read directly to memory, disk drives and other hardware. I only work on server side applications and even now a great deal of my time is spent interacting with externals systems which have a very wide array APIs. So I don't need a tutorial on what the problem is that you are discussing. But perhaps because of the breadth of of my experience I understand how impossible it is to grasp every possible API variation. The vagaries of different types seldom matters. And keep in mind that my focus/expertise tends to require that I must use far more of these than the average developer. To be fair what matters far more is that it is very likely that a single API/technology is very likely to be inadequately documented. And thus even if one has an idea of how a specific method of an API should be used doesn't guarantee success. Matter of fact is it possible that two methods within one API might differ internally on something that should be the same.
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better, as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at a low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that low-level picture, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
BASIC and assembly were the first languages I used to program, and I thoroughly enjoyed both (though they could be frustrating at times). I recently reminded myself about assembly, binary arithmetic and related topics, simply because I find them enjoyable. I then went on to use C++, then C# (non-Web based), then Java, PHP and JavaScript et al. I enjoyed C++, and still use it on occasion, and got annoyed when I was 'coerced' into switching to C#, but after a while I preferred C#, though I never really got into ASP.NET; I just don't like it. Then I preferred Java, as I could think more about the purpose of the code I was writing and less about syntax or what pointers were doing. Then I went back to playing with assembly, because I enjoy it. However, having done all that, I still feel like a noob, because I have very little commercial experience (and it's not my aim to get much either). So my point is: knowing about binary arithmetic and registers etc. is fun, but it isn't that much use today, except perhaps for the odd creative use of binary operators. That said, a keen programmer will surely learn about these things eventually, just out of curiosity - surely?
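(One small example of that "creative use of binary operators" kind of thing - a throwaway C sketch, nothing more - the classic x & (x - 1) trick:)

    #include <stdio.h>

    /* x is a power of two exactly when it has a single bit set;
       x & (x - 1) clears the lowest set bit, leaving zero in that case. */
    static int is_power_of_two(unsigned int x) {
        return x != 0 && (x & (x - 1)) == 0;
    }

    int main(void) {
        printf("%d %d\n", is_power_of_two(64), is_power_of_two(65));   /* prints: 1 0 */
        return 0;
    }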