Am I wrong?
-
Exactly, right tool for the right job!
VS2010/Atmel Studio 6.1 ToDo Manager Extension
Relax... We're all crazy, it's not a competition!
If your employer confronts you with different jobs requiring different tools at different times - on short notice - then you will need to prepare a generalist skill set in advance. If there is pressure to get a product into test or out the door, you simply won't have the time to pick it up, even if you are a very fast learner. This sort of meltdown can be exacerbated by principals going on holiday, long-term secondment, or elsewhere. Under those circumstances you are likely to be the only spar holding the burning plane's wings on as the bosses try to guide it down to a safe landing. Example: C# is a great language for general-purpose work and rapid prototyping, but being able to mix and match with interop Win32 DLLs - either from the API or ones you have written yourself - is a great skill, and it is picked up by learning C/C++ techniques and the interop techniques in advance. Even old skills like ATL/COM are useful for RPCs and eventing if you are crossing process and OS boundaries. Learning the basics in advance and in your own time would definitely make you an asset, at least in industrial, robotics and engineering contexts. And let's not forget: the madder the company, the more fun there is. :-O
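For instance, the native side of that interop boundary can be tiny. Here is a hypothetical sketch - mathx.dll, AddChecked and the C# declaration in the comment are all made-up names, just to show the shape of a Win32 DLL export you could P/Invoke from C#:

/* mathx.c - hypothetical Win32 DLL export (made-up example).
   A C# caller could declare it along these lines:
   [DllImport("mathx.dll", CallingConvention = CallingConvention.Cdecl)]
   static extern int AddChecked(int a, int b);                          */
#include <limits.h>

__declspec(dllexport) int __cdecl AddChecked(int a, int b)
{
    /* clamp on overflow, just to give the export a real body */
    if (a > 0 && b > INT_MAX - a) return INT_MAX;
    if (a < 0 && b < INT_MIN - a) return INT_MIN;
    return a + b;
}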
-
You seem to believe you are asking a question with a concrete answer. You might as well ask whether it is better to learn the saxophone or the veena, or whether you should wear blue on Thursdays.
Peter Wasser
"Art is making something out of nothing and selling it." - Frank Zappa
I don't always wear blue on Thursdays, but I do like to be smart. I've started 'Dress Up Thursdays', where I turn up in a suit, or shirt and tie and jacket, purely to increase the signal-to-noise ratio with 'Dress Down Friday'. :-D
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that knowledge, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
It all depends on what you are trying to learn and what level you're starting from. C isn't a great language for absolute beginners because it has quite a weird and confusing syntax - there are more instructional languages to teach the basic concepts of programming and ease someone into it. However, if you understand the basics and want to get to grips with lower-level things, then C is at the simpler end of the C-style-language spectrum, and it does allow access to very low-level stuff that is gradually hidden away as you move towards the C# and Java end of things. But if you really want to go low level, then you need to write some assembler, so that when you write your high-level code you understand what the computer is actually doing, and why.

If you want to learn to be a better programmer, then there is no "one" language. Learn at least a bit about as many as you can. Functional programming is a good example: such languages will make you think about the same problem in a very different way, and when you come back to a procedural language you'll write better code because you have a wider view (lateral thinking) of how the problem can be approached.

Lastly, I find the best way to learn is to "just do it". Don't read or re-use someone else's solution; actually sit down and write the whole program yourself. Want to read an XML file? Then write a simple XML parser. The next time you use an XML parser from a library you'll understand what it has to do and why it's so slow. You'll know how you can re-phrase your XML data layout to make the files faster to load, smaller to transfer, and easier to manipulate. As well as this, if you do it, you'll remember it; if you only read it, a lot of it may just fade away, unused.
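To make the "write it yourself" point concrete, here is a toy C sketch that does nothing more than scan out element names - not remotely a real XML parser (no attributes, entities, CDATA or error handling), just an illustration of where such an exercise starts:

#include <stdio.h>
#include <string.h>

/* Toy scanner: print the name of every opening tag in a string. */
static void list_tags(const char *xml)
{
    for (const char *p = strchr(xml, '<'); p; p = strchr(p, '<')) {
        ++p;                                   /* step past '<' */
        if (*p == '/' || *p == '?' || *p == '!')
            continue;                          /* skip closing tags, PIs, comments */
        size_t n = strcspn(p, " \t\r\n/>");    /* name ends at whitespace or '>' */
        if (n > 0)
            printf("element: %.*s\n", (int)n, p);
    }
}

int main(void)
{
    list_tags("<config><db host=\"x\"/><cache size=\"64\"/></config>");
    return 0;   /* prints: config, db, cache */
}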
-
There is no "better than any other" language. You must pick the language that you understand best and that can do the job. Delphi is apparently great for UI stuff (I have only used it a tiny bit) but is a pain otherwise. Java sucks on the PC; you are better off using C# or even, *gasp*, VB than Java - however, on mobile devices it may be a much better fit. For what you describe, I would definitely use C and C++. C is as close to the metal as you can get without using assembler; C++ gives you objects, classes and all the good stuff. When you say zero-terminated strings are "slow", how do you mean? They are one of the fastest implementations of string handling you can get, generally.
- I would love to change the world, but they won’t give me the source code.
Quote:
even, *gasp*, VB than Java
I thought the lounge was supposed to be KSS safe! Go now, we don't want another one of those types here!!
-
I think it "all depends" on how you define the words "better" and "learn." And it depends on your cognitive style: yes, people have different cognitive styles; some are more innately top-down thinkers, others more innately bottom-up thinkers. For some people thinking recursively is very natural (they feel at home in LISP). People's ability to visualize/conceptualize complex processes may vary in terms of the relative salience of "visual thinking" and "abstract thinking." And it depends on context: where you are; the circumstances you are in; the limits, or requirements, of the task at hand, and the time to achieve it in.

imho, a more interesting question to ask is: what type of education would best prepare people for careers as professional programmers? And yes, that takes us right into the briar-patch (in England, would that be a "sticky wicket"?) of what a "professional programmer" is, these days. I'll never forget when I worked at Adobe, as a PostScript shaman, on Illustrator, PhotoShop, Acrobat, Multiple-Master Font Technology, etc.; one day I talked to the young genius Mark Hamburg (who was awarded the Gordon Moore Prize by his peers for his remarkable work on PhotoShop's evolution). I asked Mark what he had majored in at college; he replied he'd majored in Mathematics. I asked him if he had considered Computer Science. Mark said, essentially, that he had already read, and understood, all of Donald Knuth's books while he was in high school, and felt he had nothing more to learn in that area. Unfortunately, most of us are not Mark Hamburgs (and never will be)!

My own journey in the last thirty years has been from 6502 and 6809 assembly language, to Pascal and Basic, to HyperCard, to LISP, to PostScript, to Visual Basic and VBA, and finally to C#. For me C# is perfect: just terse enough, just high-level enough. Rant: if only MS had put some energy into giving WinForms a high-level vector-based 2d retained-mode graphics/drawing engine, instead of going off the deep end into the-next-greatest-thing frenzy with WPF and Silverlight! But all the languages I have studied and used have helped me become the obscure non-entity I am today :)
“I'm an artist: it's self evident that word implies looking for something all the time without ever finding it in full. It is the opposite of saying : 'I know all about it. I've already found it.' As far as I'm concerned, the word means: 'I am looking. I am hunting for it. I am deeply involved.'” Vincent Van Gogh
BillWoodruff wrote:
For some people thinking recursively is very natural (they feel at home in LISP).
I learned LISP when I was taking graduate courses in artificial intelligence back in the late '80s. My best description of the experience was removing the top of your skull, rotating your brain counter-clockwise 90°, and reattaching your skull.
Software Zen:
delete this;
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that knowledge, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
I believe C/C++ (if not assembly) should be the first language to be learned. This will help the aspiring programmer grasp concepts that he may miss in high-level languages like C# and Java. The concepts of pointers, the stack and the heap are too important to be overlooked; they may seem not to exist in high-level languages, and that can make a real difference when building reliable, high-performance applications. Abstracting these concepts away is fine, as long as you understand them.
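A minimal C illustration of those three concepts, assuming nothing beyond the standard library:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack = 42;                       /* automatic storage: gone when scope ends */
    int *on_heap = malloc(sizeof *on_heap);  /* dynamic storage: lives until free()     */
    if (!on_heap) return 1;
    *on_heap = 42;

    int *p = &on_stack;                      /* a pointer is just an address */
    printf("stack: %d at %p, heap: %d at %p\n",
           *p, (void *)p, *on_heap, (void *)on_heap);

    free(on_heap);                           /* forget this and you leak - GC languages hide it */
    return 0;
}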
To alcohol! The cause of, and solution to, all of life's problems - Homer Simpson ---- Our heads are round so our thoughts can change direction - Francis Picabia
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that knowledge, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
I used to think like that some years ago. But now I see that basic CS skills are more relevant than mastery of one language or another. Algorithm analysis is a good starting point. Learning how to build more efficient, faster and more readable code is the foundation of good code. Knowing the basic structures like linked lists, trees and graphs, and solving basic problems like sorting and recursion, can sound a bit redundant considering the huge number of built-in libraries and frameworks available today, but I think such skills can help you solve the new computational problems you may find ahead in your career as a developer.
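For example, a hand-rolled singly linked list with a recursive length function - exactly the kind of exercise the built-in collections let you skip, sketched here in plain C:

#include <stdio.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

/* prepend: O(1) insertion at the head */
static struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (!n) exit(EXIT_FAILURE);
    n->value = value;
    n->next  = head;
    return n;
}

/* recursion mirrors the recursive shape of the data */
static int length(const struct node *head)
{
    return head ? 1 + length(head->next) : 0;
}

int main(void)
{
    struct node *list = NULL;
    for (int i = 0; i < 5; ++i)
        list = push(list, i);
    printf("length = %d\n", length(list));   /* prints: length = 5 */

    while (list) {                           /* free every node */
        struct node *next = list->next;
        free(list);
        list = next;
    }
    return 0;
}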
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that knowledge, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
This thread again? Here's how this thread goes: some agree, some disagree, everyone mentions their pet language, causing this thread to recurse. Your friend is an idiot, as is anyone who says "I don't need to learn [some fundamental concept]".
Quote:
an idiot, as is anyone who says "I don't need to learn [some fundamental concept]".
And yet you and many others would deny that you need to learn category theory and functional programming as in Haskell and Scala. I've programmed in asm on several computers and I programmed for decades in C, but I'll tell you that the people who learn functional programming will do better in the long run than those who get mired in C.
-
There is no "better than any other" language. You must pick the language that you understand best and that can do the job. Delphi is apparently great for UI stuff (I have only used it a tiny bit) but is a pain otherwise. Java sucks on the PC; you are better off using C# or even, *gasp*, VB than Java - however, on mobile devices it may be a much better fit. For what you describe, I would definitely use C and C++. C is as close to the metal as you can get without using assembler; C++ gives you objects, classes and all the good stuff. When you say zero-terminated strings are "slow", how do you mean? They are one of the fastest implementations of string handling you can get, generally.
- I would love to change the world, but they won’t give me the source code.
Quote:
When you say zero-terminated strings are "slow", how do you mean? They are one of the fastest implementations of string handling you can get, generally.
That's stunningly ignorant. strlen is O(n) and is needed for most operations on strings. To speed them up you need a type containing the length along with a char buffer or a pointer to one. And then add a refcount and copy-on-write to avoid unnecessary copying.
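A rough C sketch of exactly that layout - the MyStr name and details are invented for illustration (real implementations such as C++'s std::string differ), and copy-on-write is left as an exercise:

#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t len;     /* stored length: "strlen" becomes O(1)        */
    size_t refs;    /* share one buffer instead of copying it      */
    char   data[];  /* flexible array member holds the characters  */
} MyStr;

static MyStr *mystr_new(const char *src)
{
    size_t len = strlen(src);            /* O(n) once, at creation */
    MyStr *s = malloc(sizeof *s + len + 1);
    if (!s) return NULL;
    s->len  = len;
    s->refs = 1;
    memcpy(s->data, src, len + 1);       /* keep the NUL for C interop */
    return s;
}

static size_t mystr_len(const MyStr *s) { return s->len; }        /* O(1), no scan     */
static MyStr *mystr_retain(MyStr *s)    { ++s->refs; return s; }  /* share, don't copy */
static void   mystr_release(MyStr *s)   { if (--s->refs == 0) free(s); }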
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that knowledge, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
C is lower-level (closer to the hardware) than Java and C#. I don't know anything about Delphi so I can't comment on that. For someone learning programming there are two schools of thought. The first is to start with the higher-level concepts using a higher-level language, and gradually delve deeper into understanding how they are implemented and what is going on at machine level, if necessary. These days, the deeper delving is really not all that necessary unless you are charged with writing very high-performance code. The second school of thought is to start at the low level and learn upwards, gradually abstracting away the lower-level concepts with higher-level ones. This more closely traces the evolution of computers and languages, and if you're really serious, probably provides the "best" understanding of the whole ecosystem, but it involves a much steeper learning curve.

As for my opinion: if I were to recommend a path to someone, I would probably choose the first method, learning the high-level concepts first (probably with a dynamic language like JavaScript) and delving deeper where one is interested. It really depends on how "serious" the person is about learning computer languages. If they're darn serious, learning from the low level up will provide the best understanding, but if they're not sure about it, starting at the top is the best way to discover whether they have a passion for programming or not.
Sad but true: 4/3 of Americans have difficulty with simple fractions. There are 10 types of people in this world: those who understand binary and those who don't.
{o,o}.oO( Check out my blog! )
|)””’)    http://pihole.org/
-”-”-
-
This thread again? Here's how this thread goes: some agree, some disagree, everyone mentions their pet language, causing this thread to recurse. Your friend is an idiot, as is anyone who says "I don't need to learn [some fundamental concept]".
The concept of "fundamental concepts" varies with time. When I learned Basic, Fortran, Pascal, Cobol and the assembly languages of four different architectures plus MIX (ref. Knuth), "fundamental concepts" included how to handle one's complement vs. two's complement; the order of bits, octets, halfwords and words (like some PDP-11 OS structures with the high-order halfword first but the high byte in each halfword last ... or was it the other way around?); and the advantages and disadvantages of a hidden upper bit in the mantissa of float formats... Kids of today could (or couldn't) care less about normalized mantissas, BCD nibbles and the question of when minus zero is equal to plus zero.

And, I must admit, today I don't care that much myself. I do remember that such understanding used to be essential, but it isn't anymore. Nowadays I handle integer values without worrying about their binary representation (if I do, it is because I am using them for something other than integer numerical values, which is bad practice in any case!). I handle sets of objects without concern for next-pointers: I add objects to the set, remove objects, traverse the set and so on, without ever seeing a next-pointer. What I do see is whether the set is ordered, whether objects are accessible by keys, and so on.

Sure, knowing what goes on one level below the one you work at is essential. In the days of one's-complement machines it could help you understand why sometimes 0 != 0. Today, when a foreach loop was abruptly terminated, my old familiarity with next-pointers was a great help in pinpointing the problem: the replacement of one object in a DOM structure with a new version, done in a code snippet that didn't know the old version was the current one in a foreach iteration, leaving the iterator holding an object with a null next-pointer. That is an implementation anomaly, just like 0 != 0 is an implementation anomaly. Two's complement fixed the latter; a list implementation maintaining the list in a separate link structure, rather than embedding the next link in the object itself, would have fixed the former. Just as one's complement died out with time, object-embedded next pointers might die out with time. Then understanding next pointers might become as irrelevant as understanding the difference between one's and two's complement.

Sometimes I am frustrated by our younger programmers and their lack of understanding of fundamental concepts. And then, when I think it over, I more and more conclude: "Actually, they do not need it for anything at all, given the tools we work with today."
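The next-pointer point is easy to make concrete in C - two hypothetical layouts, just to show the difference:

/* Object-embedded ("intrusive") link: the list lives inside the object,
   so replacing the object under a running traversal cuts the chain. */
struct widget_intrusive {
    int data;
    struct widget_intrusive *next;
};

/* External link structure: the object knows nothing about the list;
   swapping the object leaves the chain of links intact. */
struct link {
    void        *object;
    struct link *next;
};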
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that knowledge, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
You're wrong in thinking that there is a language better to learn than any other. But for a first-timer I would choose C++, C# or Java.
CEO at: - Rafaga Systems - Para Facturas - Modern Components for the moment...
-
Am I wrong in thinking that C is a better language to learn than Java or Delphi? I have a friend who always says C and C++ are hard to learn, outdated and impractical. I can understand that he thinks Java is better as it was his first language, but I seriously cannot understand why he thinks Delphi is better than C. After Basic it's one of the worst languages that I've seen so far. I was always convinced that you need to know how a computer works at low level to be able to write decent code... Although I've never written a single program in ASM, I feel like I would never have understood coding without that knowledge, and seeing what my friend texts me sometimes, I might be right. Only a few days ago he stated that he wouldn't need to learn pointers because Java "doesn't use them" and "var parameters in Delphi aren't pointers". And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings. But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
Cody227 wrote:
I was always convinced that you need to know how a computer works at low level to be able to write decent code...
Incorrect. Most developers write code for business applications, and most of the code they write is specific to solving business needs. So understanding the business and the application is how one writes "decent code". It might help to know some aspects of low-level hardware, but that is not only less useful than it was in the past, it is often not even possible - at least depending on what "low level" really means. For example, if you are running your newest server in the cloud you certainly need to know what '16 gig of memory' means, but it is absolutely useless to know the kind of memory. And if one ends up writing code for Windows, Macintosh, iPhone, Android and Linux, with even some old mainframe adapter code, then attempting to learn everything is impossible.
Cody227 wrote:
But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
Business application performance problems almost never come down to low-level language issues. They are almost always architecture and design problems - even more so in less structured team environments (which are the norm, not the exception).
Cody227 wrote:
And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings.
And are you an expert in every possible technological API that you might reasonably encounter in the modern world? How are your iPhone skills? Done much real-time programming for embedded software on a RAID driver card? What about interfacing to the cash drawer on a PC? Or really creating an XML/XSD that actually supports international data, rather than just claiming that it does? What about optimizing an Oracle database and an MS SQL Server database? And how does one set up a geographically redundant data center (and what are the trade-offs between hosting it yourself and the various cloud possibilities)? The vast, vast array of technologies means it is impossible to be an expert in more than a few. And one is likely to do more damage to a career by attempting to span several specialties than by sticking with a few (for example, embedded real-time drivers versus standard web business applications).
-
The concept of "fundamental concepts" varies with time. When I learned Basic, Fortran, Pascal, Cobol and the assembly languages of four different architectures plus MIX (ref. Knuth), "fundamental concepts" included how to handle one's complement vs. two's complement; the order of bits, octets, halfwords and words (like some PDP-11 OS structures with the high-order halfword first but the high byte in each halfword last ... or was it the other way around?); and the advantages and disadvantages of a hidden upper bit in the mantissa of float formats... Kids of today could (or couldn't) care less about normalized mantissas, BCD nibbles and the question of when minus zero is equal to plus zero.

And, I must admit, today I don't care that much myself. I do remember that such understanding used to be essential, but it isn't anymore. Nowadays I handle integer values without worrying about their binary representation (if I do, it is because I am using them for something other than integer numerical values, which is bad practice in any case!). I handle sets of objects without concern for next-pointers: I add objects to the set, remove objects, traverse the set and so on, without ever seeing a next-pointer. What I do see is whether the set is ordered, whether objects are accessible by keys, and so on.

Sure, knowing what goes on one level below the one you work at is essential. In the days of one's-complement machines it could help you understand why sometimes 0 != 0. Today, when a foreach loop was abruptly terminated, my old familiarity with next-pointers was a great help in pinpointing the problem: the replacement of one object in a DOM structure with a new version, done in a code snippet that didn't know the old version was the current one in a foreach iteration, leaving the iterator holding an object with a null next-pointer. That is an implementation anomaly, just like 0 != 0 is an implementation anomaly. Two's complement fixed the latter; a list implementation maintaining the list in a separate link structure, rather than embedding the next link in the object itself, would have fixed the former. Just as one's complement died out with time, object-embedded next pointers might die out with time. Then understanding next pointers might become as irrelevant as understanding the difference between one's and two's complement.

Sometimes I am frustrated by our younger programmers and their lack of understanding of fundamental concepts. And then, when I think it over, I more and more conclude: "Actually, they do not need it for anything at all, given the tools we work with today."
Good statement. Context is key. Of course it's better to know these things, but the question of usefulness comes up. If I don't know what a normalized mantissa is, does that mean I am stupid? That I can't code? OTOH, when you know these things, they sometimes can save the day. Is that a reason to stop learning how to program on the phone and pick up assembly? I don't think so.
Where there's smoke, there's a Blue Screen of death.
-
The concept of "fundamental concepts" varies with time. When I learned Basic, Fortran, Pascal, Cobol and the assembly languages of four different architectures plus MIX (ref. Knuth), "fundamental concepts" included how to handle one's complement vs. two's complement; the order of bits, octets, halfwords and words (like some PDP-11 OS structures with the high-order halfword first but the high byte in each halfword last ... or was it the other way around?); and the advantages and disadvantages of a hidden upper bit in the mantissa of float formats... Kids of today could (or couldn't) care less about normalized mantissas, BCD nibbles and the question of when minus zero is equal to plus zero.

And, I must admit, today I don't care that much myself. I do remember that such understanding used to be essential, but it isn't anymore. Nowadays I handle integer values without worrying about their binary representation (if I do, it is because I am using them for something other than integer numerical values, which is bad practice in any case!). I handle sets of objects without concern for next-pointers: I add objects to the set, remove objects, traverse the set and so on, without ever seeing a next-pointer. What I do see is whether the set is ordered, whether objects are accessible by keys, and so on.

Sure, knowing what goes on one level below the one you work at is essential. In the days of one's-complement machines it could help you understand why sometimes 0 != 0. Today, when a foreach loop was abruptly terminated, my old familiarity with next-pointers was a great help in pinpointing the problem: the replacement of one object in a DOM structure with a new version, done in a code snippet that didn't know the old version was the current one in a foreach iteration, leaving the iterator holding an object with a null next-pointer. That is an implementation anomaly, just like 0 != 0 is an implementation anomaly. Two's complement fixed the latter; a list implementation maintaining the list in a separate link structure, rather than embedding the next link in the object itself, would have fixed the former. Just as one's complement died out with time, object-embedded next pointers might die out with time. Then understanding next pointers might become as irrelevant as understanding the difference between one's and two's complement.

Sometimes I am frustrated by our younger programmers and their lack of understanding of fundamental concepts. And then, when I think it over, I more and more conclude: "Actually, they do not need it for anything at all, given the tools we work with today."
Hello, this is my first post here. Looks like a great forum. I really like 7989122's reply, for a few reasons.
1: I think sometimes it's easy to be too reliant on hindsight and deploy it without regard for the environment and its inhabitants. E.g. when I learnt Latin it was helpful and gave me a more fundamental understanding of English. But I learnt to make basic baby sounds first, then picked up English, then Latin. So I learnt some language origins and building blocks in reverse order, because that's what was the go at the time and all I was capable of.
2: the reply doesn't use the term "idiot". I can't see any positive disposition created by using this word. I suspect its source could be self-indulgence. But I don't know; I choose to ignore its intent.
3: as a trainee Citizen Developer who began learning a year ago with Small Basic and has just started with C#, if I find that I need to, or that it's conducive to achieving my goals, then I'll learn some C. Learning with Small Basic has been great fun: we have the opportunity to make our own controls, optimise our code, consider efficient resource use and devise crafty workarounds. And once again, it's fun.
Whilst this question and discussion come up often, I think they're helpful for those learning. Thanks for the post.
-
BillWoodruff wrote:
For some people thinking recursively is very natural (they feel at home in LISP).
I learned LISP when I was taking graduate courses in artificial intelligence back in the late '80s. My best description of the experience was removing the top of your skull, rotating your brain counter-clockwise 90°, and reattaching your skull.
Software Zen:
delete this;
:) Amen, Brother Wheeler ! I went through a phase of LISPmania. At one point I spent ten days figuring out how to write a three-line method that took two ints as parameters and created a 2d array in memory. I forget, now, whether it was more than doubly recursive. After successfully deprogramming myself from the "cult of 'car and 'cdr," in sesshins at the Berkeley Zen Center, and by binging on cheap Chinese take-out with extra MSG, and Jolt Cola, until bulimic ... I realized that, in the future, it would take me as much time to revivify my understanding of that three-line solution as it took me to develop it ... and I moved on to ... PostScript, which ... few people ever appreciate this ... is a very LISP-like language with a post-fix notation "front-end," and explicit stacks, wired up to a monster-great graphics model/rendering-engine.

I do believe that a period of total immersion in an "alternate programming universe," like LISP, Prolog, or even PostScript, can be a valuable part of a programmer's education ... if they have a strong base in a strongly-typed language to begin with. But, I am very influenced by the work of the anthropologist of education, George Spindler, at Stanford, on the utility of "discontinuities" in education and socialization as catalysts for cognitive development and acculturation. I think frequently getting your own mental chassis torn down to the point you become all too aware of what nuts the bolts are, and then re-assembled, is downright salubrious :) Merry, Merry, Bill
“I'm an artist: it's self evident that word implies looking for something all the time without ever finding it in full. It is the opposite of saying : 'I know all about it. I've already found it.' As far as I'm concerned, the word means: 'I am looking. I am hunting for it. I am deeply involved.'” Vincent Van Gogh
-
:) Amen, Brother Wheeler ! I went through a phase of LISPmania. At one point I spent ten days figuring out how to write a three-line method that took two ints as parameters and created a 2d array in memory. I forget, now, whether it was more than doubly recursive. After successfully deprogramming myself from the "cult of 'car and 'cdr," in sesshins at the Berkeley Zen Center, and by binging on cheap Chinese take-out with extra MSG, and Jolt Cola, until bulimic ... I realized that, in the future, it would take me as much time to revivify my understanding of that three-line solution as it took me to develop it ... and I moved on to ... PostScript, which ... few people ever appreciate this ... is a very LISP-like language with a post-fix notation "front-end," and explicit stacks, wired up to a monster-great graphics model/rendering-engine.

I do believe that a period of total immersion in an "alternate programming universe," like LISP, Prolog, or even PostScript, can be a valuable part of a programmer's education ... if they have a strong base in a strongly-typed language to begin with. But, I am very influenced by the work of the anthropologist of education, George Spindler, at Stanford, on the utility of "discontinuities" in education and socialization as catalysts for cognitive development and acculturation. I think frequently getting your own mental chassis torn down to the point you become all too aware of what nuts the bolts are, and then re-assembled, is downright salubrious :) Merry, Merry, Bill
“I'm an artist: it's self evident that word implies looking for something all the time without ever finding it in full. It is the opposite of saying : 'I know all about it. I've already found it.' As far as I'm concerned, the word means: 'I am looking. I am hunting for it. I am deeply involved.'” Vincent Van Gogh
That matches my experience. While I don't remember much of the LISP I learned at the time (it was 25 years ago), I do remember how the experience seemed to broaden my approach to things in more 'traditional' languages. I actually used some of the AI techniques later on. My employer never knew, but I had a rule-based parser in 'C' that would find U.S., Canadian, and U.K. Royal Mail postal information in free-form text and create the appropriate bar code.
Software Zen:
delete this;
-
Cody227 wrote:
I was always convinced that you need to know how a computer works at low level to be able to write decent code...
Incorrect. Most developers write code for business applications, and most of the code they write is specific to solving business needs. So understanding the business and the application is how one writes "decent code". It might help to know some aspects of low-level hardware, but that is not only less useful than it was in the past, it is often not even possible - at least depending on what "low level" really means. For example, if you are running your newest server in the cloud you certainly need to know what '16 gig of memory' means, but it is absolutely useless to know the kind of memory. And if one ends up writing code for Windows, Macintosh, iPhone, Android and Linux, with even some old mainframe adapter code, then attempting to learn everything is impossible.
Cody227 wrote:
But then there are things that really annoy me about C, like null-terminated strings. They are so damn slow!
Business application performance problems almost never come down to low-level language issues. They are almost always architecture and design problems - even more so in less structured team environments (which are the norm, not the exception).
Cody227 wrote:
And at the same time he asked how to send an array of pixels with WinSock because the appropriate Delphi function only accepts strings.
And are you an expert in every possible technological API that you might reasonably encounter in the modern world? How are your iPhone skills? Done much real-time programming for embedded software on a RAID driver card? What about interfacing to the cash drawer on a PC? Or really creating an XML/XSD that actually supports international data, rather than just claiming that it does? What about optimizing an Oracle database and an MS SQL Server database? And how does one set up a geographically redundant data center (and what are the trade-offs between hosting it yourself and the various cloud possibilities)? The vast, vast array of technologies means it is impossible to be an expert in more than a few. And one is likely to do more damage to a career by attempting to span several specialties than by sticking with a few (for example, embedded real-time drivers versus standard web business applications).
jschell wrote:
And are you are expert in every possible technological API that you might reasonable encounter in the modern world?
I never claimed to be an expert, or even intermediate. Besides, that was not a problem regarding the API; the problem here is that beginners who learn a very high-level language first do not know what data really is. (It wasn't even explained in various C/C++ books.) In fact everything in memory is made of the same binary code, and you cannot tell what it is: it could be a picture, a string, a number or even executable code. Type checking is only a way to help us remember what we want to do with a given piece of memory. For an experienced coder like you that might be obvious, but for the beginner it's not. IMHO knowing this is very important no matter what language you choose (not HTML, though xD), because it allows you to bypass type checking if you need to. In my example you could use this knowledge to split the pixel array into pieces and cast them to a Pascal string so you can send them with WinSock, then rejoin them after receiving. Of course you would also need to know that the first byte of a Pascal string determines its length, and that it wouldn't work with null-terminated strings (which is kind of important too, because many WinAPI functions use null-terminated wchar strings). A basic knowledge of the stack, the heap, stack frames, pointers and things like that can be very useful as well (for example when recursive functions cause a stack overflow, or when unsafe functions like gets() cause inexplicable behaviour). It also explains why you shouldn't put very big data on the stack and why you should pass by reference to functions which need that data.
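To make that concrete, here is a hedged C sketch of the idea - send_chunk() is a made-up stand-in for whatever transport call is actually available, and the 255-byte chunk size echoes the length byte of a classic Pascal ShortString:

#include <stdint.h>
#include <stddef.h>

/* Assumed to exist: sends len bytes somehow (WinSock, BSD sockets, ...). */
extern int send_chunk(const uint8_t *buf, size_t len);

static void send_pixels(const uint32_t *pixels, size_t count)
{
    /* Memory is just bytes: view the pixel array as a byte stream (no copy).
       Beware of byte order if the receiving machine differs. */
    const uint8_t *bytes = (const uint8_t *)pixels;
    size_t total = count * sizeof *pixels;
    const size_t CHUNK = 255;   /* a classic Pascal ShortString holds at most 255 bytes */

    for (size_t off = 0; off < total; off += CHUNK) {
        size_t n = total - off < CHUNK ? total - off : CHUNK;
        send_chunk(bytes + off, n);
    }
}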