Book Recommendation
-
Why not start agnostic to any technology? If this person is really interested in software development as a discipline, get them a set of "The Art of Computer Programming" by Donald Knuth. Might as well just jump right in. :-}
Why not go down to the basics and buy a box of transistors and a soldering iron, putting together your own machine? When my technical university started teaching computers in the 1970s, they actually had one computer (a NORD-1) delivered as components. The professors thought it a good idea that the students got hands-on experience in building a computer, even though the architecture (down to the printed circuit boards) was pre-defined. Soldering it together was still considered a valuable learning experience. (I am not quite sure about the technology at that time - I believe it was a mixture of discrete transistors etc. and small-scale integration chips, like the 74xxx series.) The oldest machine at my university, a GIER, had one side panel that was the control logic as a matrix of ferrite cores, directly accessible so you could "microcode" it by pulling the conductors through or outside each core, changing the effect of each instruction code. Something like that could be very useful for a novice who really wants to get to the roots of programming :-)
-
Which books would you recommend to someone who is a novice to programming and has an interest in diving into the world of software development? I am specifically asking about .NET technologies.
-
The world is not split in two: end users and builders/developers/engineers. You find end users at all levels. A programmer is the end user of a compiler. Or an operating system. Or an IDE. This holds particularly true for a novice programmer. An engineer / developer creating an IDE, or a new compiler, is an end user of some of his tools. He definitely is an end user of the CPU. The CPU architect is an end user of logic gates. ... And so on. An end user is anyone who uses any technology without being involved in the creation or modification of that technology.

When you play the role of an end user, you don't have to know anything about how things are implemented, but sometimes it is of great help. The question is where to draw the line: as a car end user you should know the difference between an electric motor and a combustion motor, but you need not know the details of different kinds of suspension. A programmer needs to know the difference between source code, a compiled library and an executable, but will a novice programmer need to know the details of a stack frame?

Lots of things that mattered thirty years ago to "end users" don't matter today. Back then, you certainly should know how to replace the spark plugs and headlight bulbs. With electric cars and LED headlights, that knowledge is about as useful as knowing how to shoe a horse. When did you last experience a flat tire? When was the last time the cooling agent in your engine was boiling, and you should know that you must let it cool down before you remove the lid to add some cold water from that mountain creek running along the road? (That wasn't uncommon when I was a boy, but I haven't seen it for at least thirty years now.)

There are similar things in programming. As a student, I learned about one's and two's complement, about normalised and un-normalised and hidden-bit floating point. What use is there for that knowledge today? Even that stack frame static link is more or less of historic interest only. RS232 pinouts are history. Rotary dial analog phones are history. But once upon a time, even end users would have to know lots of these things.

I think that the "semi-old guys" tend to be the ones most insistent on recently-abandoned technology being essential for the upcoming crop of engineers / developers / programmers. Those old enough to have seen four or five generations of technology pass by are more relaxed and can more easily accept that yet another technology is turning into obsolescence. Sometimes, all we wait for is
I agree with much of what you say; however, I disagree with your premise that a developer doesn't need to know HOW something works. Frameworks are created and abandoned with such intense frequency today that without understanding the basics of those frameworks, it is impossible to know how to proceed with the maintenance of software. Far too many developers seem to believe that the software life-cycle is: write something brand new, leave it, and move on to a new project. Instead, most software lives a long time, with many changes needed through the years. Unless those initial developers and the maintenance engineers who come along have a mutual understanding of HOW coding works, the changes are doomed to fail. Our industry is the current equivalent of urban development: tear down whatever currently exists and build new, over and over. That process keeps the money flowing and the builders happy UNTIL there is no money to flow when the entire infrastructure breaks down. At that point, those who understand the basics survive, and those who do not become part of the unemployed masses.
-
I have observed the same thing, but much stronger, in networking. 9 out of 10 Comp.Sci graduates believe that TCP/IP is networking. If you try to introduce them to e.g. a connect ID (rather than the full IP address and TCP port number), to end-to-end routing at the physical layer, out-of-band signalling or different addressing schemes, they give you a blank stare: that's not the way it is done! You see it in all sorts of software: whatever concept or abstraction you try to introduce, a fair share of programmers will answer "Oh, but we don't need that, we will just do so-and-so using our old tools". People will always be stuck in their old habits, at least until they have been forced to work with five or six alternate ways of doing things.

But one way must be the first! It is far better to make C# and Visual Studio your first - much better than assembly language (or even K&R C), vi and gcc. The major disadvantage is that if you are later forced to work in K&R C using vi as your "IDE", it feels like moving from a modern apartment into a stone age cave. The first language / environment you learn is like your first sweetheart - you'll carry joyful memories from that time for the rest of your life. I started (serious) programming in Pascal, and 30+ years later, I still miss some of its features in today's languages. Similarly, you must expect people who start out with WPF / VS to have sweet memories of that when they are forced to switch to vi (and I won't blame them :-)). I don't think that is a good enough reason for making a poorer choice for a beginner's toolset.
-
I qualify as an "old timer" and am still working, albeit about as far away from my earliest software work as could be. I wouldn't use vi unless my life depended on it - not because of any aversion to full-screen editors; I'd rather use Notepad. Now as to "old timers" and abandoned technology: even though I spent the first 15-20 years of my career programming ASM on various machines, writing everything from OSs to device drivers to compilers, etc., do I use ASM today, or prefer it? I use the most efficient tool appropriate to the task at hand. As for teaching ASM, I do wonder where the ASM programmers will come from to write the inevitable code that cannot be written in C (or whatever high-level language you choose). Somewhat amazing to me that a computer science major can graduate w/o understanding how a computer works, at least at the basic level of ASM. Black Hats cannot be the ONLY folks who understand ASM, or we're all in a lot of trouble.
-
The only programmers who really need to know the machine code (whether considered as binary instruction codes or symbolic assembly language) will be compiler writers. Knowing all the details of the instruction set, addressing modes, status bits etc. is highly specialized knowledge, needed by very few others. It is like the huge matrix models managed by meteorology software, implementing transcendental functions for a math library, or the light model of a 3D graphics package. We didn't learn meteorology or FEM algorithms in college; those who need it learn it at work (or maybe they study meteorology in college and learn programming at work).

We did learn the transistor design for dynamic and static RAM cells - never needed it! We did learn the series expansions for trigonometric functions - never needed it. We did assembler programming exercises, and I did need that for about five years, but not for the last 30 years. We did learn nine (or was it eleven?) different disk scheduling algorithms, made completely irrelevant by disks with megabytes of cache and virtualized track/sector numbers. We learned lots of ways to manage a heap; I used the knowledge for twenty-five years, fully convinced that nothing could beat explicit malloc/free. Then I read about CLR garbage collection (in "CLR via C#"), and had to admit: "Oops, I never thought of that ... and that, and ...". No doubt: CLR garbage collection is a lot smarter than any memory handling I have coded myself. Any modern compiler makes optimizations that you never would have thought of.

My company develops processing modules for embedded systems: I don't think there is a single assembly instruction anywhere in our code. Even our in-house CPU extensions and on-chip "peripherals" (like BT radio, encryption unit, sensor interfaces etc.) are managed through general C library functions.

I am not sad that young programmers no longer learn the transistor design of a flip-flop, how to use Newton's method when implementing a math function library, or how to judge FCFS against elevator disk scheduling. Such knowledge doesn't help you write values to RAM in a better way, to aim that flame thrower in the right direction with higher precision or reliability, or to sort the queue of two entries in the most efficient way before sending the request to the disk. Programmers still need an understanding of a lot of hardware aspects: word length / numeric range and limited FP precision is one prime example. But that is at the architectural level, not the implementation level.
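To make that last point concrete, here is a minimal C# sketch (my own illustration, not taken from any of the books mentioned) of the two architectural-level facts I have in mind: a fixed word length means integers silently wrap around, and binary floating point cannot represent most decimal fractions exactly.

    using System;

    class NumericLimitsSketch
    {
        static void Main()
        {
            // Fixed word length: a 32-bit int wraps around (two's complement)
            // instead of growing without bound.
            int big = int.MaxValue;                 // 2147483647
            Console.WriteLine(unchecked(big + 1));  // -2147483648

            // Limited FP precision: 0.1 and 0.2 have no exact binary
            // representation, so their sum is not exactly 0.3.
            double sum = 0.1 + 0.2;
            Console.WriteLine(sum == 0.3);          // False
            Console.WriteLine(sum - 0.3);           // a tiny non-zero error

            // decimal trades range and speed for exact decimal fractions.
            Console.WriteLine(0.1m + 0.2m == 0.3m); // True
        }
    }

You don't need to know how the adder or the FPU is wired to understand this; you only need to know that the behaviour exists and where it bites.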
-
If that option is available, it might be a very good option. Lots of people live too far away from a college. Even if there is a local college, the course may be taught at times when you cannot leave your ordinary work. Or admission to the college requires that certain formalities are in place, e.g. that the course is available only to full-time students. Finally (this might be a bigger problem in Europe than in the US): some colleges/universities fiercely cling to the idea that Windows, or anything else coming from MS, is toy software - Real, Professional Software is Linux based (and with a command line interface, not a GUI). All basic courses are based on Linux and open-source software; Windows software/tools are introduced only as one of several options in courses specializing in end user application development. Lots of newly educated bachelors and masters spend years of frustration when entering working life, realizing how much toy software is out there, and how difficult it is to enlighten people about the blessings of Linux and command line interfaces. If your local college is of that sort, you can go there to learn Linux and C (and possibly Python), but you may search in vain for C#, VS and .NET related courses.
-
Quote:
The first language / environment you learn is like your first sweetheart - you'll carry joyful memories from that time for the rest of your life.
My 'first' was BASIC and no, I don't.
Now that you mention it... I should have qualified it: the first serious language... (I started with BASIC, too, when the language was so basic that variables were named A - Z and A0 - Z0 up to A9 - Z9, i.e. 26 + 26 x 10 = 286 numeric variables maximum, plus 26 string variables A$ - Z$.) You are right: that doesn't bring up any joyful memories. In fact, I had suppressed that memory entirely.
-
I certainly think you should know the workings of the layer you build your software on, directly below your layer, but not ten layers down. But you should distinguish between architecture and implementation: the data structures, the interactions between functions etc. are essential. If your understanding of the layer below you breaks down when the 32-bit CPU is replaced by a 36-bit CPU (are they still made?), then you have spent your resources wrongly. Or when the layer below is re-implemented in a different language, but offering the same call interface.

I am sceptical of the current trend of googling to find the call interface documentation and starting to use it without knowing anything about the architecture below. If I complain, nine out of ten times someone suggests: "But it is open software - you can download it and see how it works!" ... No, the implementation is NOT the architecture! When you ask for an architectural drawing and are given a house, and told "You can make your own drawing of this house, can't you?", then you are wasting my time. You rarely find software "architectural drawings" by googling - descriptions that are independent of the coding / implementation. I see that as a big problem.

Even more, I am outright scared by how large a fraction of young software developers, those educated after Google, appear to think it is perfectly fine. If it works, there is nothing to worry about. If not, you google for a quick fix. Ask them why that fix cured the problem, and they shrug: "Don't know, but it works now. Good enough for me!" That is not a good approach for writing robust software. And lots of software written today is not robust.
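As a small, purely hypothetical C# sketch of the distinction I mean (the names IMessageStore and InMemoryMessageStore are invented for illustration): the interface below is the part worth documenting and understanding as architecture; the class behind it is merely today's implementation and could be rewritten without any caller noticing.

    using System;
    using System.Collections.Generic;

    // Architecture: the contract callers are allowed to rely on.
    public interface IMessageStore
    {
        void Save(string message);
        IReadOnlyList<string> LoadAll();
    }

    // Implementation: one possible realisation of the contract today;
    // it might be replaced tomorrow (different storage, different language
    // behind an interop boundary) without breaking any caller.
    public sealed class InMemoryMessageStore : IMessageStore
    {
        private readonly List<string> _messages = new List<string>();
        public void Save(string message) => _messages.Add(message);
        public IReadOnlyList<string> LoadAll() => _messages;
    }

    public static class Demo
    {
        // The caller is written against the architecture, not the implementation.
        static void Run(IMessageStore store)
        {
            store.Save("hello");
            foreach (var line in store.LoadAll())
                Console.WriteLine(line);
        }

        public static void Main() => Run(new InMemoryMessageStore());
    }

Downloading the source of InMemoryMessageStore tells you how it happens to work right now; the interface, together with its documented guarantees, is what a maintainer five years from now actually needs.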
-
Thanks for the thoughts. Actually the guy is already working as a government employee, a police officer, but he is interested in learning software development and wants to pursue a career in it. So I was thinking of giving him a direction towards .NET technologies; I would suggest he take the course online, as a self-paced option is provided by a few web sites.
-
I personally found "CLR via C#, 4th Edition" very helpful.
"Coming soon"
Yeah, that's a good one; I myself have read the first few chapters but have not been able to finish the book yet :)
-
Well... it depends on whether someone has that much time. What I wanted was to quickly get him up to speed so he can start understanding C# and writing small programs in it; the time constraints apply both to that guy and to me.
-
Don't mind if I do! Please pass the syrup. Regards, Walt
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software