Interview questions - best way to learn the answers
-
Marc Clifton wrote:
The point being, I don't really think it's a lack of understanding that I can't spit out the definition of polymorphism, it's more related to what domain (that word again) I live in and where I choose to focus my attention.
That's it in a nutshell: it's where you focus your attention. Programmers work on two levels. On one hand we produce concrete, functioning software that contains specific lines of code, classes, etc. It's something you can debug through and see the cogs turning. On the other hand we spend a lot of time in our heads; as Fred Brooks said, we build castles in the air, out of air. To a great extent the design patterns movement emerged to capture this abstract world.

There's an old saying that there's no problem that can't be solved with another layer of abstraction. That applies as much to the way you think about programming as it does to the layers of abstraction in your code. When you want to learn something, you can do so by learning from real, concrete examples. In your words, you focus on that domain. Other people, like your friend, prefer to try and generalise problems and solutions and think about them in more theoretical, abstract ways. I think most people are inclined to one camp or the other, but you can get great benefits by forcing yourself to look at the world from the view of the other camp. As a programmer you will find yourself shifting from concrete to abstract in your code constantly, so why not develop that skill and apply it to the way you learn? -Richard
> That's it in a nutshell, it's where you focus your attention.
> Programmers work on two levels. On one hand we produce concrete functioning software that contains specific lines of code, classes etc.

I'm not buying it. If you can't explain why this concrete pseudo-snippet works, you don't understand polymorphism, or you haven't used it. This sort of thing happens in OO code all the time. Now maybe someone has never programmed in OO, which is fine. Or maybe they have what is called an "object-based" system only, which is fine. But in terms of OO programming, the following is not exactly "domain" knowledge or "where I focus my attention":

class B : A
class C : A

A b = new B
A c = new C

foo(b)
foo(c)

void foo(A a) ...
-
The way I see it, you learn karate so you need never use it. Learn the patterns because they are consciousness-raising, not because they solve all problems magically.
> The way I see it, you learn karate so you need never use it.

Wow, what a bad analogy :-)
-
I consider myself a good developer; fellow developers and managers, as well as clients, have told me the same. I code to standards and make sure it is done correctly. So why is it that in an interview, when asked a question about code, I get stumped and am not able to answer it correctly? Am I the only one that does this? Can you BE a great developer without being able to recite the definition of polymorphism or the like? I know I can do the work, very well. So what can I do to learn the definitions of things? I am thinking of making cue cards and going from there. They have helped me in the past. What do you think? What is the best way for you to learn? Also, do you know the definitions and meanings of everything you do? Thanks
anticipate what questions may be asked, so i prepare in advance and get some really good answers put together... i try to do as much research as possible. find out about what software team i am joining, what they do, what technologies they use, what their strengths and weaknesses may be, and try to find out as much as i can about some of the people i am going to interview with. however, in another life i had to interview someone for a mechanic's position in a coca-cola plant i worked in -- i was a plant engineer before becoming a software engineer... this guy was fascinating to listen to... i asked him to "tell me a story about a time you had to troubleshoot and repair something really significant"... needless to say he recounted a fascinating experience he had troubleshooting and repairing an ammonia-based A/C or refrigeration unit that had mechanical controls. the guy was really an instrumentation technician. i always remember sitting down with him, how relaxed he was, and how proud he was of himself for doing something nobody else in his shop dared to do. he was much younger and less experienced than his colleagues. i could identify with his love for troubleshooting and fixing things... so i too take that same approach if i am ever interviewed... over the years, i, like many of you, have accumulated many "aha" moments where you solve some really cool "puzzle"... in fact, sometimes i tell myself afterward in a loud voice... "dude you are the bomb!"... just remember all those "good times"... and tell them in a very relaxed way... like you are telling a really good [truthful] story... that's really what they want to hear...
David
-
Great topic, great debate. Trust me, you're not the only person who has developed for 10 years only to get stumped by not preparing for a tech interview! We are cocky, we are arrogant, and sometimes we are lazy... hehe. A while back I failed epically -- same problems you had; the guy asked me to define reflection, polymorphism, etc. I had been coding SQL all day and was like... OK, I was not prepared for this, sorry, I fail!! I was expecting questions tied to data grids, AJAX, add/edit forms, SQL, etc. -- things more relevant to the job I was applying for... go figure! lol!

Looking back, I cannot say I blame them for the theoretical questions, though. They have no way of knowing how good you are, so these questions start conversations that can help them determine that. That being said, it seems all tech interviewers nowadays are asking the exact same questions for the most part; it's quite humorous actually...

1.) Polymorphism
2.) Encapsulation
3.) Reflection
4.) Inheritance
5.) StringBuilder versus String
6.) Static versus non-static (instanced)
7.) Constructor (what is it?)

Just brush up on those concepts; like all of us, you use them every day -- you just do not talk about them -- and you will be fine. :D -T
> 1.) Polymorphism
> 2.) Encapsulation
> 3.) Reflection
> 4.) Inheritance
> 5.) StringBuilder versus String
> 6.) Static versus non-static (instanced)
> 7.) Constructor (what is it?)

OK, this is a good example for discussion, because out of those, 5) stands out as the oddball. There are a million items like 5) that a lot of programmers might not know. For the other six, however, anyone who calls themselves a professional developer should be able to cobble together some sort of description (3 might not be OO per se, but anyway...).

You need to look at it from a hiring manager's point of view. There are TONS of developers out there who are coding really bad OO. And it turns out that almost 100% of the time, those are the exact same programmers who don't understand the above concepts. They use inheritance for composition (hey, it works, right?). They use static methods and casting when they should use class methods and polymorphism (hey, it works, right?). They get programs to function properly, but the programs are actually steaming piles of unmaintainable dog doo. And one of the main reasons is that they do not, in fact, understand the above terms.

On the other hand, hiring managers who ask mostly questions about things like 5) are the ones who don't get it. You can easily put together questions on pieces of a large library (such as .NET) that any particular programmer won't know. (StringBuilder vs String is kind of an easy one, but of course you can get more obscure than that.)
-
Also never forget, you are not powerless in an interview... Imagine being asked for a definition of polymorphism and beginning your answer with: "Well, there are a few types of polymorphism... In terms of OO programming in .NET, it usually means different types of object being able to react to the same messages, by implementing a common interface."

Now, some interviewers will put a tick on their sheet and move on to the next question. But some interviewers will pick up on the fact that you said there were different types of polymorphism, and they'll ask you about that. Now a few things have happened. One, you've moved the discussion onto a topic that you know something about (assuming in this case you know something about polymorphism). Two, you've got the interviewer listening to you and interacting rather than simply reading off questions and hearing pre-fab answers. It might even turn into a lively discussion, and it hammers home the idea that you can hold your own when a discussion goes off script. Finally, you've learned something about the interviewer. This is likely to be someone you'll be working with. Will chats with this person be interesting? How do they react if your opinion doesn't match theirs exactly?

Always remember, most interviewers desperately want you to be a witty, likeable genius who'll fit right in and be perfect for the role. Also, never forget that you are interviewing them too; in their desperation to "pass" or "win" the interview, some people lose sight of whether the job even sounds right for them. -Rd
Another good answer. Regarding humor, once I know the interviewer is hearing me and knows that I'm at least competent, I might say something like: "For example, sometimes my wife is sweet as honey, and sometimes she's a shrew, and I never know which one is going to show up at home each night, but I'm always supposed to treat her the same!" That's a pretty reasonable definition of polymorphism, one that you couldn't come up with if you didn't understand it. Or if you want a better analogy, talk about customers coming into a store: some are men, some are women, some are nice, some are cranky, yet they all have the same "interface" and you use the same protocol talking to them. Again, the OP never said "textbook definition". We are talking about understanding and explaining -- that's what the interview is about.
-
Well, no, I don't know the definitions of everything I use. But I'm a little stumped by your example. I expected that you were getting some obscure algorithmic question and you just froze in the headlights for a minute. But I didn't expect you to say something like polymorphism; that's a pretty easy one. So I'm a little confused about your definition of "good" or "great" developer. Unless you're a hardcore assembly language programmer, or you "get things done" without your manager knowing what you're doing and it's all hacks, I'm wondering what kind of things you're programming.
I agree it is pretty easy, and although I can explain in general what polymorphism is, the definition itself I didn't know. The interviewer asked me for the definition, and I plainly said that I didn't know the textbook definition, though I know what it is. Your thoughts are exactly why I posted the question: would not knowing how to explain polymorphism really prevent me from being a great developer?

In the 13 years I have done this, I have created a wide variety of things used in both Windows and web environments. Code reviews tell managers what the code looks like and what it does; all great reviews, and no one has ever come off like "What is this?". I make sure that my code is written to standard, whether the "standard" be the Microsoft standard or the company's way; they do differ at times. But "hacks" I never do; I know enough to know the right way to do it. However, I have had to redo a lot of code that others have done. I think my specialty in this career has become the ability to go through anyone's code, whether VB or C#, and fix the bug or rewrite it if needed. That is usually the first thing I have to do when going to a client site and getting handed an existing application that is broken. If you would like specifics on the things I have developed, I'd be happy to give you a list, hehe.
-
The trick there is to avoid YAGNI (You Aren't Gonna Need It). Right now I'm in the middle of ripping some code out of a solution that I over-engineered a while back. I'm finding that YAGNI sometimes doesn't go far enough. It kind of suggests that you probably won't need this, but there's no harm in having it. I'm finding cases of YABOWI (You Are Better Off Without It), particularly when you're handing over code to someone for maintenance. Also, it's fun to see how much code you can rip out of a solution without the users losing any functionality. -Richard
Richard A. Dalton wrote:
Also it's fun to see how much code you can rip out of a solution without the users losing any functionality.
Reminds me of pulling chips out of HAL in 2001: A Space Odyssey. :) I've found a happy medium in declarative programming, using predominantly XML to describe the "what", and code to describe the "how". But even the "how" is separated out into a declarative portion describing the sequence, or flow, so that the actual imperative code becomes small chunks of autonomous behaviors that can be easily tested (well, in theory at least). The end result is really nice: you get a very concise definition of what is necessary, the activities involved in the task, and then the implementors of each activity. And it passes the "if I remove one piece of information, does it actually fail like it should?" test, so it demonstrates that nothing superfluous to the task is going on. :-D Marc
-
> I agree it is pretty easy and although I can explain in general what polymorphism is, the definition itself I didn't know. The interviewer asked me for the definition and I plainly said that I didn't know the textbook definition, though I know what it is. Your thoughts are exactly why I posted the question. Would me not knowing how to explain polymorphism really prevent me from being a great developer?

OK, I guess we are discussing semantics here. My opinion is that if you can explain what it is, then you know the definition -- that is your definition. I very much doubt he asked, "What is the textbook definition of polymorphism?" If not, then he wants it in your own words. There are some interviewers who are fishing for specific keywords or whatever, and those are the crappy interviewers. Normally they just want to hear you speak intelligently and competently, and that rarely has anything at all to do with textbook definitions, except that you shouldn't say anything that obviously contradicts a textbook definition. If you understand basically what polymorphism is, that would be virtually impossible to do.

My guess is you are reading too much into what they are asking you. They just want to know if you're one of those guys who use inheritance to implement composition :-) (As I read recently in a blog, someone said in an interview that you can inherit Car from ParkingGarage, because ParkingGarages hold cars. If you inherit Car from Vehicle and give some reasonable answer as to why, you'll do fine on an inheritance question.)

If I were asked to give a definition, I would say it's the ability for code to handle objects of different types interchangeably. That works as long as they have some behavior/interface in common that the code expects. I seriously doubt that's the textbook definition, because I haven't looked it up, but it shows I have at least a rough understanding of the idea. We can go from there if he wants to delve.
One thing to keep in mind is that if the interviewer likes you, he might continue to ask deeper questions to see what he has on his hands. Don't assume that reaching a question that stumps you means you've failed. Some candidates might never get to those questions. You might have already gotten past the threshold for "pass" (maybe he's looking for at least a 7 on a scale from 1 to 10); he might just want to know whether you're actually an 8, 9 or 10 while he's got you there. So don't panic or get nervous.
-
For fun I looked this up on Wikipedia (edited): polymorphism is the ability of one type, A, to appear as and be used like another type, B. In strongly typed languages, polymorphism usually means that type A somehow derives from type B, or type C implements an interface that represents type B. In weakly typed languages, types are implicitly polymorphic. The strongly/weakly typed distinction is good to throw in there -- that hadn't occurred to me.

Webster's says: the quality or state of existing in or assuming different forms. "Poly" means "multiple", and "morph" means "form" or "transform", depending on whether it's a noun or a verb. If you said "the ability of different types to be used in the same way", that sounds real simple and non-textbookish.
-
The easiest way to explain it for most of us (who see it and use it every day):

You create a class. Its type can be:
1.) unique
2.) an inherited (base) class
3.) a combination of 1 and 2 plus a collection of interfaces

In a nutshell this is Polymorphism (being able to reuse code). Example:

1.) public class MyUniqueClass { }

2.) public class MyCustomWebPage : System.Web.UI.Page
{
    // You create a new class, and you're inheriting from the base class
    // System.Web.UI.Page (so you can utilize its properties, methods, etc. --
    // these have already been written, so why write them again?)
}

3.) public class MyIssue : System.ServiceModel.DomainServices.Client.Entity, IEditableObject, INotifyPropertyChanged
{
    // You create a new class; you're inheriting from the base class
    // System.ServiceModel.DomainServices.Client.Entity AND you also want to
    // utilize two interfaces, IEditableObject and INotifyPropertyChanged
}

*Remember you can only inherit 1 base class, BUT you can implement as many interfaces as you like. This is important and something they will more than likely ask you.*

This is often a good trick question they might ask you:
"How many base classes can you inherit from a single class?" Answer: 1
"How many interfaces can you inherit from a single class?" Answer: infinite

Also, if you inherit any interfaces, you have to implement their methods (or else your code will not compile -- I am sure you have experienced this while coding). This is something they might ask you as well, which would stem a good conversation. -T
modified on Tuesday, November 9, 2010 1:10 PM
-
> You create a class.
> Its type can be:
> 1.) unique
> 2.) an inherited (base) class
> 3.) a collection of either 1 and 2 + a collection of interfaces
> In a nutshell that is Polymorphism

No, it's not. I don't mean to sound harsh, but it's simply not :-)

Most definitions of Object Oriented Programming include three things:
1 - classes/objects/encapsulation
2 - inheritance
3 - polymorphism

You have simply mentioned inheritance, not explained polymorphism. Polymorphism cannot occur until runtime. You can see code that is polymorphic, or code that might potentially be polymorphic, but objects changing forms, or representing different forms at different times, is strictly a dynamic issue. It is how the inherited classes and interfaces are used at runtime that distinguishes polymorphism.
-
Through inheritance, a class can be used as more than one type; it can be used as its own type, any base type, or any interface type if it implements interfaces. This is called polymorphism. In C#, every type is polymorphic... It has nothing to do with run time; code changing during run time (a DLL getting dynamically created) is known as "reflection", correct?
-
> Remember you can only inherit 1 base class. BUT you can inherit as many interfaces as you like. This is important and something they will more than likely ask you.
> This is often a good trick question they might ask you:
> "How many base classes can you inherit from a single class?"
> Answer: 1

In OO terms (and especially since you didn't specify a language), this isn't true. Some definitions of OO *require* that multiple inheritance be allowed (C++ allows it, BTW). Some definitions don't. But simply saying that multiple inheritance isn't allowed doesn't make any sense unless you give more context than you have. C++, Perl, Python and Eiffel support multiple inheritance. C#, Smalltalk, and Java do not. Smalltalk does not support multiple interface inheritance either.
-
> Through inheritance, a class can be used as more than one type

The key phrase there is "can be used". You have no "usage" in your example; all you have is inheritance. You don't have polymorphism until the types get used as different types. I know where you got that definition, and, typical of Microsoft, it is somewhat poorly worded. But technically, because it says "can be used", I wouldn't call it incorrect. Your original "nutshell" definition, though, is incorrect. I'm not giving you a hard time; I just wouldn't want you to answer that way in an interview.
-
Sorry to say it, but I can remember all those definitions. Furthermore, teams I have worked with know what patterns are, and do talk about them occasionally. Knowing what things are called is the first step in being able to communicate about them. It means you understand the nugget of information and can abstract it mentally, look at it, and think about it. There is an actual difference between being able to write a virtual function and knowing what polymorphism is. When I am a hiring manager, I look for this difference in a senior person, because not everybody gets it. My advice to you is: if you can't remember what (for instance) polymorphism is, then your training is not complete. You need to study OO design, not just memorize definitions. It's not just for interviewing, but for the good of your whole career.
-
> code changing during run time (dll getting dynamically created) is known as "reflection" correct?

Well, I think you're jumbling several different issues together. Reflection is a different issue. Often, if you're using reflection in .NET, you're doing "anti-OO": you're finding out stuff about the object's type when polymorphism would not require you to do that. (DLL creation is a third topic altogether.)

Here are two code snippets in C#. The first is an example of polymorphism; the second uses reflection to do a similar thing.

abstract class Animal
{
    public virtual void Move()
    {
        Console.WriteLine("An animal is moving");
    }
}

class Bird : Animal
{
    public override void Move()
    {
        Console.WriteLine("A bird is flying");
    }
}

class Fish : Animal
{
    public override void Move()
    {
        Console.WriteLine("A fish is swimming");
    }
}

// A variable of type Animal can hold an Animal, a Bird or any other
// subtype. Classes are static, objects are dynamic, so using bird1 in
// this way can be considered polymorphism.
Animal bird1 = new Bird();
Fish fish1 = new Fish();   // this by itself is not polymorphism

foo(bird1);
foo(fish1);

void foo(Animal animal)
{
    animal.Move();  // Polymorphism
    // Which Move this calls is dynamically determined at runtime;
    // if you were shown only the foo function, you could not tell.
}

Or...

// Reflection
// You are asking the object to "look at itself" (reflect) to determine
// what type it is from its metadata.
void foo(Animal animal)
{
    if (animal.GetType() == typeof(Bird))
    {
        Console.WriteLine("A bird is flying");
    }
    else if (animal.GetType() == typeof(Fish))
    {
        Console.WriteLine("A fish is swimming");
    }
}
-
We mentioned earlier interviewers who were fishing for specific answers. Here is one you might run into: for C++ and C#, sometimes they want to hear the word "virtual" when speaking about OO or polymorphism, because this is the keyword that enables polymorphism. If you do not have the keyword virtual in your code, then you do not have polymorphism. (Some would argue that you could, though, simply by having derived-class objects assigned to base-class variables, even if virtual methods are not overridden anywhere. Others would argue that you would have to have polymorphic *behavior*, and that requires overridden methods.) If you see the word virtual, then you know there is at least the potential for polymorphism; you would have to inspect the code to see if that method is ever overridden. Polymorphism doesn't necessarily exist simply because of the virtual keyword. In math that's called a "necessary but not sufficient condition": without the virtual keyword there's no polymorphism; with it, there might be polymorphism.
-
Chris Losinger wrote:
in my 17 years of programming, i have never had a discussion with a co-worker about a "pattern".
Switch to Java (or Smalltalk, if it still exists) and you'll have plenty of such discussions.
Nemanja Trifunovic wrote:
Switch to Java (or SmallTalk if it still exists) and you'll have plenty of such discussions.
Not in my experience. Even on the Java forums (the Sun ones), discussion of patterns is low, and even lower still when one removes the questions from inexperienced users.