Developers Who Test (and Don't)
-
urlonz wrote:
What is this "requirements" you speak of.
:laugh: It's funny... as long as it's someone else trying to hit an unknown target. Sales: Users want a thing. Dev: Uh, can you describe it to me? Let's meet and gather some requirements. Sales: Can't you just build it? Are you stupid or something? You aren't really a dev, are you? I asked Billy-Bob in tech support who knows Python, and he said he can build it for me in 2 days. Dev: Billy-Bob is a genius!! Go for it. :laugh:
-
I just started reading a very interesting book (Developer Testing: Building Quality Into Software (amazon)[^]) and the intro reminded me of a time I had a similar exchange with a dev.
Jeff Langr said:
One developer, however, quit two days later without saying a word to me. I was told that he said something to the effect that “I’m never going to write a test, that’s not my job as a programmer.” I was initially concerned that I’d been too eager (though I’d never insisted on anything, just attempted to educate). I no longer felt guilty after seeing the absolute nightmare that was his code, though.
Back in the day when I was in QA, I approached a developer about a recent change he'd made to the code. Me: "Hey, can I get the data you used to test your changes?" Dev: "What data?" Me: "Well, you know. The data you used to test after you made the changes and did the build? I figure I can use it as a starting place for data I can send through to ensure the changes work." Dev: "Oh. I didn't run any tests. That's for you to do. I built the thing and put it out there. Now, go test it." Me: :wtf: :~
Me: Can you provide instructions on how to test your code? Current co-worker: I can't think in that kind of mindset. Me: :confused: The co-worker also refuses to answer most emails - they don't want to be pinned down by a paper trail. It really sucks on so many levels.
-
I also find that if you want testable code, you should stay as "functional" in your programming as possible. Anecdote: I recently had to rework a monolithic module. The final result was 4 independent modules (separation of concerns) that were purely functional (inputs, outputs, no side effects) and generated almost the same information, but it was a lot easier to test/debug/capture intermediate results/replay/prove/etc. The functional, separated modules actually turned out a lot cleaner and faster (fewer iterations), since one data structure in the old monolith was split into separate, focused structures in the new modules. After the refactor was complete, the new requirements were added, with clearly observable/diff-able results in the outputs of the affected modules.
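The "pure functions, diff-able intermediates" idea above can be sketched in a few lines. This is just an illustrative toy, not the poster's actual modules; the stage names and data are made up. The point is that each stage is a pure function, so every intermediate result can be captured, diffed against a previous run, and replayed in isolation:

```python
# Each stage takes inputs and returns outputs with no side effects,
# so every intermediate result can be captured, diffed, and replayed.
# (Stage names and data are hypothetical, not from the original module.)

def parse(raw_records):
    """Stage 1: raw text -> structured rows."""
    return [tuple(line.split(",")) for line in raw_records]

def validate(rows):
    """Stage 2: keep only rows whose amount field is numeric."""
    return [r for r in rows if r[1].strip().isdigit()]

def summarize(rows):
    """Stage 3: aggregate amounts per key."""
    totals = {}
    for key, amount in rows:
        totals[key] = totals.get(key, 0) + int(amount)
    return totals

raw = ["a,10", "b,oops", "a,5"]
stage1 = parse(raw)       # each intermediate can be dumped and diffed
stage2 = validate(stage1)
stage3 = summarize(stage2)
print(stage3)             # {'a': 15}
```

Because nothing here touches global state, a test (or a diff against last night's output) can be pointed at any stage boundary.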
Excellent commentary. My primary customer makes a lot of different industrial machines. One of the primary engineering tasks is testability - how do we know this thing actually works? Keeps the electrical and mechanical engineers busy. No reason why the same principles could not be applied to the software.
Charlie Gilley Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783 "They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." BF, 1759
-
The next two words out of Dev's Boss' mouth should be: 1) "You're" and 2) should start with a "f" and end with an "ed".
-
Gives you a whole new perspective on driverless car software, huh?
Speed: 60 kph
Terrain: mountain road on a bridge over a chasm
Altitude: 2000 meters above sea level
Forward sensors: 5 workers on the road 10 meters ahead
Reverse sensors: tractor trailer approaching at 65 kph, 20 meters back
Plow into the workers (5 killed)? Veer off the bridge (1 killed: the passenger)? Things that make you go hmmmmmm.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
-
How many times have I had this conversation? Real-life exchange between me and a member of sales as we sat down to lunch one day: Sales: Hey, how is the Project X thing coming along? Dev: What's Project X? Sales: You know, Project X. We've been selling it for the last month. Dev: Never heard of it. Sales: Geez, man. We have a client coming onboard next week. Dev: I guess you should explain to me just what you've been selling. Sales: I don't have time for that, but you need to get on it right away.
-
Kevin Marois wrote:
I think that there's a stigma associated with Testing that the primary purpose of a developer is to write code, and testing isn't code.
Yes, but the stigma of producing terrible code that literally crashes in production is far worse. The best kind of testing (and software development) occurs when a developer (no matter the level) literally thinks:
I _own_ this software and it represents me.
However, in corporate environments -- yes, I work in one too -- this does not occur, for many reasons:
* the dev is ignored anyway
* the dev has so many layers of management that no one ever really talks to them
* the project is boring
* the project is doomed for other reasons
* no actual requirements were ever created, so anything could be accepted (screw it)
* the people in charge who are driving the project don't know anything about actual software dev
Too many more to list here.
raddevus wrote:
The best kind of testing (and software development) occurs when a developer (no matter the level) literally thinks:
I _own_ this software and it represents me.
I disagree. I don't consider myself the owner of any of the code I write. Author, yes; owner, no. I've seen too many cases where having devs think they own code leads to egos being attached to it, which creates worse roadblocks to getting it fixed than anything else. I just test the code I write, so whenever there's a problem, I know it'll be in someone else's code instead of mine. Helps me relax at night :). Oh, and test data? A useful tool, but not a sufficient one. It misses so much, and the bugs that cause catastrophic failures are almost always in untested parts of the code. As devs, it's our job to not leave parts of the code untested. Period.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
-
It's about the environment. Having someone leave after 2 days is preferable. When I was 19, my first full-time position was working with insurance premium calculations. Finding out that I COULD BE FINED for errors in my calculations CHANGED me for the better. I would add diagnostic output to the current code (based on a setting), and it allowed me to see the internals of ANY premium calculation change. The worst part was that the ratings books were always being updated, and our #1 change was to the ratings system that adjusted "some" premiums. So I would make my changes, capture last night's run, restore/rerun using my new calculation, and DIFF the two sets of data. Then I'd highlight the differences and MAKE MANAGEMENT sign off that the new calcs were good and nothing else was affected. From those days forward, having some semblance of testable code (at least unit tests) was key. And my biggest goofs were thinking something was such an OBVIOUSLY SIMPLE solution that it did NOT need any "coverage" testing...
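The capture/rerun/diff workflow described above is essentially what's now called golden-master (or approval) testing: run the old and new calculations over the same inputs and diff the results, so only intentional changes show up for sign-off. A minimal sketch, with hypothetical stand-in functions for the real premium code:

```python
# Golden-master sketch of the capture/rerun/diff workflow.
# old_premium/new_premium are hypothetical stand-ins, not real rating code.

def old_premium(age, base):
    return base * (1.5 if age < 25 else 1.0)

def new_premium(age, base):
    # intentional change: young-driver surcharge lowered
    return base * (1.4 if age < 25 else 1.0)

def diff_runs(cases, old, new):
    """Return only the cases whose results changed between runs."""
    return {c: (old(*c), new(*c)) for c in cases if old(*c) != new(*c)}

cases = [(22, 100), (40, 100), (19, 200)]
changed = diff_runs(cases, old_premium, new_premium)
print(changed)  # {(22, 100): (150.0, 140.0), (19, 200): (300.0, 280.0)}
```

The empty-diff case is the regression guarantee: if `changed` contains anything you didn't intend, something else was affected.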
-
Over a 35-year career, I have worked in two shops where they did good unit tests. And these tests caught practically all bugs. Integrations were a breeze. Merging changes, no problems. Both these shops had psychopathic managers that made life a horror. One was a startup that burned brightly for a year and went dark. The other was a zero-documentation, high tech debt nightmare. My conclusion is that a conservation law may be at work limiting the amount of goodness in any workplace. I am still working on an experiment to reveal the exact nature of this fundamental law.
-
I don't tend to write test cases; I write industrial automation code, and trying to write tests for every possible interaction with the I/O would be a nightmare, so I do simulations with virtual and physical (dummy) hardware in my office. It's easier to simulate real-time issues and faults with switches, knobs, and various sensors. This finds 99% of all issues before they go out to the field; on-site startups then tend to find the remaining 1% of edge cases. As the sole programmer and tester (and installer, and trainer) for a decent-sized company, I have to manage my time very carefully. And yes, I have to go to each facility for startups; way too much traveling.
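For anyone wanting a middle ground between "no test cases" and simulating the whole plant: if the control logic only talks to an I/O interface, a scripted fake can stand in for the hardware, which makes at least the logic unit-testable. A minimal sketch (all names hypothetical, not from the poster's setup):

```python
# Testing control logic against simulated I/O instead of real hardware.
# The controller only sees the interface, so a fake sensor/actuator
# pair can replace the plant. (All names are hypothetical.)

class FakeIO:
    """Simulated I/O: scripted sensor readings, recorded outputs."""
    def __init__(self, readings):
        self.readings = iter(readings)
        self.outputs = []

    def read_temp(self):
        return next(self.readings)

    def set_heater(self, on):
        self.outputs.append(on)

def control_step(io, setpoint):
    """Bang-bang controller: heater on below setpoint, off at/above."""
    io.set_heater(io.read_temp() < setpoint)

io = FakeIO(readings=[18.0, 19.5, 20.5])
for _ in range(3):
    control_step(io, setpoint=20.0)
print(io.outputs)  # [True, True, False]
```

Fault injection then becomes a matter of scripting bad readings (spikes, dropouts) into `FakeIO` instead of flipping physical switches.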
-
I've owned businesses, managed, sold, and written code. There is a fine line between releasing perfect code and missing a window of opportunity. So many developers forget that they are employed because their software solves a problem, and when it fails to solve something they look back and ask, "Why did I get laid off?" I've found the trick with bugs (and you all have them, and if you don't, wait 5 minutes and you will) is to be nimble and fix them quickly. Six-month releases are a joke; they cause users to struggle through potential nightmares and are ONLY around to justify QA groups and terrible managers. Some applications require test procedures which are always evolving and will NEVER be perfect, because applications and their solutions evolve rapidly. If a developer is always being pushed for features and has little time to drive the entire application through its paces, I can see how someone would respond, "have someone else test the product." There are always two sides to a story, and if both aren't heard and weighed, well, that's crappy management -- or a market that demands it faster than you can deliver. You think Facebook was perfect the first release, or even the second?
-
Good discussion and viewpoint.
patbob wrote:
I don't consider myself the owner of any of the code I write.
I have a conflict with this, but I lean toward ownership, because I have created numerous projects where I am the requirements gatherer, system analyst, developer, tester, and deployer. I own them 100%. You can see the app I created this way on multiple platforms:
Swift (iOS) app in the App Store ==> CYaPass on the App Store[^]
Android app in Google Play ==> CYaPass in Google Play[^]
Windows WinForms app ==> C'YaPass: Forget All Your Passwords | Get C'YaPass[^]
And even as a JavaScript, HTML5, Canvas app -- no install required, try it in your browser: C'YaPass: Forget All Your Passwords | WebApp[^]
So my ownership ideas come from that. And I know devs' egos go crazy at times, but without ownership, people generally just don't give a crap how things turn out. Just an opinion, of course.
patbob wrote:
As devs, it's our job to not leave parts of the code untested. Period.
I know. You can't do everything. Except, when you have to. :)
-
And these are the people who write the code used by your bank, your car, your life support machine ... Frightening, isn't it?
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
Don't even talk about the banks... :( I worked in the financial industry for 20 years. What an utter mess!
Steve Naidamast Sr. Software Engineer Black Falcon Software, Inc. blackfalconsoftware@outlook.com
-
I don't tend to write test cases; I write industrial automation code and to try to write tests for every possible interaction with the I/O would be a nightmare, so I do simulations with virtual and physical (dummy) hardware in my office. it's easier to simulate real time issues and faults with switches, knobs and various sensors. This finds 99% of all issues before they go out to the field, then on-site startups tend to find the remaining 1% of the edge cases. As the sole programmer and tester (and installer, and trainer) for a decent sized company I have to manage my time very carefully. And yes I have to go to each facility for startups; way too much traveling.
...and this is the way any and every business should run. Nothing will ever be perfect in this new world of complexity. Wait until self-driving cars start running into things... oh, and they will!
-
I find unit tests helpful when initially developing some new functionality, as they allow me to focus on the new functionality and getting that working in isolation. Then I can look at integrating the new functionality when I know it works. And definitely agree with your point about regression testing. I think this is possibly the single most powerful reason to use them. If I make a change to the code, I want to be sure I have changed all the affected areas, and breaking unit tests gives me exactly that.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare Home | LinkedIn | Google+ | Twitter
I completely agree with the comment about regression testing. You have to at least run the code to make sure it works, right? Why not frame that test in a way that can be run again? If you have a decent framework set up and are familiar with how to use it, it won't take much longer anyway. Then sometime down the road, when you have to modify the code (an internal enhancement for performance, or a change in functionality), it gives you so much more confidence to make the change, knowing you had good coverage tests from before.
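"Frame that test in a way that can be run again" can be as small as taking the one-off check you'd do by hand and writing it as a stdlib `unittest` case, so it becomes a permanent regression test. A tiny sketch (`normalize_name` is a hypothetical function under test, not from any post above):

```python
# The manual spot-check, framed as a repeatable regression test.
# normalize_name is a hypothetical example function.
import unittest

def normalize_name(name):
    """Collapse whitespace and title-case a customer name."""
    return " ".join(name.split()).title()

class TestNormalizeName(unittest.TestCase):
    def test_trims_and_titles(self):
        self.assertEqual(normalize_name("  ada   lovelace "), "Ada Lovelace")

    def test_already_clean(self):
        self.assertEqual(normalize_name("Grace Hopper"), "Grace Hopper")

if __name__ == "__main__":
    unittest.main(argv=["tests"], exit=False)
```

The first time this runs it proves the function works; every run after that proves a later change didn't break it.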
-
Thanks for adding to the conversation.
Kirk 10389821 wrote:
my biggest goofs were thinking something was such an OBVIOUSLY SIMPLE solution that it did NOT need any "coverage" testing...
So true. I've often found that to be the case as well. This whole dev-testing thing is really about the mentality you bring to the coding game. And then there are tools & methods that can help make it all part of the process. It's a good book that makes you contemplate the entire dev life cycle. I'm enjoying it.