jim lahey wrote:
and with an interface I can mock out all my dependencies and write proper unit tests
In this case your mocks are the second implementation and you need the interfaces. If you didn't mock, then "you aren't gonna need it".
A more interesting question is: why do you need to know the type in the first place? My argument against prefixes is that code without them is more readable. If the variable name tells me nothing, then the name is the problem, not the type or the lack of a prefix. I also find the convention impractical: it's easy to find prefixes for primitive types, but what about custom types? Should we invent prefixes for all of them? And if we do, doesn't accAccount look pretty stupid?
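Purely for illustration (these names and the Account stub are mine, not from the original discussion), here is the same set of declarations with and without prefixes:

// Hypothetical custom type, just so the sketch compiles.
public class Account { }

public class NamingExample
{
    // Hungarian-style prefixes: the type leaks into every name,
    // and custom types force us to invent prefixes such as "acc".
    int iRetryCount;
    string strCustomerName;
    Account accAccount;

    // Without prefixes: the names carry the meaning; the compiler
    // and the IDE already know the types.
    int retryCount;
    string customerName;
    Account currentAccount;
}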
That's true. My point was that a swallowed exception leaves the program in an invalid state. Even if the magnitude is smaller, no language can protect us from bad practice.
A try with an empty catch does just the same. Does that statement apply to all these languages, including C# and Java?
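A minimal, self-contained C# sketch of what I mean (my own hypothetical example; the same applies to Java):

using System;

class SwallowDemo
{
    static decimal balance = 100m;

    static void Withdraw(decimal amount)
    {
        if (amount > balance)
            throw new InvalidOperationException("Insufficient funds");
        balance -= amount;
    }

    static void Main()
    {
        try
        {
            Withdraw(150m);
        }
        catch (Exception)
        {
            // Swallowed: no log, no rethrow. Execution continues as if the
            // withdrawal succeeded, and everything after this point builds
            // on that invalid assumption.
        }
        Console.WriteLine("Withdrawal complete, balance = " + balance);
    }
}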
Marc Clifton wrote:
What book/site are you reading? That at least has more meat to it than I've encountered.
I am reading Domain Driven Design by Evans: http://books.google.no/books?id=7dlaMs0SECsC&dq=domain+driven+design&pg=PP1&ots=ulyP23_6u6&sig=Qt5nA1JUGaJhKLITdsLbti9fy0U&hl=no&prev=http://www.google.no/search?client=firefox-a&rls=org.mozilla:en-US:official&channel=s&hl=no&q=domain+driven+design&btnG=Google-s%C3%B8k&sa=X&oi=print&ct=title&cad=one-book-with-thumbnail[^]
Marc Clifton wrote:
I don't see how you can have a value without identifying who/what that value is.
Perhaps this was not the best example of them all. DDD defines very clearly what an Entity is and what a Value is, but sometimes the identification is blurry: something can be a value in one context and an entity in another. I guess my database example applies to an aggregate, an entity, consisting of value objects. And when I think of it, currency is probably not a value object.
Marc Clifton wrote:
The point of language is not to reduce complexity, but to increase accuracy
I don't see any conflicts here. When we apply an accurate language to our code, we remove the need to constantly map our constructs to whatever our customer calls it. That alone will contribute to lower complexity. Perhaps it's more precise to say that we apply DDD to reduce complexity, and the accurate language is one of our tools to do that.
Good points you made there, but did you have to insult all of us in the process? We are, after all, trying to learn and understand this thing.
The goal of DDD, or one of them, is to create a common language for all parties. That means developers, business analysts and customers all use the same words and phrasing. It will take time to fully understand the domain, so this is something we have to apply piecemeal, like eating the elephant. This language is then used to reduce complexity: we name our objects and services by the same names, and they have the same interactions.

Then there are some rules to adhere to. I have only scratched the surface, but I found the use of Repositories, Entities, Value Objects and Aggregates to be especially powerful. Just by applying those, I gained a deeper understanding of the customer's domain and saw numerous ways to improve my code. That goes for database design as well. One example: if your aggregate contains a collection of value objects and you need to update some of them in the database, just delete them and insert them anew. That works because a value object is not identified by who it is (that's an entity) but by the value(s) it has. Color is one example. Currency another.
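To make the entity/value distinction concrete, here is a minimal C# sketch (my own illustration, not taken from Evans' book): a Currency value object, compared by its values, next to an Account entity, identified by its Id.

using System;

// A value object: it has no identity of its own, so equality compares values.
// Two Currency instances with the same code are interchangeable.
public class Currency
{
    public string Code { get; private set; }   // e.g. "NOK", "USD"

    public Currency(string code)
    {
        Code = code;
    }

    public override bool Equals(object obj)
    {
        Currency other = obj as Currency;
        return other != null && other.Code == Code;
    }

    public override int GetHashCode()
    {
        return Code.GetHashCode();
    }
}

// An entity: identified by who it is (the Id), no matter how its values change.
public class Account
{
    public Guid Id { get; private set; }
    public Currency Currency { get; set; }

    public Account(Guid id)
    {
        Id = id;
    }
}

Because the value objects carry no identity the application cares about, a repository can update an aggregate's collection of them by simply deleting the old rows and inserting the new ones, as described above.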
The problem with this approach, which the folks at Microsoft should be very well aware of, is that with so many similar-looking dialogs we start to click OK automatically without checking. That's what I did after the third dialog, anyway. It also disappointed me that I was offered no option to turn this feature off. -- Thomas
My showstopper is all the nag screens. I have never seen so many message dialogs per minute in my life! Almost whenever I do something I am told Windows needs permission to continue. Can I turn this madness off somewhere?
sKoi wrote:
Comments should focus on the intention of the code not the implementation
I think we would both have to search for a very long time to find anyone who disagrees. But how often does this happen? Sooner or later, the code will drift away from the comments. I believe that if you are that good, that your code always matches your comments and your comments are right on the spot, then you could drop the comments: your code would already be clean and self-documenting.
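A tiny, hypothetical C# illustration of the point: the first method needs a comment to state the intention, the second states it in a name the compiler keeps honest.

public class Customer
{
    public decimal Balance { get; set; }
    public decimal CreditLimit { get; set; }

    // Version 1: the intention lives in a comment that can drift from the code.
    public bool Check(decimal orderTotal)
    {
        // make sure the customer is still within the credit limit
        return Balance + orderTotal <= CreditLimit;
    }

    // Version 2: the intention lives in the name, so there is nothing to drift.
    public bool IsWithinCreditLimit(decimal orderTotal)
    {
        return Balance + orderTotal <= CreditLimit;
    }
}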
digital man wrote:
If you can't explain what you've done then you probably don't understand it
I'd say, if you can't read and understand what they wrote (coded), the code is already in a bad state and no comments can save it. I don't know about you, but I have to understand the code, not the comments, before I can make any changes.
I thought it was because chemical processes work more slowly in the cold. Isn't a freezer the preferred long-term storage for batteries?
I'd like a real observer pattern for the event handling. I don't want to write:

if (myEvent != null)
    myEvent(this, EventArgs.Empty);

And I definitely don't want to write:

MyEventHandler me = myEvent;
if (me != null)
    me(this, EventArgs.Empty);

Why should I even care whether there are any listeners? I just want to write:

myEvent(this, EventArgs.Empty);
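One common C# workaround (not mentioned in the post above, just a well-known idiom) is to initialise the event with an empty anonymous delegate, so the invocation list is never null and the raise site needs no check:

using System;

public class Publisher
{
    // The empty anonymous method guarantees myEvent is never null.
    public event EventHandler myEvent = delegate { };

    public void DoSomething()
    {
        // No null check needed; at worst only the empty delegate runs.
        myEvent(this, EventArgs.Empty);
    }
}

The small cost is one extra no-op invocation every time the event is raised.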
I think you are confusing two things: OOP in itself and the need to replace an existing system. In this case, there is no need.