That is a good analysis of some sample requirements. The next complexity that comes to mind is the definition of the entities to support both scenarios. The admin area allows you to create a Customer; the user area allows viewing the Customer together with its Orders and LineItems. I assume the same Customer entity is used for both areas of the system. Now, in the admin area, when creating a Customer you simply have an Orders collection that is null, which keeps things fairly simple. If it were not null, would you design your system to handle it (insert the Orders too), or would you throw an exception and force things to remain simple for CRUD operations beyond reads?
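Concretely, I'm picturing a choice something like this on the insert side (names here are just illustrative, not anyone's actual API):

using System;
using System.Collections.Generic;

public class Order { public int OrderId { get; set; } }

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
    public List<Order> Orders { get; set; }   // null in the admin "create" scenario
}

public class CustomerBusinessManager
{
    public void Insert(Customer customer)
    {
        // Option 1: keep creates simple and refuse a populated graph.
        if (customer.Orders != null && customer.Orders.Count > 0)
            throw new InvalidOperationException("Creating a Customer with Orders is not supported.");

        InsertCustomerRow(customer);

        // Option 2 (instead of the throw): cascade the insert down the graph.
        // foreach (Order order in customer.Orders) InsertOrderRow(customer.CustomerId, order);
    }

    private void InsertCustomerRow(Customer customer) { /* data access omitted */ }
    // private void InsertOrderRow(int customerId, Order order) { /* data access omitted */ }
}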
Leftyfarrell
Posts
-
DTO design supporting multiple tables
Thx, will check into that. By "draw the line", I mean with nested objects in an object graph, which isn't really shown in my object/table example above. For instance, say you have a Course, a Session, and a Location. A Course has multiple Sessions, and each Session has a Location. Does your Course object have a collection of Session objects for reading convenience? What about CRUD on the Course, where do you "draw the line"? If you add a Session to the collection, do you now save the Course AND insert a new Session? If the Session has a Location within it, and you add a new Location to a new Session, add that to a Course, and save the Course, do you insert the Session and the Location? It seems to me that dealing with object graphs for CRUD operations can get complex quickly. But having those properties does properly represent the domain model, and it certainly improves the way you work with the data when only reading it (loading Course, Session and Location info for display purposes). Then for performance reasons you might get into lazy loading, although that may not be feasible across WCF boundaries. It seems to be all about the trade-offs, and I'm just looking for the right balance that makes sense.
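To make the Course example concrete, this is roughly the shape I'm describing, with a save that cascades (purely a sketch, names invented for illustration):

using System.Collections.Generic;

public class Location { public int LocationId { get; set; } public string Name { get; set; } }
public class Session { public int SessionId { get; set; } public Location Location { get; set; } }

public class Course
{
    public int CourseId { get; set; }
    public string Title { get; set; }
    public List<Session> Sessions { get; set; }   // convenient for reads, awkward for CRUD
}

public class CourseBusinessManager
{
    // The "where do you draw the line" question: does saving a Course cascade into its children?
    public void Save(Course course)
    {
        UpdateCourseRow(course);
        foreach (Session session in course.Sessions)
        {
            if (session.Location != null && session.Location.LocationId == 0)
                InsertLocationRow(session.Location);          // brand-new Location nested in the graph
            if (session.SessionId == 0)
                InsertSessionRow(course.CourseId, session);   // brand-new Session added to the collection
        }
    }

    private void UpdateCourseRow(Course course) { /* data access omitted */ }
    private void InsertLocationRow(Location location) { /* data access omitted */ }
    private void InsertSessionRow(int courseId, Session session) { /* data access omitted */ }
}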
-
DTO design supporting multiple tables
Thanks for your reply. I agree that I don't like the class-per-table idea. Tables are normalized; classes should represent proper domain objects. The question then becomes: where do you draw the line on CRUD operations? To me, a UserWidget without the UserControlPath property is kind of useless. On the other hand, when you save a new UserWidget, you cannot edit/change the UserControlPath property value. You could consider it a lookup and allow them to change the WidgetId (indirectly choosing a different UserControlPath). But if the UserWidget is exposed via WCF, read-only properties are not supported. So I can expose the WidgetId, which is editable, but if I provide a string UserControlPath property, it becomes editable on a WCF data contract, which I don't want. So they load up a UserWidget object, change the UserControlPath string value, and submit the object for saving... now what? Ignore the property value? Allow the saving of this instance to change the value for all other instances? Neither sounds very elegant.
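What I keep coming back to is having the service simply ignore the incoming value, roughly like this (just a sketch with made-up method names):

public class UserWidget
{
    public int Id { get; set; }
    public int WidgetId { get; set; }
    public string Name { get; set; }
    public string UserControlPath { get; set; }   // has to be settable for WCF, but is display-only
}

public class UserWidgetService
{
    public void Update(UserWidget userWidget)
    {
        // Persist only the editable fields; WidgetId indirectly determines UserControlPath,
        // so whatever the caller put in UserControlPath is deliberately ignored here.
        UpdateUserWidgetRow(userWidget.Id, userWidget.WidgetId, userWidget.Name);
    }

    private void UpdateUserWidgetRow(int id, int widgetId, string name) { /* data access omitted */ }
}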
-
DTO design supporting multiple tables
I'm looking for some feedback on the best/alternate ways to handle the following scenario:

Widget (defines the pool from which to create widgets)
========
Id
UserControlPath

DefaultWidget (provides a set of default properties for a widget instance)
==========
Id
WidgetId
Name

UserWidget (defines a widget with properties customized by the user)
===========
Id
WidgetId
DefaultWidgetId
Name

Given the 3 database tables above, how would you create a DTO object? I could have a class per table, but then I have to do additional queries or work with additional objects for all the data I need. I could use a nested object (UserWidget has a Widget.UserControlPath property). I could flatten out the UserWidget to have those properties plus a UserControlPath property. When loading a UserWidget, I cannot display it without the UserControlPath value. When saving updates to the properties of a UserWidget, the UserControlPath is read-only - but WCF data contracts do not support the idea of read-only properties (as far as I know). Saving full object graphs can quickly become complex. How do you model similar relationships? Thx.
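To illustrate the two shapes I'm weighing, flattened vs. nested (rough sketch, attribute usage simplified):

using System.Runtime.Serialization;

// Option A: flattened DTO - UserWidget carries the joined UserControlPath value.
[DataContract]
public class UserWidgetFlat
{
    [DataMember] public int Id { get; set; }
    [DataMember] public int WidgetId { get; set; }
    [DataMember] public int DefaultWidgetId { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string UserControlPath { get; set; }  // read-only in spirit, but WCF needs a setter
}

// Option B: nested DTO - UserWidget exposes the related Widget.
[DataContract]
public class Widget
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string UserControlPath { get; set; }
}

[DataContract]
public class UserWidgetNested
{
    [DataMember] public int Id { get; set; }
    [DataMember] public int DefaultWidgetId { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public Widget Widget { get; set; }  // requires a join (or a second query) to populate
}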
-
User Control fires event - page and all other user controls subscribe to event
Here is my scenario... a user control (UCCulture) with a dropdown box for culture. Autopostback on change of the dropdown box sets the thread culture to the new value early in the page lifecycle. In most cases a postback does not require rebinding data; however, in the case of a culture change, this often does require a rebinding of data using the newly selected culture. I'm looking for the best way to allow UCCulture to fire a CultureChanged event and have the page, master page and all other user controls subscribe to this event, so they can rebind their data, if necessary, when the event is fired. I have base classes for Page and UserControl, and would prefer to wire up the events in these base classes instead of manually writing the event hookup code individually. Anyone developed a pattern for this type of thing already?
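The kind of wiring I'm after looks roughly like this (illustrative only; the real base class names differ):

using System;
using System.Web.UI;

// Sketch: the base page owns the event; controls find their page and subscribe in OnInit.
public class BasePage : Page
{
    public event EventHandler CultureChanged;

    // UCCulture would call this from its dropdown's SelectedIndexChanged handler.
    public void NotifyCultureChanged(object sender)
    {
        EventHandler handler = CultureChanged;
        if (handler != null) handler(sender, EventArgs.Empty);
    }
}

public class BaseUserControl : UserControl
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        BasePage page = Page as BasePage;
        if (page != null) page.CultureChanged += OnCultureChanged;
    }

    // Derived controls override this to rebind their data for the new culture.
    protected virtual void OnCultureChanged(object sender, EventArgs e) { }
}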
-
Object Oriented Design - object graphs and lookup table data
Yeah, I guess my question is targeting the classes that might normally be created by an ORM. In our case they are WCF data contracts and have no behaviour. The system also has other classes with behaviour, such as a UserBusinessManager class that exposes methods for working with a User object, to perform actions like Load, Update, Insert, etc.
-
Object Oriented Design - object graphs and lookup table data
I would like to get some ideas on the best way to handle dealing with data in an object oriented system. Consider:

User
=======
UserId
UserTypeId
FirstName
LastName
OrganizationId

UserType
===========
UserTypeId
Name

Organization
============
OrganizationId
Name
District
YearFounded

How would you model these database tables with your objects? For simple lookup table data, would User have a property for just UserTypeId, or would it have a property of UserType (which includes both Id and Name)? For more complex data, would User have a property for just OrganizationId, or would it have a property of Organization? If you go the route of having only an Id as a property, how do you manage the display of property data? For instance, when displaying the User in the UI, you still need to display the UserType.Name value and not just the Id. Would you load and cache a collection of UserType objects and use this to look up the UserType.Name value when needed? Would it make sense to expose a read-only property User.UserTypeName for convenience that provides the name value? I say read-only since you can edit the Id value, but not the actual UserType name, when saving a user. The object graph can grow large with several of these relationships... what rules do you use to draw the line as to where an object instance and its related graph stop? Thanks for any opinions. Assume the objects must be WCF friendly for use as data contracts.
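For the lookup case, the compromise I keep sketching looks like this (property names purely for illustration):

using System.Runtime.Serialization;

[DataContract]
public class User
{
    [DataMember] public int UserId { get; set; }
    [DataMember] public string FirstName { get; set; }
    [DataMember] public string LastName { get; set; }

    // Editable: pick a different UserType by Id.
    [DataMember] public int UserTypeId { get; set; }

    // Convenience for display only; the save path ignores it
    // (WCF data contracts can't express true read-only, so it still has a setter).
    [DataMember] public string UserTypeName { get; set; }

    // For richer relationships, carrying the full Organization object might be worth it instead of just the Id.
    [DataMember] public int OrganizationId { get; set; }
}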
-
Avoiding Circular References with Application Architecture
Great discussion guys. Thanks for the comments, and sorry I was away for a while... this tab got lost within 40 others in my Firefox window. For now, what we wound up doing is something like this: the Global.Common assembly contains base classes for Validation. The logic to load validation error message strings is different depending on what "area" of the application you are calling validation from, i.e. UI vs. BLL. Each of our "areas" also has an assembly for common code (shared among the assemblies in that area), so we have UI.Common and BLL.Common. For the validation example, we created a validation class in UI.Common that inherits from Global.Common but provides its own implementation, which loads the message strings from our BLL by calling through WCF. We will create another validation class in BLL.Common that inherits from Global.Common, but this implementation will call the BLL methods directly to load the message strings, without going through WCF. This way, the cross-cutting concerns are kept within the 3 Common assemblies, and it uses a provider-model idea for the loading of message resources.
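In code the provider-model idea comes out roughly like this (simplified; the real class and method names differ):

// Global.Common: the base class knows how to validate, but not where messages come from.
public abstract class ValidatorBase
{
    protected abstract string LoadMessage(string messageKey);

    public string GetRequiredFieldMessage(string fieldName)
    {
        return string.Format(LoadMessage("Validation.Required"), fieldName);
    }
}

// UI.Common: loads localized messages from the BLL via the WCF client adapter.
public class UiValidator : ValidatorBase
{
    protected override string LoadMessage(string messageKey)
    {
        // return businessServiceClient.GetValidationMessage(messageKey);
        return "{0} is required.";  // placeholder so the sketch compiles
    }
}

// BLL.Common: loads the same messages by calling the BLL directly, no WCF hop.
public class BllValidator : ValidatorBase
{
    protected override string LoadMessage(string messageKey)
    {
        // return messageRepository.GetValidationMessage(messageKey);
        return "{0} is required.";  // placeholder so the sketch compiles
    }
}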
-
Avoiding Circular References with Application Architecture
Thanks Jon, I was hoping to get your opinion. I considered that, but worried that it wasn't the "best" or "proper" solution, since the circular reference would technically still be there, right? It would just be hidden on one side by the interface. So the Common API would still reference and call the Business Layer for data. All other assemblies would reference a Common Interfaces project, and their Common instance would be provided by Unity. Something like that? Again, each one still references the other directly or indirectly, and that is what I wasn't sure about.
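Something like this is what I picture, assuming Unity and made-up type names (the only code that knows about both assemblies is the composition root):

using System.Collections.Generic;
using System.Globalization;
using Microsoft.Practices.Unity;

// Common.Interfaces: everything else depends only on this.
public interface ICultureProvider
{
    IList<CultureInfo> GetSupportedCultures();
}

// BusinessLogic: the real implementation lives where the data access already is.
public class CultureProvider : ICultureProvider
{
    public IList<CultureInfo> GetSupportedCultures()
    {
        // load from the database through the existing BLL/DAL
        return new List<CultureInfo> { new CultureInfo("en-US"), new CultureInfo("fr-CA") };
    }
}

// Composition root (e.g. Global.asax): references both assemblies and wires them together.
public static class Bootstrapper
{
    public static IUnityContainer Configure()
    {
        IUnityContainer container = new UnityContainer();
        container.RegisterType<ICultureProvider, CultureProvider>(new ContainerControlledLifetimeManager());
        return container;
    }
}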
-
Avoiding Circular References with Application Architecture
Thanks for the suggestion. I simplified the example in hopes of not boring readers with details. Our current architecture uses the MVP pattern for the UI layer. The presenters make use of a Business Service Client Adapter assembly that provides data from our Business Service layer through WCF. The heart of my question really has to do with cross-cutting concerns like Logging, Exception Handling, Validation, Instrumentation, etc. We are implementing the use of the MS Application Blocks. If the cross-cutting concerns are encapsulated in accessor classes within the Common assembly, how do you handle cross-cutting concerns that require database access? Should they have their own encapsulated DAL, making them like a standalone service? If they try to use the existing Business Logic layer (and DAL), this will create a circular reference. A concrete example could be Validation, where the validation error messages must be localized and loaded from a database.
-
Avoiding Circular References with Application Architecture
I would like to hear comments and suggestions on the proper way to avoid circular references between projects. Consider an app with the following assemblies:
- UI
- BusinessLogic
- DAL
- Common
So typically, the dependencies would look like this: UI >> BusinessLogic >> DAL, and all 3 above depend on Common. So far, so good, but... let's say that I have a common collection of data that is required to be used in all 3 of the main assemblies, such as a list of supported cultures. So I create a static helper class in the Common assembly, and this gives me access to the shared singleton collection from all 3 main assemblies. However, to populate that singleton collection when the app starts, I need to hit the database once. To do this right, I would load the collection from the database using my BusinessLogic layer. But I cannot do this, because doing so would cause a circular reference between Common and BusinessLogic. What is the best architecture to avoid/work around this problem? Thanks.
-
Architecture that supports Unit Testing - Is there such a thing as Too Many Interfaces?
Yeah, we too were happy to see the EF... haha, and happy again to see it go when we shelved it (for now). We hope to pick it up again and provide a DAL implementation for it when v2 comes out. You are right about testing the DAL. The Adapter is testable by mocking the DataAction. The DataAction is testable by providing a connection string to a unit test database. That is where some of the complexity of my original post came from. A DataAction method test hits the db, runs a sproc and returns a DataSet or DataTable or DataRow. The Adapter is like the commander of the DAL, orchestrating the calls to DataAction and Mapper. My original question was hinting at the dependency tree depth. Our current setup has: the Adapter passes a DataTable to the Mapper, and the Mapper builds up a User entity and returns it. Now, some of the hidden complexity within the Mapper is created by the need to support multiple languages (globalization), etc. So internally, the Mapper depends on a static Globalization class, which has a singleton dictionary (a collection of supported culture info) that is used in part of the mapping. If the singleton is not populated, then it too needs to call a separate GlobalizationAdapter (to hit the db and get the collection). It was here that I was wondering about the depth of the unit test. Because the Globalization class is static, I can only mock the GlobalizationAdapter that it uses to go to the db (and avoid a db hit during unit testing). So when I'm unit testing UserMapping, I'm actually testing: UserMapping >> Globalization (static) >> MockGlobalizationAdapter. Without mocking the GlobalizationAdapter, the test fails, obviously. I guess my question really deals with the best way to handle unit testing classes that depend on static classes or singletons. Or should this dependency chain be re-architected in any way?
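The seam I ended up wanting looks something like this (simplified sketch, names approximate):

using System.Collections.Generic;

public interface IGlobalizationAdapter
{
    IDictionary<string, int> LoadSupportedCultures();  // e.g. culture name -> culture id
}

// Still a static class, but the adapter it depends on can be swapped for a stub in tests.
public static class Globalization
{
    private static IDictionary<string, int> cultures;

    // Production code assigns the real GlobalizationAdapter at startup; a unit test assigns a stub instead.
    public static IGlobalizationAdapter Adapter { get; set; }

    public static int GetCultureId(string cultureName)
    {
        if (cultures == null) cultures = Adapter.LoadSupportedCultures();
        return cultures[cultureName];
    }

    // Lets tests clear the singleton so one test's data doesn't leak into the next.
    public static void Reset() { cultures = null; }
}

// In a UserMapping test:
//   Globalization.Reset();
//   Globalization.Adapter = new StubGlobalizationAdapter();   // no database hit
//   ... run the mapping and assert ...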
-
Exception Hierarchy
Building on Jon's answer, we have similar layers, and while it's not necessary, here is a glimpse of what we did. Define classes for DataAccessLayerException, BusinessLayerException and UILayerException. Then we used the MS Patterns and Practices Exception Handling Application Block to help us handle unhandled exceptions (and by handle, I mean log, determine whether or not to rethrow, etc.) at the boundaries. If the DAL catches an exception, it is wrapped in a DataAccessLayerException and rethrown. The Business Layer can then catch the DataAccessLayerException and deal with it. If an exception is thrown in the Business Layer, then it is wrapped in a BusinessLayerException and rethrown. The handling of different exception types can be defined within the application block configuration. In our case, the application block still only provides handling for unexpected errors. We would still have a try/catch to internally handle errors that you can code for and recover from. The application block handles everything else and logs it to the listener(s) of our choice (defined by configuration).
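The wrap-and-rethrow at the DAL boundary looks roughly like this (simplified sketch):

using System;
using System.Data;

public class DataAccessLayerException : Exception
{
    public DataAccessLayerException(string message, Exception inner) : base(message, inner) { }
}

public class UserDataAccess
{
    public DataTable LoadUser(int userId)
    {
        try
        {
            // run the stored procedure, fill and return the DataTable, etc.
            return new DataTable();  // placeholder
        }
        catch (Exception ex)
        {
            // Wrap so callers above the boundary only ever see DAL-typed failures.
            throw new DataAccessLayerException("Failed to load user " + userId, ex);
        }
    }
}

// The Business Layer then catches DataAccessLayerException specifically, and anything it
// can't recover from gets wrapped in a BusinessLayerException the same way.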
-
Architecture that supports Unit Testing - Is there such a thing as Too Many Interfaces?
Thanks for your thoughts everyone. I see some common threads in the responses, so... why is our DAL so complicated? The reason our DAL was broken up into multiple projects (Mapper, DataAction and Adapter) was that for our first DAL we took a stab at using Entity Framework v1 in a disconnected multi-tier application. When using the Entity Framework, the DataAction in our case would return a ModelUser object, as defined by the entity data model. We did not like the idea of passing this object (tied to the Entity Framework infrastructure) all the way out to our client tiers. To remedy this, we put a facade on the outside of it (the Adapter) and created a Mapper that would translate/convert the ModelUser object into a POCO (Plain Old CLR Object) EntityUser. This translation is not overly straightforward, so it seemed appropriate to split these up. So, the Entity Framework DAL would work like this:
- The Business Layer calls the Adapter.
- The Adapter calls the DataAction, which returns a ModelUser.
- The Adapter calls the Mapper, which takes the ModelUser and returns an EntityUser.
- The Adapter returns the EntityUser to the Business Layer.
So for us, the Adapter, DataAction and Mapper were considered separate parts of the DAL, but still part of a single DAL implementation. Both the DataAction and Mapper methods might have different signatures for an EF DAL vs. an ADO.NET DAL implementation. When it came to the ADO.NET DAL, it seemed to make sense to keep the same structure to avoid confusing things. The Business Layer references an IAdapter interface, so the whole DAL can be swapped out. As well, the DataAction implements an interface so that it can be mocked, avoiding the database hit during unit test runs. Again, this response is much longer than I'd first hoped... thanks for sticking with me to the end.
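Stripped right down, that flow in code looks something like this (type names approximate):

// EntityUser is the POCO that crosses the tier boundary; ModelUser stays inside the DAL.
public class EntityUser { public int UserId { get; set; } public string FirstName { get; set; } }
public class ModelUser { public int Id { get; set; } public string GivenName { get; set; } }

public interface IUserDataAction { ModelUser GetUser(int userId); }

public class UserMapper
{
    public EntityUser ToEntity(ModelUser model)
    {
        // the real mapping is less trivial (globalization, nested data, etc.)
        return new EntityUser { UserId = model.Id, FirstName = model.GivenName };
    }
}

// The Adapter is the facade the Business Layer talks to; it orchestrates DataAction + Mapper.
public class UserAdapter
{
    private readonly IUserDataAction dataAction;
    private readonly UserMapper mapper = new UserMapper();

    public UserAdapter(IUserDataAction dataAction) { this.dataAction = dataAction; }

    public EntityUser GetUser(int userId)
    {
        ModelUser model = dataAction.GetUser(userId);   // EF-specific object
        return mapper.ToEntity(model);                  // translated to the POCO
    }
}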
-
Architecture that supports Unit Testing - Is there such a thing as Too Many Interfaces?
Thanks for the reply. Heh, I see your point. Ok, maybe I can rephrase the question. Does anyone have a general rule of thumb they use in terms of class dependency depth that is OK for unit tests? How many levels into a dependency chain is OK vs. too far in a unit test? If I have 10 objects, chained together with dependencies... and I am writing a unit test for the parent object... can I go in 2 levels before I should create a mock and terminate the chain for testing purposes? 3 levels? 4? Thoughts?
-
Architecture that supports Unit Testing - Is there such a thing as Too Many Interfaces?
I have a question about architectures that support unit testing. Say I have these objects and dependencies:

DALUser >> MappingUser >> Common.LocalCulture
DALUser >> IDataActionUser

The DALUser object has a dependency on IDataActionUser, whose instance is populated using dependency injection. It also has an internal dependency on a MappingUser object. The logical flow looks like this:
- The Business Layer object calls the DALUser object and requests a User object.
- DALUser calls the IDataActionUser class to hit the database and return a DataTable.
- DALUser calls MappingUser, passing the DataTable, and the MappingUser class transforms the DataTable into a User object and returns it.
- MappingUser depends on a static Common.LocalCulture class that does some culture ID conversions.
- DALUser gets the User object from MappingUser and returns it to the Business Layer, as requested.
With these dependencies, I can easily mock the IDataActionUser and return a hardcoded DataTable from the mock for testing purposes. My question is... what is the appropriate scope of a unit test? Should I create an interface for MappingUser so that I can mock it and ensure that I am testing ONLY DALUser? What prevents you from moving towards a design where every class in your system has an interface? How do you determine where the line should be drawn between dependencies that should be mocked and dependencies that you include in a single unit test? If I do create an interface and mock MappingUser, then my "test" of DALUser isn't really testing much at all, since DALUser only acts as a public interface to the business layer and a controller of sorts that calls these dependent objects. Without the functionality of the dependent objects included in the test, the DALUser class doesn't really do much... so is it still worth unit testing? I'm obviously new to the unit testing game, and trying to absorb the finer points of the philosophy. Thanks in advance for opinions.
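For reference, the test I can write today looks something like this, with MappingUser collapsed to the essentials and a hand-rolled fake for IDataActionUser (illustrative only):

using System.Data;

public interface IDataActionUser { DataTable GetUserData(int userId); }

public class User { public int UserId { get; set; } public string FirstName { get; set; } }

public class MappingUser
{
    public User ToUser(DataTable table)
    {
        DataRow row = table.Rows[0];
        return new User { UserId = (int)row["UserId"], FirstName = (string)row["FirstName"] };
    }
}

public class DALUser
{
    private readonly IDataActionUser dataAction;
    private readonly MappingUser mapper = new MappingUser();
    public DALUser(IDataActionUser dataAction) { this.dataAction = dataAction; }
    public User GetUser(int userId) { return mapper.ToUser(dataAction.GetUserData(userId)); }
}

// Test double: returns a canned DataTable, so no database hit.
public class FakeDataActionUser : IDataActionUser
{
    public DataTable GetUserData(int userId)
    {
        DataTable table = new DataTable();
        table.Columns.Add("UserId", typeof(int));
        table.Columns.Add("FirstName", typeof(string));
        table.Rows.Add(userId, "Test");
        return table;
    }
}

public class DALUserTests
{
    // Covers DALUser together with the real MappingUser; only the database edge is faked.
    // If MappingUser were also mocked, DALUser alone would have almost nothing left to verify.
    public void GetUser_MapsDataTableToUser()
    {
        DALUser dal = new DALUser(new FakeDataActionUser());
        User user = dal.GetUser(42);
        // assert user.UserId == 42 and user.FirstName == "Test" with the test framework of choice
    }
}
-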
VS2008 stability [modified]
I'm using Vista Business and VS2008 with SP1 Beta installed. It does crash sometimes, maybe every 4 days (I usually don't shut down and restart every day, but leave windows open so I know where I left off). The biggest issue for me is IntelliSense, which stops working completely. The only thing I know of that fixes this so far is restarting Visual Studio.
-
AV recommendations?
This might be of interest... http://www.av-comparatives.org/
-
Employment Contract Interpretation - invention/creation
No, I'm not starting a business. I'm just looking to be informed about what I'm signing and what the implications could be if I did work on anything on my own.
-
Employment Contract Interpretation - invention/creation
LOL... great response!