(I was typing my reply when your old post disappeared! I figured it had been moved, but I had some trouble finding it. Anyway, here's what I was trying to say before.)

You're thinking a little old-skool there. If you take advantage of the .Net Framework, you can eliminate the middle tier and still keep things secure. You don't want to get into COM hell, which is not entirely eliminated with .Net assemblies. The Windows client and the SQL database are enough. Keep it simple.

If you are going to store your information in a database, you won't need to serialise your objects... that's really only useful if you're using files for storage, or if you genuinely need a middle tier. To store an object, all you need to do is put the current values of its member variables into columns in your database. So if you had an object like this:

    class myObject
    {
        int myNumber;
        string myString;
        // ...some functions and such...
    }

you would simply create a database table like this:

    create table myObject
    (
        object_id int,
        myNumber  int,
        myString  varchar(100)  -- size the column to fit your data
    )

You need an object ID so you can make sure you don't get multiple instances mixed up. You could use any property of the object to make it unique - it doesn't have to be an integer.

Then you create a stored procedure to store this object data (and perhaps send back a new unique ID), and another stored procedure to retrieve objects by their ID. You can call these stored procedures in a secure fashion from the client, take advantage of all kinds of cool security features designed for this exact purpose, and perform your validation on either (or both) of the two layers.

As long as you're not doing super-complex math or predicting the weather or something, you should be able to push all the validation down to the database layer. That is the way to go if you suspect there will be multiple versions of the client; but if you have total control over client updates, then you are safe doing that work on the client and avoiding a round trip to some server somewhere.

I've just finished a very complex application that uses this model - there was no reason for a middle tier. You haven't given any reason why you really need one.
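To make that concrete, here's roughly what I mean by those two stored procedures. The names (SaveMyObject, GetMyObject) are just placeholders I've made up, and I'm assuming you make object_id an IDENTITY column so SQL Server can generate the new ID for you:

    -- Assumes object_id is an IDENTITY column on the myObject table.
    create procedure SaveMyObject
        @myNumber  int,
        @myString  varchar(100),
        @object_id int output
    as
    begin
        insert into myObject (myNumber, myString)
        values (@myNumber, @myString)

        -- hand the new unique ID back to the caller
        set @object_id = scope_identity()
    end
    go

    create procedure GetMyObject
        @object_id int
    as
    begin
        select myNumber, myString
        from myObject
        where object_id = @object_id
    end
    go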
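Calling them from the Windows client is plain ADO.Net. A minimal sketch, using the made-up procedure names above and a connection string you'd supply yourself:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class ObjectStore
    {
        // Saves the object's member values via the (hypothetical)
        // SaveMyObject procedure and returns the new unique ID.
        static int Save(string connectionString, int myNumber, string myString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("SaveMyObject", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@myNumber", myNumber);
                cmd.Parameters.AddWithValue("@myString", myString);

                SqlParameter id = cmd.Parameters.Add("@object_id", SqlDbType.Int);
                id.Direction = ParameterDirection.Output;

                conn.Open();
                cmd.ExecuteNonQuery();
                return (int)id.Value;
            }
        }
    }

The usual way to get the security I mentioned is to give the client's SQL login execute rights on these procedures and nothing else - no direct table access at all.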
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps!
That makes sense. Are you using some kind of off-the-shelf user interface? I'm curious as to why you don't build that yourself.

IMO, the best thin client is a web browser. You have full control over the versioning of the software that way, because when you change the web site, it changes for everyone at the same time. In that case you do have a middle tier, sort of: the thin client is the browser, the middle tier is the web server with the ASP(x) engine, and then of course there's the back-end database.

Sometimes it's not worth dumbing down your software to run on old machines, and it's more cost-effective to upgrade the office machines instead. It all depends on the size of the company and how many machines would have to be replaced in order to bring it into the modern world. Companies need to keep their office equipment up to date, and a lot of times, software that drops support for old hardware can help 'motivate' that change. If that makes sense...
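To picture that setup: here's a minimal sketch of an ASP.Net code-behind page (all the names are made up) where the browser is the thin client, and this code, running on the web server, acts as your de facto middle tier calling the same kind of stored procedure:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    // Code-behind for a hypothetical MyObject.aspx page. The browser only
    // ever sees HTML; this code runs on the web server, and the stored
    // procedure runs on the back-end database.
    public partial class MyObjectPage : Page
    {
        protected Label ResultLabel; // normally wired up by the .aspx markup

        protected void Page_Load(object sender, EventArgs e)
        {
            string connStr = "..."; // your connection string here
            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlCommand cmd = new SqlCommand("GetMyObject", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@object_id", 1);
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    if (reader.Read())
                        ResultLabel.Text = (string)reader["myString"];
                }
            }
        }
    }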
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps!