
Am I being unreasonable?

Posted in The Lounge
Tags: help, csharp, database, wcf, xml
11 Posts, 7 Posters
jim lahey (#1):

    Morning everybody. I've got an issue brewing that I'd like your opinions on. I work for a company that produces GIS software. In the GIS field there is a great deal of knowledge, innovation and skill, but in what I would call traditional application development, a lot is left to be desired. For example, I see copy-and-paste code on a daily basis, reimplementation of existing .NET Framework functionality, database schemas with inconsistent naming conventions and no referential integrity, tightly coupled, non-unit-testable code lacking separation of concerns or single responsibility, gratuitous use of structural singletons, and an XML configuration ethos that goes like this: web.config or app.config has a path and filename configured which points to a separate XML file somewhere else on the server. That extra XML config has some actual configuration, but can also point to any number of other folder locations and config files elsewhere. To say it's a mess is an understatement.

    I've been trying to fix some of these issues whenever I encounter them, and it's been encouraged by my line manager and colleagues, who say they see my point. I give regular tech talks about technology, architecture, best practices, patterns etc., so one would assume they're interested in this.

    We've got a new product in development that provides GIS web services for geospatial data functionality. I've had nothing to do with this part so far, but I know their data model is effectively an in-house reimplementation of parts of ADO.NET, specifically weakly-typed datasets, rows, relations etc., in order to cope with wildly divergent GIS database schemas.

    One of my next tasks is to implement a mixed LDAP and database-backed forms authentication and authorisation module for ASP.NET and WCF, which in my mind would be a custom membership and role provider, an LDAP component, a WCF service and a custom service behaviour using MEF, backed by a database schema with 6 tables and NHibernate serving as my ORM/abstraction layer and unit of work. It's 100% clear to me how I'd do it. The problem is that I'm being pushed by people who aren't even in development roles to use the reimplemented version of ADO.NET, which from a programming point of view means going from something like:

    var user = userRepository.GetById(123);
    var role = new Role("Arbitrary Role Name");
    role.Permissions.AddRange(new []{new Permission("Create"), new Permission("Delete"), new Permission("Read")});
    user.Roles.Add(role);
    userRepository.Save(user);
    unitOfWork.Commit();

    To something like this: (the second code sample was not preserved; see the hedged sketch below)

    What do people think? Do I need to dig my heels in or simply cede to pressure from people who don't even do development, thereby negating everything I've learnt (often painfully) since the year 2000?
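The dataset-style sample itself did not survive in this copy of the thread. As a stand-in, here is a minimal hypothetical sketch of the weakly-typed, index-only style described in post #8, written against plain ADO.NET untyped DataSets; the table positions, column ordinals and helper name are all assumed for illustration:

    using System.Data;

    // Hypothetical stand-in only: plain ADO.NET untyped DataSets approximate
    // the homebrew API, with every column addressed by ordinal, never by name.
    static void GrantRoleToUser(DataSet dataSet)
    {
        DataTable users = dataSet.Tables[0];            // table positions assumed
        DataTable roles = dataSet.Tables[1];
        DataTable rolePermissions = dataSet.Tables[2];
        DataTable userRoles = dataSet.Tables[3];

        DataRow user = users.Rows.Find(123);            // requires a PrimaryKey on the table

        DataRow role = roles.NewRow();
        role[1] = "Arbitrary Role Name";                // role name assumed at column 1;
        roles.Rows.Add(role);                           // role key (column 0) assumed auto-increment

        foreach (string permission in new[] { "Create", "Delete", "Read" })
        {
            DataRow link = rolePermissions.NewRow();
            link[0] = role[0];                          // role key
            link[1] = permission;                       // permission name
            rolePermissions.Rows.Add(link);
        }

        DataRow membership = userRoles.NewRow();
        membership[0] = user[0];                        // user key
        membership[1] = role[0];                        // role key
        userRoles.Rows.Add(membership);

        // Persisting would go through the product's own adapter layer (not
        // shown); AcceptChanges only marks the in-memory rows as committed.
        dataSet.AcceptChanges();
    }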
Pete OHanlon (#2), in reply to jim lahey (#1):

      The simple fact is - if you implement their solution, who gets it in the neck when the delivery overruns? If it's them, then do it their way. If it's you, then you need to dig in. Play the ROI card - with your code you will be delivering the solution quicker, and it will benefit from the unit and user testing that has already been applied to the underlying platform. If it's their code, then their code has to be confidence-tested as well.

      *pre-emptive celebratory nipple tassle jiggle* - Sean Ewington

      "Mind bleach! Send me mind bleach!" - Nagy Vilmos

      CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier

Lost User (#3), in reply to jim lahey (#1):

        jim lahey wrote:

        which in my mind would be a custom membership and role provider, an LDAP component, WCF Service and a custom Service Behaviour using MEF backed by a database schema with 6 tables and NHibernate serving as my ORM/abstraction layer and unit of work. It's 100% clear to me how I'd do it.

        :thumbsup: I haven't used MEF, NHibernate, or WCF, but even so, it explains what you are going to do. Despite my being a noob on the subject matter, your example code is easy to follow. It's not "much" code, which makes it easy to maintain.

        jim lahey wrote:

        To something like this:

        Yuck. Never heard of "DRY" apparently.

        jim lahey wrote:

        What do people think? Do I need to dig my heels in or simply cede to pressure from people who don't even do development, thereby negating everything I've learnt (often painfully) since the year 2000?

        Follow your gut. Some people adapt to every environment and thrive everywhere; others (like me) are extremists and simply refuse to support crap. (Sign this[^] if you're an extremist too.) FWIW, I see people in every industry cut corners to make that extra buck.

        Bastard Programmer from Hell :suss: if you can't read my code, try converting it here[^]

Lost User (#4), in reply to Pete OHanlon (#2):

          Pete O'Hanlon wrote:

          who gets it in the neck when the delivery over runs?

          Good argument, and a better attitude than mine :)

          Bastard Programmer from Hell :suss: if you can't read my code, try converting it here[^]

Mark_Wallace (#5), in reply to jim lahey (#1):

            Well, if you can't take a joke...

            I wanna be a eunuchs developer! Pass me a bread knife!

wout de zeeuw (#6), in reply to jim lahey (#1):

              If they want datasets, just use those; I don't see the big issue. Both ways are quite doable. Datasets are a bit more typing (but why not use type-safe generated datasets?), and NHibernate maybe less typing, but it's easier to shoot yourself in the foot in some obscure way. When you shoot yourself in the foot with NHibernate, it will take a lot more time to figure out how that happened. I never really liked ORMs for that reason: there's just a thicker layer of things that can go wrong and are harder to debug. Plus, you might happen to be versed in NHibernate, but there are a zillion ORMs out there, so going with something common like a dataset makes it easier to exchange developers.

              Wout
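For the parenthetical suggestion above: a type-safe generated dataset would be produced by the Visual Studio DataSet designer from an XSD, and usage looks roughly like the following sketch. The SecurityDataSet type, its Users table, and the generated members are all invented for illustration:

    // Hypothetical: "SecurityDataSet" and its members would be generated by
    // the DataSet designer; every name here is assumed for illustration.
    var ds = new SecurityDataSet();
    usersTableAdapter.Fill(ds.Users);               // generated TableAdapter

    SecurityDataSet.UsersRow user = ds.Users.FindByUserId(123);
    user.DisplayName = "New display name";          // compile-time checked column access
    usersTableAdapter.Update(ds.Users);             // write the change back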

Espen Harlinn (#7), in reply to wout de zeeuw (#6):

                wout de zeeuw wrote:

                When you shoot yourself in the foot with NHibernate it will take a lot more time to figure out how that happened.

                :thumbsup: Can't be said often enough :laugh:

                Espen Harlinn Principal Architect, Software - Goodtech Projects & Services AS My LinkedIn Profile

jim lahey (#8), in reply to wout de zeeuw (#6):

                  It's not a .NET type-safe generated dataset. It's a homebrew reimplementation of ADO.NET untyped datasets, but without the ability to query by column name, just by index. I'm au fait with Linq2SQL, Entity Framework and NHibernate, and have used things like ADO.NET datasets and datareaders, custom ADO.NET data adapters and ADODB Recordsets in the past, and I can safely say NHibernate 3.3 knocks them all into a cocked hat. Lean, mockable, unit-testable, can connect to any number of databases, and has plenty of documentation, blogs, patterns, resources etc. on the net.

                  This homebrew implementation is the brainchild of one guy who works here three days a week. By all means implement something yourself if there isn't anything out there, but for elephant's sake, don't reinvent the wheel when funds are tight and we're having to trim staff numbers. I fail to see how having a 6-table schema with a few relations will make me shoot myself in the foot with NHibernate. I'm not doing anything that hasn't been done a thousand times before. A bit of CRUD with a few specific queries and maybe a view or two. It's hardly groundbreaking.

                  I get those kinds of comments from a few 50+ colleagues who are stuck in their 1980s C++ or Delphi utopia. "Why would you want to use an ORM?" they ask. "It just overcomplicates something that should be easy, you have to configure it bla bla, and you still can't do everything you want." When I ask them what their alternative would be, they say they'd use a SqlClient/OleDbClient, get a DataReader, and if you're very lucky they'll cast the DataReader columns into a data object of some sort (see the sketch below).

                  Congratulations, you've just reinvented the wheel, wasted days of company time and possibly used up the man-hour resources that should have been allocated for giving people a raise. But that's not all: the wheel you've reinvented isn't anywhere near as good or flexible as a tried and tested, publicly available, free component. To quote Ted Dziuba: "if there's one thing that developers love, it's knowing better than conventional wisdom, but conventional wisdom is conventional for a reason: that shit works"
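A minimal sketch of the hand-rolled alternative described above (open a reader, cast the columns into a data object by ordinal); the UserDto shape, query and connection string are assumed for illustration:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public sealed class UserDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static List<UserDto> LoadUsers(string connectionString)
    {
        var users = new List<UserDto>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Users", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    users.Add(new UserDto
                    {
                        Id = reader.GetInt32(0),    // columns read by ordinal
                        Name = reader.GetString(1)  // and cast by hand
                    });
                }
            }
        }
        return users;
    }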

BobJanova (#9), in reply to jim lahey (#1):

                    If it doesn't have to interact with other parts of the system that are already using a different data access layer, then you are somewhat justified in digging in. However, take into account that introducing a new technology, even if it's objectively better (and having wrestled with ORMs and NHibernate in the past, I'd be sceptical of that), adds business costs when the next guy to look at that part of the code has to learn the technology. I suggest you angle your argument along the lines of 'an untyped data access layer is great for GIS, but this data will be strongly typed and a typed DAL is a better fit'. That sounds less like 'you're a bunch of idiots writing crap' and more like 'yeah, you're right ... but not here'.

jim lahey (#10), in reply to BobJanova (#9):

                      NHibernate is already in use within the company, specifically on the project I inherited when I joined. This new security stuff I'm doing is supposed to be for several projects, including mine.

wout de zeeuw (#11), in reply to jim lahey (#8):

                        jim lahey wrote:

                        6 table schema

                        In that case, no discussion, just do the dataset and get on with it. 6 tables aren't worth having a religious war over with your colleagues.

                        jim lahey wrote:

                        I fail to see how...

                        You wouldn't be the first to shoot yourself in the foot with NHibernate in some obscure way, and you won't be the last. There are many ways in which things can blow up in your face, like inexplicably bad performance, or memory leaks in the NHibernate layer. For an application that is available outside the company, I wouldn't risk that just to save a bit of typing (although code generation might alleviate that). Being stuck in the 1980s is not necessarily good or bad; it's the argument that counts. Newer is not necessarily better. Countless examples: WPF is dead, Silverlight is dead, etc.

                        Wout
