Should libraries have a standard API and naming convention?

The Lounge
Tags: asp-net, csharp, javascript, database, dotnet
43 Posts · 23 Posters · 3 Views · 1 Watching
  • R resuna

    "However, simply changing a function name or its signature, and providing release notes so that users can easily convert to the new interface, shouldn't be considered a breaking change." If not, then what would be considered a breaking change? An undocumented one?

    Greg Utas
    #25

    Sure, an undocumented change. Or one that forces part of an application to be redesigned.

    Robust Services Core | Software Techniques for Lemmings | Articles
    The fox knows many things, but the hedgehog knows one big thing.

    • C Chris Maunder

      I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. The code changes between them are doing my head in. Just reading a file from the browser seems to have 6 different ways, all subtly different, all similarly named. I understand that changes in underlying architecture mean that one method won't necessarily translate to another, but surely there are some core things that we, as an industry, could standardise. For example: you have an object, for instance a file. Or a tensor. Or a string.

      - Any time you get data from a source (uploaded file, network stream, from database) you call it "Get"
      - Any time you save data to storage you call it "Save"
      - Any time you need to append data, you call Append, and also override the + operator

      The thing that's getting me is I just want to save an uploaded file. A quick check shows I can:

      1. Call file.SaveAs to save the file
      2. Open a filestream, call file.CopyToAsync to the stream and close the stream
      3. Read the file's InputStream into a buffer and call File.WriteAllBytes to save the buffer
      4. Same as 2, but using CopyTo instead of CopyToAsync

      I'm sure there are more. I just want to save the file to disk. So I'm wondering, since I've not actually taken the time to read up on this, if this happens because:

      1. We're scared to break backwards compatibility, so we create new methods to avoid breaking old methods, and overloading functions doesn't always work or isn't possible
      2. There are too many ways to do a given task, so we present you a bag of parts (eg streams and buffers) and let you mix and match, because there's no single "default" way that's practical to provide
      3. Program flow has changed sufficiently (eg async and promises) that there truly need to be different methods to cater for this
      4. We just like making up new paths for writing the same old code because they suited our headspace at the time

      It seems a massive waste of cycles. I think we as an industry need a big refactoring.

      cheers Chris Maunder
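
      [Editor's note: for reference, a minimal sketch of option 2 from the list above (open a stream and call CopyToAsync), assuming ASP.NET Core's IFormFile; the controller name, route, and target directory are illustrative only, not taken from the post.]

      using System.IO;
      using System.Threading.Tasks;
      using Microsoft.AspNetCore.Http;
      using Microsoft.AspNetCore.Mvc;

      public class UploadController : ControllerBase
      {
          [HttpPost("upload")]
          public async Task<IActionResult> Upload(IFormFile file)
          {
              // Build a destination path; real code should validate the file name first.
              var path = Path.Combine(Path.GetTempPath(), Path.GetFileName(file.FileName));

              // Open the destination stream, copy the uploaded content into it, dispose the stream.
              await using (var stream = new FileStream(path, FileMode.Create))
              {
                  await file.CopyToAsync(stream);
              }

              return Ok(path);
          }
      }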

      SeattleC++
      #26

      The four file-reading functions you list do different things. It is less confusing to name them different things than it is to figure out how to overload the name for different kinds of file-reading. There are different pairs of English words you could use (get/set, get/put, load/save, read/write) with little reason to pick one set over another. And while we're being English-centric, shouldn't an API standard support multiple human languages and character sets? Does that sound like too much? Well, then no standardization for you. Three of your four reasons there are multiple names for things (compatibility, bag-of-parts, program-flow) are actually good reasons why interface names are not consistent. We have to predict the future to pick the perfect name, and we're no good at that. I welcome your bold solution to these problems, but don't expect to hear back soon.

    • Greg Utas

      Sure, an undocumented change. Or one that forces part of an application to be redesigned.

      resuna
      #27

        As a developer and maintainer of both old, lightly maintained code and packages that are affected by breaking changes, I can't agree. Any change that breaks existing code is a breaking change, regardless of how broad the changes to support it are. An undocumented breaking change and a documented breaking change just differ in how likely it is that you will notice it ahead of time. If you are actively monitoring the source repos, you will catch it whether it makes the release notes or not, and making the release notes doesn't mean it gets caught before release... especially with automagic repo updates upstream that sneak changes in that suddenly break docker or nix builds.

        As a package maintainer, any change that requires the customer to edit their code at all is treated as a breaking change. We work very hard to make sure that old code continues to build without modification. If new code won't work on older versions, like we've added an API call but haven't changed any, or changed the meaning of a parameter in a backwards-compatible way, that's not a breaking change. If we've changed an existing API call so code has to be modified, that's a breaking change.
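
        [Editor's note: to make that distinction concrete, a hedged C# sketch with a made-up FileStore type; all names and version numbers are hypothetical. An additive overload leaves old callers compiling, while changing an existing signature forces every caller to be edited.]

        // v1.0 -- original API.
        public class FileStore
        {
            public void Save(string path) { /* write to disk */ }
        }

        // v1.1 -- additive change: a new overload. Existing calls to Save(path)
        // still compile unchanged, so (barring overload-resolution edge cases)
        // this is not a breaking change.
        public class FileStore_v1_1
        {
            public void Save(string path) { /* write to disk */ }
            public void Save(string path, bool overwrite) { /* write, honouring overwrite */ }
        }

        // v2.0 -- the existing signature changed. Every caller of Save(path) must
        // now be edited, so this is a breaking change no matter how well the
        // release notes document it.
        public class FileStore_v2
        {
            public void Save(string path, SaveOptions options) { /* write using options */ }
        }

        public class SaveOptions
        {
            public bool Overwrite { get; set; }
        }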

    • Greg Utas

          I don't expect the industry to standardize; this can even be undesirable. Standards can be dominated by big players who've already more or less aligned to the standard they're pushing. This gives them an advantage and makes it difficult for others to differentiate. But I expect each library to standardize internally instead of running amok with inconsistent naming or ways of doing something, although I can see the latter happening when intended for a broad range of applications. Each of the four reasons you listed for a lack of standardization plays a role.

          Breaking changes are a pet peeve. To me, a breaking change is something that requires a user to redesign their software, which is definitely something to avoid. However, simply changing a function name or its signature, and providing release notes so that users can easily convert to the new interface, shouldn't be considered a breaking change. But whiners will whine. Fine, so stay on the old release if you expect to do no work to move to the new one.

          Good libraries and frameworks maintain a low surface-to-volume ratio. I think your #2 and #3 (both the result of multiple ways of doing something) are excusable in a library that supports a broad range of applications. But a library focused on specific types of applications should be more opinionated, settling on a standard way to do each thing in order to improve code reuse and interoperability between the applications that use it. #4, and #2 and #3 when unwarranted, are what a former colleague called superfluous diversity. This is a dead giveaway that the system lacks what Brooks called conceptual integrity. It was almost certainly developed without proper design and code reviews or software architects to steer it on a consistent path.

          jschell
          #28

          Greg Utas wrote:

          I don't expect the industry to standardize; this can even be undesirable. Standards can be dominated by big players

          Even standards don't work. That is why there are HTML versions 1 to 5, and sub-versions. Then SSL and TLS, with multiple versions of each. Not to mention that secure IP has been proposed but certainly not adopted. SMTP is simple, but then if you want to add an attachment... SNMP says a lot about how to get an interrupt but nothing about what it should be, so one cannot program to handle just interrupts, since for the same failure different devices will send an interrupt or not.

          The Java Specification still has the same BNF bugs as when it was first published. And those were formally reported back then. Fixing those doesn't change the language, since one must implement it the way it would be fixed anyway.

          Hungarian notation was invented to deal specifically with typeless parameter checking in C, but that seems to escape the notice of adherents, even though the originator pointed that out. Nor was there ever really a 'standard' once one got beyond very basic types. In the following, look at the people who don't like this one: Hungarian notation - Wikipedia[^]

          • C Chris Maunder

            I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. […] It seems a massive waste of cycles. I think we as an industry need a big refactoring.

            cheers Chris Maunder

            Juan Pablo Reyes Altamirano
            #29

            I have to conclude also that standardizing programming is an exercise in futility. But, on a slightly more optimistic note, computer archaeologists of the future will have a treasure trove of the different ways we were thinking (of course some will think... how silly, with the Vulkan API everything was eventually perfect, etc., etc.). Not saying we're documenting our own stubbornness (or stupidity, *cough* cobol *cough*), but nothing is lost if we have a few dozen ways to execute or express what saving a bunch of 1's and 0's actually is (and I can just hear the voices of dead engineers whispering how magnetic memory cores worked).

            • R resuna

              As a developer and maintainer of both old, lightly maintained code and packages that are affected by breaking changes, I can't agree. […] If we've changed an existing API call so code has to be modified, that's a breaking change.

              jschell
              #30

              Good answer. Especially in the context of a 'library'. I certainly appreciate that shareware developers at least seem to adhere to the convention that a major number change in the version means something will break. But it certainly doesn't make me happy knowing that I will need to modify my existing code just to get access to some new feature.

              I always expect the following for a new version:
              1. It will take a substantial amount of time. Certainly weeks.
              2. It requires a full regression test.
              3. Less experienced developers think they will be able to go up a major version just by dropping in the library.

              Even worse if I need to go up more than one major version:
              1. It will require months.
              2. I might need to go up one version and then the next, because attempting it in one go is just too likely to lead to production breakage due to unexpected problems. There could even be impacts on the architecture itself.

              • C Chris Maunder

                I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. […] It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                cheers Chris Maunder

                dandy72
                #31

                If libraries were standardized to that extreme throughout the industry, I suspect a lot of people in software development roles would simply be there to 'glue' libraries together and the job would become incredibly boring. Taken a step further, the role could also more easily be replaced with some AI. Something I don't see happening with the current state of the industry. I'm not suggesting anything, one way or another, because I'm thinking in terms of job security...despite what it might read like. :-)

                • S SeattleC++

                  The four file-reading functions you list do different things. […] I welcome your bold solution to these problems, but don't expect to hear back soon.

                  dandy72
                  #32

                  SeattleC++ wrote:

                  There are different pairs of English words you could use (get/set, get/put, load/save, read/write) with little reason to pick one set over another.

                  You're reminding me of PowerShell, which has (but does not enforce) the use of a specific set of "approved" verbs as the prefix to any function name. MS itself has been pretty good at sticking with it, but I suspect not many people do.

                  • D dandy72

                    If libraries were standardized to that extreme throughout the industry, I suspect a lot of people in software development roles would simply be there to 'glue' libraries together and the job would become incredibly boring. […]

                    Chris Maunder
                    #33

                    You're so close. It's interesting to see many replies here picking apart my specific example rather than focussing on the broader issue: every developer seems to be implementing the same, broad-stroke task in their own way. So we get slightly different ways of doing things, and slightly different names, and a slightly different order in which each step of the task gets done, and multiple paths to achieve the same goal.

                    Copy and paste code solves some of this: many of us find a solution, copy and paste, then adjust so it fits our need and actually works (give or take actual error handling). The next step is obviously AI: you need to (for example) save an uploaded file. The AI generates that bit of code for you. It's the same basic code it suggests to everyone, with variable names changed to protect the innocent.

                    So my actual feeling here is that, no, we as devs won't suddenly all agree on standard names and steps. What we'll start doing, though, is following the lead of the auto-generated code and not need to make up our own slightly different names and steps.

                    It's not about job security. That's like saying a construction worker will lose their job because they are using a nailgun instead of a hammer, or using pre-fab concrete sections instead of building molds and pouring concrete. There's still tons of work to be done: you just get more of it done in a given time.

                    cheers Chris Maunder

                    • C Chris Maunder

                      I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. […] It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                      cheers Chris Maunder

                      StatementTerminator
                      #34

                      I'm not sure I get the issue. Web APIs already have a standard naming convention based on HTTP verbs, and that makes sense given that they use HTTP. I'd prefer that not to change. If you're saying you want a consistent naming convention for accessing any kind of data/storage with any kind of API/library/framework, then we'll never agree on what that should be. For instance, as a DB guy everything is CRUD to me, and I see every kind of data access through that lens and would name things accordingly if it were up to me. Doesn't mean anyone would agree with me though.
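
                      [Editor's note: as an illustration of that CRUD-through-HTTP lens, a minimal sketch assuming ASP.NET Core attribute routing; the Widget type, route, and controller names are hypothetical.]

                      using Microsoft.AspNetCore.Mvc;

                      public class Widget
                      {
                          public int Id { get; set; }
                          public string Name { get; set; } = "";
                      }

                      [ApiController]
                      [Route("api/widgets")]
                      public class WidgetsController : ControllerBase
                      {
                          [HttpPost]                 // Create -> POST
                          public IActionResult Create(Widget widget) =>
                              CreatedAtAction(nameof(Read), new { id = widget.Id }, widget);

                          [HttpGet("{id}")]          // Read -> GET
                          public IActionResult Read(int id) =>
                              Ok(new Widget { Id = id, Name = "example" });

                          [HttpPut("{id}")]          // Update -> PUT
                          public IActionResult Update(int id, Widget widget) => NoContent();

                          [HttpDelete("{id}")]       // Delete -> DELETE
                          public IActionResult Delete(int id) => NoContent();
                      }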

                      • C Chris Maunder

                        You're so close. It's interesting to see many replies here picking apart my specific example rather than focussing on the broader issue […] There's still tons of work to be done: you just get more of it done in a given time.

                        cheers Chris Maunder

                        dandy72
                        #35

                        Chris Maunder wrote:

                        It's not about job security. That's like saying a construction worker will lose their job because they are using a nailgun instead of a hammer, or by using pre-fab concrete sections instead of building molds and pouring concrete. There's still tons of work to be done: you just get more of it done in a given time.

                        Exactly. People think of AI taking their jobs away from them; I see it as helping take care of the repetitive, mechanical parts, leaving the developer to focus on the more challenging aspects that AI is still in no position to solve. That doesn't worry me. At all.

                        • S StatementTerminator

                          I'm not sure I get the issue. Web APIs already have a standard naming convention based on HTTP verbs […] Doesn't mean anyone would agree with me though.

                          dandy72
                          #36

                          StatementTerminator wrote:

                          Web APIs already have a standard naming convention based on HTTP verbs, and that makes sense given that they use HTTP. I'd prefer that not to change.

                          I think what Chris is describing is that it needs to be more prevalent. HTTP verbs are merely fundamental building blocks; building complete web sites goes *way* beyond that, and those parts working at a higher level have little to no standardization.

                          • D dandy72

                            I think what Chris is describing is that it needs to be more prevalent. HTTP verbs are merely fundamental building blocks […]

                            StatementTerminator
                            #37

                            Oh. If this is about things like the pain of re-learning how to do the same things every time, for instance, MS decides to scramble the .Net framework, I'm feeling that a lot lately. And also the tedium of finding the one true way that everyone can agree on, when it all compiles down to the same thing anyway.

                            • C Chris Maunder

                              I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. […] It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                              cheers Chris Maunder

                              Member_5893260
                              #38

                              It's sort of a nice idea, but on balance I don't think so. On a long enough timescale, standardizing everything is a type of restriction: programming is based on lateral thinking, and this is simply a method of making that not work properly. If you're the type of programmer who works by bolting together bits of other people's code, then standardization is great; if you're the type of programmer who creates libraries and APIs, then it doesn't work to create something new while being bound by old ideas of standardization that don't apply to something their creators hadn't even thought of... see what I mean?

                              • B brompot

                                There are vast differences between programming languages and runtime environments. Take C# and C++, which are not even that far apart. In C++ a class (static) method is called differently than an instance method; in C# there is no difference. In C++ there is a difference between an instance and a pointer to an instance; in C# originally not, but now we have things like ref. And in a procedural language, file.saveas() is not even possible; it would be saveas(file). In other words: nice thought maybe, but not practically possible.

                                In addition, who is going to enforce this? Will we get a library API police? I hope not. Maybe the problem is in the moving around of code, and you should have the functional bits in just one place. When I run into things like this I ask myself "what do I need to do differently to not have this problem". Just a thought.
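
                                [Editor's note: a sketch of the C# side of that comparison only; the types and the SaveAs method below are placeholders, not real library APIs. Static and instance members are both invoked with dot syntax, and an extension method even lets a free function of the saveas(file, path) shape be called as file.SaveAs(path).]

                                using System.IO;

                                public static class FileStreamExtensions
                                {
                                    // A hypothetical SaveAs: a static (free-standing) method that C# lets you
                                    // call with instance syntax, i.e. file.SaveAs(path) instead of SaveAs(file, path).
                                    public static void SaveAs(this FileStream file, string path)
                                    {
                                        using var target = new FileStream(path, FileMode.Create);
                                        file.CopyTo(target);
                                    }
                                }

                                public static class SyntaxDemo
                                {
                                    public static void Run(FileStream file)
                                    {
                                        File.WriteAllText("note.txt", "hi");  // static method: called on the type, dot syntax
                                        file.SaveAs("copy.bin");              // extension method: same dot syntax as an instance call
                                    }
                                }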

                                Member_5893260
                                #39

                                Plus: How do you make a standard for something you haven't thought of yet? What if I have an idea which doesn't fit the model of standards? More importantly, what if the model of standards restricts my thinking so that I don't have the idea?

                                • J jschell

                                  Good answer. Especially in the context of a 'library'. […] There could even be impacts on the architecture itself.

                                  resuna
                                  #40

                                  Ideally, with semantic versioning, yeh, a major version kind of implies there are breaking changes somewhere. But of course semantic versioning is a standard and you know what XKCD says about standards. Aside: shareware is something completely different.

                                  • C Chris Maunder

                                    I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. […] It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                                    cheers Chris Maunder

                                    User 11977249
                                    #41

                                    oooohhhhhh... Hmm. Let's talk about Get. Get is a mutative verb in English. E.g.: you get married, you get the cold, you get to go, you get killed, you get results. You might think that last one has no effect, but trust me, those results are not benign; they decide everything from graduating to informing you a baby is on its way. Getting anything is a sure sign something has changed.

                                    Also, let's talk Save. When you save a soul, it's not preserved on a disk somewhere. When you save a life, it can still be lost. Saving a place doesn't mean anything is stored there. And saving a goal means preventing it from happening in the first place. So are we storing the data, or saving it from being stored?

                                    And Append. Great, I think we might have a winner, but a + operator? What about the humble comma? Surely appending an interface to a type definition shouldn't look like:

                                    class newType : Something + Foo + Disposable

                                    Also, Append implies order; how the hell do you append a value to a set? Add maybe, include possibly, but append? What you have discovered in your own way is the story of Babel. The simple fact of the matter is that language is defined as the common cross-section of symbols that have common interpretations across a populace. Too many symbols, too many definitions, or too many people make it stupidly difficult to synchronise on anything. And stupidly easy for spelling mistakes like Cloneable to be quickly dispersed.

                                    We can try to say "this is the language", and it will work for a number of simple ideas. I believe the English language can claim some 80+ thousand unique words. Most humans might remember 40+ thousand of those, and not necessarily the entirety of their meanings. How many meanings can you name for "of"? I believe over time we will iron out the core differences. But remember, human language has been around for at least 10 thousand years. It's still trying to figure out what to call a cheerio (a small Frankfurt sausage). So maybe don't hold your breath.
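
                                    [Editor's note: as a sketch of what the opening post's "call it Append, and also override the + operator" convention could look like, here is a made-up ByteBuffer type; nothing here comes from an existing library, and the + operator deliberately returns a new buffer rather than mutating an operand, as is conventional for C# operators.]

                                    using System.Collections.Generic;

                                    public class ByteBuffer
                                    {
                                        private readonly List<byte> _data = new List<byte>();

                                        // Append mutates this buffer and returns it, so calls can be chained:
                                        // buffer.Append(a).Append(b)
                                        public ByteBuffer Append(IEnumerable<byte> bytes)
                                        {
                                            _data.AddRange(bytes);
                                            return this;
                                        }

                                        // The + operator produces a new buffer instead of mutating an operand.
                                        public static ByteBuffer operator +(ByteBuffer left, byte[] right)
                                        {
                                            var result = new ByteBuffer();
                                            result._data.AddRange(left._data);
                                            result._data.AddRange(right);
                                            return result;
                                        }

                                        public byte[] ToArray() => _data.ToArray();
                                    }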

                                    • C Chris Maunder

                                      I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. […] It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                                      cheers Chris Maunder

                                      Michael Rockwell 2021
                                      #42

                                      Sounds like a job for AI to suggest and fix up names in code. Many of the code-assist tools already comply with a naming convention. And companies like Microsoft already have a naming convention that Visual Studio uses to make application development easier. Now if they can get these conventions to flow to their Power Platform and Azure.

                                      • M Michael Rockwell 2021

                                        Sounds like a job for AI to suggest and fix up names in code. […]

                                        Chris Maunder
                                        #43

                                        :thumbsup:

                                        cheers Chris Maunder
