Code Project
Should libraries have a standard API and naming convention?

The Lounge · Tags: asp-net, csharp, javascript, database, dotnet
43 Posts · 23 Posters
Slacker007 wrote:

From my perspective: there is no other profession that I personally know of with such a wide pool of differing opinions on how to do things and how things should be as software development. Music, medicine, and most of the sciences have standardized; software engineering/development is lagging badly. So I agree with you that we as an industry should standardize as much as we can (not just APIs), but then who gets to be king of the standards? You? Me? Jeff down the street? What if I don't like your standards? Back to square one. IMHO, this is an exercise in futility.

jschell (#15) wrote:

    Slacker007 wrote:

    We have standardized music, medical, and most science fields, but software engineering/development is lacking big time

No. Humans are messy. You do know that Ed Sheeran just won a civil case for copyright infringement based, presumably, on one of those 'standards' of the music industry? So it's certainly not settled for some.

You do know that there are licensed medical doctors prescribing CAM (Complementary and Alternative Medicine)? That every cancer hospital except one has a CAM center? That people were comparing the way the mumps vaccine was originally researched, and even what it does, to how the COVID mRNA (and all mRNA) vaccines were researched? It took 30 years for Texas to finally revoke the license of a medical doctor who had been promoting and profiting from a therapy that was disproved almost at the very time it was first proposed. Not to mention what happened with Aducanumab.

Not sure what you mean by "science", but in India you can get an MBA in Astrology (yes, spelled correctly) at most or perhaps all universities; I think one offers a Master of Science as well. Of course, in the US 'talk therapy' is still offered by psychologists and even psychiatrists, not to mention a slew of things like court-ordered anger management. And the DSM (Diagnostic and Statistical Manual of Mental Disorders) is for the most part full of definitions that are nothing more than descriptions of how people describe what they 'feel', and thus have no actual objective criteria. The most recent release was disputed by at least some for continuing, and even expanding, those sorts of definitions.

The number of pay-to-publish 'science' publications is probably growing. And only in the last couple of years is the standard for catching errors that lead to false positives in studies finally being revised to be more strict. That change came about because of a large effort to reproduce the results of studies in reputable (not pay-to-publish) journals, many of which failed to reproduce.

Jeremy Falcon wrote:

      Chris Maunder wrote:

We're scared to break backwards compatibility so we create new methods to avoid breaking old methods, and overloading functions doesn't always work or isn't always possible

That's always the crux of the situation. The reality is, if you have a lot of users, not every dev is hardcore and wants to spend their entire life relearning. And to be honest, if you want a family and kids, you can't really blame some people. So there needs to be a sense of familiarity even when something new is introduced. If it's completely different with every major release, you'll find yourself losing users who just want to get their job done and don't care about being an uber geek. Love or hate PHP, that's the exact reason it was so hard for it to de-crap (if it ever did). It just got too popular too quickly. And to keep that popularity... they kept the crap. The original developer even mentioned this: he never expected PHP to get as popular as it did. But once it did, it was too late.

      Chris Maunder wrote:

      There are too many ways to do a given task so we present you a bag of parts (eg streams and buffers) and let you mix and match because there's no single "default" way that's practical to provide

So if you want to design an API that's "easier" but the original goal of the first version of the API was to be granular, and let's say you can't change the first version much because of strict ABI compatibility, then adding helper classes and/or a helper API is what I'd usually do. To your point, it does bloat the codebase. I suppose keeping it in a separate helper project would help with that. If it's a fundamental paradigm shift though, like using AI and qubits to psychically predict winning lottery numbers while retrieving data, that would be a new project for sure.
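A minimal sketch of that helper-layer idea, assuming a hypothetical granular v1 API (none of these names come from a real library):

```javascript
// Hypothetical granular v1 API, left untouched for compatibility.
function openSink(path) {
  return { path, chunks: [] };
}
function writeChunk(sink, chunk) {
  sink.chunks.push(chunk);
}
function closeSink(sink) {
  return { path: sink.path, bytes: sink.chunks.join("") };
}

// Separate helper layer: one call for the common case, built entirely
// on top of the v1 primitives, so the original surface never changes.
function saveAll(path, data) {
  const sink = openSink(path);
  writeChunk(sink, data);
  return closeSink(sink);
}
```

Shipping the helper as its own module keeps the bloat out of the core library, as suggested above.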

      Chris Maunder wrote:

      Program flow has changed sufficiently (eg async and promises) that there truly needs to be different methods to cater for this

It's worth mentioning that this is a good thing, since CPUs are all about cores now rather than just upping raw clock speed. I suppose the design of this would be language dependent, though. As far as JavaScript goes, you can keep pretty much the same API design for async and synchronous code.
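For example, in JavaScript a sync and an async variant can keep the same signature and return shape; the promise is the only difference (hypothetical functions, purely for illustration):

```javascript
// Synchronous variant.
function loadSync(key) {
  return `value:${key}`;
}

// Asynchronous variant: same signature, same return shape,
// just wrapped in a promise by the async keyword.
async function load(key) {
  return `value:${key}`;
}

// Call sites stay nearly identical:
//   const v = loadSync("a");
//   const v = await load("a");
```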

      Chris Maunder wrote:

We just like making up new paths for writing the same old code because they suited our headspace at the time

jschell (#16) wrote:

      Jeremy Falcon wrote:

Nowadays, eventually even the desktop will be replaced by a merge of web and desktop technologies.

Desktop computers? I doubt that. It's been tried multiple times in different ways, and none of them gained any acceptance. Timesharing in the 70s was widely used, but only because the cost of individual computers was so high.

Chris Maunder wrote:

I'm in the process of moving some code between Javascript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. The code changes between them are doing my head in. Just reading a file from the browser seems to have six different ways, all subtly different, all similarly named.

I understand that changes in underlying architecture mean that one method won't necessarily translate to another, but surely there are some core things that we, as an industry, could standardise. For example, say you have an object: a file, or a tensor, or a string.

- Any time you get data from a source (uploaded file, network stream, database), you call it "Get".
- Any time you save data to storage, you call it "Save".
- Any time you need to append data, you call it "Append", and also override the + operator.

The thing that's getting me is I just want to save an uploaded file. A quick check shows I can:

1. Call file.SaveAs to save the file.
2. Open a filestream, call file.CopyToAsync to the stream, and close the stream.
3. Read the file's InputStream into a buffer and call File.WriteAllBytes to save the buffer.
4. Same as 2, but using CopyTo instead of CopyToAsync.

I'm sure there are more. I just want to save the file to disk. So I'm wondering, since I've not actually taken the time to read up on this, if this happens because:

1. We're scared to break backwards compatibility, so we create new methods to avoid breaking old methods, and overloading functions doesn't always work or isn't always possible.
2. There are too many ways to do a given task, so we present you a bag of parts (e.g. streams and buffers) and let you mix and match, because there's no single "default" way that's practical to provide.
3. Program flow has changed sufficiently (e.g. async and promises) that there truly need to be different methods to cater for this.
4. We just like making up new paths for writing the same old code because they suited our headspace at the time.

It seems a massive waste of cycles. I think we as an industry need a big refactoring.
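The convention sketched in that post could look something like this (a toy illustration only; the Blob class, the store, and the source shape are all made up for the example):

```javascript
// One set of verbs for every data-bearing object:
// get() pulls from any source, save() pushes to any store,
// append() extends the data (a + operator overload would do the
// same job in a language that supports operator overloading).
const store = new Map();

class Blob {
  constructor(data) { this.data = data; }
  static get(source) { return new Blob(source.read()); }
  save(key) { store.set(key, this.data); return this; }
  append(more) { this.data += more; return this; }
}

// The same verbs apply no matter where the data came from.
const upload = { read: () => "hello" };
Blob.get(upload).append(" world").save("greeting.txt");
```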

        cheers Chris Maunder

Peter_in_2780 (#17) wrote:

        Different windmills, same tilting... Around 1880, my great-great grandfather W E Hearn[^] set out to codify the laws of the State of Victoria. He was equally successful.

        Quote:

        However, the codification was never adopted since "although praised in Parliament, [it] was regarded as too abstract by practising lawyers."

        Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012

Chris Maunder wrote: (quoting the original post above)

Greg Utas (#18) wrote:

I don't expect the industry to standardize; that can even be undesirable. Standards can be dominated by big players who've already more or less aligned with the standard they're pushing. This gives them an advantage and makes it difficult for others to differentiate. But I do expect each library to standardize internally instead of running amok with inconsistent naming or ways of doing something, although I can see the latter happening when a library is intended for a broad range of applications.

Each of the four reasons you listed for a lack of standardization plays a role. Breaking changes are a pet peeve. To me, a breaking change is something that requires a user to redesign their software, which is definitely something to avoid. However, simply changing a function name or its signature, and providing release notes so that users can easily convert to the new interface, shouldn't be considered a breaking change. But whiners will whine. Fine: stay on the old release if you expect to do no work to move to the new one.

Good libraries and frameworks maintain a low surface-to-volume ratio. I think your #2 and #3 (both the result of multiple ways of doing something) are excusable in a library that supports a broad range of applications. But a library focused on specific types of applications should be more opinionated, settling on a standard way to do each thing in order to improve code reuse and interoperability between the applications that use it.

#4, and #2 and #3 when unwarranted, are what a former colleague called superfluous diversity. It's a dead giveaway that the system lacks what Brooks called conceptual integrity, and was almost certainly developed without proper design and code reviews, or software architects to steer it on a consistent path.

          Robust Services Core | Software Techniques for Lemmings | Articles
          The fox knows many things, but the hedgehog knows one big thing.
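One low-friction way to do the rename Greg describes is to ship the new name and keep the old one as a thin, documented alias for a release or two (function names here are hypothetical):

```javascript
// v2: renamed to fit the library's naming convention.
function saveDocument(doc) {
  return `saved:${doc}`;
}

// v1 name kept as a deprecated alias, called out in the release notes,
// so existing callers keep working while they migrate at their own pace.
function storeDoc(doc) {
  return saveDocument(doc);
}
```

Under this scheme the rename is purely additive until the alias is finally dropped in a later major version.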


Chris Maunder wrote: (quoting the original post above)

BernardIE5317 (#19) wrote:

As you know, obtaining the status of a C++ standard library stream requires invoking rdstate. I do not know what the rd stands for. Is it perchance "read"? The set in setstate is a dead giveaway, but rd? Who knows; maybe it is returnd_a_state.

Greg Utas wrote: (reply quoted above)

resuna (#20) wrote:

              "However, simply changing a function name or its signature, and providing release notes so that users can easily convert to the new interface, shouldn't be considered a breaking change." If not, then what would be considered a breaking change? An undocumented one?

Chris Maunder wrote: (quoting the original post above)

Graeme_Grant (#21) wrote:

                Chris Maunder wrote:

                Open a filestream, call file.CopyToAsync to the stream and close the stream

Well, CopyToAsync is a stream method. A stream can be for any purpose, as either the source or the destination, so using it for your example is a bit unfair. Opening a FileStream just gives you the destination to move the data to:

                using FileStream stream = File.OpenWrite(filename);
                await file.CopyToAsync(stream);

                Graeme


                "I fear not the man who has practiced ten thousand kicks one time, but I fear the man that has practiced one kick ten thousand times!" - Bruce Lee

jschell wrote: (reply quoted above)

Jeremy Falcon (#22) wrote:

                  I'm not interested in your thoughts. You continue to reply to me after we've established you're just going to argue 90% of the time. Again... I would block you if I could. You can't take a hint and just go away.

                  Jeremy Falcon

Chris Maunder wrote:

                    That's the beauty of Standards: there are so many to choose from.

                    cheers Chris Maunder

Gary Wheeler (#23) wrote:

                    Obligatory xkcd: Standards[^]

                    Software Zen: delete this;

Chris Maunder wrote: (quoting the original post above)

brompot (#24) wrote:

There are vast differences between programming languages and runtime environments. Take C# and C++, which are not even that far apart. In C++ a class method is called differently than an instance method; in C# there is no difference. In C++ there is a difference between an instance and a pointer to an instance; in C# originally there wasn't, but now we have things like ref. And in a procedural language, file.saveas() isn't even possible; it would be saveas(file). In other words, a nice thought maybe, but not practically possible. In addition, who is going to enforce this? Will we get a library API police? I hope not. Maybe the problem is in the moving around of code, and you should keep the functional bits in just one place. When I run into things like this I ask myself, "What do I need to do differently to not have this problem?" Just a thought.
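The call-shape difference brompot points out shows up even within a single language; both shapes work in JavaScript, which is exactly why a cross-language standard would have to pick one (a hypothetical example, not a real API):

```javascript
// Procedural shape: the operation is a free function taking the file.
function saveAs(file, path) {
  return `${path}:${file.data}`;
}

// Object shape: the same operation hangs off the instance.
class MyFile {
  constructor(data) { this.data = data; }
  saveAs(path) { return `${path}:${this.data}`; }
}
```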

resuna wrote: (reply quoted above)

Greg Utas (#25) wrote:

                        Sure, an undocumented change. Or one that forces part of an application to be redesigned.

                        Robust Services Core | Software Techniques for Lemmings | Articles
                        The fox knows many things, but the hedgehog knows one big thing.


Chris Maunder wrote: (quoting the original post above)

SeattleC (#26) wrote:

The four file-saving methods you list do different things. It is less confusing to name them differently than to figure out how to overload one name for different kinds of file-saving. There are different pairs of English words you could use (get/set, get/put, load/save, read/write) with little reason to pick one set over another. And while we're being English-centric, shouldn't an API standard support multiple human languages and character sets? Does that sound like too much? Well, then, no standardization for you. Three of your four reasons there are multiple names for things (compatibility, bag-of-parts, program flow) are actually good reasons why interface names are not consistent. We would have to predict the future to pick the perfect name, and we're no good at that. I welcome your bold solution to these problems, but I don't expect to hear back soon.

Greg Utas wrote: (reply quoted above)

resuna (#27) wrote:

As a developer and maintainer both of old, lightly maintained code and of packages that are affected by breaking changes, I can't agree. Any change that breaks existing code is a breaking change, regardless of how broad the changes needed to support it are. An undocumented breaking change and a documented breaking change differ only in how likely it is that you will notice ahead of time. If you are actively monitoring the source repos, you will catch it whether it makes the release notes or not, and making the release notes doesn't mean it gets caught before release... especially with automagic upstream repo updates that sneak in changes that suddenly break Docker or Nix builds. As a package maintainer, I treat any change that requires the customer to edit their code at all as a breaking change. We work very hard to make sure that old code continues to build without modification. If new code won't work on older versions, say because we've added an API call without changing any, or changed the meaning of a parameter in a backwards-compatible way, that's not a breaking change. If we've changed an existing API call so code has to be modified, that's a breaking change.
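resuna's additive-versus-breaking distinction in miniature: adding an optional parameter with a safe default leaves every v1 call site working unmodified, so it is additive; changing an existing signature is what breaks (the function here is hypothetical):

```javascript
// v1 signature was save(data). v2 adds an options parameter
// with a default, so old call sites need no edits.
function save(data, options = { overwrite: false }) {
  return { data, overwrite: options.overwrite };
}

// A v1-era call still works exactly as before:
const result = save("hello");
```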

                            J 1 Reply Last reply
                            0
                            • Greg UtasG Greg Utas

                              I don't expect the industry to standardize; this can even be undesirable. Standards can be dominated by big players who've already more or less aligned to the standard they're pushing. This gives them an advantage and makes it difficult for others to differentiate. But I expect each library to standardize internally instead of running amok with inconsistent naming or ways of doing something, although I can see the latter happening when intended for a broad range of applications. Each of the four reasons you listed for a lack of standardization plays a role. Breaking changes are a pet peeve. To me, a breaking change is something that requires a user to redesign their software, which is definitely something to avoid. However, simply changing a function name or its signature, and providing release notes so that users can easily convert to the new interface, shouldn't be considered a breaking change. But whiners will whine. Fine, so stay on the old release if you expect to do no work to move to the new one. Good libraries and frameworks maintain a low surface-to-volume ratio. I think your #2 and #3 (both the result of multiple ways of doing something) are excusable in a library that supports a broad range of applications. But a library focused on specific types of applications should be more opinionated, settling on a standard way to do each thing in order to improve code reuse and interoperability between the applications that use it. #4, and #2 and #3 when unwarranted, are what a former colleague called superfluous diversity. This is a dead giveaway that the system lacks what Brooks called conceptual integrity. It was almost certainly developed without proper design and code reviews or software architects to steer it on a consistent path.

                              Robust Services Core | Software Techniques for Lemmings | Articles
                              The fox knows many things, but the hedgehog knows one big thing.

                              J Offline
                              J Offline
                              jschell
                              wrote on last edited by
                              #28

                              Greg Utas wrote:

                              I don't expect the industry to standardize; this can even be undesirable. Standards can be dominated by big players

Even standards don't work. That is why there are HTML versions 1 through 5, and sub-versions. Then SSL and TLS, with multiple versions of each. Not to mention that secure IP has been proposed but certainly not adopted. SMTP is simple, but then try adding an attachment. SNMP says a lot about how to get an interrupt but nothing about what it should be, so one cannot program to handle just interrupts, since for the same failure different devices will send an interrupt or not. The Java Specification still has the same BNF bugs as when it was first published, and those were formally reported back then. Fixing them doesn't change the language, since one must implement it the way it would be fixed anyway. Hungarian notation was invented to deal specifically with the typeless parameter checking in C, but that seems to escape the notice of adherents, even though the originator pointed it out. Nor was there ever really a 'standard' once one got beyond very basic types. In the following, look at the list of people who don't like it: Hungarian notation - Wikipedia[^]

                              1 Reply Last reply
                              0
                              • C Chris Maunder

I'm in the process of moving some code between JavaScript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. The code changes between them are doing my head in. Just reading a file from the browser seems to have 6 different ways, all subtly different, all similarly named. I understand that changes in underlying architecture mean that one method won't necessarily translate to another, but surely there are some core things that we, as an industry, could standardise. For example: You have an object, for instance a file. Or a tensor. Or a string.

- any time you get data from a source (uploaded file, network stream, from database) you call it "Get"
- any time you save data to storage you call it "Save"
- any time you need to append data, you call Append, and also override the + operator.

The thing that's getting me is I just want to save an uploaded file. A quick check shows I can:

1. Call file.SaveAs to save the file
2. Open a filestream, call file.CopyToAsync to the stream, and close the stream
3. Read the file's InputStream into a buffer and call File.WriteAllBytes to save the buffer
4. Same as 2, but using CopyTo instead of CopyToAsync

I'm sure there are more. I just want to save the file to disk. So I'm wondering, since I've not actually taken the time to read up on this, if this happens because:

1. We're scared to break backwards compatibility, so we create new methods to avoid breaking old methods, and overloading functions isn't always workable or possible
2. There are too many ways to do a given task, so we present you a bag of parts (eg streams and buffers) and let you mix and match because there's no single "default" way that's practical to provide
3. Program flow has changed sufficiently (eg async and promises) that there truly needs to be different methods to cater for this
4. We just like making up new paths for writing the same old code because they suited our headspace at the time

It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                                cheers Chris Maunder

                                J Offline
                                J Offline
                                Juan Pablo Reyes Altamirano
                                wrote on last edited by
                                #29

I have to conclude also that standardizing programming is an exercise in futility. But, on a slightly more optimistic note, computer archaeologists of the future will have a treasure trove of the different ways we were thinking (of course some will think... how silly, with the Vulkan API everything was eventually perfect, etc., etc.). I'm not saying we're documenting our own stubbornness (or stupidity, *cough* COBOL *cough*), but nothing is lost if we have a few dozen ways to execute or express what saving a bunch of 1's and 0's actually is (and I can just hear the voices of dead engineers whispering how magnetic memory cores worked).

                                1 Reply Last reply
                                0
                                • R resuna

As a developer and maintainer of both old, lightly maintained code and packages that are affected by breaking changes, I can't agree. Any change that breaks existing code is a breaking change, regardless of how broad the changes to support it are. An undocumented breaking change and a documented one just differ in how likely it is that you will notice it ahead of time. If you are actively monitoring the source repos, you will catch it whether it makes the release notes or not, and making the release notes doesn't mean it gets caught before release... especially with automagic repo updates upstream that sneak in changes that suddenly break Docker or Nix builds. As a package maintainer, any change that requires the customer to edit their code at all is treated as a breaking change. We work very hard to make sure that old code continues to build without modification. If new code won't work on older versions (say we've added an API call but haven't changed any existing ones, or changed the meaning of a parameter in a backwards-compatible way), that's not a breaking change. If we've changed an existing API call so code has to be modified, that's a breaking change.

                                  J Offline
                                  J Offline
                                  jschell
                                  wrote on last edited by
                                  #30

Good answer. Especially in the context of a 'library'. I certainly appreciate that shareware developers at least seem to adhere to the convention that a major number change in the version means something will break. But it certainly doesn't make me happy knowing that I will need to modify my existing code just to get access to some new feature. I always expect the following for a new major version:

1. It will take a substantial amount of time. Certainly weeks.
2. It requires a full regression test.
3. Less experienced developers think they will be able to go up a major version just by dropping in the library.

Even worse if I need to go up more than one major version:

1. It will require months.
2. I might need to go up one version and then the next, because attempting it in one go is just too likely to lead to production breakage due to unexpected problems. There could even be impacts on the architecture itself.

                                  R 1 Reply Last reply
                                  0
                                  • C Chris Maunder

I'm in the process of moving some code between JavaScript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. The code changes between them are doing my head in. Just reading a file from the browser seems to have 6 different ways, all subtly different, all similarly named. I understand that changes in underlying architecture mean that one method won't necessarily translate to another, but surely there are some core things that we, as an industry, could standardise. For example: You have an object, for instance a file. Or a tensor. Or a string.

- any time you get data from a source (uploaded file, network stream, from database) you call it "Get"
- any time you save data to storage you call it "Save"
- any time you need to append data, you call Append, and also override the + operator.

The thing that's getting me is I just want to save an uploaded file. A quick check shows I can:

1. Call file.SaveAs to save the file
2. Open a filestream, call file.CopyToAsync to the stream, and close the stream
3. Read the file's InputStream into a buffer and call File.WriteAllBytes to save the buffer
4. Same as 2, but using CopyTo instead of CopyToAsync

I'm sure there are more. I just want to save the file to disk. So I'm wondering, since I've not actually taken the time to read up on this, if this happens because:

1. We're scared to break backwards compatibility, so we create new methods to avoid breaking old methods, and overloading functions isn't always workable or possible
2. There are too many ways to do a given task, so we present you a bag of parts (eg streams and buffers) and let you mix and match because there's no single "default" way that's practical to provide
3. Program flow has changed sufficiently (eg async and promises) that there truly needs to be different methods to cater for this
4. We just like making up new paths for writing the same old code because they suited our headspace at the time

It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                                    cheers Chris Maunder

                                    D Offline
                                    D Offline
                                    dandy72
                                    wrote on last edited by
                                    #31

                                    If libraries were standardized to that extreme throughout the industry, I suspect a lot of people in software development roles would simply be there to 'glue' libraries together and the job would become incredibly boring. Taken a step further, the role could also more easily be replaced with some AI. Something I don't see happening with the current state of the industry. I'm not suggesting anything, one way or another, because I'm thinking in terms of job security...despite what it might read like. :-)

                                    C 1 Reply Last reply
                                    0
                                    • S SeattleC

                                      The four file-reading functions you list do different things. It is less confusing to name them different things than it is to figure out how to overload the name for different kinds of file-reading. There are different pairs of English words you could use (get/set, get/put, load/save, read/write) with little reason to pick one set over another. And while we're being English-centric, shouldn't an API standard support multiple human languages and character sets? Does that sound like too much? Well, then no standardization for you. Three of your four reasons there are multiple names for things (compatibility, bag-of-parts, program-flow) are actually good reasons why interface names are not consistent. We have to predict the future to pick the perfect name, and we're no good at that. I welcome your bold solution to these problems, but don't expect to hear back soon.

                                      D Offline
                                      D Offline
                                      dandy72
                                      wrote on last edited by
                                      #32

                                      SeattleC++ wrote:

                                      There are different pairs of English words you could use (get/set, get/put, load/save, read/write) with little reason to pick one set over another.

                                      You're reminding me of PowerShell, which has (but does not enforce) the use of a specific set of "approved" verbs as the prefix to any function name. MS itself has been pretty good at sticking with it, but I suspect not many people do.
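PowerShell's approved-verbs idea can be enforced mechanically, which is the part most teams skip. Here is a toy lint check in that spirit; the verb list and function name are invented for illustration, not PowerShell's actual list.

```javascript
// A name passes if it starts with an approved verb followed by a
// capitalized noun, e.g. "getUser" or "saveFile" but not "fetchUser".
const APPROVED_VERBS = ["get", "set", "save", "append", "remove"];

function usesApprovedVerb(name) {
  return APPROVED_VERBS.some(verb =>
    name.startsWith(verb) && /^[A-Z]/.test(name.slice(verb.length)));
}
```

A check like this in CI is how MS keeps its own cmdlets consistent; without the tooling, the convention decays into suggestion.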

                                      1 Reply Last reply
                                      0
                                      • D dandy72

                                        If libraries were standardized to that extreme throughout the industry, I suspect a lot of people in software development roles would simply be there to 'glue' libraries together and the job would become incredibly boring. Taken a step further, the role could also more easily be replaced with some AI. Something I don't see happening with the current state of the industry. I'm not suggesting anything, one way or another, because I'm thinking in terms of job security...despite what it might read like. :-)

                                        C Offline
                                        C Offline
                                        Chris Maunder
                                        wrote on last edited by
                                        #33

You're so close. It's interesting to see many replies here picking apart my specific example rather than focussing on the broader issue: every developer seems to be implementing the same broad-stroke task in their own way. So we get slightly different ways of doing things, and slightly different names, and a slightly different order in which each step of the task gets done, and multiple paths to achieve the same goal. Copy-and-paste code solves some of this: many of us find a solution, copy and paste, then adjust so it fits our need and actually works (give or take actual error handling). The next step is obviously AI: you need to (for example) save an uploaded file. The AI generates that bit of code for you. It's the same basic code it suggests to everyone, with variable names changed to protect the innocent. So my actual feeling here is that, no, we as devs won't suddenly all agree on standard names and steps. What we'll start doing, though, is following the lead of the auto-generated code and not need to make up our own slightly different names and steps. It's not about job security. That's like saying a construction worker will lose their job because they are using a nail gun instead of a hammer, or using pre-fab concrete sections instead of building moulds and pouring concrete. There's still tons of work to be done: you just get more of it done in a given time.

                                        cheers Chris Maunder

                                        D 1 Reply Last reply
                                        0
                                        • C Chris Maunder

I'm in the process of moving some code between JavaScript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. The code changes between them are doing my head in. Just reading a file from the browser seems to have 6 different ways, all subtly different, all similarly named. I understand that changes in underlying architecture mean that one method won't necessarily translate to another, but surely there are some core things that we, as an industry, could standardise. For example: You have an object, for instance a file. Or a tensor. Or a string.

- any time you get data from a source (uploaded file, network stream, from database) you call it "Get"
- any time you save data to storage you call it "Save"
- any time you need to append data, you call Append, and also override the + operator.

The thing that's getting me is I just want to save an uploaded file. A quick check shows I can:

1. Call file.SaveAs to save the file
2. Open a filestream, call file.CopyToAsync to the stream, and close the stream
3. Read the file's InputStream into a buffer and call File.WriteAllBytes to save the buffer
4. Same as 2, but using CopyTo instead of CopyToAsync

I'm sure there are more. I just want to save the file to disk. So I'm wondering, since I've not actually taken the time to read up on this, if this happens because:

1. We're scared to break backwards compatibility, so we create new methods to avoid breaking old methods, and overloading functions isn't always workable or possible
2. There are too many ways to do a given task, so we present you a bag of parts (eg streams and buffers) and let you mix and match because there's no single "default" way that's practical to provide
3. Program flow has changed sufficiently (eg async and promises) that there truly needs to be different methods to cater for this
4. We just like making up new paths for writing the same old code because they suited our headspace at the time

It seems a massive waste of cycles. I think we as an industry need a big refactoring.

                                          cheers Chris Maunder

                                          S Offline
                                          S Offline
                                          StatementTerminator
                                          wrote on last edited by
                                          #34

                                          I'm not sure I get the issue. Web APIs already have a standard naming convention based on HTTP verbs, and that makes sense given that they use HTTP. I'd prefer that not to change. If you're saying you want a consistent naming convention for accessing any kind of data/storage with any kind of API/library/framework, then we'll never agree on what that should be. For instance, as a DB guy everything is CRUD to me, and I see every kind of data access through that lens and would name things accordingly if it were up to me. Doesn't mean anyone would agree with me though.
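The CRUD lens described above is itself a tiny naming standard: each data operation maps onto exactly one HTTP method. A sketch of that mapping (the `routeFor` helper and route shapes are invented examples, not a real framework's API):

```javascript
// One fixed mapping from data operation to HTTP method.
const crudToHttp = {
  create: "POST",
  read: "GET",
  update: "PUT",    // PATCH is the usual choice for partial updates
  delete: "DELETE",
};

function routeFor(operation, resource) {
  const method = crudToHttp[operation];
  if (!method) throw new Error(`not a CRUD operation: ${operation}`);
  return `${method} /${resource}`;
}
```

Which is the point being made: the convention is only "standard" to whoever already thinks in CRUD; a messaging or streaming person would pick a different four verbs.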

                                          D 1 Reply Last reply
                                          0