Code Project

Should libraries have a standard API and naming convention?

The Lounge
Tags: asp-net, csharp, javascript, database, dotnet
43 Posts 23 Posters 3 Views 1 Watching
Chris Maunder (#1)

I'm in the process of moving some code between JavaScript, .NET Core 5, and .NET 7, with detours around Razor and Blazor. The code changes between them are doing my head in. Just reading a file from the browser seems to have six different ways, all subtly different, all similarly named.

I understand that changes in underlying architecture mean that one method won't necessarily translate to another, but surely there are some core things that we, as an industry, could standardise. For example: you have an object, for instance a file. Or a tensor. Or a string. Then:

- any time you get data from a source (uploaded file, network stream, database), you call it "Get"
- any time you save data to storage, you call it "Save"
- any time you need to append data, you call it "Append" (and also override the + operator)

The thing that's getting me is I just want to save an uploaded file. A quick check shows I can:

1. Call file.SaveAs to save the file
2. Open a FileStream, call file.CopyToAsync to the stream, and close the stream
3. Read the file's InputStream into a buffer and call File.WriteAllBytes to save the buffer
4. Same as 2, but using CopyTo instead of CopyToAsync

I'm sure there are more. I just want to save the file to disk. So I'm wondering, since I've not actually taken the time to read up on this, if this happens because:

1. We're scared to break backwards compatibility, so we create new methods to avoid breaking old ones, and overloading functions doesn't always work or isn't always possible
2. There are too many ways to do a given task, so we present you a bag of parts (e.g. streams and buffers) and let you mix and match, because there's no single "default" way that's practical to provide
3. Program flow has changed sufficiently (e.g. async and promises) that there truly need to be different methods to cater for this
4. We just like making up new paths for writing the same old code because they suited our headspace at the time

It seems a massive waste of cycles. I think we as an industry need a big refactoring.

    cheers Chris Maunder
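Chris's complaint is easy to reproduce even on the JavaScript side he mentions. As a minimal illustrative sketch (not from the thread), here are three similarly named routes to the same bytes of a `Blob` in modern JavaScript (Node 18+ or any current browser):

```javascript
// Three subtly different, similarly named ways to read the same Blob.
// Blob, TextDecoderStream, and ReadableStream iteration are all globals
// in Node 18+ and modern browsers.
async function readThreeWays(blob) {
  const viaText = await blob.text();                                    // 1. convenience method
  const viaBuffer = new TextDecoder().decode(await blob.arrayBuffer()); // 2. raw buffer + explicit decode
  let viaStream = "";
  for await (const chunk of blob.stream().pipeThrough(new TextDecoderStream())) {
    viaStream += chunk;                                                 // 3. streaming read
  }
  return [viaText, viaBuffer, viaStream]; // all three yield the same string
}
```

All three are legitimate (the streaming route matters for huge inputs), but nothing in the names tells you which one is the "default" for the common case, which is exactly the bag-of-parts problem described above.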

In reply to Chris Maunder (#1):

Slacker007 (#2)

From my perspective: there is no other profession I personally know of with such a pool of vastly differing opinions on how to do things and how things should be as software development. We have standardized music, medicine, and most science fields, but software engineering/development is lacking big time. So I agree with you that we as an industry should standardize as much as we can (not just APIs), but then who gets to be king of the standards? You? Me? Jeff down the street? What if I don't like your standards? Back to square one. IMHO, this is an exercise in futility.

In reply to Chris Maunder (#1):

Lost User (#3)

1. SaveAs implies a possible conversion.
2. Async is needed for UI responsiveness and multi-threading.
3. Buffering implies intermediate processing of the input stream.
4. Async not needed, or not understood.

People become more unhappy as the number of options increases. The simpler one's existence, the happier one is. I guess "AI" will make people happier by choosing for them.

        "Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I

In reply to Chris Maunder (#1):

Richard Deeming (#4)

          I'm sure you've never seen this "obligatory XKCD"[^] before. :-D


          "These people looked deep within my soul and assigned me a number based on the order in which I joined." - Homer


In reply to Richard Deeming (#4):

Marc Clifton (#5)

            Quote:

            Should libraries have a standard API and naming convention?

            Absolutely! Microsoft's standards, Google's standards, Apple's standards, Facebook's standards. Or if you prefer, 2023 standards, 2024 standards, 2025 standards.... :laugh:

            Latest Articles:
            A Lightweight Thread Safe In-Memory Keyed Generic Cache Collection Service A Dynamic Where Implementation for Entity Framework

In reply to Chris Maunder (#1):

Marc Clifton (#6)

              Quote:

              Should libraries have a standard API and naming convention?

              We absolutely need a single standard! Microsoft's standard, Google's standard, Apple's standard, Facebook's standard. Or if you prefer, 2023 standards, 2024 standards, 2025 standards.... :laugh:

              Latest Articles:
              A Lightweight Thread Safe In-Memory Keyed Generic Cache Collection Service A Dynamic Where Implementation for Entity Framework

In reply to Chris Maunder (#1):

Single Step Debugger (#7)

Because we don't have a good, widely accepted book on naming conventions. The reason is that software engineering is a relatively new and shockingly fluid trade. I'm pretty sure the first tribal healers had many different words for constipation back in the stone age.

                Advertise here – minimum three posts per day are guaranteed.

In reply to Chris Maunder (#1):

rnbergren (#8)

                  This has been a discussion point in programming since the first subroutine was written. It will continue forever. Good luck

                  To err is human to really elephant it up you need a computer

In reply to Chris Maunder (#1):

Jeremy Falcon (#9)

                    Chris Maunder wrote:

We're scared to break backwards compatibility, so we create new methods to avoid breaking old ones, and overloading functions doesn't always work or isn't always possible

                    That's always the crux of the situation. The reality is, if you have a lot of users, not every dev is hardcore and wants to spend their entire life always relearning. And to be honest, if you want a family and kids you can't really blame some people. So, there needs to be a sense of familiarity even if something new is introduced. If it's completely different with every major release, you'll find yourself losing users that just want to get their job done and don't care about being an uber geek. Love or hate PHP, that's the exact reason it was so hard for it to de-crap (if it ever did). It just got too popular too quick. And to keep that... they kept the crap. The original developer even mentioned this. He never expected PHP to get so popular as it did in the beginning. But, once it did it was too late.

                    Chris Maunder wrote:

                    There are too many ways to do a given task so we present you a bag of parts (eg streams and buffers) and let you mix and match because there's no single "default" way that's practical to provide

So if you want to design an API that's "easier", but the original goal of the first version of the API was to be granular, and let's say you can't change the first version much because of strict ABI compatibility, then adding helper classes and/or a helper API is what I'd usually do. To your point, it does bloat the codebase; I suppose keeping it in a separate helper project would help with that. If it's a fundamental paradigm shift though, like using AI and qubits to psychically predict winning lottery numbers while retrieving data, that would be a new project for sure.

                    Chris Maunder wrote:

                    Program flow has changed sufficiently (eg async and promises) that there truly needs to be different methods to cater for this

It's worth mentioning that this is a good thing, since CPUs are all about cores now rather than just upping raw clock speed. I suppose the design of this would be language dependent, though. As far as JavaScript goes, you can keep pretty much the same API design when it comes to async vs synchronous code.

                    Chris Maunder wrote:

We just like making up new paths for writing the same old code because they suited our headspace at the time

In reply to Marc Clifton (#6):

Jeremy Falcon (#10)

                      We'll call it MGAF. After a few iterations it can be renamed to MacGyver. Then we'll have come full circle. :laugh:

                      Jeremy Falcon

In reply to Slacker007 (#2):

Jeremy Falcon (#11)

                        I guess I misread some of that post then. Coming from the web side, we do have standards in naming conventions. We just didn't invite Jeff to the meeting. :laugh:

                        Jeremy Falcon

In reply to Slacker007 (#2):

Chris Maunder (#12)

                          That's the beauty of Standards: there are so many to choose from.

                          cheers Chris Maunder

In reply to Single Step Debugger (#7):

Chris Maunder (#13)

                            It has really only been 80 years or so. I'm sure we'll get there... ;)

                            cheers Chris Maunder

In reply to Chris Maunder (#1):

jschell (#14)

                              Chris Maunder wrote:

                              any time you get data from a source (uploaded file, network stream, from database) you call it "Get"

That conflicts with the standard class accessor convention for attributes: getters and setters.

                              Chris Maunder wrote:

                              any time you need to append data, you call Append, and also override the + operator.

                              Absolutely not - never do that. Overriding operators should only occur in very limited circumstances. Used to be I would claim that it might work for vector addition but I am not even sure I would support that anymore.

                              Chris Maunder wrote:

                              I just want to save the file to disk.

                              I don't see your point. Streams have never been limited to just that. Moving data has always had more potential than that. Certainly true now. And also true long ago. Adding distinct methods for every potential movement of data would be a bad idea.

                              Chris Maunder wrote:

                              We're scared to break backwards compatibility

Yes please. More of that. I cringe, with good reason, every time I see someone refactor code because they think they are making it better. I have seen two different production problems show up in just the last 6 months because of that. That doesn't include the ones I stopped from happening because I saw the code beforehand and was able to point out the enterprise impact before it rolled out.

                              Chris Maunder wrote:

                              I think we as an industry need a big refactoring.

                              People buy hammers but they do so to build tables, fences, houses, and skyscrapers. Software development is a hammer. It is not the product/service. The sales people do not care if the healthcare site uses two API methods with different names but which do the same thing. And the customers definitely do not. Sure it increases maintenance costs. But so does a full enterprise refactor.

                              • S Slacker007

From my perspective: There is no other profession, that I personally know of, with such a pool of vastly differing opinions on how to do things and how things should be as software development. We have standardized music, medical, and most science fields, but software engineering/development is lacking big time. So, I agree with you that we as an industry should standardize as much as we can (not just APIs), but then who gets to be the king on what the standards are? You? Me? Jeff down the street? What if I don't like your standards? Back to square one. IMHO, this is an exercise in futility.

                                J Offline
                                jschell
                                wrote on last edited by
                                #15

                                Slacker007 wrote:

                                We have standardized music, medical, and most science fields, but software engineering/development is lacking big time

No. Humans are messy.

You do know that Ed Sheeran just won a civil case for copyright infringement based on, presumably, one of those 'standards' of the music industry? So certainly not settled for some.

You do know that there are licensed medical doctors prescribing CAM (Complementary and Alternative Medicine)? That every cancer hospital except one has a CAM center? That people were comparing how the mumps vaccine was originally researched, and even what it does, to how the COVID mRNA (and all mRNA) vaccines were researched? It took 30 years for Texas to finally remove the license of a medical doctor who had been promoting and profiting from a medical therapy that was disproved almost at the very time it was first proposed. Not to mention what happened with Aducanumab.

Not sure what you mean by "science", but in India you can get an MBA in Astrology (yes, spelled correctly) at most or perhaps all universities; I think one offers a Master of Science as well. Of course in the US 'talk therapy' is still offered by psychologists and even psychiatrists, not to mention a slew of things like court-ordered anger management. And the DSM (Diagnostic and Statistical Manual of Mental Disorders) is for the most part full of definitions that are nothing more than descriptions of what people say they 'feel' - thus no actual objective criteria. The most recent release was disputed by at least some due to it continuing to do that, and even expanding on those sorts of definitions.

The number of pay-to-publish 'science' publications is probably expanding. And it seems to be a trend that the standard for catching errors that can lead to false positives in studies is finally (in the last couple of years) being revised to be more strict. This came about because of a large effort to reproduce results for studies in reputable (not pay-to-publish) journals, which failed to reproduce the results of the original studies.

                                • J Jeremy Falcon

                                  Chris Maunder wrote:

We're scared to break backwards compatibility so we create new methods to avoid breaking old methods, and overloading functions isn't always possible

That's always the crux of the situation. The reality is, if you have a lot of users, not every dev is hardcore and wants to spend their entire life relearning. And to be honest, if you want a family and kids you can't really blame some people. So, there needs to be a sense of familiarity even if something new is introduced. If it's completely different with every major release, you'll find yourself losing users that just want to get their job done and don't care about being an uber geek. Love or hate PHP, that's the exact reason it was so hard for it to de-crap (if it ever did). It just got too popular too quickly. And to keep that popularity... they kept the crap. The original developer even mentioned this. He never expected PHP to get as popular as it did in the beginning. But, once it did, it was too late.

                                  Chris Maunder wrote:

                                  There are too many ways to do a given task so we present you a bag of parts (eg streams and buffers) and let you mix and match because there's no single "default" way that's practical to provide

So if you want to design an API that's "easier" but the original goal of the first version of the API was to be granular... and let's say you can't change the first version much because of a strict ABI compatibility, then adding helper classes and/or a helper API is what I'd usually do. To your point, it does bloat the codebase. I suppose keeping it a separate helper project would help with that. If it's a fundamental paradigm shift though, like using AI and qubits to psychically predict winning lottery numbers while retrieving data, that would be a new project for sure.

                                  Chris Maunder wrote:

                                  Program flow has changed sufficiently (eg async and promises) that there truly needs to be different methods to cater for this

It's worth mentioning that this is a good thing, since CPUs are all about cores now rather than just upping raw clock speed. I suppose the design of this would be language dependent though. As far as JavaScript, you can keep pretty much the same API design when it comes to async vs synchronous code.

                                  Chris Maunder wrote:

We just like making up new paths for writing the same old code because they suited our headspace at the time

                                  J Offline
                                  jschell
                                  wrote on last edited by
                                  #16

                                  Jeremy Falcon wrote:

Nowadays, eventually even the desktop will be replaced by a merge of web and desktop technologies.

Desktop computers? I doubt that. It's been tried multiple times in different ways, and none of those attempts gained any acceptance. Timesharing in the 70s was widely used, but only because the cost of individual computers was so high.

                                  • C Chris Maunder


                                    P Offline
                                    Peter_in_2780
                                    wrote on last edited by
                                    #17

                                    Different windmills, same tilting... Around 1880, my great-great grandfather W E Hearn[^] set out to codify the laws of the State of Victoria. He was equally successful.

                                    Quote:

                                    However, the codification was never adopted since "although praised in Parliament, [it] was regarded as too abstract by practising lawyers."

                                    Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012

                                    • C Chris Maunder


                                      Greg UtasG Offline
                                      Greg Utas
                                      wrote on last edited by
                                      #18

I don't expect the industry to standardize; this can even be undesirable. Standards can be dominated by big players who've already more or less aligned to the standard they're pushing. This gives them an advantage and makes it difficult for others to differentiate. But I expect each library to standardize internally instead of running amok with inconsistent naming or ways of doing something, although I can see the latter happening when intended for a broad range of applications. Each of the four reasons you listed for a lack of standardization plays a role.

Breaking changes are a pet peeve. To me, a breaking change is something that requires a user to redesign their software, which is definitely something to avoid. However, simply changing a function name or its signature, and providing release notes so that users can easily convert to the new interface, shouldn't be considered a breaking change. But whiners will whine. Fine, so stay on the old release if you expect to do no work to move to the new one.

Good libraries and frameworks maintain a low surface-to-volume ratio. I think your #2 and #3 (both the result of multiple ways of doing something) are excusable in a library that supports a broad range of applications. But a library focused on specific types of applications should be more opinionated, settling on a standard way to do each thing in order to improve code reuse and interoperability between the applications that use it.

#4, and #2 and #3 when unwarranted, are what a former colleague called superfluous diversity. This is a dead giveaway that the system lacks what Brooks called conceptual integrity. It was almost certainly developed without proper design and code reviews or software architects to steer it on a consistent path.

                                      Robust Services Core | Software Techniques for Lemmings | Articles
                                      The fox knows many things, but the hedgehog knows one big thing.


                                      • C Chris Maunder


                                        B Offline
                                        BernardIE5317
                                        wrote on last edited by
                                        #19

                                        as you know obtaining the status of a C++ standard library stream requires invoking rdstate . i do not know what the rd stands for . is it per-chance "read" . the set in setstate is a dead giveaway but rd ? who knows . maybe it is returnd_a_state

                                        • Greg UtasG Greg Utas


                                          R Offline
                                          resuna
                                          wrote on last edited by
                                          #20

                                          "However, simply changing a function name or its signature, and providing release notes so that users can easily convert to the new interface, shouldn't be considered a breaking change." If not, then what would be considered a breaking change? An undocumented one?
