Code Project: The Lounge
Is MS slow or fast?

• #1 Daniel Turini

    The full article " Mike Nash explained that Microsoft's security priorities focus on improving patches, offering better help, reducing vulnerabilities and increasing the quality of its products in order to lessen the need for patches. The analysis presented by the corporate VP in charge of Microsoft's Security Business Unit started with the improvements the company has made since it launched its Trustworthy Computing in January 2002. The initiatives launched have allowed the company to reduce the time it takes to release a patch for new vulnerabilities. Nash explained that Microsoft needed 331 days from the time the Code Red/Nimda worm was discovered until the patch for the vulnerability exploited by this malicious code was released. In the case of SQL Slammer, it took 180 days, for the Welchia worm, 151 days and for Blaster, Microsoft got a patch out in 25 days. " Knowing Open-Source typical times (a patch is released 24h~72h from the discovery of the vulnerability), this seems slow. OTOH, it also does not seem that in 24h~72h you have fully ran regression tests (this may be true on services, but on kernel vulnerabilities, full regression tests may take 1~2 weeks). So, what's your opinion about this? Is MS right, being slow (and, theoretically, cautious), on releasing security patches or is the OSS mantra right, "release soon, release often"? Trying to make bits uncopyable is like trying to make water not wet. -- Bruce Schneier By the way, dog_spawn isn't a nickname - it is my name with an underscore instead of a space. -- dog_spawn

• #2 Matt Gullett (in reply to Daniel Turini)

I would say that MS is fast when it wants to be. Fast, however, is a relative term. 25 days from the discovery of the bug, or from the first attack? 25 days from the first attack is too long; 25 days to release a patch does not seem all that long to me. It allows time to diagnose, design, develop, test and regression-test the patch. I would say that 331 days is a very long time, and 151 days is too long as well, but 25 days is not too bad. MS has hundreds of millions of copies of its software installed, so testing is pretty important here.

MS is in a lose-lose situation (self-inflicted): if they release a patch early and it causes a problem for just 0.5% of their installed base, that could be 500,000 PCs. If they wait too long, and 0.5% of their installed base has a problem, there could still be 500,000 PCs affected.

The real issue is not about how fast, or slow, MS is. The issue is that they have shot themselves in the foot with regard to security so many times that nobody believes they are capable of delivering secure applications. As a programmer, I know that this is not the case. If MS wants to, and has an internal directive to do it, secure applications will be the norm. I believe that MS has made a lot of headway in this regard. Anyone who develops moderately sophisticated applications knows how difficult it is to write truly secure and stable code that performs well. MS has a better chance of developing secure applications than anyone else, if they choose to do so.

The other issue here is that MS is being compared apples to oranges, which just shows me that the lay person simply does not understand software (nor should they). Linux is an entirely different kind of OS from Windows. (Personally, I think Linux would do better if its purveyors would recognize this and focus on its strengths instead of trying to compare it with Windows.) Windows is a damn good desktop OS. Linux is not, if for no other reason than the fact that Windows has a monopoly (there are other reasons). There is no reason the two OSs can't co-exist.

Oh yeah, is MS slow or fast? Survey says: fast.
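To make the lose-lose arithmetic concrete, here is a quick back-of-the-envelope check; the 100-million installed base is an assumption implied by the post's "hundreds of millions" figure, not an official number:

```cpp
#include <iostream>

int main() {
    // Assumed numbers: ~100 million machines, 0.5% hitting a problem.
    const double installedBase = 100000000.0;  // hypothetical
    const double problemRate   = 0.005;        // 0.5%

    // Patch early and break 0.5%, or wait and let 0.5% get infected:
    // either way it is the same half-million PCs.
    std::cout << installedBase * problemRate << " PCs affected\n";
    return 0;
}
```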

• #3 Daniel Turini (in reply to Matt Gullett)

Matt Gullett wrote: "The other issue here is that MS is being compared apples to oranges. [...] There is no reason the two OSs can't co-exist."

I usually say that Linux is a server OS with a GUI, and Windows is a desktop OS that can be used as a server. This is clear from the evolution of both OSes, and I think we'll see healthy growth for Linux: not to the point of killing MS (and I believe it'll stay very far from that), but to the point that we, enterprise programmers, won't be able to afford ignoring Linux for much longer.

Trying to make bits uncopyable is like trying to make water not wet. -- Bruce Schneier
By the way, dog_spawn isn't a nickname - it is my name with an underscore instead of a space. -- dog_spawn

• #4 Senkwe Chanda (in reply to Daniel Turini)

I'm sure it's only a matter of time before the fast patch times bite the OSS community in the ass. Regression testing and backwards compatibility become more and more important as your install base grows. So I think MS is doing the best they can right now.

Woke up this morning...and got myself a blog

• #5 Matt Gullett (in reply to Daniel Turini)

I agree completely. I have already begun my investigation into Linux, and I foresee using Linux as an inexpensive application server in the next 12-18 months. The biggest problem I see with Linux from my POV is that a good portion (20-40%) of my code base will not work on it. As I continue to develop, though, I am being much more careful to ensure compatibility wherever possible. What Linux needs is some good C++ development tools (they may exist, but nothing like Visual Studio), and better support for integration in a Windows environment.

As an example, I am currently working on an interactive cross-tab application that is required to act like a mini-database. In this cross-tab service, I query a SQL Server database (or others), build data sets, and optimize those data sets for cross-tab queries. I then provide an interface to accept incoming requests, queue them up, and build result sets from the mini data sets, all the time maintaining stats on the incoming requests, etc. This application is a perfect one for Linux, since it is not GUI-oriented and is performance-sensitive. I could easily see having 2 or 3 Linux servers responsible for running this app, queried from web and GUI applications running under Windows. All I need is some good tools and time.
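As a rough illustration of the queue-and-stats part of such a service, here is a minimal sketch in C++; the names (CrossTabRequest, RequestQueue) are invented for the example, not taken from Matt's actual code:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <string>

// Illustrative only: a thread-safe queue of incoming cross-tab
// requests, with a running count so the service can report stats.
struct CrossTabRequest {
    std::string query;  // e.g. the row/column fields to pivot on
};

class RequestQueue {
public:
    void push(CrossTabRequest req) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(req));
            ++totalReceived_;  // stat kept on every incoming request
        }
        ready_.notify_one();
    }

    // Worker threads block here until a request arrives.
    CrossTabRequest pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        CrossTabRequest req = std::move(queue_.front());
        queue_.pop();
        return req;
    }

    std::size_t totalReceived() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return totalReceived_;
    }

private:
    mutable std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<CrossTabRequest> queue_;
    std::size_t totalReceived_ = 0;
};
```

Nothing in this sketch touches a GUI or a Windows-specific API, which is exactly why this kind of back-end service is the natural first candidate to move to a Linux box.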

• #6 Daniel Turini (in reply to Matt Gullett)

Matt Gullett wrote: "What Linux needs is some good C++ development tools (they may exist, but nothing like Visual Studio), and better support for integration in a Windows environment. [...]"

I agree, and that's why I'm adopting C# as (almost) my only language for new coding:

1. It'll be easier to port to Linux, *BSD, Solaris, whatever runs a .NET framework. And most of my code is server-based, so as long as one is connecting through a TCP/IP port to a server, no one cares whether it is a Linux or a Windows server.
2. Translating source code from C# to Java does not seem a daunting task to me either, so even if Java grows and becomes "the new C++" (please God, no! We need fast software!), in the future I can "JUMP" to Java :-D

In this way I think Mono is going to be revolutionary, as it will allow one to develop with Visual Studio, on a Windows machine, with a great environment, and easily port the code to Linux. Ironically, this may give a great push to Linux, because it'll allow people to use MS tools on it. Unfortunately, managed C++ is not C++, at least not the C++ I like. So I'm almost never coding in C++ anymore, although it's a language I love to code in. C# is fun, but not even 10% as fun as C++.

Trying to make bits uncopyable is like trying to make water not wet. -- Bruce Schneier
By the way, dog_spawn isn't a nickname - it is my name with an underscore instead of a space. -- dog_spawn

• #7 Daniel Turini (in reply to Senkwe Chanda)

Senkwe Chanda wrote: "Regression testing and backwards compatibility become more and more important as your install base grows."

Tell Mandrake about this: do you remember the LG CD-ROM drive destruction on 9.2?

Trying to make bits uncopyable is like trying to make water not wet. -- Bruce Schneier
By the way, dog_spawn isn't a nickname - it is my name with an underscore instead of a space. -- dog_spawn

• #8 Mike Dimmick (in reply to Daniel Turini)

A fast patch is only possible in open source if you get a developer or group of developers sufficiently committed to fixing a bug within that timeframe. Sometimes it happens; sometimes it doesn't, and it takes many weeks for any patch to be released. Fast, accurate patching seems to be even harder; witness the fact that OpenSSH 3.6 and 3.6.1 were released within a day of each other (March 31 and April 1 this year).

Finally, having your distribution's supplier take the patch, recompile the binary, (perhaps) validate it, package it and make the package available takes even longer. Red Hat was notorious for being well behind recently. I prefer not to build from source, because the lack of test suites for most open-source software means that I can't validate that it was built correctly.

The tendency is for only the newest versions to be patched; if you need to patch an older version, you may have to wait longer, or you may have to write the patch yourself. Given the level of component incompatibility in the open-source world, you may need to upgrade applications if you upgrade a library to a newer version. For the Linux kernel in particular, I see no commitment, as part of the code check-in cycle, to even running the limited tests that are available.

Where it is warranted, with zero-day attacks or where an exploit has been made public, Microsoft manages to be very fast (e.g. with the NTDLL buffer overflow exposed via WebDAV earlier this year, where a patch was made available within 24 hours). However, with that comes the problem that they could be unable to test all possible released combinations; again, the NTDLL/WebDAV vulnerability comes to mind, where some customers running a particular hotfix found that their systems would not boot after applying the patch.

On the whole, I believe MS is following the right approach.

• #9 Michael A Barnhart (in reply to Matt Gullett)

Matt Gullett wrote: "MS is in a lose-lose situation (self-inflicted): if they release a patch early and it causes a problem for just 0.5% of their installed base, that could be 500,000 PCs. If they wait too long, and 0.5% of their installed base has a problem, there could still be 500,000 PCs affected."

Well said. This really is the dilemma for them. They will be found at fault by someone no matter what.

Matt Gullett wrote: "The issue is that they have shot themselves in the foot with regard to security so many times that nobody believes they are capable of delivering secure applications."

Not to defend MS, but to be honest with myself, I have to ask: have they shot themselves, or delivered what customers have asked of them? Take a look at the Linux comparison: deliver a patch in 24 hours and do not care whether it corrupts my installation or not. How much of the issue is due to us demanding so much backward compatibility between releases? As much funding as MS has, it is not infinite (although, yes, I think they could charge much less for their products).

Matt Gullett wrote: "The other issue here is that MS is being compared apples to oranges."

Agreed. If as many people were trying to find fault with Linux as with Windows, how do you think it would be faring?

Matt Gullett wrote: "There is no reason the two OSs can't co-exist."

True. Side note: I still think the developers tend to ignore legal issues, though. The GPL license is why I will not use Linux, personally. How many have actually read it and consulted counsel on it?

"For as long as I can remember, I have had memories." - Colin Mochrie

• #10 Tibor Blazko (in reply to Daniel Turini)

I remember a CD problem with Linux, but it was the CD-ROM manufacturer's problem, because they ignored the interface standard and interpreted a correct command the wrong way. (I heard about a lawsuit involving a car radio manufacturer, who lost because connecting the cables the wrong way damaged the radio.)

t!

• #11 Navin (in reply to Daniel Turini)

I would agree with you, except there is no evidence they really do a good job of regression testing on their patches/service packs. Two examples:

1. Windows NT 4.0 Service Pack 4 was a disaster.
2. More recently, the Blaster/Welchia patches. There was first one that fixed the vulnerability Blaster used, and the patch actually came before the virus did, so that's good. However, the patch had ANOTHER vulnerability in the same component (DCOM), and required another patch (I forget the original patch number; the new one was 824146).

So clearly some pretty major stuff gets through their testing. Given that, I'd rather have a buggy patch out in a day or two than wait a month to get a buggy patch.

No single raindrop believes that it is responsible for the flood.

• #12 Mike Dimmick (in reply to Navin)

Navin wrote: "Windows NT 4.0 Service Pack 4 was a disaster."

And so it was, but that was released five years ago. As a result of that service pack, the processes for creating and releasing patches and service packs were examined more closely, which I believe has led to better-quality patches and service packs since then.

Navin wrote: "More recently, the Blaster/Welchia patches. [...] However, the patch had ANOTHER vulnerability in the same component (DCOM), and required another patch."

Yes, another vulnerability in the same component. This is a failure to detect a different issue, even after a reportedly thorough review of the DCOM/RPC codebase [1]. It does not represent a problem with the original patch; it's another issue that wasn't detected at the same time. If you spent longer trying to do a root-cause analysis and fixing any additional bugs found before releasing the patch, the patch would be further delayed. Microsoft do re-release patches from time to time if an issue is discovered after release. I believe that in general they are trying to do a good job of maintaining their software post-release.

[1] Which just goes to show that you can't detect all errors by reviewing your codebase, even if you have a number of highly-skilled security developers who are familiar with the code in question.

• #13 abhadresh (in reply to Mike Dimmick)

Mike Dimmick wrote: "highly-skilled security developers"

Weren't we discussing Microsoft? Are these highly-skilled people the same ones that put out all the newbie buffer-overrun bugs, or are they a special reserve unit? :~

ab

• #14 Senkwe Chanda (in reply to abhadresh)

abhadresh wrote: "newbie buffer overrun bugs"

That's FUD. As long as we continue to use C++ and C to write large sections of OS code, buffer overruns will occur. I don't care if the code is written by Dennis Ritchie himself, it WILL have flaws.

Woke up this morning...and got myself a blog
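For anyone who hasn't watched this happen, here is a minimal sketch of why C/C++ makes the mistake so easy; the function names are invented for illustration, and the first version is deliberately broken:

```cpp
#include <cstring>

// Deliberately unsafe: the classic stack buffer overrun. Input longer
// than 63 bytes overwrites adjacent stack memory, including the return
// address an attacker can redirect.
void handle_request(const char* input) {
    char buffer[64];
    strcpy(buffer, input);  // no length check
    // ... process buffer ...
}

// The fix is a bounded copy, but every call site in millions of lines
// of code has to remember to do it.
void handle_request_safely(const char* input) {
    char buffer[64];
    strncpy(buffer, input, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';  // strncpy may not terminate
    // ... process buffer ...
}
```

The point is not that the unsafe version is hard to spot in isolation; it's that nothing in the language stops it, which is why even skilled teams ship one eventually.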

• #15 Ganesh Ramaswamy (in reply to Matt Gullett)

Matt Gullett wrote: "What Linux needs is some good C++ development tools"

I'm really not sure about this. I am currently managing a product with a codebase of 250,000 lines of C/C++, but we don't have any C++ IDE for our development. We just use vi/emacs, with cscope on top of it, to manage everything. So I don't think this is a real drawback of Linux. Just my 2 cents.

• #16 Rohit Sinha (in reply to Ganesh Ramaswamy)

Just because you can build a road by digging with your bare hands, peeing on the mud, and then stamping on it to level it, doesn't mean you should. Using proper equipment and materials will give you a much better road, in much less time, that will last much longer.

Regards,
Rohit Sinha
Browsy

                                  Do not wait for leaders; do it alone, person to person. - Mother Teresa

• #17 Gary R Wheeler (in reply to Senkwe Chanda)

Senkwe Chanda wrote: "As long as we continue to use C++ and C to write large sections of OS code"

I'm :confused:. What other tools are we going to use? Even assuming we follow the 'sandbox' model espoused by Java and .NET, there is still an underlying mechanism that is implemented (ultimately) in machine code that runs on the bare metal. That machine code is produced by a compiler of some sort, regardless of the syntax of the source language. The possibility will still exist for overruns that can be exploited by malicious applications.

I'm not even sure hardware enforcement (at the CPU instruction level) is adequate. Most hardware access enforcement in place now carries such a performance penalty that you can't use it for simple things like stack variable overruns and the like. As a result, there are vulnerabilities.


                                    Software Zen: delete this;

• #18 Mike Dimmick (in reply to Gary R Wheeler)

Gary R. Wheeler wrote: "Most hardware access enforcement in place now carries such a performance penalty that you can't use it for simple things like stack variable overruns and the like."

Page-based execute protection will help; the processor implements it in the same way as read or write protection. It'll be offered in Windows XP SP2, Windows Server 2003 SP1 and Longhorn on processors that support it (Itanium, Athlon 64 and Opteron). Intel haven't yet announced any plans for support in future x86 processors.
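As a rough sketch of what page-granular execute protection means from user code on Windows: VirtualAlloc and VirtualProtect are the real Win32 calls, while the surrounding program is just illustrative. With NX/DEP enforced in hardware, a plain read/write page cannot be executed until someone explicitly flips its protection, the way a JIT compiler must:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // One page of ordinary read/write data. Under hardware NX/DEP,
    // jumping into this page faults instead of running injected code.
    void* page = VirtualAlloc(nullptr, 4096,
                              MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!page) return 1;

    // ... generated code would be written into the page here ...

    // Only after an explicit, auditable permission change may the
    // CPU execute from it.
    DWORD oldProtect;
    if (!VirtualProtect(page, 4096, PAGE_EXECUTE_READ, &oldProtect))
        return 1;

    std::printf("page at %p is now executable\n", page);
    VirtualFree(page, 0, MEM_RELEASE);
    return 0;
}
```

Stack and heap pages stay non-executable throughout, which is exactly the property that defeats the classic shellcode-on-the-stack exploit, even though it does nothing about the overrun itself.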
