Millions of Docker repos found pushing malware, phishing sites
-
As JFrog security researchers found, around 20% of the 15 million repositories hosted by Docker Hub contained malicious content, ranging from spam to dangerous malware and phishing sites.
I'm so glad someone's making use of containers
-
I prefer out-of-the-box programming. I was working for a company with contractual obligations to deliver updates and bug fixes for x years. Lots of young, eager developers were actively++ promoting containers as The Way to preserve a complete environment for building an old system version with old tool versions (which has been shown to be essential for reproducing bit-identical versions of the product). I was responsible for establishing a container-based environment.

Luckily, I would say, we had an OS update. It turned out that the new OS version would not support the old containers, and that was as "promised": there was no promise to support the container version that we had used for our container-based build system. The new OS version didn't make many long-term promises, either. The disclaimers were clear and unambiguous: we could rely on our containers being runnable on new OS versions for only a fraction of our contractual support period.

For the container trials, we had to restrict future support to CLI tools, which is rather limiting. In principle, we could have switched to X11-based GUI tools. Our primary IDEs were not X11-based, but they provided a lot of debugging and testing functionality that couldn't easily be replaced with X11-based IDEs. So containers were ditched as the basis for long-term support. It was simpler to set aside an old, real machine with the old OS and old tool set to be used for future customer support, allowing GUI tools to be used (albeit old versions without the newer bells and whistles).

Containers may be useful for backend servers. While it is in theory possible to make end-user containerized applications (based on X11): in theory there is no difference between theory and practice, but in practice there is. Containers belong way back in the back room, where *nix resides. Not in the user application domain.
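The reproducibility idea described above usually comes down to pinning the build image by content digest rather than by a mutable tag, and running the old toolchain inside it. A minimal sketch, assuming a Linux host with the Docker CLI available; the registry name, digest, checkout path, and build command are placeholders, not the commenter's actual setup:

```python
# Sketch: run a legacy build inside a container image pinned by digest, so the
# toolchain cannot drift between runs. All names below are placeholders.
import subprocess

# Pin by digest rather than a tag like ":latest"; a digest identifies one exact
# image, which is the point when rebuilding an old release years later.
BUILD_IMAGE = "registry.example.com/legacy-build@sha256:<digest>"  # placeholder digest

def run_legacy_build(source_dir: str) -> None:
    """Run the old toolchain inside the pinned container image."""
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{source_dir}:/src",   # mount the checked-out old source tree
            "-w", "/src",                 # build from inside the source directory
            BUILD_IMAGE,
            "make", "release",            # placeholder build command for the old product
        ],
        check=True,
    )

if __name__ == "__main__":
    run_legacy_build("/srv/checkouts/product-vX.Y")  # hypothetical checkout path
```

A pinned digest only guarantees that you get the same image bits back; as the comment points out, it does not guarantee that a future OS or container runtime will still agree to run that image.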
Religious freedom is the freedom to say that two plus two make five.
-
Why implement the requirement that way, instead of just setting aside a virtual machine image that contains all of the build tools?
The difficult we do right away... ...the impossible takes slightly longer.
-
That was the old solution, but it had serious problems with USB (essential for the test equipment). Also, the VMs turned out to be huge, and we had created lots of them. The hope was that containers would create an environment restricted to the tools and their runtime requirements - presumably a lot less than the full VM OS environment. I believe they found a cure for the USB problems, and disk space was getting cheaper, so maybe they did revert to VMs after I left the company.

We thought that picking up a ten-year-old VM would cause more friction than just running an old container, and that turned out to be wrong. Maybe Linux can run any and every ten-year-old container. Maybe your favorite VM manager can run any and every ten-year-old VM image. So maybe reverting to the old VM solution was the wise choice. Yet I would certainly want to see that ten-year-old solution running with all the physical interfaces (USB was only one; we were still using RS232 COM ports, and software mapping four COM ports to a single D25 LPT-style port). I am not at all sure that the old VM solution really could have solved all the issues we encountered when running containers.

Whether VMs were a good solution or not: the container solution was not a good one. And then I left the company, leaving them to choose their future path without my expertise. Visiting them again and asking them to show how they rebuild a ten-year-old system might be an interesting exercise!
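For completeness, handing physical interfaces to a container is done by passing the host's device nodes to `docker run`. A rough sketch, assuming a Linux host with the Docker CLI installed; the device paths, image name, and tool name are hypothetical, not the setup described above:

```python
# Sketch: run a CLI test tool in a container with host serial/USB devices mapped
# through via --device. Device paths and names below are hypothetical.
import subprocess

# Hypothetical host device nodes: a USB-serial adapter and a real RS232 port.
DEVICES = ["/dev/ttyUSB0", "/dev/ttyS0"]

def run_test_tool(image: str, command: list[str]) -> None:
    """Run a CLI tool inside `image` with the host devices exposed to it."""
    args = ["docker", "run", "--rm"]
    for dev in DEVICES:
        args += ["--device", dev]   # expose the device node inside the container
    subprocess.run(args + [image] + command, check=True)

if __name__ == "__main__":
    # Hypothetical image and tool, for illustration only.
    run_test_tool("legacy-test-tools:10.4", ["flash-firmware", "--port", "/dev/ttyUSB0"])
```

This only covers CLI tools talking to serial/USB equipment; it does nothing for the GUI IDE limitation or for whether a ten-year-old image still runs on a current host, which were the actual sticking points in the story above.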
Religious freedom is the freedom to say that two plus two make five.