How many here use or plan to use Docker?
-
Don't use the term "virtual machine" when close to Docker people, unless you are eager to listen to a 45 minute intense talk about how Docker is NOT, I repeat: NOT virtualization! Virtualization is evil, Docker is good! And Docker isn't even "lightweight" virtualization. It is useless trying to discuss definitions of "virtualization" with Docker guys, or trying to compare the Docker way of providing isolation with a hypothetical minimal VM providing exactly those functions that your application needs while still being a VM (for the purpose of learning what is so evil about virtualization). It is no use. The answer is given: VMs are evil, by definition.

On the more serious side: Yes, the Docker daemon is managed by a Linux kernel even in the Windows implementation. This is not a Linux virtual machine. On Windows 10, the Docker daemon runs inside a Hyper-V VM (so it requires a 64 bit CPU with Extended Page Tables). On Server 2016 the implementation is somewhat different, and does not use Hyper-V. You can run Linux Docker images in a Windows installation; the Linux kernel functions are executed by the same kernel that runs the daemon. You can obviously also run Windows Docker images on Windows, but currently the daemon is in either Linux or Windows mode; it cannot run both flavors side by side. (I have seen rumours that this is being worked on, and will be possible in a future release.) The Linux implementation cannot run Windows images.

Docker is essentially suited for backend services: Until you start doing fancy tricks, a container's only interface to the world outside the Docker daemon is one or more TCP ports, or, for persistent data, a mapping of (parts of) an external file system as a Docker volume. There are two main alternatives for providing some sort of user interface: Either the container runs a web server, or you hook up an SSH console to it. In principle, I guess you could run e.g. an X11 client in a Docker container to give it a GUI; I doubt that anyone has seriously done anything like that.

I guess that Docker is as suitable for web servers running on a Windows host as for web servers running on a Linux host. But applications running a Windows GUI of any kind cannot be adapted to Docker. Nor can any application that requires user interaction for installation: installation must be purely command-line based, with all parameters supplied either on the call line or in a setup/ini file. When used for what it is good at, Docker is OK.
Strechin
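To make the "TCP ports or volumes" point concrete, here is a minimal sketch of launching a backend container from the Docker CLI (the image name and host paths are invented for illustration):

```
# Publish the container's port 80 on host port 8080, and mount a
# host directory into the container as a volume for persistent data.
docker run -d \
    -p 8080:80 \
    -v /srv/mydata:/data \
    my-backend-image
```

From the outside, the published port and the mounted volume are the only touch points; everything else has to be baked into the image.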
Thanks, that clears things up a lot. I knew Docker is not a virtual machine, but did not know what else to call it; maybe "containerization platform" would fit the bill?
-
Why would you want to run a UI app in Docker?
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
Mainly for testing purposes, so our tester has a ready-to-run Windows testing environment that can be produced by our Continuous Integration pipeline.
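For context, the CI step that produces such an environment essentially boils down to an image build and push; a minimal sketch, with made-up image and registry names:

```
# Hypothetical CI step: build the test-environment image from the
# checked-in Dockerfile, tag it with the build number, and push it
# to a registry the tester can pull and run from.
docker build -t registry.example.com/test-env:$BUILD_NUMBER .
docker push registry.example.com/test-env:$BUILD_NUMBER
```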
-
There's that and Docker for Windows[^].
This space for rent
Ah thanks.
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
-
Mainly for testing purposes, so our tester has a ready-to-run Windows testing environment that can be produced by our Continuous Integration pipeline.
That makes sense.
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
-
Are you using Docker or similar technologies today? What's been your experience like? What stack do you use it on? Thank you.
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
I use VMware/VMPlayer when I need to test my development under different environments. I believe Docker, or its original incarnation, was initially created to develop applications that would run within their own OS/VM-like process, so that applications could be distributed without any concerns about the configuration of the host operating systems. That phase of the development appears to have petered out a number of years ago, leaving everyone with the new construct that the current version of Docker is today. Technically, I never understood the need for such implementations if an organization primarily supports one major operating system. Those that support multiple operating systems do so for one of two reasons: either the organization has a definitive requirement to do so, or they are simply stupid and like having additional complexity in their environments so they can feel important. In any event, though the explanations here as to what Docker actually is may be correct, to me it is nonetheless just another form of virtualization, even if the Docker supporters deny this... :)
Steve Naidamast Sr. Software Engineer Black Falcon Software, Inc. blackfalconsoftware@outlook.com
-
I use VMware/VMPlayer when I need to test my development under different environments. [...]
I am still trying to educate myself on all the differences and the pros and cons of containers vs. VMs, and it's not easy to read past the marketing hype.
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
-
Are you using Docker or similar technologies today? What's been your experience like? What stack do you use it on? Thank you.
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
My naïveté made me think that I could run an ASP.NET app on Docker on Linux. It was not fun and it did not work (for me).
-
My naïveté made me think that I could run an ASP.NET app on Docker on Linux. It was not fun and it did not work (for me).
ASP.NET Core should run okay, at least in theory. What problems did you run into, if I may ask?
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
-
ASP.NET Core should run okay, at least in theory. What problems did you run into, if I may ask?
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
-
Ah alright. It should be far easier today :-) See [Deploy .NET Core with Docker to EC2 Container Service](http://docs.servicestack.net/deploy-netcore-docker-aws-ecs)
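For anyone trying this today, a minimal sketch of a Dockerfile for an ASP.NET Core app might look like the following (the microsoft/dotnet and microsoft/aspnetcore tags match the 2.0-era images, and "MyApp" is a placeholder project name):

```
# Build stage: restore and publish the app.
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: the aspnetcore image carries the ASP.NET Core runtime.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Note that the multi-stage build above needs Docker 17.05 or later.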
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
-
Don't use the term "virtual machine" when close to Docker people, unless you are eager to listen to a 45 minute intense talk about how Docker is NOT, I repeat: NOT virtualization! [...]
AFAIK, Docker for WindoZe is mostly meant for development purposes and is not yet recommended for production(*) (at least, the last time I checked). (*) Then again, WindoZe itself isn't recommended for production either ...
-
Don't use the term "virtual machine" when close to Docker people, unless you are eager to listen to a 45 minute intense talk about how Docker is NOT, I repeat: NOT virtualization! [...]
This is what you were talking about: How to host a coder dinner-party | CommitStrip[^] ;)
-
Don't use the term "virtual machine" when close to Docker people, unless you are eager to listen to a 45 minute intense talk about how Docker is NOT, I repeat: NOT virtualization! [...]
-
Thank you. Will be good to read some critical and not-so-positive write-ups too, I suppose. :)
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
Nish Nishant wrote:
Will be good to read some critical and not-so-positive write-ups too, I suppose.
Oh, don't get me wrong. Docker running Linux on a Win10 box is great. It's just that Docker for Windows sucks.
Latest Article - Building a Prototype Web-Based Diagramming Tool with SVG and Javascript Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
-
Just don't do what one of our vendors recently did: containerise an application that, by definition, had to run on a single server. Talk about "ooooh, shiny!". So now that server runs one application, plus Docker to contain it. The same vendor must have a senior app architect addicted to coder newz. In the last two years, what used to be a solid Windows server platform app built with C++ and .NET has added to its integrated app suite ... a module in Java that has to run on Windows, a module in Node.js that has to run on Ubuntu, two web apps, and a module in a Docker container that is only supported on CentOS/RedHat. If only we could change vendors...
-
Mainly for testing purposes, so our tester has a ready-to-run Windows testing environment that can be produced by our Continuous Integration pipeline.
-
Someone here set up some scripts to create the docker images and configure them. From what I understand, I connect to the local docker instance.
You put everything that is to be included in the new image (except the base image) into a subdirectory on the host. (Keep everything else out of that subdirectory!) In the root of that subdirectory you save the build script (the "Dockerfile"), written in the script language described at Docker Build Documentation[^]. Using the CLI interface to the daemon, you give a build command naming your build script (note that other Docker users will frown if it is named anything other than "Dockerfile", with no extension). The build is not done on the host; the entire subdirectory is copied into the Docker daemon, and the daemon does the build.

The Dockerfile language is really primitive. Conceptually, the script loads the specified base image (e.g. a Linux base), then RUNs one or more executables (typically some installer) and COPYs files from the directory tree you supplied into the file system of the new image, one command line at a time. When all the RUN/COPY commands have been performed, the current state, with the newly installed software, is saved as a new image. There is not much more to it, just minor details such as the command to run when the container is started, naming, and other optional things.

Your first Dockerfile could consist of three lines:

```
FROM some-baseimage
RUN myprogram-installer.exe
CMD myprogram.exe
```

"myprogram-installer.exe" would be placed in the directory tree that is copied to the Docker daemon. "myprogram.exe" lives in the file system of this image only, making up a new layer of your image. It exists inside the Docker daemon only, and even there it is invisible to other images, unless they are built using your image as a base. That's it. There is not much to learn, as long as you know how to run installers and start the application...

Note that since the entire build is done in a black box outside your control, there is no way for you to supply any sort of parameters through a dialog. All choices must be specified as arguments on the RUN line (possibly by naming a parameter file that you have COPYed into the image earlier in the Dockerfile, if the installer can be parametrized that way).
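The corresponding build is a single CLI command, run from the directory that holds the Dockerfile (the image name "myimage" is just an example):

```
# The current directory (the "build context") is copied to the daemon,
# which builds it into an image tagged "myimage".
docker build -t myimage .

# Start a container from the freshly built image.
docker run myimage
```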
-
How's your development/debugging experience? Do you create local docker containers? Or do you connect to a remote docker image/instance? (example on AWS/Azure)
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com
Images were local at first; we're now using AWS ECR to store the images themselves. We don't have any need to persist instances at the moment. We're also running the image pull and run in .sh scripts; if you have the choice, I'd suggest Python as an alternative if you just want something portable to run on a *NIX system.

Developing has been fine; the biggest problem has been the stack we've adopted - Docker, AWS ECR and AWS SSM (which we're using as a secure param store) are all new to the team, and the kiddiewinks I work with have barely any BASH exposure, so there has been a learning curve.

Running locally has been fine from a debugging/dev perspective - the tooling is pretty much your favoured IDE around whatever you've decided to wrap this in (in our case BASH); the only stuff you won't be familiar with is the Dockerfiles (not difficult) and the Docker framework you'll need to spin the thing up (in our case the Docker CLI, but stuff is available for Python and [dot net](https://hub.docker.com/r/microsoft/dotnet-framework/)). We also don't really attach a debugger anywhere, as the code is all BASH, so YMMV if you go down a different route. TBH it's not much different from scripting on a Linux OS; our main problem has been testing in our build manager (TeamCity), where we've got the latency of build agents spinning up etc.
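For illustration, a pull-and-run script of that kind can boil down to something like the sketch below; the registry URI, image name, and parameter path are invented, but the aws CLI subcommands (ecr get-login, ssm get-parameter) are the standard ones for ECR and the SSM parameter store of that era:

```
#!/usr/bin/env bash
set -euo pipefail

# Log the Docker daemon in to the private ECR registry.
$(aws ecr get-login --no-include-email --region us-east-1)

# Fetch a secret from the SSM parameter store (path is made up).
DB_PASSWORD=$(aws ssm get-parameter --name /myapp/db-password \
    --with-decryption --query Parameter.Value --output text)

# Pull and run the image (registry URI and image name are made up).
IMAGE=123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker pull "$IMAGE"
docker run -d -e DB_PASSWORD="$DB_PASSWORD" -p 8080:80 "$IMAGE"
```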
KeithBarrow.net[^] - It might not be very good, but at least it is free!
-
Images were local at first; we're now using AWS ECR to store the images themselves. [...]
Thank you, Keith, appreciate the details. :thumbsup:
Nish Nishant Consultant Software Architect Ganymede Software Solutions LLC www.ganymedesoftwaresolutions.com