Docker for Windows
-
My boss has heard enthusiastic stories about Docker and now wants to use it for distributing Windows applications (both .NET Framework and .NET Core). I could never get enthusiastic about Docker since it cannot run WinForms applications, but I decided to give it a chance. I made a simple .NET Core console "Hello World" test application, used Add > Docker Support in VS2017, built it, and to my surprise everything built on the first try. Nevertheless, when I saw the size of the image with Windows Nano Server - more than 400 MB - my enthusiasm was gone again. Am I the only one who cannot get enthusiastic about Docker, or is it just me? I would like to know what your opinion is on the matter :confused:
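For reference, the generated Dockerfile looked roughly like the sketch below; the project name (HelloWorld) and the exact base-image tags are approximations from memory and depend on the installed SDK and Windows version, so don't take them literally.

```dockerfile
# Rough sketch of a multi-stage Dockerfile for a .NET Core console app on Nano Server.
# Base-image tags and the assembly name are assumptions - check what matches your setup.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM microsoft/dotnet:2.1-runtime-nanoserver-1803
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
```

Nearly all of those 400 MB sit in the Nano Server runtime base layer; the published console app itself only adds a few hundred KB on top of it.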
Docker makes more sense with Linux as a target, as you have distros like Alpine that have been designed to be *really* minimal (a base Alpine image is about 5 MB). I reworked an old Windows server of ours (very old - it was running Server 2003 R2!!), which hosted a web server, Git & Mercurial repo access, Redmine and MediaWiki, into a set of 5 Docker images totalling somewhere around 100 MB. By separating each application into its own container, I can update each of them without worrying about breaking the others through some common dependency. [docker-compose](https://docs.docker.com/compose/) makes it pretty easy to build, connect and run a set of containers that are meant to run in unison - see the sketch below. But for distributing desktop apps? Doesn't make too much sense just yet, especially with Windows, unless you have a large, involved environment you want to make available... However... something like [Windows Sandbox](https://techcommunity.microsoft.com/t5/Windows-Kernel-Internals/Windows-Sandbox/ba-p/301849) is a stepping stone towards using containers *or containerisation technology* for running desktop applications.
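A trimmed-down compose file for that kind of setup looks something like this - the service and image names are only illustrative, not my actual configuration, and the database wiring and volumes are omitted:

```yaml
# docker-compose.yml - illustrative only; service/image names are placeholders,
# and real services would also need volumes and database connection settings.
version: "3"
services:
  web:
    image: nginx:alpine        # front-end web server / reverse proxy
    ports:
      - "80:80"
    depends_on:
      - wiki
      - redmine
  wiki:
    image: mediawiki:latest    # MediaWiki behind the proxy
  redmine:
    image: redmine:latest      # issue tracking
    depends_on:
      - db
  db:
    image: postgres:alpine     # database shared by the apps that need one
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up -d` then brings the whole set up, and each container can be rebuilt or updated independently of the others.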
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
-
Thanks, useful information :-\
-
Depending on what the application does, you might be able to skip Docker/containers altogether and go for a "serverless" or Function-as-a-Service solution, which can be done with .NET / .NET Core code.
Aha, going to try that! Thanks :-\
-
Back in the day, we could write an executable that consumed less than 500 bytes that could take down a major city's power grid. Ahh, those were good times indeed.
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
-
That's why I enjoy embedded chip programming. For me, it's more fun to work within 1 KB of memory than to figure out some modern application framework that might not be valid in a couple of years' time.
-
We are in the process of introducing Docker, now that we are moving a lot of development activity over to Linux. Our Linux build nodes will essentially have no utilities or tools at the OS level; everything will be put into Docker images. Since Docker on Windows can run Linux containers, and everything is in the container, you can run it on your Windows desktop (assuming you have 64-bit Windows 10 Pro or another version that can run Hyper-V). We are going to run our build nodes under Linux, but the developers may run the same Docker images on their desktops.

In the Docker world, there are two schools: one school is to wrap a single tool into a container, so a build script using five different tools would run in the OS and activate the five images one by one. The other school says to put the complete build environment, all required tools including a command shell, into one huge container, and run the script in (or give commands interactively to) the shell inside the container. We have selected the second approach (a rough sketch follows at the end of this post), so nothing depends on the host OS; the scripts (or interactive commands) are identical on the Linux build nodes and on the Windows desktop.

Docker on Windows has a significant startup time, regardless of whether we are in Linux or Windows mode. Once the container is running, there are no significant delays. The system is stable; we haven't had many problems (none that I can remember that weren't caused by our own inexperience!). Installation went without problems. Our experience is with Windows Pro; the implementation is somewhat different on Windows Server.

In our first step, the images are of the Linux flavor, running Linux tools. We may go on to the next step: making Windows-flavor containers, running Windows tools. But since Docker is suitable only for command-line tools (and optionally X11, but X11 is virtually unknown in Windows), you are limited to tools that have a decent CLI. That is a rather strict limitation in a Windows environment; lots of good tools require a GUI. For any given non-trivial development task, you are likely to want to use at least one GUI tool, so you can't put the entire tool set into a container but must do part of the job in containers, part of it outside.

What are our reasons for going into Docker? Our build nodes run jobs for a multitude of projects, requiring different tool versions, library versions and what have you. You can never know what state the previous job on the same node left behind, or which tool versions it used. So for
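As a rough illustration of the second school - the image name, mount path and script are placeholders, not our actual setup - a build run looks something like this:

```sh
# Illustrative only: "build-env" stands for an image containing the whole toolchain.
# The host only needs Docker itself; the source tree is mounted in from the host.

# Non-interactive: run the build script inside the container.
docker run --rm -v "$PWD":/src -w /src build-env:latest ./build.sh

# Interactive: open a shell in the same environment and work inside it.
docker run --rm -it -v "$PWD":/src -w /src build-env:latest /bin/bash
```

The same pattern then works on a Linux build node and (give or take shell quoting) on a Windows desktop running Linux containers.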
-
One thing to keep in mind is that the base image is immutable and is shared among all Docker services on the machine that use the same base image. So if you had 5 services that consisted solely of 1 MB executables, and they all used the same Windows Nano Server base image, you'd only use up about 405 MB of disk space. The images built on top of the base image are just stored as a set of diffs from the base image. Though I think if you ask Docker how big each image is, it'll report 401 MB - the size of the base image plus the diff - so unless you know about the immutability, it'll look like you're using up more space than you actually are.

This can make Hello World apps look huge, but if you keep in mind that you're only going to have one copy of that base image shared across all apps that use it, it's not so bad. If you take care to ensure that all of your apps and services use the same base image, it can be a pretty sane way to deploy your apps to servers, because you get the benefits of having your apps completely self-contained without needing to install each one in its own VM.

The massive Hello World isn't as much of an issue on the Linux side of things if you build on top of an Alpine Linux image. I've packaged up a few server apps written in Go, which statically links everything into a single executable, and my whole image (base + diff) was under 10 megabytes.
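A rough back-of-the-envelope view of that (the numbers are just the round figures from above, not real output):

```sh
# Rough arithmetic for 5 services sharing one Nano Server base image (illustrative figures):
#   base layer, stored once:               ~400 MB
#   5 app layers (diffs), ~1 MB each:      ~  5 MB
#   actual disk usage:                     ~405 MB
#
# "docker images" reports each image at roughly base + diff (~401 MB apiece),
# which looks like ~2 GB in total, but the shared base layer exists only once.
# "docker system df" gives a per-host summary that accounts for shared layers.
docker images
docker system df
```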
-
Wow, that's more of an article than a reply. One of my colleagues is already building Docker Linux containers for a small board controller; these are tiny, only about 3 MB. Hence my boss's enthusiasm, I think - he probably assumes Windows containers will be that size too, but this will obviously be disappointing to him... Thanks for taking the time to reply!
-
Jealous, jealous of you Linux guys :-\
-
Maybe Microsoft will put together a really tiny Windows Docker base image. What's smaller than nano? Windows Server Muon, maybe?