RFE: implement switching roots and sub-images (aka nested builds) #1
Hi @muayyad-alsadi. Thanks for opening this issue. I'm not a fan of having a build publish multiple images - I think there should always be only one result at the end. I'm also not a fan of having a build instruction that sets an image name or tag - that should be done separately after the build, and it is easier to reason about when there is only a single resulting image. While I really like the …
Me neither. I don't like starting from a single … For example, in Ubuntu a single source package for LibreOffice builds more than 150 binary deb packages. Although that is extreme and unlikely to be the most common Docker use case, it is still a real-world example. A more realistic example would be a common code base for both a server and a client, or for a server and its replication daemon, etc. I'm not going to use multiple images myself, but I'm sure someone will need it; it's a valid use case, and it won't cost us anything to implement cleanly.
Exactly - we would have to start with … why not use …
I don't think supporting multiple builds (not really nested) by overloading is trivial, as there is always one single working/destination container and one single fixed build container, and all paths after each …
Both exposed ports, volumes, CMD, and ENTRYPOINT are not allowed before …
Container image building is not and should not be a solution to everything. A Dockerfile is used to produce a single container image. If you need multiple container images you can have several different Dockerfiles and even have common base images if you want to (this is a recommended pattern). If you do have a "monster code-base" with several sub directories that each produce their own container image (which a lot of people do have - including myself) then you can have a separate Dockerfile for each container image you need to build. If you want to automate the process of building all of these images together then you can use a variety of other tools to accomplish that task (many people use Makefiles for this).
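As a sketch of the Makefile-driven approach mentioned above - the directory layout, Dockerfile names, and image tags here are hypothetical, not something from this issue:

```makefile
# Hypothetical: one Dockerfile per component, orchestrated by make.
# (Recipe lines must be indented with tabs.)
IMAGES := server client

.PHONY: all $(IMAGES)
all: $(IMAGES)

$(IMAGES):
	docker build -t myapp-$@:latest -f Dockerfile.$@ .
```

Running `make` then builds every component image in turn, and `make client` rebuilds just one.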
I wasn't proposing overloading them; they would behave the same - it's just the context that changes. The source path is always relative to the context directory (which can be local or in a container) and the destination is always in a container. …
No, it's not just about automating or scripting this. It's not just different images from the same code base, but from the same makefile and the same build. Take a look at Fedora's MariaDB SPEC file; the lines below compile both the MariaDB server and the MariaDB client tools:

```
%cmake . \
    -DCMAKE_INSTALL_PREFIX="%{_prefix}" \
    # ....
make %{?_smp_mflags} VERBOSE=1
# ...
make DESTDIR=%{buildroot} install
```

If you want to create two separate images for the MariaDB server and client, you need either to compile it twice, once per image (i.e. compile everything in a first pass and discard everything except the server, then compile everything again and discard everything except the client), or to use the proposed method.

Let's get back to the LibreOffice/OpenOffice example (which has a server, by the way, and it's used by many other services like BigBlueButton). The last time I compiled OpenOffice it took more than 8 hours (since Fedora has a policy not to include any pre-built artifacts, including jars). Imagine having 150 Dockerfiles, each taking 8 hours to build.

Go projects take no time to compile; maybe that's why you don't see why some people might need this. But there are some elephantine legacy projects that take forever to compile, and repeating this for every component is not a good idea. Putting them all in a single container is, in many cases, not the Docker way (single process per container - although the MariaDB example was OK, because only the server would be running). Sometimes a project requires you to build all components at once because there is some compile-time mapping for plugins, etc.

Currently what people do is build outside Docker on the host (or download package files - rpm, deb - or use an Ubuntu PPA, etc.), then use ADD/COPY to put the artifacts in.
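The host-build workaround described above ends with a Dockerfile that only copies pre-built artifacts in. A minimal sketch - the base image, binary path, and buildroot layout are hypothetical:

```dockerfile
# Hypothetical: binaries were already built on the host beforehand,
# e.g. with `make DESTDIR=./buildroot install`, then copied in here.
FROM fedora
COPY buildroot/usr/sbin/mariadbd /usr/sbin/mariadbd
COPY buildroot/usr/lib64/ /usr/lib64/
CMD ["/usr/sbin/mariadbd"]
```

The obvious downside is that the build itself happens outside Docker, so it depends on the host environment - exactly the problem the proposal tries to avoid.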
After the pivot we have … Sorry for the long post; it's a matter of taste and I'm not trying to change your mind. I'm just trying to give you the full picture so that you can make up your mind knowing all aspects.
Say you have a client/server application that you want to build container images for, the client and server each having separate images that you can spawn containers from to use independently. Your client and server are both from a pretty big C/C++ codebase, and because you're a great code architect you make sure that the client and server share as much code as possible. When you build both the client and server locally it takes about an hour, and you want your resulting images to not have the source code in them either, only the binaries.

The first thing you would do is write a Dockerfile which copies in all of the source code and compiles the shared libraries only. You might want to call this "my-app-core" or something similar. This image on its own isn't very useful: it contains all of your codebase and dependencies and the compiled shared object code, but doesn't have either your client or server binary yet.

Next you'll want to build the client and server binaries separately. You'll write 2 more Dockerfiles: one for the client and one for the server. Starting with the client, you write a Dockerfile which builds on the core image. The Dockerfile for the server would be similar.

You'll probably find that building your images this way is much faster than if you had built the client and server separately from scratch. The total build time is reduced by the fact that you used the common core base image, which contained the pre-built shared libraries, and you only had to build it once. Depending on how you do it, rebuilds after changes to only the client or only the server code may be even faster, because you can take advantage of the build cache and never have to rebuild the common core base image.

When you want to publish your application to your users, you only need to release the resulting minimal client and server images, not the bloated intermediate "my-app-core" image.
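A minimal sketch of the three Dockerfiles described above - the base image, make targets, and binary paths are all hypothetical placeholders:

```dockerfile
# Dockerfile.core -- the "my-app-core" image: sources + shared libraries only
FROM gcc
COPY . /src
WORKDIR /src
RUN make shared-libs          # assumed make target building only the .so files

# Dockerfile.client -- builds just the client on top of the core image:
#   FROM my-app-core
#   RUN make client && make install-client
#   ENTRYPOINT ["/usr/local/bin/my-app-client"]

# Dockerfile.server -- similar, building and installing the server binary:
#   FROM my-app-core
#   RUN make server && make install-server
#   ENTRYPOINT ["/usr/local/bin/my-app-server"]
```

Because `my-app-core` is built once and then reused via `FROM`, the expensive shared-library compile is paid a single time.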
I actually do something similar. I currently have a system that uses Fabric/Ansible, and my base image has supervisord and openssh disabled. My build script uses docker exec to start sshd, then injects the host public key, then runs the Fabric/Ansible build, then starts another base Docker image, then scps or pipes tars between containers, then disables sshd, cleans up, and commits.
People in the Atomic project have their own integrated builder called reactor (it integrates with Koji, the build system in the Fedora infrastructure that compiles all Fedora/EPEL rpms, pushes them, etc.): https://github.com/projectatomic/atomic-reactor. But I believe a small, simple TAGGED_SWITCH_ROOT would eliminate all of those things and make them much simpler.
Hi, everyone. We found that being able to have multiple FROM instructions in a single Dockerfile is very convenient. Take a look at https://github.com/grammarly/rocker
Regarding rocker with multiple FROMs, like this one:

```
FROM google/golang:1.4
# ... (build image)
FROM busybox
# ... (run image)
```

First, it's a nice thing that just works and accomplishes the job. But I have concerns: my objective is to leave the host alone. You should not depend on what is …
As per our discussion in the links below, the proposed change is:

I'm not sure about `SWITCH_ROOT_N_PUBLISH` or `SWITCH_ROOT_W_TAG` or `TAGGED_SWITCH_ROOT`, but the idea is that the build root is kept. If there is no `SWITCH_ROOT`, the build root is used and tagged. If there is at least one `SWITCH_ROOT`, the build root is kept aside and a new image is used as root for the tag passed (for `TAGGED_SWITCH_ROOT`, which can be used multiple times). Paths under `new_root` are still with respect to the build image, which is not yet discarded.

Using the above with `docker build -t foobar:2.5` would result in two images, `foobar:2.5` and `foobar-client:2.5`.

We could even support a special `<other-image>` that starts with `self`.

The idea originates from RPM SPEC files; here is an example: http://pkgs.fedoraproject.org/cgit/mariadb.git/tree/mariadb.spec
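A hypothetical Dockerfile using the proposed instruction might look like this. The instruction name, argument syntax, base images, and paths are all placeholders - nothing here is a finalized design from this issue:

```dockerfile
# Hypothetical sketch of the proposal, not a finalized syntax.
FROM fedora
COPY . /src
RUN make && make DESTDIR=/buildroot install   # build server and client once

# With no SWITCH_ROOT, the build root itself would be tagged (foobar:2.5).
# A TAGGED_SWITCH_ROOT starts a new root image; source paths still refer
# to the build image, which is not yet discarded:
TAGGED_SWITCH_ROOT busybox client        # assumed to yield foobar-client:2.5
COPY /buildroot/usr/bin/foobar-client /usr/bin/
CMD ["/usr/bin/foobar-client"]
```

The key point is that the expensive `RUN make` happens once, and each switched root only receives the artifacts it needs.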
moby/moby#7115 (comment)
moby/moby#15271
moby/moby#14298 (comment)