56

I am working on a simple Docker image that has a large number of environment variables. Is it possible to import an environment variable file, as you can with docker-compose? I cannot find anything about this in the Dockerfile documentation.

Dockerfile

FROM python:3.6

ENV ENV1=9.3
ENV ENV2=9.3.4
...

ADD . /

RUN pip install -r requirements.txt

CMD [ "python", "./manager.py" ]

I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you cannot load them from a file, you cannot commit the Dockerfile to GitHub without exposing all the values.

  • Depending on what you're trying to do, the --build-arg flag to docker build may be useful. Commented Oct 24, 2017 at 19:02
  • Do you need the environment variables for building the image, or do you need them when you run the image?
    – gogstad
    Commented Oct 24, 2017 at 20:45
  • I need them when I run the image. Commented Oct 24, 2017 at 20:46
  • You can do it during docker run, not docker build. You can use docker run --env-file [path-to-env-file] to provide the environment variables to the container from a .env file.
    – kovac
    Commented Jun 12, 2018 at 9:48

6 Answers

53

Yes, there are a couple of ways you can do this.

Docker Compose

In Docker Compose, you can supply environment variables in the file itself, or point to an external env file:

# docker-compose.yml
version: '2'
services:

  service-name:
    image: service-app
    environment:
    - GREETING=hello
    env_file:
    - .env
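
The .env file referenced there is a plain list of KEY=value lines, with # for comments. An illustrative example (the names and values here are assumptions):

# .env
GREETING=hello
APP_TITLE=demo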

Incidentally, one nice feature that is somewhat related is that you can use multiple Compose files, with each subsequent one adding to the other. So if the above were to define a base, you can then do this (e.g. per run-time environment):

# docker-compose-dev.yml
version: '2'
services:

  service-name:
    environment:
    - GREETING=goodbye

You can then run it thus:

docker-compose -f docker-compose.yml -f docker-compose-dev.yml up

Docker only

To do this in Docker only, use your entrypoint or command to run an intermediate script, thus:

#Dockerfile

....

ENTRYPOINT ["sh", "bin/start.sh"]

And then in your start script:

#!/bin/sh

set -a            # export every variable the env file defines
. ./.env          # POSIX sh spells 'source' as '.'
set +a

exec python /manager.py   # exec so python receives signals directly

I've used this related answer as a helpful reference for myself in the past.

Update on PID 1

To amplify my remark in the comments: if you make your entry point a shell or Python script, Unix signals (stop, kill, etc.) will likely not be passed on to your process. That script becomes process ID 1, the parent of all other processes in the container; on Linux/Unix, PID 1 is expected to forward signals to its children, but unless you explicitly implement that, it won't happen.

To rectify this, you can install an init system. I use dumb-init from Yelp. This repo also features plenty of detail if you want to understand it a bit better, or simple install instructions if you just want to "install and forget".
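
For instance, a minimal sketch of wiring it in (the release version and download URL are assumptions; check the Yelp releases page for the current ones):

# Dockerfile
# fetch the dumb-init binary (v1.2.5 assumed; adjust to the current release)
ADD https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init

# dumb-init runs as PID 1 and forwards signals to the command it wraps
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["sh", "bin/start.sh"]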

  • But the question is if this is possible without docker-compose? It seems like a bit of overkill if all I want is this .env file... Commented Oct 24, 2017 at 20:56
  • @hY8vVpf3tyR57Xib: ah, I see what you mean. I wonder, perhaps RUN source .env? Or do CMD ["sh", "start.sh"] which does that source command prior to starting your Python program.
    – halfer
    Commented Oct 24, 2017 at 21:00
  • @halfer Actually, I would be interested in your advice on "how to handle unix signals correctly". I do not know what you mean by dumb init system. Commented Oct 31, 2019 at 17:38
  • @RichardKiefer: answer edited, let me know if you have further questions.
    – halfer
    Commented Oct 31, 2019 at 17:51
17

I really like @halfer's approach, but this could also work. docker run takes an optional parameter called --env-file, which is super helpful.

So your Dockerfile could look like this:

COPY .env .env

and then in a build script use:

docker build -t my_docker_image . && docker run --env-file .env my_docker_image
  • VOLUME ["/conf.d", "/mnt/logs"] - what does it mean? Commented Feb 28, 2020 at 3:35
  • Ahh good catch! This would be mounting volumes to your docker container. These two volumes would be specifically for logging and monitoring tools like DataDog. Commented Feb 28, 2020 at 17:41
  • Do you want to remove FROM and VOLUME strings to make the answer more readable? Commented Feb 28, 2020 at 17:55
  • Why do I need COPY .env .env? The referenced --env-file for docker run could have any name, right? E.g. docker run --env-file .local.env my_docker_image. Am I missing some point here?
    – Colin
    Commented Feb 28, 2021 at 17:45
14

There are various options:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file

docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash

(You can also just reference previously exported variables, see USER below.)

The one answering your question about an .env file is:

$ cat env.list
# This is a comment
VAR1=value1
VAR2=value2
USER

$ docker run --env-file env.list ubuntu env | grep VAR
VAR1=value1
VAR2=value2

$ docker run --env-file env.list ubuntu env | grep USER
USER=denis

You can also load the environment variables from a file. This file should use the syntax variable=value (which sets the variable to the given value) or variable (which takes the value from the local environment), and # for comments.

Regarding the difference between variables needed at (image) build time versus (container) run time, and how to combine ENV and ARG for dynamic build arguments, you might try this:
ARG or ENV, which one to use in this case?
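
As a hedged sketch of that pattern (the variable name is illustrative): an ARG exists only while the image builds, so copy it into an ENV if the container also needs it at run time:

# Dockerfile
ARG APP_VERSION=0.0.0            # available only during docker build
ENV APP_VERSION=${APP_VERSION}   # persisted into the running container

You would then build with something like docker build --build-arg APP_VERSION=1.2.3 -t my-app .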

1

Another option is to use the $BASH_ENV environment variable and override the SHELL in your Dockerfile: when bash starts non-interactively, it sources the file named by $BASH_ENV, so every RUN executed via bash -c picks up the exports. This probably won't work for all situations, because it doesn't set the environment variables globally, but it might work for you.

env.sh

export FOO=BAR

script.sh

echo "FOO = ${FOO}" > /test

Dockerfile

FROM python:3.6

COPY env.sh /env.sh

ENV BASH_ENV=/env.sh

SHELL ["/bin/bash", "-c"]

COPY script.sh /script.sh

RUN /script.sh

And this will give the following output:

$ docker build -t example .
$ docker run example cat /test
FOO = BAR
0

If you need environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple export statements and then launches your process.
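
A minimal sketch (the file name and variables are illustrative):

#!/bin/sh
# launcher.sh - set up the environment, then hand over to the real process
export ENV1=9.3
export ENV2=9.3.4
exec python ./manager.py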

If you need them at build time, have a look at the ARG and ENV statements. You'll need one per variable.
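
As a side note, a single ENV instruction can also set several variables at once, which keeps a long list compact (the values here are illustrative):

# Dockerfile
ENV ENV1=9.3 \
    ENV2=9.3.4 \
    ENV3=example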

0

Most VPS control panels (open-source PaaS like Dokku, CapRover, Easypanel) don't support docker-compose.yml, so I had to find an alternative using the --env-file option in a Makefile:

.PHONY: build-staging
build-staging: ## Build the staging docker image.
    docker build -f docker/staging/Dockerfile -t easypanel-nextjs:0.0.1 .

.PHONY: start-staging
start-staging: ## Start the staging docker container.
    docker run --detach --env-file .env.staging --publish 3000:3000 --restart unless-stopped easypanel-nextjs:0.0.1

.PHONY: stop-staging
stop-staging: ## Stop the staging docker container.
    docker stop $$(docker ps -a -q --filter ancestor=easypanel-nextjs:0.0.1)

Now, I just do this in the terminal:

$ make build-staging
$ make start-staging
$ make stop-staging

Obviously, the syntax becomes much cleaner with Docker Compose, but most VPS control panels don't support it, so this is as good as it gets.
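
For comparison, a hedged sketch of roughly the same setup as a compose file (the service name and build context are assumptions):

# docker-compose.yml (illustrative)
services:
  web:
    build:
      context: .
      dockerfile: docker/staging/Dockerfile
    env_file: .env.staging
    ports:
      - "3000:3000"
    restart: unless-stopped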

My repo that uses this method -> https://github.com/deadcoder0904/easypanel-nextjs-sqlite
