2 years ago
#18252
Phil
Workflow: Docker Services from Development to Production - Best Practice
I have been developing a web application using Python, Flask, Docker (Compose), and git/GitHub, and I am getting to the point where I need to figure out the best workflow to bring it to production. I have read some articles, but I am not sure which of the various approaches is considered best practice.
My current setup is purely development oriented (a sketch of the compose file follows this list):
- Local Docker using docker-compose to build the various service images (db, backend workers, web app (Flask & uWSGI), nginx)
- Using a .env file for docker-compose to pass configuration to the services
- Source code is bind-mounted from the local Docker host
- db data is stored in a named volume
- Using local git for source control (it is connected to a GitHub repository, but I have not been using it much since I am currently the only developer on the application)
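For reference, here is a simplified sketch of my development docker-compose.yml (the service names are real, but images and paths are simplified, and I am assuming Postgres here as the db):

```yaml
# docker-compose.yml (development) - simplified sketch
services:
  db:
    image: postgres:15            # assuming Postgres; substitute your db
    env_file: .env                # POSTGRES_USER, POSTGRES_PASSWORD, ...
    volumes:
      - dbdata:/var/lib/postgresql/data   # db data in a named volume
  webapp:
    build: ./webapp               # Flask + uWSGI image
    env_file: .env
    volumes:
      - ./webapp:/app             # bind-mounted source for live reload in dev
  worker:
    build: ./worker               # backend worker image
    env_file: .env
    volumes:
      - ./worker:/app
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - webapp

volumes:
  dbdata:
```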
From what I understand, the steps to production could be the following:
- Implement docker-compose overrides to distinguish between dev and prod (see the override sketch after this list)
- Implement Dockerfile multi-stage builds to create prod images that bake the source code into the image and exclude dev dependencies (Dockerfile sketch below)
- Tag and push the production images to a registry (Docker Hub, Google?), or would it be better to push the git repo to GitHub? (example commands after this list)
- [do security scans of the prod images]
- Deploy/pull the prod images from the registry (or build from GitHub) on a service like GKE, for instance
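To make the override idea concrete: as I understand it, the shared settings stay in docker-compose.yml, the dev-only bind mounts move into docker-compose.override.yml (which docker-compose loads by default), and a docker-compose.prod.yml switches to prebuilt images. A sketch (the registry path and tag are placeholders):

```yaml
# docker-compose.override.yml - dev-only settings, loaded automatically
services:
  webapp:
    build: ./webapp
    volumes:
      - ./webapp:/app       # bind mount only in dev
```

```yaml
# docker-compose.prod.yml - run with:
#   docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  webapp:
    image: registry.example.com/myapp/webapp:1.0.0   # placeholder registry/tag
    restart: always
```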
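And a minimal multi-stage Dockerfile sketch for the Flask/uWSGI image (the Python version, paths, and the uwsgi.ini name are assumptions, not my actual files):

```dockerfile
# Stage 1: build wheels for all runtime dependencies
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: production image, without build tools or dev dependencies
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
# Bake the source code into the image instead of bind-mounting it
COPY . .
CMD ["uwsgi", "--ini", "uwsgi.ini"]
```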
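For the tag/push/scan/deploy steps I picture something like this (registry, project, and cluster names are placeholders, and trivy is just one example of an image scanner):

```sh
# Build, tag, and push the prod image (placeholder registry/project names)
docker build -t myapp/webapp:1.0.0 ./webapp
docker tag myapp/webapp:1.0.0 europe-docker.pkg.dev/my-project/myapp/webapp:1.0.0
docker push europe-docker.pkg.dev/my-project/myapp/webapp:1.0.0

# Security scan of the image, e.g. with trivy
trivy image myapp/webapp:1.0.0

# Deploy on GKE (assumes an existing cluster and a webapp Deployment)
gcloud container clusters get-credentials my-cluster --region europe-west1
kubectl set image deployment/webapp webapp=europe-docker.pkg.dev/my-project/myapp/webapp:1.0.0
```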
Is this a common way to do it? Am I missing something?
What is the best way to set up an integration/staging environment between dev and prod, so that I can test new prod builds or debug prod images there before they go live?
Does GKE, for instance, offer an easy way to set up an integration environment? Or could I use the Docker installation on my NAS for that?
Any best practices for backing up production data (most importantly the db data)?
Thanks in advance!
Tags: docker, docker-compose, web-applications, production, dev-to-production