In our previous post, we saw how to set up a basic Oracle image to serve as our on-demand database. Now let’s look at how to modify our data and then export the result as a new image.
We will need a Docker registry in place. For the purpose of this test we will use the official Docker Hub, but you might consider setting up a private Docker registry to manage private images securely. Let’s start by creating a new repository to hold the different versions of our image. Once you are logged in to Docker Hub, just click the button for creating a new repository.
The full name of the image you create will be your username, followed by a slash, and then the name you choose. Fill in the required fields and decide whether you want the repository to be public or private. Private repositories on Docker Hub are limited, so we will go with “public” for now.
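For example, assuming a (hypothetical) Docker Hub username “agutier” and a repository named “application_db”, the full image name is composed like this:

```shell
# Full image name: <docker-hub-username>/<repository-name>
# "agutier" and "application_db" are placeholders; use your own values.
USERNAME="agutier"
REPO="application_db"
IMAGE="${USERNAME}/${REPO}"
echo "${IMAGE}"   # prints: agutier/application_db
```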
Once done, the browser will redirect us to our newly created, and still empty, repository.
Let’s go back to our local Docker machine. Execute your set of SQL scripts the way you usually do (via the SQL*Plus command line, SQL Developer, or whatever method you normally use to deploy database changes), starting from scratch. This leaves the container with an additional layer of data that you can turn into a Docker image. Once saved as an image, its contents are immutable, so every time you instantiate a container from that image, you can safely modify all data and structure without worrying about impacting other team members.
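As a sketch only (the container name “oracle_db”, the credentials, and the scripts directory are all assumptions about your setup), the replay could be driven through “docker exec” and SQL*Plus. Here we just compose and print the command that would feed one script into the container:

```shell
#!/bin/sh
# Placeholder values; adapt to your environment.
CONTAINER="oracle_db"
CONNECT="system/oracle@//localhost:1521/XE"

# Command that pipes a SQL script into SQL*Plus inside the container.
EXEC_CMD="docker exec -i ${CONTAINER} sqlplus -S ${CONNECT}"

# With a live daemon you would loop over your scripts in order, e.g.:
#   for f in scripts/*.sql; do ${EXEC_CMD} < "$f"; done
echo "${EXEC_CMD}"
```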
Go to your Docker console and find your container ID with the “docker ps” command. In the following figure we have highlighted this value.
We can use the “docker commit” command to save the changes made to the container since it was instantiated, creating a new image from them. Make sure to tag it following the same naming structure used when creating your repository. It is not necessary to type the full container ID; a few characters are enough for Docker to identify the right container.
docker commit 2de7a8 agutier/application_db
Now let’s check the images hosted on our system with the “docker images” command. Something along the lines of the following screenshot should appear.
This image is only stored locally, so the rest of the team cannot use it yet (although we could also export the image to a plain file with “docker save” and share that file). Let’s push it to our repository instead. First we have to authenticate with the “docker login” command, using our Docker Hub credentials.
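Both routes, registry or plain file, boil down to a couple of commands. The file and image names below are the placeholders used throughout this post, and the commands are printed here as a dry run:

```shell
#!/bin/sh
# Route 1: registry. Authenticate first, then tag and push (next step).
LOGIN_CMD="docker login"

# Route 2: no registry. Export the image to a tar archive and share it;
# a teammate restores it with "docker load".
SAVE_CMD="docker save -o application_db.tar agutier/application_db"
LOAD_CMD="docker load -i application_db.tar"

printf '%s\n%s\n%s\n' "${LOGIN_CMD}" "${SAVE_CMD}" "${LOAD_CMD}"
```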
Now we can publish the image. First, we give it a version tag with “docker tag”; let’s say it is version 1.0. Then we send the image to Docker Hub with the “docker push” command. This operation will take a while, depending on the size of the database you want to save.
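Concretely, with the image name used earlier (substitute your own username), the two commands look like this, shown as a dry run:

```shell
#!/bin/sh
IMAGE="agutier/application_db"   # <your-username>/<repository>
VERSION="1.0"

# Step 1: give the local image an explicit version tag.
TAG_CMD="docker tag ${IMAGE} ${IMAGE}:${VERSION}"
# Step 2: upload it to Docker Hub (requires a prior "docker login").
PUSH_CMD="docker push ${IMAGE}:${VERSION}"

printf '%s\n%s\n' "${TAG_CMD}" "${PUSH_CMD}"
```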
Now check your Docker Hub repository: the newly tagged image will be listed. This means that anyone can now pull and run your image on their local system, with the changes you committed. Why not automate this process, so that any commit to your SQL scripts repository generates a new tagged image? This is just the first step in managing your infrastructure in a Continuous Integration and Delivery pipeline.
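That automation could look like the following CI step. Everything here is an assumption: the build-number source is hypothetical, and the container ID and image name simply mirror the examples above:

```shell
#!/bin/sh
# Hypothetical CI step: after the SQL scripts have run, snapshot the
# container and publish a version tagged with the build number.
BUILD_NUMBER="42"        # in real CI, taken from the server
CONTAINER_ID="2de7a8"    # ID of the freshly loaded container
IMAGE="agutier/application_db"

COMMIT_CMD="docker commit ${CONTAINER_ID} ${IMAGE}"
TAG_CMD="docker tag ${IMAGE} ${IMAGE}:${BUILD_NUMBER}"
PUSH_CMD="docker push ${IMAGE}:${BUILD_NUMBER}"

# Printed as a dry run; a real pipeline would execute them in order.
printf '%s\n%s\n%s\n' "${COMMIT_CMD}" "${TAG_CMD}" "${PUSH_CMD}"
```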
Written by Álvaro G. Cachón