One of the most common issues we find in development departments is cross-impact in shared databases when making structural changes. Suddenly "Team A" changes a column definition and, as a result, "Team B"'s code no longer works. Historically, this is solved by duplicating the database and assigning one "environment" per team. While this may work for a while, businesses tend to scale, and at some point the "fix" will cause more trouble than it prevents.
Following on from last month's post, we will delve deeper into the options Docker offers our team, and into how generating databases on the fly resolves cross-team development conflicts.
Let's start with a basic premise: these databases should be ephemeral. Data and structure are easily replicated, so we care little about the contents. We will treat these environments as cattle, not as pets. We aim to deliver these collections of data en masse, several times per day, perhaps even several times per developer.
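To make the cattle metaphor concrete, here is a minimal sketch of a helper that stamps out one disposable container per developer. The naming scheme, port range, and helper names (`db_port_for`, `db_up`) are our own convention, not part of Docker or the image:

```shell
# Map a developer name to a stable host port in the range 49100-49499,
# using cksum as a cheap, reproducible hash.
db_port_for() {
  printf '%d' $((49100 + $(printf '%s' "$1" | cksum | cut -d ' ' -f 1) % 400))
}

# Replace any existing container for this developer with a fresh one.
db_up() {
  dev="$1"
  port=$(db_port_for "$dev")
  docker rm -f "oracle-$dev" >/dev/null 2>&1 || true   # cattle: no ceremony
  docker run -d --name "oracle-$dev" -p "$port:1521" wnameless/oracle-xe-11g
  echo "oracle-$dev listening on host port $port"
}
```

Because the port is derived from the name, `db_up alice` is idempotent: running it again simply discards the old container and brings up a fresh one on the same port.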
In this walkthrough we will use Oracle 11g as our database of choice, but the base image can easily be swapped for our preferred database management system. We will not cover how to set up Docker on your system.
First, let's get a blank Oracle image running. On our Docker host, type the following:
docker run -d -p 49101:1521 wnameless/oracle-xe-11g
Docker will download the image if it is not already on the host's drive, and return a long id as acknowledgement of the order. The command "docker ps" gathers important information about this container, as shown in the following capture.
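Once several of these containers are running side by side, the "docker ps" output can also be narrowed down with the standard `--filter` and `--format` flags. A small sketch (the function name is our own):

```shell
# List name, port mapping and status for every container started
# from our Oracle base image.
list_oracle_containers() {
  docker ps --filter "ancestor=wnameless/oracle-xe-11g" \
            --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}'
}
```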
If we fire up SQL Developer and connect to our host machine on port 49101, using the connection data shown on the base image's information page, a basic setup with the data dictionary will greet us.
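For reference, the JDBC connect string SQL Developer ends up using follows the thin-driver pattern sketched below. The SID "xe" is the one listed on the image's information page at the time of writing, so double-check it against the page for your version; the helper name is ours:

```shell
# Build a thin-driver JDBC URL from host, port and SID (default "xe").
oracle_jdbc_url() {
  printf 'jdbc:oracle:thin:@%s:%s:%s\n' "$1" "$2" "${3:-xe}"
}

oracle_jdbc_url localhost 49101
# -> jdbc:oracle:thin:@localhost:49101:xe
```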
The "-p 49101:1521" parameter in the command we just executed means "bind port 49101 on the host machine to port 1521 in the generated container". Thus, we can reach Oracle's default port from outside the container.
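When the host port was chosen dynamically or simply forgotten, `docker port` reports the binding. A sketch of extracting just the port number, assuming the container maps 1521 as above (the helper name is ours):

```shell
# Ask Docker which host port is bound to the container's 1521, and keep
# only the port number ("docker port" prints lines like "0.0.0.0:49101").
oracle_host_port() {
  docker port "$1" 1521/tcp | head -n 1 | awk -F: '{print $NF}'
}
```

For example, `oracle_host_port oracle-alice` would print something like `49101`, ready to feed into a connect string.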
The next step is to load a script or dump that will generate our data structure and populate it with data. For that, look forward to part 2 of this post series.
Written by Álvaro G. Cachón