Tutorial: Using Google Cloud SQL Proxy with Wildfly in Kubernetes

Partha Bhattacharya
4 min read · May 15, 2018

This is a hands-on introduction to using the Google Cloud SQL Proxy with Wildfly and creating a datasource for use in a J2EE application.

As developers we spend a lot of time exploring and learning new things while working. Sometimes we spend nights figuring out small issues that should be obvious, but are obscured by improper or outdated documentation or by the lack of a concrete example. Recently I was working on a microservice that uses Google Cloud SQL as its database. While working on the project I found plenty of information on how to deploy or connect to Cloud SQL, but no document that explains the steps in one place. So I decided to write this short tutorial in the hope that it helps someone.

Prerequisites:
1. Google Kubernetes Engine (v1.8+)
2. Create a Cloud SQL instance (we will use the MySQL engine)
3. Create a database in the Cloud SQL instance
4. Create a user to connect to the database

Prepare the environment:
We have to create a service account to access the database instance.
Following are the steps to create one:
1. Log in to the Google Cloud console
2. Go to Project Settings
3. Click Service accounts
4. Click Create service account
5. Follow the instructions in the Google Cloud documentation. For Role, select Cloud SQL > Cloud SQL Client. Alternatively, you can use the primitive Editor role by selecting Project > Editor, but note that the Editor role includes permissions across Google Cloud Platform.
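
If you prefer the command line, the same service account can be created with gcloud. This is only a sketch: the account name cloudsql-client is a placeholder of my choosing, and [PROJECT_ID] must be replaced with your own project.

$ gcloud iam service-accounts create cloudsql-client \
    --display-name "Cloud SQL client"
$ gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member serviceAccount:cloudsql-client@[PROJECT_ID].iam.gserviceaccount.com \
    --role roles/cloudsql.client
$ gcloud iam service-accounts keys create credentials.json \
    --iam-account cloudsql-client@[PROJECT_ID].iam.gserviceaccount.com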

Create your Secrets:
To use the service account created above, we will create a Kubernetes secret that lets our Kubernetes Engine application access the data in our Cloud SQL instance.
Download the key from the service account as a JSON file and save it as credentials.json. Before we can connect through the proxy we also have to enable the Cloud SQL Administration API.

$ kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=[PATH_TO_DOWNLOADED_JSON_FILE]
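
The Cloud SQL Administration API mentioned above can also be enabled from the command line:

$ gcloud services enable sqladmin.googleapis.com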

Create deployment file:
For deployment in Kubernetes we will use the latest Wildfly image and gce-proxy version 1.11. Since our goal is to understand how to integrate the Cloud SQL Proxy with Wildfly and use it from Wildfly, we will take some shortcuts. Ideally one should use a custom Wildfly image with the admin user set and the MySQL client registered as a module, but creating a custom image is out of the scope of this document.

Create a file wildfly.yaml and add the code below. The deployment has two containers: the Wildfly image and the Cloud SQL Proxy image. The volumeMounts section mounts the instance credentials so that the proxy can connect to the database instance. The Wildfly image exposes the management console on port 9990.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wildfly-server
  labels:
    app: wildfly-server
spec:
  template:
    metadata:
      labels:
        app: wildfly-server
    spec:
      containers:
        # [START wildfly_container]
        - name: wildfly-server
          image: jboss/wildfly
          command:
            - "/opt/jboss/wildfly/bin/standalone.sh"
          args:
            - "--server-config"
            - "standalone-full-ha.xml"
            - "-b"
            - "0.0.0.0"
            - "-bmanagement"
            - "0.0.0.0"
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 9990
              protocol: TCP
        # [END wildfly_container]
        # Change [INSTANCE_CONNECTION_NAME] below to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # $PROJECT:$REGION:$INSTANCE
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
Deploy Wildfly in the Kubernetes cluster:

$ kubectl apply -f [PATH_TO_YAML_FILE]/wildfly.yaml

Check the pod status (the -w switch monitors the status continuously):

$ kubectl get pods -w
NAME                              READY     STATUS
wildfly-server-645df8748b-b9jhg   0/2       ContainerCreating

Once the pods are running, we have to add an admin user to the Wildfly instance to test the Cloud SQL Proxy. Let us log in to the container:

$ kubectl exec -it [POD_NAME] -- /bin/bash

Execute the following once you get the container prompt:

$ wildfly/bin/add-user.sh admin [PASSWORD] --silent
$ exit

This will close the container shell.

Expose the Wildfly admin console using a Kubernetes load balancer. To do this, log in to the Google Cloud console and open Kubernetes Engine. Select Workloads from the left panel and open the deployed pod. Open the Kubernetes actions menu and select Expose. In the port section enter port 9990 and target port 9990, and choose the Load balancer option. Save the changes. It will take some time to create the service.
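
If you prefer the command line over the console, the following kubectl sketch should create an equivalent load-balancer service (the service name wildfly-admin is a placeholder of my choosing):

$ kubectl expose deployment wildfly-server \
    --name=wildfly-admin \
    --type=LoadBalancer \
    --port=9990 --target-port=9990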

Go back to the console and check the service status; it takes some time to assign an external IP.

$ kubectl get svc
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP
kubernetes             ClusterIP      10.35.240.1     <none>
wildfly-server-6qx2x   LoadBalancer   10.35.251.199   35.200.229.224

Open a browser and go to http://[EXTERNAL_IP]:9990. This will open the Wildfly administration console. Provide the admin credentials we created earlier.

Testing Cloud SQL Proxy:
We first have to deploy the MySQL client jar to be able to connect to the database. In the administration console select the Deployments menu → Add, and select the MySQL client jar (download it first if required).
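
Alternatively, the jar can be deployed with jboss-cli from inside the container. This is a sketch that assumes the jar was copied into the container at /tmp first; replace [VERSION] with your driver version:

$ /opt/jboss/wildfly/bin/jboss-cli.sh --connect \
    --command="deploy /tmp/mysql-connector-java-[VERSION].jar"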

In the administration console select the Configuration menu.
1. Select Subsystems → Datasources → Non-XA Datasource
2. Add a new datasource
3. Choose MySQL Datasource and click Next
4. Provide the datasource attributes and click Next
5. Click Detected Drivers, select mysql-connector-java-[VERSION].jar and click Next
6. In the Connection URL input add "jdbc:mysql://127.0.0.1:3306/[YOUR_DB]"
7. Provide the username and password of the DB user created earlier and click Next
8. Test Connection
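
The same datasource can also be created non-interactively with jboss-cli. A sketch, where the datasource name MySqlDS and the JNDI name are placeholders of my choosing and the bracketed values must be replaced with your own:

$ /opt/jboss/wildfly/bin/jboss-cli.sh --connect \
    --command="data-source add --name=MySqlDS \
      --jndi-name=java:/jdbc/MySqlDS \
      --driver-name=mysql-connector-java-[VERSION].jar \
      --connection-url=jdbc:mysql://127.0.0.1:3306/[YOUR_DB] \
      --user-name=[DB_USER] --password=[DB_PASSWORD]"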

Happy coding.
