
Terraform, null_resources & Azure ASM API

Recently, I was trying to bring up virtual machines in Microsoft Azure but ran into an interesting and annoying problem: there is no way to upload SSH keys via the Terraform DSL. There is a provision to provide an ssh_key_thumbprint, but sadly no way to upload what you would call a KeyPair in AWS jargon.

While Terraform does not support this operation via its DSL, it is possible to achieve it using some less-explored features of Terraform.


I am using OS X, so my code samples might include some OS X-specific commands. However, it should be fairly easy to carry out these operations on other operating systems too.

First, the Azure CLI must be installed. The easiest way to do that is using brew:

$: brew install azure-cli

Post installation, you will have to authenticate the Azure CLI, but that's fairly easy. All you have to do is run $: azure login and the subsequent instructions on the screen will handhold you through the process.

Next, generate an SSL certificate that meets the following requirements:

  • The certificate must contain a private key.
  • The certificate must be created for key exchange, exportable to a Personal Information Exchange (.pfx) file.
  • The certificate must use a minimum of 2048-bit encryption.

An SSH keypair needs to be associated with an Azure cloud service, so make sure one exists first; the $service variable in the commands below refers to its name.

Here's how you can generate a certificate and a .pfx file, and upload it to the Azure portal:

# Generate a self-signed certificate along with a fresh private key
openssl req -x509 \
  -nodes \
  -days 1365 \
  -newkey rsa:2048 \
  -keyout /tmp/$service-deployer.key \
  -out /tmp/$service-deployer.pem \
  -subj '/CN=domain.com/O=Domain Inc./C=US'
# Convert the PEM into DER format for upload
openssl x509 \
  -outform der \
  -in /tmp/$service-deployer.pem \
  -out /tmp/$service-deployer.pfx
# Attach the certificate to the Azure cloud service
azure service cert create $service /tmp/$service-deployer.pfx

The Azure API also provides a way to fetch the list of all certificates uploaded and attached to its services.

piyush:azure master λ azure service cert list
info: Executing command service cert list
+ Getting cloud services
+ Getting cloud service certificates
data: Service Name   Thumbprint                                 Algorithm
data: domain-gamma   4F2AUA9ADF39830CDEHAJAND553DEANAJNAD8C8F   sha1
info: service cert list command OK

The recently uploaded certificate now shows up with a corresponding thumbprint that can be used to provision new Azure machines.


So while the above example works well, it does not yet have an automatic essence to it. I am still responsible for the grunt work of checking whether the certificate has been uploaded and, if not, creating a key pair, uploading the .pfx and then saving the thumbprint corresponding to that service, all of this before running terraform plan. Things can definitely be done better.
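In Terraform terms, the setup looks roughly like this (a sketch: the instance attributes and var.service_name are illustrative placeholders, not an exact configuration):

```hcl
resource "null_resource" "ssh_key" {
  # Re-run the provisioner whenever the saved thumbprint changes
  triggers {
    thumbprint = "${file("./ssl/ssh_thumbprint")}"
  }

  provisioner "local-exec" {
    command = "./ssl/cert.sh ${var.service_name}"
  }
}

resource "azure_instance" "web" {
  # Explicit dependency: ensure the thumbprint exists before the VM
  depends_on = ["null_resource.ssh_key"]

  name               = "vm-01"
  image              = "Ubuntu Server 14.04 LTS"
  size               = "Basic_A1"
  location           = "East US"
  username           = "deployer"
  ssh_key_thumbprint = "${file("./ssl/ssh_thumbprint")}"
}
```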


You mainly have to observe these four things in the above example:

  • depends_on
  • null_resource.ssh_key
  • ssh_key_thumbprint: ${file("./ssl/ssh_thumbprint")}
  • ssl/cert.sh


While most dependencies in Terraform are implicit, i.e. Terraform is able to infer dependencies based on usage of attributes of other resources, sometimes you need to specify explicit dependencies. You can do this with the depends_on parameter, which is available on any resource.

I recommend reading more about Terraform dependencies here.

By injecting a depends_on, we can defer the responsibility of ensuring a thumbprint exists to another resource, one that runs before the instance is created.

Note (FAQ): Using a local-exec provisioner directly on the instance will not work here, because local-exec runs AFTER the resource has been created, not before. Also, a local-exec provisioner on any earlier resource doesn't guarantee a re-run if that resource itself does not change.

Read on, for the solution.


The null_resource is a resource that allows you to configure provisioners that are not directly associated with a single existing resource.

null_resource is like a dummy stub that you can use to insert a node encapsulating provisioners between two existing stages of the graph. The position is determined by referring to this resource via a depends_on from the child resource. In this case, null_resource will be referenced from the azure_instance resource.

You can read more about terraform’s null_resource here.

Say we delegate all the duties to a standalone Bash script; we can then invoke that script as a local-exec provisioner from the null_resource.


But what if someone deletes the ssh_thumbprint file? Every subsequent Terraform run would panic and crash. The solution lies in the triggers attribute of a null_resource. triggers is a mapping of values which should trigger a rerun of this set of provisioners. Values are meant to be interpolated references to variables or attributes of other resources.

In this case, it's a file that is being read from the filesystem, so any change forces the resource to be re-triggered, eventually forcing a re-converge on the instances that depend on this null_resource.


Putting it together, the Bash script accepts the service name and tries to locate an existing uploaded certificate for that service. If there is none, it generates a new .pfx using the techniques mentioned above, fetches the ssh_key_thumbprint and saves it to a common file from where the Terraform instance resource can read it.
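A sketch of what such a script (ssl/cert.sh) could look like; the file paths and certificate subject are placeholders, and the thumbprint is parsed out of the `azure service cert list` output shown earlier:

```shell
#!/usr/bin/env bash
# Sketch of ssl/cert.sh: make sure a certificate + thumbprint exists
# for a given service before Terraform reads ./ssl/ssh_thumbprint.
set -euo pipefail

# Extract the thumbprint for a service from `azure service cert list`
# output. $1 = service name; the listing is read from stdin.
extract_thumbprint() {
  awk -v svc="$1" '$1 == "data:" && $2 == svc { print $3 }'
}

main() {
  local service="$1"
  local thumbprint_file="./ssl/ssh_thumbprint"

  local thumbprint
  thumbprint="$(azure service cert list | extract_thumbprint "$service")"

  if [ -z "$thumbprint" ]; then
    # No certificate uploaded yet: generate and upload one,
    # using the same openssl steps shown above.
    openssl req -x509 -nodes -days 1365 -newkey rsa:2048 \
      -keyout "/tmp/${service}-deployer.key" \
      -out "/tmp/${service}-deployer.pem" \
      -subj '/CN=domain.com/O=Domain Inc./C=US'
    openssl x509 -outform der \
      -in "/tmp/${service}-deployer.pem" \
      -out "/tmp/${service}-deployer.pfx"
    azure service cert create "$service" "/tmp/${service}-deployer.pfx"
    thumbprint="$(azure service cert list | extract_thumbprint "$service")"
  fi

  # Save the thumbprint where ${file("./ssl/ssh_thumbprint")} can read it
  mkdir -p ./ssl
  printf '%s' "$thumbprint" > "$thumbprint_file"
}

# Run only when a service name is given, so the file can also be sourced.
if [ "$#" -gt 0 ]; then
  main "$@"
fi
```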

Now you should be able to provision an SSH-only VM and use the generated .pem file to log in to your freshly created virtual machine. Yay!


Terraform RemoteState Server

Terraform is a pretty nifty tool for laying out complex infrastructures across cloud providers. It is an expressway past the otherwise mundane and tedious task of wading through insane amounts of API documentation.

The output of a terraform run is a JSON document which carries a lot of the information that the cloud platform provides about a resource, like instance_id, public_ip, local_ip, tags, dns, security groups etc., and it has often left me wondering if I could search/access these JSON documents from configuration management recipes, playbooks, or modules.

Example: while provisioning a ZooKeeper instance, I want the local_ip of all the peer nodes, so I could run a query that would fetch me the local_ips of all the nodes in this VPC that have the same security group. Or, while applying a security patch to all the Redis nodes, I need the public_ip of all nodes that carry the tag `node_type: redis`.
I hope you get the idea of the use cases by now, and it definitely sounds like something that a document DB should be able to handle with relative ease.

Terraform does not expose any pluggable backends with custom formatters to achieve this; however, it does provide the ability to talk to a RESTful server. Every time the state needs to be read, Terraform makes a GET call on the /path specified while setting up the remote config. A save operation corresponds to a POST call on the same /path, and a delete operation to a DELETE call.

Here’s how you add a remote config to your terraform project:

terraform remote config \
    -backend=http \
    -backend-config="address=http://localhost:8080/state/my-project"

(the address above is illustrative; point it at wherever your state server listens)

While I wanted to export the information to MongoDB, others might want to store it somewhere else, maybe in Redis. Capitalising on Terraform's ability to talk to a RESTful state server, I decided to write an implementation that would take data from the RESTful endpoint and save it to MongoDB. Once it reaches MongoDB, it's fairly convenient and easy to use that information in configuration management code.

So I quickly put together a RESTful server (less than a day's effort) written in Golang. It is available at http://github.com/oogway/tfstate

Given that you have GOPATH etc. configured properly (in case you are new to Golang, I suggest reading more about it here), you can download tfstate as simply as:

$: go get github.com/oogway/tfstate

This should provide you with a binary file that you can execute as:

$: tfstate -config=/path/to/config.yaml

A sample configuration looks like this:

  host: hello.mlab.com:15194
  database: terraform
  username: transformer
  password: 0hS0sw33t

Although tfstate talks to MongoDB by default, implementing your own backend is fairly easy. Each provider has to implement the Storer interface, which looks like this:

type Storer interface {
    Setup(cfgpath string) error
    Get(ident string) ([]byte, error)
    Save(ident string, data []byte) error
    Delete(ident string) error
}

Look at https://github.com/oogway/tfstate/blob/master/mongo.go for a sample implementation of this Interface.
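To get a feel for the interface, here is a minimal in-memory Storer (a toy sketch for illustration; tfstate's real implementation is the MongoDB one linked above):

```go
package main

import (
	"errors"
	"fmt"
)

// Storer is the interface quoted above, reproduced for completeness.
type Storer interface {
	Setup(cfgpath string) error
	Get(ident string) ([]byte, error)
	Save(ident string, data []byte) error
	Delete(ident string) error
}

// MemStore is a toy Storer that keeps state documents in a map.
type MemStore struct {
	states map[string][]byte
}

func (m *MemStore) Setup(cfgpath string) error {
	// A real backend would parse cfgpath (the YAML shown earlier) here.
	m.states = make(map[string][]byte)
	return nil
}

func (m *MemStore) Get(ident string) ([]byte, error) {
	data, ok := m.states[ident]
	if !ok {
		return nil, errors.New("no state for ident: " + ident)
	}
	return data, nil
}

func (m *MemStore) Save(ident string, data []byte) error {
	m.states[ident] = data
	return nil
}

func (m *MemStore) Delete(ident string) error {
	delete(m.states, ident)
	return nil
}

// roundTrip saves a state document and reads it back.
func roundTrip() string {
	var s Storer = &MemStore{}
	_ = s.Setup("")
	_ = s.Save("azure-state-zookeeper", []byte(`{"version": 1}`))
	data, _ := s.Get("azure-state-zookeeper")
	return string(data)
}

func main() {
	fmt.Println(roundTrip())
}
```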

Here’s an output from a working use case:

piyush:infra-monk: master λ tfstate -config tfstate.yaml

2016/06/15 22:19:02 Getting ident azure-state-zookeeper
2016/06/15 22:19:07 Saving ident azure-state-zookeeper to DB
2016/06/15 22:19:27 Saving ident azure-state-zookeeper to DB

2016/06/15 22:20:39 Getting ident aws-state-cassandra
2016/06/15 22:20:41 Saving ident aws-state-cassandra to DB
2016/06/15 22:23:52 Saving ident aws-state-cassandra to DB

Feel free to leave a comment or send Pull Requests 🙂
