Part 4: Deployments & Releases
Ops stuff is not my specialty. I’m far from a CI/CD master, with workable but limited experience with Jenkins, Docker and Kubernetes, and I have a lot to learn on the networking side of things. In terms of Elixir, however, I do have a good grasp of the tools available and how they fit together. It is an interesting state of affairs, though, because Erlang was designed with the opposite mentality of today's deployment practices...
Cattle or pets?
There is an expression today in the ops world about treating your services as cattle, not pets. This means your services should be treated more like stateless, disposable containers than long-running systems. Container runtimes like Docker and orchestrators like Kubernetes work together to treat your services as cattle, and are quite effective at doing so.
Before I explain how Erlang is different, I will say it doesn’t need to be. It can be built and deployed like any other service in any other language using Docker and K8s. There’s no disadvantage or reason why you can’t. You will, however, miss out on some features that you may or may not want to take advantage of.
Erlang was designed to never go down. Its supervisors, lightweight processes and hot-code replacement capabilities were all implemented for this reason, and they orient it towards the 'pet' approach rather than the 'cattle' approach to server management, which is the opposite of how we like to do things in 2020.
Distillery
When you build an Elixir project, it compiles to bytecode. Bytecode is like the mid-point between a binary and an interpreted language. A compiled language produces a binary, either with the language runtime included (as in Go, which compiles your application together with the Go runtime into a single executable) or without a runtime (as in Rust). An interpreted language, like JavaScript or Python, executes human-readable code instead, so there is no binary, just the runtime and your code.
Bytecode is a halfway point: it is machine-readable rather than human-readable, but it is not a binary that can be executed directly; the bytecode needs to be run by the Erlang VM, the BEAM. I like to think of it as interpreted machine code rather than interpreted human code.
Distillery is a package for Elixir that enables you to bundle any combination of the applications in your project into a release. In terms of micro-services, I like to think of each Distillery release as a distinct micro-service that can be deployed separately from other releases.
In an umbrella application, which is like a mono-repo in Elixir, you can build as many independent applications as you’d like and group them together in releases as you see fit. You could have 100 applications that become 100 separate releases, a single release, or anything in between.
This enables you to build all of your Elixir applications in a single repo but keep fine-grained control over how they are released and deployed. Combine this with eDeliver, covered below, and you have powerful, flexible deployment capabilities with minimal re-inventing of the wheel.
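As a rough sketch of how that grouping looks in practice (the umbrella app names :web, :accounts and :billing here are hypothetical), a Distillery rel/config.exs might define two independent releases like so:

```elixir
# rel/config.exs -- Distillery configuration (sketch, hypothetical app names)

environment :prod do
  set include_erts: true   # bundle the Erlang runtime with the release
  set include_src: false
end

# One release containing two umbrella apps, deployed together
release :web_service do
  set version: "0.1.0"
  set applications: [:web, :accounts]
end

# A second release containing a single app, deployed independently
release :billing_service do
  set version: "0.1.0"
  set applications: [:billing]
end
```

Each release can then be built on its own with `mix distillery.release --name web_service`, giving you one artifact per "micro-service".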
Elixir Releases
As of Elixir 1.9, Distillery-like behavior is included out of the box with Elixir. Elixir Releases aren't quite as feature-rich as Distillery yet, but for many projects they are more than enough, and they are a nice 'batteries included' feature that will get you up and running quickly.
As Elixir Releases continue to improve and add features, they will likely close the feature gap with Distillery and become the default solution in all but the most complex cases.
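With the built-in Elixir Releases, the grouping of umbrella apps into releases is configured directly in mix.exs via the :releases option. A minimal sketch, again using hypothetical app names:

```elixir
# mix.exs at the umbrella root (sketch, hypothetical app names)
defmodule MyUmbrella.MixProject do
  use Mix.Project

  def project do
    [
      apps_path: "apps",
      # Each entry becomes a separate release, built with e.g.
      # `MIX_ENV=prod mix release web_service`
      releases: [
        web_service: [
          applications: [web: :permanent, accounts: :permanent]
        ],
        billing_service: [
          applications: [billing: :permanent]
        ]
      ]
    ]
  end
end
```

The :permanent start type means the VM shuts down if that application crashes beyond recovery, which is usually what you want for the main apps in a release.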
eDeliver
eDeliver will take the builds created by Distillery or Elixir Releases, package them up with dependencies and the Erlang Runtime, and deploy them to your production machines accordingly.
With a little more effort you can also deploy hot-code upgrades, so instead of a blue-green deployment you can deploy directly to your running service. Hot upgrades do require careful planning and consideration, and they are probably not a feature you will use even if the capability is there.
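For a flavour of the workflow, here is a sketch of typical eDeliver commands; the build and production hosts are defined in a .deliver/config file, which is not shown here:

```shell
# Build a release on the build host, then ship it and (re)start it
mix edeliver build release
mix edeliver deploy release to production
mix edeliver start production

# Hot-code upgrade path: upgrade the running service without a restart
# (requires careful planning, as noted above)
mix edeliver build upgrade
mix edeliver deploy upgrade to production
```

The key idea is that the release is compiled on a build host matching your production environment, so the artifact that eDeliver copies over already contains the Erlang runtime and all dependencies.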
Networked Applications
One more notable feature of Erlang applications is the built-in networking known as Distributed Erlang, which allows multiple Erlang nodes to communicate with each other over TCP/IP sockets. This lets apps distribute load across nodes rather transparently, with built-in functionality and efficiencies you wouldn't get without a dedicated protocol.
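As a small illustrative sketch (the node names and cookie here are made up), two nodes started with `iex --sname a --cookie secret` and `iex --sname b --cookie secret` on the same host can find each other and run code on one another:

```elixir
# From the iex shell of node :"a@myhost"
Node.connect(:"b@myhost")  # returns true; the nodes now form a cluster
Node.list()                # => [:"b@myhost"]

# Spawn a process on the other node, transparently over the wire
Node.spawn(:"b@myhost", fn ->
  IO.puts("running on #{Node.self()}")
end)
```

The shared cookie is what authorizes nodes to connect to each other, so in any real deployment it needs to be kept secret and the distribution ports firewalled appropriately.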
Where the advanced features of the Erlang runtime fit in the modern ops world is not entirely clear to me yet. Admittedly, as I mentioned at the start of this article, I am not an ops/networking expert by any means, but thought this could be a good introduction to some of the advanced build, release, deploy and network capabilities built into Erlang.
If you are still curious I'd recommend checking out the Gigalixir docs, which go into far more detail in these areas.