Workflow at Modular Finance, and how we got there

At Modular Finance we have been using Go for a while now, 2.5 years or so, and Docker a while longer, and this post is primarily about these technology choices: not so much their benefits, but how we use them and how we have shaped our workflow around them to fit our needs. We were far from the first to adopt them, but at the time they were not really the obvious choice either, and there was no clear way to work with them the way we wanted to.

Developers at modfin mostly work in small groups, around 2 people per project or product. We are 7 developers running 7 different products/applications, and people move around between them. Each product usually consists of multiple micro services, including frontends, backends, databases and other supporting services. An important consequence of this is that devs get the opportunity to learn, explore new tools and find new ways to solve new and old problems. Tools, languages and so on that people find useful, and manage to convince others of the awesomeness of, survive; other things die. Both Go and Docker are examples of this (I tried to bring in Clojure, but it didn't get any traction :)

The wish list

Background

I was the first developer at modfin, and when we started back in 2013 we had 2 services, one frontend Angular app, some nginx stuff and a database. Each service was its own git repo; we packed the Java services into fat jars, scp'ed them to a VM and ran them there. At this time Docker had just been released and people were excited, but there was not a lot of information out there on how to manage things.

Things were pretty good in the beginning though. I had the Docker mindset but resorted to running one service per VM. There were only three code bases to keep track of, and I didn't have time to research infrastructure or new languages; I had a product to build. Sure, it took an hour or so to set up a new dev environment, but that was fine. However, slowly but surely things grew, more people joined the company and more services and products were added. It really started to become a hassle to keep track of all the services and the configuration differences between production, testing and dev.

So, what to do, and what we have done

Our largest pain at the time was an inconsistent environment between development and production. We wanted an easy and repeatable way to build and develop.

Enter Docker. It had matured a lot since I first looked at it, Swarm had just become a part of the standard Docker installation, and we felt ready to go. We started with our dev environment, which brought us the most pain at the time. Every service, database, frontend and all the rest was wrapped in a Docker container. We used, and still use, docker compose to manage our local environments, and we built some tooling around it in order to manage the large number of containers.

A while later we bet on Docker Swarm and migrated everything into it for production, which turned out to be a breeze after all the prior work we had done. We set up a new VM cluster and migrated everything over the course of a few days.

Around this point we started experiencing pain with nginx, which by then we really only used as a reverse proxy: having to reload configs, adding new services manually, TLS cert management and so on. Enter Traefik. IKEA effect in full swing, but a few gotchas and pull requests later it really suited our needs, and I would argue it is very effective in most modern webapp use cases.

In the meantime the number of git repositories with services kept growing, and we switched to Go as our main language for new services. Some products consisted of 30ish repos. So we decided to try out repo, Google's tool built for the Android project, which turned out to work OK but, a while down the line, kind of just added unnecessary complexity.

The sheer number of services simply started to give us a headache when managing them in a dev environment through docker compose. So we built a small tool to manage the docker-compose environments for us: dcr, github.com/modfin/dcr. It is a simple CLI and REPL that wraps docker compose and makes life easier when managing larger compose environments.

To reduce the number of git repos we had to manage, and to get rid of repo, we decided to try more of a mono repo approach and put all services for one product in one git repo. We were inspired by DigitalOcean's structure, blog.digitalocean.com/cthulhu-organizing-go-code-in-a-scalable-repo/, but decided on a pragmatic approach, only wrapping one product per repo.

Our workflow and structure

The Hello World example of Go more or less involves starting a webserver, and these days there are a lot of packages out there. Go is very opinionated about some things and leaves other things up to the developer and the community to figure out. E.g. it very forcefully cares about the lexicographic order of your imports, but then doesn't give you any hint or opinion on how to structure packages and code.
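
For reference, that Hello World looks more or less like this, using nothing but the standard library:

package main

import (
    "log"
    "net/http"
)

func main() {
    // answer every request with a greeting
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Hello, World"))
    })
    // start the webserver on port 8080
    log.Fatal(http.ListenAndServe(":8080", nil))
}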

Building larger applications in a team creates problems with your configuration, your structure, your IDE and so on. Below I describe how we are working at the moment and what our internal best practice is.

Since we these days primarily use Go, this will be an example focused on how we structure code, manage dependencies, use Docker for development and production, hook things up in an IDE and so on.

Folder structure for a generic product at Modular Finance

aproduct
├─ db
├─ go
│  ├─ pkg
│  └─ src
│     └─ aproduct
│        ├─ lib
│        │  └─ sharedcode
│        ├─ service0
│        │  ├─ cmd
│        │  │   └─ service0d
│        │  │      └─ main.go
│        │  ├─ internal
│        │  │  ├─ config
│        │  │  ├─ dao
│        │  │  └─ otherpkgs
│        │  ├─ models.go
│        │  ├─ Dockerfile
│        │  └─ Dockerfile.build
│        └─ service1              
├─ k8s
├─ tools
├─ ui
│  └─ frontend0
├─ .env
└─ docker-compose.yml

So how is it used? A folder structure is all great, but to understand how it is used, our use case must first be understood. We build web applications in a micro service paradigm. We develop them using docker compose and deploy them on Kubernetes.

So let's start off with a look at the docker-compose.yml file

version: "3.0"
services:
  traefik:
    image: traefik
    command: > 
      --web 
      --docker 
      --docker.domain=localhost
      --entryPoints='Name:http Address::80 Compress:true'
    ports:
      - "9090:80"
      - "9080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "traefik.enable=false"
    
  service0:
    build:
      context: ./go/src/aproduct/service0
      dockerfile: Dockerfile
    volumes:
      - ./go:/go:delegated
    labels:
      - "traefik.api.frontend.rule=PathPrefixStrip: /api/service0"
  
  frontend0:
    build:
      context: ./ui/frontend0
      dockerfile: Dockerfile
    volumes:
      - ./ui/frontend0:/frontend0:delegated
    labels:
      - "traefik.app.frontend.rule=PathPrefix: /"
      
  postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=qwerty
    ports:
      - "5432:5432"

Here we use Traefik to proxy all ingress traffic to the services behind it. Other than that, it should look like a pretty standard setup for using Docker and docker compose.
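
One detail worth pointing out is that the PathPrefixStrip rule means Traefik removes /api/service0 from the path before proxying the request, so the service registers its routes relative to /. A small sketch of what that means on the Go side (the /ping route is just a made-up example):

package main

import (
    "log"
    "net/http"
)

func main() {
    // a request to http://localhost:9090/api/service0/ping arrives
    // here as /ping, since Traefik has already stripped the prefix
    http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("pong"))
    })
    log.Fatal(http.ListenAndServe(":80", nil))
}

This keeps the services themselves agnostic about where they are mounted, which makes it cheap to move them around behind the proxy.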

Next we'll have a look at the .env file

UID=1000     # the standard uid for your user
SSH_PUB=...  # a dedicated read-only ssh key pair for a git account, letting go get pull from your private repos
SSH_PRI=...

Next, on to the Dockerfile for the Go project service0

FROM golang:1.11-alpine

# the alpine image ships without git, ssh and the shadow tools used for
# user management below, so install them up front
RUN apk add --no-cache bash git openssh-client shadow

RUN go get github.com/githubnemo/CompileDaemon

ENV GO111MODULE=on

# mirror dev user
ARG UID
ARG USER
ARG SSH_PUB
ARG SSH_PRI

RUN if test "$USER" != 'root'; then groupadd --gid $UID $USER \
  && useradd --uid $UID --gid $USER --shell /bin/bash --create-home $USER; fi

RUN chown -R $UID /go

USER $USER

# use ssh for git operations (specifically `go get` in our case)
# also disable StrictHostKeyChecking to get around the "authenticity of host ... (yes/no)"
RUN printf "[url \"git@bitbucket.org:\"]\n\tinsteadOf = https://bitbucket.org/\n" >> \
    /home/$USER/.gitconfig
RUN mkdir -p /home/$USER/.ssh
RUN echo "StrictHostKeyChecking no " > /home/$USER/.ssh/config
RUN echo "$SSH_PUB" > /home/$USER/.ssh/id_rsa.pub
RUN echo "$SSH_PRI" > /home/$USER/.ssh/id_rsa
# ssh refuses to use a private key with open permissions
RUN chmod 600 /home/$USER/.ssh/id_rsa
 
WORKDIR /go/src/aproduct/service0
CMD CompileDaemon -build="go build -o /service0d ./cmd/service0d/*.go" \
    -command="/service0d" \
    -directory="/go/src/aproduct/service0"

Now things are starting to come together. The ssh keys in the .env file are written into the Docker container's ssh keys, which are then used to let go get retrieve private repositories. We also use a compile daemon, which rebuilds and relaunches the server on save, since we mount all the Go code from the outside.
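
For completeness, here is a trimmed-down sketch of what the cmd/service0d/main.go that CompileDaemon builds could look like. The LISTEN_ADDR variable and the /status route are hypothetical examples; in our services the environment parsing lives in the internal/config package:

package main

import (
    "log"
    "net/http"
    "os"
)

func main() {
    // configuration comes from environment variables, which docker-compose
    // sets for us. LISTEN_ADDR is a made-up example; in reality this
    // parsing lives in internal/config.
    addr := os.Getenv("LISTEN_ADDR")
    if addr == "" {
        addr = ":80"
    }

    mux := http.NewServeMux()
    // a made-up health check endpoint
    mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok"))
    })

    log.Println("service0d listening on", addr)
    log.Fatal(http.ListenAndServe(addr, mux))
}

On save, CompileDaemon recompiles this into /service0d and restarts it, so the container picks up code changes without a rebuild.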

Lessons learned

A lesson learned from that time is that we built everything as micro services from the beginning. Looking back, I really believe we should have stuck to a monolith. The mental overhead of introducing the complexity of micro services was simply not worth the modularity gained at that point, and we really didn't need to scale things much. The need for scale is a luxury problem.


Author:
Rasmus Holm, CTO/Developer