Diagrams as code using docker + diagram.py

So a colleague showed me this Python tool today and I really liked how easy it is to use.

Also it has nice icons.

https://github.com/mingrammer/diagrams

In the documentation for diagrams, the main way to use it is by installing it with pip, plus Graphviz, directly on my system; the usual Python stuff, meaning that either I end up with a bloated Python system or I have to take care of many different virtualenvs.

These days I usually try to put any tool I want to reuse often into a Docker container and run it from there.

So for this Python tool, I will just write a Dockerfile, build an image from it, and then run the script inside Docker.

Every time I want to run it, it will run from the custom-built Docker image and that is it: no bloated Python setups.

Quite easy with a simple Dockerfile:

FROM python:3

# the diagrams library itself
RUN pip install diagrams

# graphviz is needed by diagrams to render the output images
RUN apt-get update && apt-get install -y \
    graphviz \
 && rm -rf /var/lib/apt/lists/*
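
Build the image once and tag it diagrams, which is the name the run command below expects:

docker build -t diagrams .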

And then just mount the directory the Python script is in, and have it run nice and easy like so:

docker run --rm -it -v "${PWD}":/diagram -w /diagram diagrams python diagram.py

The --rm flag removes the finished container so it does not pile up in your list of stopped containers (the Docker image that was built in the first step will remain).
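
For reference, a minimal diagram.py could look something like this (the two AWS nodes are just an example; the library ships icons for many providers):

# diagram.py
from diagrams import Diagram
from diagrams.aws.compute import ECS
from diagrams.aws.network import ELB

# show=False skips opening an image viewer, which would fail inside the container;
# the output lands in simple_site.png next to the script
with Diagram("Simple Site", show=False):
    ELB("alb") >> ECS("service")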

I pushed the Dockerfile and an example here: https://github.com/ledakis/infra-diagrams-python-docker

Simple Fargate project

I was asked to showcase a very simple website, something like a page with a picture, that can scale, is built on AWS, and is designed with a large enterprise in mind.

I have created it on https://github.com/ledakis/simplesite-scale

It includes:

  1. Docker container for the app itself, which runs nginx and has the content of the site baked into the container.
  2. Terraform for the supporting infrastructure the service will run on.
    • VPC/Subnets using the official terraform module.
    • DNS zone + config for my Cloudflare master zone.
    • ECR repository where the Docker image is going to be hosted.
    • S3 bucket for the application access logs.
    • ACM certificate for the service.
  3. Terraform for the service itself. This deploys:
    • The ALB
    • The ECS service (cluster, task definition, service)
    • Scaling policy (number of connections per target from the ALB; see the sketch after this list)
    • IAM roles
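
For the scaling policy bit, a target-tracking policy on ALB requests per target looks roughly like this in Terraform (the resource names aws_ecs_cluster.this, aws_ecs_service.this, aws_lb.this and aws_lb_target_group.this are placeholders, not necessarily what the repo uses):

# register the ECS service's desired count as a scalable target
resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.this.name}/${aws_ecs_service.this.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

# scale in/out to keep requests per target around target_value
resource "aws_appautoscaling_policy" "requests_per_target" {
  name               = "alb-requests-per-target"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ALBRequestCountPerTarget"
      resource_label         = "${aws_lb.this.arn_suffix}/${aws_lb_target_group.this.arn_suffix}"
    }
    target_value = 100
  }
}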

This is a WIP. I want to enhance the scaling, maybe using the Lambda-based approach that people generally suggest.

Update 4/12/2019:

It appears EKS+Fargate is a thing now and this sounds very, very interesting!

For v2.0 of this project, I plan to work on the following:

  1. Convert the task definition to a k8s manifest and try to have Terraform deploy it. This is only meant as a test, as it will probably involve spinning up an EKS cluster and, along with that, the cost for it. (This is just to showcase some terraforming + AWS + architecture, so I try to keep costs low as I test it on a personal account.)
  2. Change the egress to S3 (for ALB logs and Docker layers) and to the ECS API to go via VPC endpoints, so that we remove the need for a NAT gateway in the VPC. Remember the principle of least privilege: traffic to AWS services should not go over the internet but via a path forced by us. (A rough sketch follows after this list.)
  3. Move the certificate creation to the service Terraform directory, so the service can then be run many times independently of the main infrastructure bit.
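
For the S3 part of point 2, a gateway endpoint is only a few lines of Terraform (the module outputs and region here are placeholders for whatever the repo ends up using):

# keeps S3 traffic (ALB logs, Docker layers) inside the VPC, no NAT gateway needed
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = module.vpc.vpc_id
  service_name      = "com.amazonaws.eu-west-1.s3" # adjust to your region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = module.vpc.private_route_table_ids
}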

Replacing ngrok with caddy + ssh

So I find ngrok.io to be an amazing service which I like a lot.

The downside is that I get a different sub-domain each time and that I don't have control over the whole thing.

I googled for alternatives to ngrok and eventually ended up in this Reddit thread and subsequently at this blog post from Jacob Errington.

That would have covered my needs, but I can't be bothered with setting up Let's Encrypt and combining it with nginx and so on when I know of Caddy!

Caddy provides a reverse proxy with automatic Let's Encrypt certificates, without the need to do anything more than write a simple Caddyfile like the following:

sub.example.com         # the domain to be served
proxy / localhost:3333  # directive to proxy, and the target for proxying

And that's it!

Now you only need to save that file and make sure you set up Caddy as a systemd service that loads it. You will usually get 502s when the tunnel is not connected, or you can make sure to start it when you ssh into the box.
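
The ssh half is just a plain remote port forward. Assuming the Caddyfile above and a local app listening on port 3000 (the port and user@host are examples, not fixed values):

ssh -N -R 3333:localhost:3000 user@sub.example.com

Caddy terminates TLS for sub.example.com and proxies to localhost:3333 on the server, and ssh forwards that back to port 3000 on your machine.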

I will probably add more information later on how to set up Caddy as a systemd service that auto-starts and restarts on failure.
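
Until then, a minimal unit file could look roughly like this (the binary path, user and Caddyfile location are assumptions, adjust for your box):

[Unit]
Description=Caddy reverse proxy
After=network-online.target
Wants=network-online.target

[Service]
User=caddy
ExecStart=/usr/local/bin/caddy -conf /etc/caddy/Caddyfile -agree
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/caddy.service and bring it up with systemctl enable --now caddy.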

Hello Website

This is an example blog post. Not much here but that's not the point :)

lektor testing

This is a first attempt to use Lektor for my blog. I used Jekyll last time on GitHub; let's see how this goes!

GPG signed commits on mac

To set up signed commits on mac:

  • gpg --gen-key to generate the key

  • gpg --list-secret-keys --keyid-format LONG to list your secret keys together with the long key ID (the part after the slash that follows the 2048 or 4096 bit length)

  • gpg --armor --export <PASTE_LONG_KEY_HERE> | pbcopy to copy your public key to the clipboard so you can paste it into your profile.

  • git config --global user.signingkey <PASTE_LONG_KEY_HERE> and git config --global commit.gpgsign true to add the key to your git config so you will be signing all your commits with that key. Make sure you want that setting to be global, or set it per git repo instead.

  • add export GPG_TTY=$(tty) to your .bash_profile or your .zshrc (e.g. echo 'export GPG_TTY=$(tty)' >> ~/.zshrc). This lets gpg ask you for your passphrase in the terminal every time you commit; to have it remembered, do the following:

  • The following is specific to macOS and will add the passphrase to your keychain so you won't be asked every time:

brew upgrade gnupg
brew link --overwrite gnupg
brew install pinentry-mac  # a pinentry that can store the passphrase in the macOS keychain
echo "pinentry-program /usr/local/bin/pinentry-mac" >> ~/.gnupg/gpg-agent.conf
killall gpg-agent  # restart the agent so it picks up the new pinentry

Try echo "test" | gpg --clearsign to make it ask for your passphrase so it can be added to the keychain. In the popup window, make sure you tick the box to save it in the keychain.
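
After your next commit you can confirm the signature is there with:

git log --show-signature -1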

