Featured

What is Gang Scan?

Gang Scan is an open source (and free) attendance tracking system based on custom RFID reader boards that communicate back to a server over WiFi. The boards are capable of queueing scan events in the case of intermittent network connectivity, and the server provides simple reporting.
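
To make the queueing idea concrete, here is a minimal sketch in Python of the queue-and-flush approach described above. The endpoint, payload shape, and function names are invented for illustration — this is not Gang Scan's actual code or API.

# A minimal sketch of queue-and-flush scan delivery. Events are queued
# locally first, then flushed whenever the server is reachable, so nothing
# is lost during WiFi dropouts. The endpoint and payload are hypothetical.
import collections
import time

import requests

SERVER_URL = 'http://gangscan.example.com/scan'   # hypothetical endpoint
queue = collections.deque()


def record_scan(card_id):
    # Always queue locally first so a network outage can't lose the event
    queue.append({'card': card_id, 'timestamp': time.time()})
    flush()


def flush():
    # Deliver queued events in order; stop at the first failure and leave
    # the rest queued for the next attempt.
    while queue:
        try:
            response = requests.post(SERVER_URL, json=queue[0], timeout=5)
            response.raise_for_status()
        except requests.RequestException:
            return
        queue.popleft()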

Introducing Shaken Fist

The first public commit to what would become OpenStack Nova was made ten years ago today — at Thu May 27 23:05:26 2010 PDT to be exact. So first off, happy tenth birthday to Nova!

A lot has happened in that time — OpenStack has gone from being two separate Open Source projects to a whole ecosystem, developers have come and gone (and passed away), and OpenStack has weathered the cloud wars of the last decade. OpenStack survived its early growth phase by deliberately offering a “big tent” to the community and associated vendors, with an expansive definition of what should be included. This has resulted in most developers being associated with a corporate sponsor, and hence the decrease in the number of developers today as corporate interest wanes — OpenStack has never been great at attracting or retaining hobbyist contributors.

My personal involvement with OpenStack started in November 2011, so while I missed the very early days I was around for a lot and made many of the mistakes that I now see in OpenStack.

What do I see as mistakes in OpenStack in hindsight? Well, embracing vendors who later lose interest has been painful, and has increased the complexity of the code base significantly. Nova itself is now nearly 400,000 lines of code, and that’s after splitting off many of the original features of Nova such as block storage and networking. Additionally, a lot of our initial assumptions are no longer true — for example in many cases we had to write code to implement things, where there are now good libraries available from third parties.

That’s not to say that OpenStack is without value — I am a daily user of OpenStack to this day, and use at least three OpenStack public clouds at the moment. That said, OpenStack is a complicated beast with a lot of legacy that makes it hard to maintain and slow to change.

For at least six months I’ve felt the desire for a simpler cloud orchestration layer — both for my own personal uses, and also as a test bed for ideas about what a smaller, simpler cloud might look like. My personal use case involves a relatively small environment which echoes what we now think of as edge compute — less than 10 RU of machines with a minimum of orchestration and management overhead.

At the time that I was thinking about these things, the Australian bushfires and COVID-19 came along, and presented me with a lot more spare time than I had expected to have. While I’m still blessed to be employed, all of my social activities have been cancelled, so I find myself at home at a loose end on weekends and evenings a lot more than before.

Thus Shaken Fist was born — named for a Simpsons meme, Shaken Fist is a deliberately small and highly opinionated cloud implementation aimed at working well in small deployments such as homes, labs, edge compute locations, deployed systems, and so forth.

I’ve taken a bit of trouble with each feature in Shaken Fist to think through what the simplest and highest value way of doing something is. For example, instances always get a config drive and there is no metadata server. There is also only one supported type of virtual networking, and one supported hypervisor. That said, this means Shaken Fist is less than 5,000 lines of code, and small enough that new things can be implemented very quickly by a single middle aged developer.

Shaken Fist definitely has feature gaps — API authentication and scheduling are the most obvious at the moment — but I have plans to fill those when the time comes.

I’m not sure if Shaken Fist is useful to others, but you never know. It’s Apache2 licensed, and available on GitHub if you’re interested.

A totally cheating sour dough starter

This is the third in a series of posts documenting my adventures in making bread during the COVID-19 shutdown. I’d like to imagine I was running bread-making science experiments on my kids, but really all I was trying to do was eat some toast.

I’m not sure what it was like in other parts of the world, but during the COVID-19 pandemic Australia suffered a bunch of shortages — toilet paper, flour, and yeast were among those things stores simply didn’t have any stock of. Luckily we’d only just done a Costco shop so were ok for toilet paper and flour, but we were definitely getting low on yeast. The obvious answer is a sour dough starter, but I’d never done that before.

In the end my answer was to cheat and use this recipe. However, I found the instructions unclear, so here’s what I ended up doing:

  • 2 cups of warm water
  • 2 teaspoons of dry yeast
  • 2 cups of bakers flour

Mix these three items together in a plastic container with enough space for the mix to double in size. Place in a warm place (on the bench on top of the dishwasher was our answer), and cover with a cloth secured with a rubber band.

Feeding

Once a day you should feed your starter with 1 cup of flour and 1 cup of warm water. Stir thoroughly.

Reducing size

The recipe online says to feed for five days, but the size of my starter was getting out of hand by a couple of days, so I started baking at that point. I’ll describe the baking process in a later post. The early loaves definitely weren’t as good as the more recent ones, but they were still edible.

Hibernation

Once the starter is going, you feed daily and probably need to bake daily to keep the starter’s size under control. That obviously doesn’t work so great if you can’t eat an entire loaf of bread a day. You can hibernate the starter by putting it in the fridge, which means you only need to feed it once a week.

To wake a hibernated starter up, take it out of the fridge and feed it. I do this at 8am. That means I can then start the loaf for baking at about noon, and the starter can either go back in the fridge until next time or stay on the bench being fed daily.

I have noticed that sometimes the starter comes out of the fridge with a layer of dark water on top. It’s worked out ok for us to just ignore that and stir it into the mix as part of the feeding process. Hopefully we won’t die.

A super simple non-breadmaker loaf

This is the second in a series of posts documenting my adventures in making bread during the COVID-19 shutdown. Yes I know all the cool kids made bread for themselves during the shutdown, but I did it too!

A loaf of bread

So here we were, in the middle of a pandemic which closed bakeries and cancelled almost all of my non-work activities. I found this animated GIF on Reddit for a super simple no-knead bread and decided to give it a go. It turns out that a few things are true:

  • animated GIFs are a super terrible way to store recipes
  • that animated GIF was an export of this YouTube video which originally accompanied this blog post
  • and that I only learned these things while trying to work out who to credit for this recipe

The basic recipe is really easy — chuck the following into a big bowl, stir, and then cover with a plate. Leave resting in a warm place for a long time (three or four hours), then turn out onto a floured bench. Fold into a ball with flour, and then bake. You can see a more detailed version in the YouTube video above.

  • 3 cups of bakers flour (not plain white flour)
  • 2 teaspoons of yeast
  • 2 teaspoons of salt
  • 1.5 cups of warm water (again, I use 42 degrees from my gas hot water system)

The dough will seem really dry when you first mix it, but gets wetter as it rises. Don’t panic if it seems tacky and dry.

I think the key here is the baking process, which is how the oven loaf in my previous post about bread maker white loaves was baked. I use a cast iron camp oven (sometimes called a dutch oven), because thermal mass is key. If I had a fancy enamelled cast iron camp oven I’d use that, but I don’t and I wasn’t going shopping during the shutdown to get one. Oh, and they can be crazy expensive at up to $500 AUD.

Another loaf of bread

Warm the oven with the camp oven inside for at least 30 minutes at 230 degrees Celsius. Then place the dough inside the camp oven on some baking paper — I tend to use a trivet as well, but I think you could skip that if you didn’t have one. Bake for 30 minutes with the lid on — this helps steam the bread a little and forms a nice crust. Then bake for another 12 minutes with the camp oven lid off — this darkens the crust up nicely.

A final loaf of bread

Oh, and I’ve noticed a bit of variation in how wet the dough seems to be when I turn it out and form it in flour, but it doesn’t really seem to change the outcome once baked, so that’s nice.

The original blogger for this recipe also recommends chilling the dough overnight in the fridge before baking, but I haven’t tried that yet.

A breadmaker loaf my kids will actually eat

My dad asked me to document some of my baking experiments from the recent natural disasters, which I wanted to do anyway so that I could remember the recipes. It’s taken me a while to get around to though, because animated GIFs on Reddit are a terrible medium for recipe storage, and because I’ve been distracted with other shiny objects. That said, let’s start with the basics — a breadmaker bread that my kids will actually eat.

A loaf of bread baked in the oven

This recipe took a bunch of iterations to get right over the last year or so, but I’ll spare you the long boring details. However, I suspect part of the problem is that the recipe varies by bread maker. Oh, and the salt is really important — don’t skip the salt!

Wet ingredients (add first)

  • 1.5 cups of warm water (we have an instantaneous gas hot water system, so I pick 42 degrees)
  • 0.25 cups of oil (I use bran oil)

Dry ingredients (add second)

I just kind of chuck these in, although I tend to put the non-flour ingredients in a corner together for reasons that I can’t explain.

  • 3.5 cups of bakers flour (must be bakers flour, not plain flour)
  • 2 teaspoons of instant yeast (we keep it in the freezer in a big packet, not the sachets)
  • 4 teaspoons of white sugar
  • 1 teaspoon of salt
  • 2 teaspoons of bread improver

I then just let my bread maker do its thing, which takes about three hours including baking. If I am going to bake the bread in the oven, then the dough takes about two hours, but I let the dough rise for another 30 to 60 minutes before baking.

A loaf of bread from the bread maker

I think to be honest that the result is better from the oven, but a little more work. The bread maker loaves are a bit prone to collapsing (you can see it starting on the example above), and there is a big kneading hook indent in the middle of the bottom of the loaf.

The oven baking technique took a while to develop, but I’ll cover that in a later post.

Exporting volumes from Cinder and re-creating COW layers

Today I wandered into a bit of a rat hole discovering how to export data from OpenStack Cinder volumes when you don’t have admin permissions, and I thought it was worth documenting here so I remember it for next time.

Let’s assume that you have a Cinder volume named “child1”, which is a 64GB volume originally cloned from “parent1”. parent1 is a 7.9GB VMDK, but the only way I can find to extract child1 is to convert it to a Glance image and then download the entire volume as a raw image. Something like this:

$ cinder upload-to-image $child1 "extract:$child1"

Where $child1 is the UUID of the Cinder volume. You then need to find the UUID of the image in Glance, which the Cinder upload-to-image command will have told you, but you can also find by searching Glance for your image named “extract:$child1”:

$ glance image-list | grep "extract:$child1"

You now need to watch that Glance image until the status of the image is “active”. It will go through a series of steps with names like “queued” and “uploading” first.

Now you can download the image from Glance:

$ glance image-download --file images/$child1.raw --progress $glance_uuid

And then delete the intermediate Glance image:

$ glance image-delete $glance_uuid

I have a bad sample script which does this in my junk code repository if that is helpful.
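
If it helps to see those steps end to end, here is a rough Python sketch that just shells out to the same commands as above and polls Glance until the image goes active. It is an illustration of the workflow, not the script from my junk code repository, and the output parsing is deliberately naive.

# A rough sketch of the extract workflow: upload the volume to Glance, wait
# for the image to become active, download it, and clean up. The output
# parsing is naive and illustrative only.
import subprocess
import sys
import time


def run(*cmd):
    return subprocess.check_output(cmd).decode('utf-8')


def extract_volume(cinder_uuid):
    run('cinder', 'upload-to-image', cinder_uuid, 'extract:%s' % cinder_uuid)

    # Find the UUID of the Glance image that upload-to-image created
    glance_uuid = None
    while not glance_uuid:
        time.sleep(5)
        for line in run('glance', 'image-list').splitlines():
            if 'extract:%s' % cinder_uuid in line:
                glance_uuid = line.split('|')[1].strip()

    # Wait for the image to go active before trying to download it
    while '| active' not in run('glance', 'image-show', glance_uuid):
        time.sleep(30)

    run('glance', 'image-download', '--file', '%s.raw' % cinder_uuid,
        glance_uuid)
    run('glance', 'image-delete', glance_uuid)


if __name__ == '__main__':
    extract_volume(sys.argv[1])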

What you have at the end of this is a 64GB raw disk file in my example. You can convert that file to qcow2 like this:

$ qemu-img convert -O qcow2 $child1.raw $child1.qcow2

But you’re left with a 64GB qcow2 file for your troubles. I experimented with virt-sparsify to reduce the size of this image, but it doesn’t work in my case (no space is saved), I suspect because the disk image has multiple partitions, having originally come from a VMware environment.

Luckily qemu-img can also re-create the COW layer that existed on the admin-only side of the public cloud barrier. You do this by rebasing the converted qcow2 file onto the original VMDK file like this:

$ qemu-img create -f qcow2 -b $parent1.qcow2 $child1.delta.qcow2
$ qemu-img rebase -b $parent1.vmdk $child1.delta.qcow2

In my case I ended up with a 289MB $child1.delta.qcow2 file, which isn’t too shabby. It took about five minutes to produce that delta on my Google Cloud instance from a 7.9GB backing file and a 64GB upper layer.

The Calculating Stars

Winner of a Hugo, a Locus and a Nebula, this book is about a mathematical prodigy battling her way into a career as an astronaut in a post-apocalyptic 1950s America. Along the way she has to take on the embedded sexism of America in the 50s, as well as her own mild racism. Worse, she suffers from an anxiety condition.

The book is engaging and well written, with an alternative history plot line which is believable and interesting. In fact, it’s quite topical for our current time.

I really enjoyed this book and I will definitely be reading the sequel.

The Calculating Stars
Mary Robinette Kowal
May 16, 2019
432 pages

The Right Stuff meets Hidden Figures by way of The Martian. A world in crisis, the birth of space flight and a heroine for her time and ours; the acclaimed first novel in the Lady Astronaut series has something for everyone. On a cold spring night in 1952, a huge meteorite fell to earth and obliterated much of the east coast of the United States, including Washington D.C. The ensuing climate cataclysm will soon render the earth inhospitable for humanity, as the last such meteorite did for the dinosaurs. This looming threat calls for a radically accelerated effort to colonize space, and requires a much larger share of humanity to take part in the process. Elma York's experience as a WASP pilot and mathematician earns her a place in the International Aerospace Coalition's attempts to put man on the moon, as a calculator. But with so many skilled and experienced women pilots and scientists involved with the program, it doesn't take long before Elma begins to wonder why they can't go into space, too. Elma's drive to become the first Lady Astronaut is so strong that even the most dearly held conventions of society may not stand a chance against her.

Configuring load balancing and location headers on Google Cloud

I have a need at the moment to know where my users are in the world. This helps me to identify what compute resources to serve their request with in order to reduce the latency they experience. So how do you do that thing with Google Cloud?

The first step is to set up a series of test backends to send traffic to. I built three regions: Sydney, London, and Los Angeles. It turns out in hindsight that wasn’t actually necessary though — this would work with a single backend just as well. For my backends I chose a minimal Ubuntu install, running this simple backend HTTP service.
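
As an aside, the backend itself can be trivial — all it needs to do for this demonstration is echo the request headers back (so you can see what the load balancer adds) and answer health checks. Here is a hedged Python sketch of what such a backend might look like; it is an illustration, not the exact service linked above.

# A tiny backend that echoes request headers (so you can see X-Region,
# X-City and X-Lat-Lon once the load balancer adds them) and answers
# /healthz probes with "OK". Illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer


class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/healthz':
            # Health check endpoint for the load balancer to probe
            body = b'OK'
        else:
            # Echo all request headers, including those inserted upstream
            body = ''.join('%s: %s\n' % (k, v)
                           for k, v in self.headers.items()).encode('utf-8')

        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(body)


if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8000), EchoHandler).serve_forever()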

I had some initial trouble finding a single page which walked through the setup of the Google Cloud load balancer to do what I wanted, which is the main reason for writing this post. The steps are:

Create your test instances and configure the backend on them. I ended up with a setup like this:

A list of google cloud VMs

Next, set up instance groups to contain these instances. I chose unmanaged instance groups (that is, I don’t want autoscaling). You need to create one per region.

A list of google cloud instance groups

But wait! There’s one more layer of abstraction. We need a backend service. The configuration for these is cunningly hidden on the load balancing page, on a separate tab. Create a service which contains our three instance groups:

A sample backend service

I’ve also added a health check to my service, which just requests “/healthz” from each instance and expects a response of “OK” for healthy backends.

The backend service is also where we configure our extra headers. Click on the “advanced configurations” link, and more options appear:

Additional backend service options

Here I set up the extra HTTP headers the load balancer should insert: X-Region, X-City, and X-Lat-Lon.

And finally we can configure the load balancer itself. I selected an “HTTP(S) load balancer”, as I only care about incoming HTTP and HTTPS traffic. You set the load balancer to route traffic from the Internet to your VMs, and wire its backend to the backend service you created above.

Now we can test! If I go to my load balancer in a web browser, I now get a result like this:

The top part of the page is just the HTTP headers from the request. You can see that we’re now getting helpful location headers. Mission accomplished!

Interviewing hints (or, so you’ve been laid off…)

This post is an attempt to collect a set of general hints and tips for resumes and interviews. It is not concrete truth though; like all things, this process is subjective and will differ from place to place. It originally started as a Google doc shared around a previous workplace during some layoffs, but it seems more useful than that so I am publishing it publicly.

I’d welcome comments if you think it will help others.

So something bad happened

I have the distinction of having been through layoffs three times now. I think there are some important first steps:

  • Take a deep breath.
  • Hug your loved ones and then go and sweat on something — take a walk, go to the gym, whatever works for you. Research shows that exercise is a powerful mood stabiliser.
  • Make a plan. Who are you going to apply with? Who could refer you? What do you want to do employment wise? Updating your resume is probably a good first step in that plan.
  • Treat finding a job as your job. You probably can’t do it for eight hours a day, but it should be your primary goal for each “workday”. Have a todo list, track things on that list, and keep track of status.

And remember, being laid off isn’t about you, it is about things outside your control. Don’t take it as a reflection on your abilities.

Resumes

  • The goal of a resume is to get someone to want to interview you. It is not meant to be a complete description of everything you’ve done. So, keep it short and salesy (without lying through oversimplification!).
  • Resumes are also cultural — US firms tend to expect a short summary (two pages), while Australian firms seem to expect something longer and more detailed. So, ask your friends if you can see their resumes to get a sense of the right style for the market you’re operating in. It is possible you’ll end up with more than one version if you’re applying in two markets at once.
  • Speaking of friends, referrals are gold. Perhaps look through your LinkedIn and other social media and see where people you’ve formerly worked with are now. If you have a good reputation with someone and they’re somewhere cool, ask them to refer you for a job. It might not work, but it can’t hurt.
  • Ratings for skills on LinkedIn help recruiters find you. So perhaps rate your friends for things you think they’re good at and then ask them to return the favour?

Interviews in general

The soft interview questions we all get asked:

  • I would expect to be asked what I’ve done in my career — an “introduce yourself” moment. So try and have a coherent story that is short but interesting — “I’m a system admin who has been working on cloud orchestration and software defined networking for Australia’s largest telco” for example.
  • You will probably be asked why you’re looking for work too. I think there’s no shame in honesty here, something like “I worked for a small systems integrator that did amazing things, but the main customer has been doing large layoffs and stopped spending”.
  • You will also probably be asked why you want this job / want to work with this company. While everyone really knows it is because you enjoy having money, find other things beforehand to say instead. “I want to work with Amazon because I love cloud, Amazon is kicking arse in that space, and I hear you have great people I’d love to work with”.

Note here: the original version of the above point said “I’d love to learn from”, but it was mentioned on Facebook that the flow felt one way there. It has been tweaked to express a desire for a two way flow of learning.

“What have you done” questions: the reality is that almost all work is collaborative these days. So, have some stories about things you’ve personally done and are proud of, but also have some stories of delivering things bigger than one person could do. For example, perhaps the ansible scripts for your project were super cool and mostly you, but perhaps you should also describe how the overall project was important and wouldn’t have worked without your bits.

Silicon Valley interviews: organizations like Google, Facebook, et cetera want to be famous for having hard interviews. Google will deliberately probe until they find an area you don’t know about and then dig into that. Weirdly, they’re not doing that to be mean — they’re trying to gauge how you respond to new situations (and perhaps stress). So, be honest if you don’t know the answer, but then offer up an informed guess. For example, I used to ask people about system calls and strace. We’d keep going until we hit the limit of what they understood. I’d then stop and explain the next layer and then ask them to infer things — “assuming that things work like this, how would this probably work”? It is important to not panic!

Interviews as a sysadmin

  • Interviewers want to know about your attitude as well as your skills. As sysadmins, sometimes we are faced with high pressure situations — something is down and we need to get it back up and running ASAP. Have a story ready to tell about a time something went wrong. You should demonstrate that you took the time to plan before acting, even in an emergency scenario. Don’t leave the interviewer thinking you’ll be the guy who will accidentally delete everyone’s data because you’re in a rush.
  • An understanding of how the business functions and why “IT” is important is needed. For example, if you get asked to explain what a firewall is, be sure to talk about how it relates to “security policy” as well as the technical elements (ports, packet inspection & whatnot).
  • Your ability to learn new technologies is as important as the technologies you already know.

Interviews as a developer

  • I think people look for curiosity here. Everyone will encounter new things, so they want to hear that you like learning, are a self starter, and can do new stuff. So for example if you’ve just done the CKA exam and passed that would be a great example.
  • You need to have examples of things you have built and why those were interesting. Was the thing poorly defined before you built it? Was it experimental? Did it have a big impact for the customer?
  • An open source portfolio can really help — it means people can concretely see what you’re capable of instead of just playing 20 questions with you. If you don’t have one, don’t start new projects — go find an existing project to contribute to. It is much more effective.

Writing a terraform remote state server

Terraform is a useful tool for deploying cloud resources. This post isn’t an introduction to terraform, so I’ll assume you already know and love it. If you want more, then this getting started guide would be a sensible start.

At its most basic level, terraform deploys cloud resources and stores information about those resources in a file on local disk called terraform.tfstate — it needs that state information so it can make later changes to the deployment, be those modifying resources in use or tearing the whole deployment down. If you had an operations team working on an environment, then you could store the tfstate file in git or on a shared filesystem so that the entire team could manage the deployment. However, there is nothing in that approach that stops two members of the team making overlapping changes.

That’s where terraform state servers come in. State servers can implement optional locking, which stops overlapping operations from happening. The protocol these servers speak isn’t well documented (at least not that I could find), so I wanted to explore it — and so I wrote a simple terraform HTTP state server in Python.

To use this state server, configure your terraform file as per demo.tf. The important bits are:

terraform {backend "http" {address = "http://localhost:5000/terraform_state/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a"lock_address = "http://localhost:5000/terraform_lock/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a"lock_method = "PUT"unlock_address = "http://localhost:5000/terraform_lock/4cdd0c76-d78b-11e9-9bea-db9cd8374f3a"unlock_method = "DELETE"}}

Where the URL to the state server will obviously change. The UUID in the URL (4cdd0c76-d78b-11e9-9bea-db9cd8374f3a in this case) is an example of an external ID you might use to correlate the terraform state with the system that requested it be built. It doesn’t have to be a UUID, it can be any string.

I am using PUT and DELETE for locks due to limitations in the HTTP verbs that the python flask framework exposes. You might be able to get away with the defaults in other languages or frameworks.
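
To give a sense of the protocol, here is a heavily stripped down, in-memory sketch of such a state server in Python. It is not the stateserver.py from the repository — just an illustration of the shape of the requests terraform makes with the configuration above: GET and POST for state, and PUT and DELETE for locks, returning a 423 when the lock is already held.

# An in-memory sketch of a terraform HTTP state server. Illustrative only --
# state and locks are lost when the process exits.
from flask import Flask, request

app = Flask(__name__)
states = {}   # state documents, keyed by external id
locks = {}    # lock documents, keyed by external id


@app.route('/terraform_state/<external_id>', methods=['GET', 'POST', 'DELETE'])
def terraform_state(external_id):
    if request.method == 'GET':
        if external_id not in states:
            return '', 404
        return states[external_id]
    if request.method == 'POST':
        states[external_id] = request.data
        return ''
    states.pop(external_id, None)
    return ''


@app.route('/terraform_lock/<external_id>', methods=['PUT', 'DELETE'])
def terraform_lock(external_id):
    if request.method == 'PUT':
        if external_id in locks:
            # Lock already held -- terraform treats this as "locked"
            return locks[external_id], 423
        locks[external_id] = request.data
        return ''
    locks.pop(external_id, None)
    return ''


if __name__ == '__main__':
    app.run(port=5000)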

To run the Python server, make a venv, install the dependencies, and then run:

$ python3 -m venv ~/virtualenvs/remote_state
$ . ~/virtualenvs/remote_state/bin/activate
$ pip install -U -r requirements.txt
$ python stateserver.py

I hope someone else finds this useful.

Setting up VXLAN between nested virt VMs on Google Compute Engine

I wanted to play with a VXLAN mesh between VMs on more than one hypervisor node, but the setup for VXLAN ended up being a separate post because it was a bit long. Read that post first if you want to follow the instructions here.

Now that we have a working VXLAN mesh between our two nodes we can move on to installing libvirt (which is called libvirt-daemon-system on Debian, not libvirt-bin as on Ubuntu):

sudo apt-get install -y qemu-kvm libvirt-daemon-system
sudo virsh net-start default
sudo virsh net-autostart --network default

I’m going to use a little Python helper to launch my VMs, so I need some other dependencies as well:

sudo apt-get install -y python3-pip pkg-config libvirt-dev git
git clone https://github.com/mikalstill/shakenfist
cd shakenfist
git checkout 6bfac153d249752b27d224ad9d079095b640498e
sudo mkdir /srv/shakenfist
sudo cp template.debian.xml /srv/shakenfist/template.xml
sudo pip3 install -r requirements.txt

Let’s launch a quick test VM to make sure the helper works:

sudo python3 daemon.py
sudo virsh list

You can destroy that VM for now, it was just testing the install.

sudo virsh destroy ...name...

Next we need to tweak the template that shakenfist is using to start instances so that it uses the bridge for networking (that template is the one you copied to /srv/shakenfist/template.xml earlier). Replace the interface section in the template with this on both nodes:

<interface type='bridge'>
  <mac address='{{eth0_mac}}'/>
  <source bridge='br-vxlan0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I know the bridge mentioned here doesn’t exist yet, but we’ll deal with that in a second. Before we start VMs though, we need a way of getting IP addresses to them. shakenfist can configure interfaces using config drive, but I’d prefer to use DHCP because who doesn’t love some additional complexity?

On one of the nodes install docker:

sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Now we can setup DHCP. Create a place for the configuration file:

sudo mkdir /srv/shakenfist/dhcp

And then create the configuration file at /srv/shakenfist/dhcp/dhcpd.conf with contents like this:

default-lease-time 3600;
max-lease-time 7200;
option domain-name-servers 8.8.8.8;
authoritative;

subnet 192.168.200.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;
  option broadcast-address 192.168.1.255;

  pool {
    range 192.168.200.10 192.168.200.254;
  }
}

Before we can start dhcpd, we need to move the VXLAN device into a bridge so we can add a device for the DHCP server to it. First off remove the vxlan0 device from the last post:

sudo ip link set down dev vxlan0
sudo ip link del vxlan0

And now recreate it with a bridge:

sudo ip link add vxlan0 type vxlan id 42 dev eth0 dstport 0
sudo bridge fdb append to 00:00:00:00:00:00 dst 34.70.161.180 dev vxlan0
sudo ip link add br-vxlan0 type bridge
sudo ip link set vxlan0 master br-vxlan0
sudo ip link set vxlan0 up
sudo ip link set br-vxlan0 up
sudo ip link add dhcp-vxlan0 type veth peer name dhcp-vxlan0p
sudo ip link set dhcp-vxlan0p master br-vxlan0
sudo ip link set dhcp-vxlan0 up
sudo ip link set dhcp-vxlan0p up
sudo ip addr add 192.168.200.1/24 dev dhcp-vxlan0

This block of commands:

  • recreated the vxlan0 interface
  • added it to the mesh with the other node again
  • created a bridge named br-vxlan0
  • moved the vxlan0 interface into it
  • created a veth pair called dhcp-vxlan0 and dhcp-vxlan0p
  • moved the peer part of that veth pair into the bridge
  • and then configured an IP on the external half of the veth pair

To make the bridge survive reboots you would need to add it to either /etc/network/interfaces or /etc/netplan/01-netcfg.yml depending on your distribution, but that’s outside the scope of this post.

You should be able to ping again. From the other node give it a try:

$ ping 192.168.200.1
PING 192.168.200.1 (192.168.200.1) 56(84) bytes of data.
64 bytes from 192.168.200.1: icmp_seq=1 ttl=64 time=19.3 ms
64 bytes from 192.168.200.1: icmp_seq=2 ttl=64 time=0.571 ms

We need to do something similar on the other node so it can run VMs as well. It is a tiny bit simpler because there won’t be any DHCP there. Remember that you need to change 35.223.115.132 to the IP of your first node:

sudo ip link set down dev vxlan0
sudo ip link del vxlan0
sudo ip link add vxlan0 type vxlan id 42 dev eth0 dstport 0
sudo bridge fdb append to 00:00:00:00:00:00 dst 35.223.115.132 dev vxlan0
sudo ip link add br-vxlan0 type bridge
sudo ip link set vxlan0 master br-vxlan0
sudo ip link set vxlan0 up
sudo ip link set br-vxlan0 up

Note that this time we can’t do a ping test, because the second node no longer has an IP on the VXLAN network for its base OS.

Now we can start the docker container with dhcpd listening on dhcp-vxlan0:

sudo docker run -it --rm --init --net host -v /srv/shakenfist/dhcp:/data networkboot/dhcpd dhcp-vxlan0

This runs dhcpd interactively so we can see what happens. Now try starting a VM on the other node:

sudo python3 daemon.py

You can watch the VM booting using the “virsh console” command with the name of the VM from “virsh list”. The dhcpd process should show you something like this:

sudo docker run -it --rm --init --net host -v /srv/shakenfist/dhcp:/data networkboot/dhcpd dhcp-vxlan0
Internet Systems Consortium DHCP Server 4.3.5
Copyright 2004-2016 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /data/dhcpd.conf
Database file: /data/dhcpd.leases
PID file: /var/run/dhcpd.pid
Wrote 0 leases to leases file.
Listening on LPF/dhcp-vxlan0/06:ff:bc:7d:11:e3/192.168.200.0/24
Sending on   LPF/dhcp-vxlan0/06:ff:bc:7d:11:e3/192.168.200.0/24
Sending on   Socket/fallback/fallback-net
Server starting service.
DHCPDISCOVER from ee:95:4d:40:ca:a6 via dhcp-vxlan0
DHCPOFFER on 192.168.200.10 to ee:95:4d:40:ca:a6 (foo) via dhcp-vxlan0
DHCPREQUEST for 192.168.200.10 (192.168.200.1) from ee:95:4d:40:ca:a6 (foo) via dhcp-vxlan0
DHCPACK on 192.168.200.10 to ee:95:4d:40:ca:a6 (foo) via dhcp-vxlan0

You can see here that our new VM got the IP 192.168.200.10 from the DHCP server! It is moments like this when you don’t realise that this blog post took me hours to write that I feel really smart.

If we started a VM on the first node (the same command as for the second node), we’d now have two VMs on a virtual network which had working DHCP and could ping each other. I think that’s enough for one evening.