Setting up IPv6 using dhcpv6-pd/slaac with dnsmasq
For this to work it’s important that you understand some concepts in IPv6. I’ll briefly cover the most important ones in this post, but I strongly advise you to read up on them in detail. Also, I’m by no means a networking expert, so take this with a grain of salt. This setup is tested and working on Get (Telia), a service provider in Norway.
Some assumptions, because you need that on IPv6
- Your ISP delegates you a /56 using slaac or dhcpv6
- You run an EdgeRouter with EdgeOS 2.x or can figure out what you need to do otherwise
What we want to achieve
- Open the firewall to allow traffic
- Request a /56 prefix using dhcpv6-pd and configure a prefix delegation
- Set up dnsmasq to handle router advertisement and dhcpv6 alongside the regular dhcp for IPv4
First, let’s cover some basics
There are multiple ways to delegate and obtain IPv6 prefixes; the most common is that your ISP delegates a prefix to you using slaac or dhcpv6.
Stateless Address Autoconfiguration (slaac) is where your ISP announces a prefix using Router Advertisements (RA). Your client then uses this prefix to generate an address for your interface using an Extended Unique Identifier (eui-64). The result is a globally unique address. But nothing more. If we want more information, like DNS servers, we have to request it using either dhcpv6-stateless or dhcpv6-stateful.
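To make eui-64 a little more concrete, here’s a small Python sketch of how the interface identifier is derived from a MAC address: flip the universal/local bit in the first octet and insert ff:fe in the middle. (The function name and MAC address are just illustrations, not anything from the setup above.)

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the 64-bit EUI-64 interface identifier from a 48-bit MAC."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe, 8 octets total
    # Group into four 16-bit chunks, like the lower half of an IPv6 address
    return ":".join(f"{hi:02x}{lo:02x}" for hi, lo in zip(eui[::2], eui[1::2]))

# The router's announced /64 prefix is prepended to this to form the address.
```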
The main difference between slaac and dhcpv6-stateless is that with dhcpv6-stateless you obtain an address using slaac and the additional information, like DNS servers, using dhcpv6. With dhcpv6-stateful the entire process of assigning an address is handled by the dhcpv6 server, more or less exactly like DHCP on IPv4.
Don’t expose MySQL on the public Internet, you idiot
It’s worth mentioning that IPv6 addresses, by the nature of being globally unique, are all routed and reachable on the Internet. You don’t have NAT to cover your ass for sloppy firewall rules. This is a fairly restrictive configuration, where we allow traffic to be established from the client, but not the other way around.
Notice that we’re allowing traffic from source port 547 to destination port 546 on udp. This is because we need to talk with our upstream router to request a prefix using dhcpv6-pd.
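As a sketch, the rules described above could look like this in EdgeOS syntax. The ruleset names (WANv6_IN, WANv6_LOCAL) and rule numbers are my assumptions, not necessarily what you have:

```
set firewall ipv6-name WANv6_LOCAL default-action drop
set firewall ipv6-name WANv6_LOCAL rule 10 action accept
set firewall ipv6-name WANv6_LOCAL rule 10 state established enable
set firewall ipv6-name WANv6_LOCAL rule 10 state related enable
set firewall ipv6-name WANv6_LOCAL rule 20 action accept
set firewall ipv6-name WANv6_LOCAL rule 20 protocol icmpv6
set firewall ipv6-name WANv6_LOCAL rule 30 action accept
set firewall ipv6-name WANv6_LOCAL rule 30 protocol udp
set firewall ipv6-name WANv6_LOCAL rule 30 source port 547
set firewall ipv6-name WANv6_LOCAL rule 30 destination port 546
set interfaces ethernet eth0 firewall local ipv6-name WANv6_LOCAL
```

A similar WANv6_IN ruleset (established/related plus icmpv6) would be attached with `firewall in` for forwarded traffic.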
Requesting a /56 prefix and configuring a prefix delegation
Here we’re requesting a prefix on eth0 and configuring a prefix delegation (pd 0) to eth1, with prefix-id set to :1, which means that it will get the first /64 prefix in the /56 requested on eth0. We also configure the interface to use ::1 as its address. This address will also be used later for things like DNS.
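Rendered as EdgeOS configuration, the paragraph above might look something like this. This is a sketch; the interface names follow the assumptions earlier, and your firmware may want the prefix length without the slash:

```
set interfaces ethernet eth0 dhcpv6-pd pd 0 prefix-length /56
set interfaces ethernet eth0 dhcpv6-pd pd 0 interface eth1 prefix-id :1
set interfaces ethernet eth0 dhcpv6-pd pd 0 interface eth1 host-address ::1
set interfaces ethernet eth0 dhcpv6-pd pd 0 interface eth1 service slaac
```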
Setting up dnsmasq for RA, DNS and DHCP
To enable some more advanced features we’re going to disable the internal dhcp-server (isc-dhcp-server) and instead use dnsmasq to handle all aspects of this, from router advertisement to DNS and DHCP. The perk here is that dnsmasq can act as an authoritative DNS server and inject DNS records for hosts on the local network. For clients that use DHCP to request IPv4 addresses it will also be able to provide DNS lookups for their respective IPv6 addresses, since the network interface has the same MAC address.
I recommend taking a look at dnsmasq(8) to see what the different options do. In short we’re setting up dnsmasq to enable router advertisement on the interface we’re listening on and act as an authoritative dhcp-server for IPv4. It also advertises local DNS servers over both IPv4 and IPv6.
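As a sketch, a matching dnsmasq configuration could look something like this. The interface name, address ranges and lease times are assumptions, not taken from my actual setup:

```conf
# Listen on the LAN interface and send router advertisements
interface=eth1
enable-ra

# IPv4: act as an authoritative DHCP server
dhcp-authoritative
dhcp-range=192.168.0.100,192.168.0.200,12h

# IPv6: construct the range from whatever prefix eth1 got via dhcpv6-pd,
# and register slaac hostnames in DNS (ra-names)
dhcp-range=::100,::1ff,constructor:eth1,ra-names,slaac,12h

# Advertise this box as the DNS server over dhcpv6 ([::] means "my own address")
dhcp-option=option6:dns-server,[::]
```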
If you’re using Get (Telia, Norway) you should be able to request two prefixes, one /128-prefix that I personally just set to autoconfigure on eth0 for IPv6 connectivity on the router. You don’t really need to use it, but I guess it can be handy somehow. Just haven’t figured it out yet. You can also request a /56-prefix, which should give you plenty of room to grow.
If you’re not able to request a prefix it probably means that they haven’t enabled it for you, or that you’re blocking it in the firewall. IPv6 support needs to be enabled by support before you can request a prefix.
Links that have been helpful to me:
Using Lambda@Edge to fix permalink in Jekyll
Origin Access Identity (OAI) is a secure way to access S3 buckets from CloudFront; think of it as letting CloudFront use the S3 APIs to request objects instead of HTTP. The alternative is to make the bucket publicly available via a bucket policy or ACLs, but that’s not ideal.
On S3 you can configure a default index document, which is served if the specified path doesn’t resolve to anything. This is handy, since static site generators like Jekyll rely on sub-directories for generating “clean URLs”. But with Origin Access Identity, CloudFront will request the literal object using the S3 APIs, and in this case S3 doesn’t know what to respond with.
In this flowchart the user’s request is intercepted at the origin-request stage, which means before CloudFront requests the object from the origin, in our case an S3 bucket. This lets us manipulate the request to include index.html so that CloudFront will request the correct object.
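A minimal sketch of such an origin-request handler in Python follows. The event shape (`Records[0].cf.request`) is the Lambda@Edge origin-request event; the rewrite rules themselves (trailing slash, extensionless path) are my assumptions about how the Jekyll site is laid out:

```python
def handler(event, context):
    # Lambda@Edge origin-request event: the request CloudFront is about to
    # send to the origin lives under Records[0].cf.request
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]
    if uri.endswith("/"):
        # /blog/ -> /blog/index.html
        request["uri"] = uri + "index.html"
    elif "." not in uri.rsplit("/", 1)[-1]:
        # /about (no file extension) -> /about/index.html
        request["uri"] = uri + "/index.html"
    # Returning the (possibly modified) request lets CloudFront continue
    return request
```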
Key signing party for small teams
To make key signings as efficient as possible it’s important that all participants come prepared. We avoid using Key Servers, since they are flaky, slow and might publish more information than you want. Before the event all users should have received a list of keys that will be signed, and imported them into their own keyring.
Most people don’t carry around their master key, or prefer to keep it air-gapped. These people will provide their signature later. During the signing party everyone takes a secure note of the KeyIDs and fingerprints they have checked. To speed things up, and to help those with offline keys, a list of KeyIDs and fingerprints can be prepared as a hardcopy where the participants can take notes.
The benefit of GPG in the workplace
Most companies do a decent job with background checks and general vetting of people before hiring them. And physical security in buildings is often more than good enough; someone will notice if you go around pretending to be me at the office. On the Internet, however, that is a lot harder, so you need some kind of digital proof, something that cannot be copied or easily stolen.
You also get the added benefit of a second vetting. Security works best in layers, and it’s best not to assume anything. If a user loses their key, or doesn’t have it, you just have to do another key signing. At least if you want to do anything online for this user.
And that’s GPG, when used in combination with security tokens, like YubiKey. Users store the private keys (or signing keys) on security tokens, which cannot be copied, and which, when lost, are easily noticed. The token is protected by a PIN, and the private key stored on the device cannot be extracted or otherwise removed from it. The owner of the key can configure how many PIN attempts to allow before the key is locked. On keys like YubiKey you can also set an admin PIN which lets you reset the PIN attempt counter. Signature counters (including the number of bad PIN entries) can only be reset by factory resetting the device, which also removes the key material.
A very typical scenario is that someone needs to reset or unlink an MFA device from their account. Since it’s easy to impersonate someone over the Internet you need a way to provide proof that the user is who they say they are. This can be done using asymmetric encryption. The idea is that all users exchange public keys in a secure manner, like in person or using Web Key Directory, and that all requests are signed using their private key. This way everyone can verify (to some extent) that the person is who they say they are. We can also encrypt messages using the recipient’s public key: you sign the message or file to prove who you are, and you encrypt it to ensure that only the recipient can read it.
Let’s do the Key Signing Dance
To check that the key holder is who they say they are, we meet up face-to-face, and do the key signing dance:
- The other participants import the key holders public key into their local keyring, using either of these alternatives:
gpg --import foobar.gpg
gpg --locate-keys email@example.com
gpg [--keyserver <pgp.mit.edu|pool.sks-keyservers.net>] --recv-keys KeyID
curl https://github.com/foobar.gpg | gpg --import
- The key holder confirms their KeyID and fingerprint by reading them out loud
- The other participants verify the key holder’s valid ID, like a driver’s license or passport
- The other participants take a note that they’ve verified this key
When all keys are verified the participants either sign and export the signature at the venue and hand it over to the key holder, or wait until they have access to their master key and do it then. Out of courtesy it’s common not to publish key signatures to public key servers, but rather to export the signature and send it via email to the key holder. They can then import the signatures and publish them to key servers and web key directory at will.
- Sign the key holder’s public key
gpg --sign-key KeyID
- Encrypt the key and send to key holder via email
gpg --export --armor <UID|KeyID> > <UID|KeyID>.key
gpg --encrypt --recipient <UID|KeyID> <UID|KeyID>.key (creates <UID|KeyID>.key.gpg)
gpg --export --armor <UID|KeyID> | gpg --encrypt --recipient <UID|KeyID> -o <UID|KeyID>.key.gpg
- Listing keys and fingerprint
gpg --list-keys [UID|KeyID]
gpg --list-keys --fingerprint [UID|KeyID]
- Exporting keys
gpg --export --armor <UID|KeyID>
gpg --export --armor SUBKEYID! [SUBKEYID! ..]
- Listing and importing key signatures
gpg --list-sigs <UID|KeyID>
gpg --import --import-options merge-only foobar.gpg
gpg --decrypt foobar.key.gpg | gpg --import --import-options merge-only
- Signing and encrypting a file, where UID|KeyID is recipient of file
gpg -se -r <UID|KeyID> [file]
- Creating a clearsign; a message that contains both message and signature
gpg --clear-sign [file] (write the message and finish with ^D)
Replacing a YubiKey (HSMs)
If you’re using multiple YubiKeys with the same key material on them you need to tell gpg-agent about the serial number of the key you want to use, since the key stubs are tied to a specific card. The easiest way is to just tell the agent to relearn the serial number.
- Replacing YubiKey using gpg-connect-agent to relearn
- Plug in spare / replacement YubiKey
gpg-connect-agent "scd serialno" "learn --force" /bye
- Replacing YubiKey by removing key stubs
- Find the keygrips
gpg --with-keygrip --list-secret-keys [KeyID]
- Locate the stubs in ~/.gnupg/private-keys-v1.d/ and delete them
- Insert new YubiKey
Why shouldn’t you publish signatures to Key Servers on other users behalf?
I was a little puzzled by this myself, as it seems strange that key servers allow a usage pattern that is not recommended. Apparently there are two primary reasons, most notably that you’ve likely only verified their name and identity, not the ownership of the e-mail account used as their UID. When you send the signature to the key holder via e-mail only the holder of that e-mail account is able to publish the signature, proving that they own the account. And last but not least, there is a privacy concern. The key holder might not want the world to know that you two know each other. So we leave it up to them to decide.
Configuring Web Key Directory for GPG
Web Key Directory (WKD) is a proposal for a new way to discover other users’ keys, using HTTP and TLS. In short it looks up the UID on the user’s host. This works since all UIDs are email addresses, and all email addresses are built up of two parts: the username and the host part.
When we need to look up a new key, we can just query the server, establish a secure connection using TLS, and ask it to provide the user’s public key. Boom! Now you don’t need to rely on flaky key servers that, given their immutable nature, are abused by people for nefarious purposes.
The documentation for WKD leaves much to be desired, and seems mostly focused on setting up more advanced systems for larger organizations to let users manage their WKD identity. For personal use it’s pretty straightforward to generate and publish.
- The local part of the UID is SHA1-hashed, then z-base-32 encoded
- The public key is served in binary format as the payload for that hash
- Uses the RFC5785 scheme: https://netwerk.io/.well-known/openpgpkey/hu/dmkqu7xwyxmspm94y6147dss1n59nfag, where the last part is your hashed UID
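The hashing above is simple enough to sketch in a few lines of Python: SHA-1 the lowercased local part, then z-base-32 encode the 160-bit digest (note that z-base-32 uses a different alphabet than regular base32). This is my own illustration, not code from any WKD tooling:

```python
import hashlib

ZB32_ALPHABET = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32

def wkd_hash(local_part: str) -> str:
    """Hash the local part of a UID the way WKD expects."""
    digest = hashlib.sha1(local_part.lower().encode()).digest()  # 160 bits
    bits = "".join(f"{byte:08b}" for byte in digest)
    # 160 bits / 5 bits per symbol = exactly 32 characters, no padding
    return "".join(ZB32_ALPHABET[int(bits[i:i + 5], 2)] for i in range(0, 160, 5))
```

The result is the file name you publish under .well-known/openpgpkey/hu/ on your host.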
Show me, show me!
If you’re too lazy to compute it yourself, just export the UID hash directly, like so:
vegardx@yondu:~ $ gpg --list-keys --with-wkd-hash email@example.com
pub   rsa4096/0xBBF808963354ED16 2019-08-06 [SC]
      Key fingerprint = 4770 5635 6BEF A6F0 FBE7 BB21 BBF8 0896 3354 ED16
uid   [ultimate] Vegard Hansen <firstname.lastname@example.org>
      email@example.com
sub   rsa4096/0xCE7C14C99AB0CF0C 2019-08-06 [E]
sub   rsa4096/0xC2CADE62F7C2714B 2019-10-08 [A]
So when you’ve put the file in the correct place with the correct content you should be able to look yourself up, without using a key server, like so:
vegardx@bork:~ $ gpg --locate-keys firstname.lastname@example.org
gpg: key BBF808963354ED16: public key "Vegard Hansen <email@example.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: no ultimately trusted keys found
pub   rsa4096 2019-08-06 [SC]
      477056356BEFA6F0FBE7BB21BBF808963354ED16
uid   [ unknown] Vegard Hansen <firstname.lastname@example.org>
sub   rsa4096 2019-08-06 [E]
sub   rsa4096 2019-10-08 [A]
Invalidate CloudFront with Lambda and S3 events
Bla bla bla… You’re here for the solution, not to hear me talk about it. See code example. Improvise, adapt and overcome.
One thing though, unless you have a shit metric ton of objects that you want to keep all hot and sizzling in cache I suggest you just invalidate the entire path, and not per object. Amazon has this weird pricing model where wildcard invalidations are priced as a single path.
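A sketch of the Lambda side, using boto3’s `create_invalidation`. The environment variable name is my assumption, and the wildcard default reflects the pricing point above (one wildcard path counts as a single invalidation path):

```python
import os
import time

def build_invalidation(paths=None):
    # Default to invalidating everything with a single wildcard path
    items = list(paths) if paths else ["/*"]
    return {
        "Paths": {"Quantity": len(items), "Items": items},
        # CallerReference must be unique per invalidation request
        "CallerReference": str(time.time()),
    }

def handler(event, context):
    # boto3 is available in the Lambda runtime; imported here so the
    # pure batch-building logic above stays testable without AWS
    import boto3
    cloudfront = boto3.client("cloudfront")
    # DISTRIBUTION_ID is an assumed environment variable you wire up yourself
    return cloudfront.create_invalidation(
        DistributionId=os.environ["DISTRIBUTION_ID"],
        InvalidationBatch=build_invalidation(),
    )
```

Trigger it from an S3 event notification on the bucket, and every upload flushes the cache.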
Parking a domain on S3
You might want to “park” domains to notify people that they’re no longer in use, or whatever. Since we’re using Terraform you can update a ton of parked domains at the same time, which is nice when business decides to rebrand everything. Like they do.
Notice that we’re using a bucket policy and not ACLs to make the contents of the bucket public. This gives us more fine-grained control over access to the bucket, and while it doesn’t really matter in this case, it’s a good habit to get into.
Using Terraform to manage redirects
S3 has a few neat features, like letting you publish your webpage or store backups. But one of my favorite features is the ability to set up more or less maintenance-free redirects. This is super useful when you’re in a corporate environment where domain name changes are quite frequent, either due to rebranding or similar.
So, say you want to redirect all traffic on redir.netwerk.io to https://google.com, then a simple configuration like this is enough.
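A minimal sketch of that configuration, using the inline website block from the pre-4.0 AWS provider (the bucket name and target are just the example above):

```hcl
resource "aws_s3_bucket" "redirect" {
  bucket = "redir.netwerk.io"

  website {
    redirect_all_requests_to = "https://google.com"
  }
}
```

Point the DNS record for redir.netwerk.io at the bucket’s website endpoint and you’re done.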
This only works when you want to redirect regular HTTP traffic; if the endpoint you’re redirecting from was using TLS you have to put CloudFront, with certificates from Certificate Manager, in front.
Salted sha-512 hashes on macOS
This has been a recurring issue for me. I often need (for some weird reason) to send a sha-512 hashed password to someone. This seems like such a trivial task, but since you’ve landed here I guess you’ve also figured out that this is non-trivial on macOS without pulling out Python or something similar. And that takes a lot of time.
In comes Docker and Alpine Linux. You can always pass the password as an argument and copy the output directly, but then you also have that password in your shell history. Probably not what you want.
docker run --rm -ti alpine:latest mkpasswd -m sha512 [password] [| pbcopy]
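If you’d rather skip Docker entirely, newer OpenSSL can do the same thing. Note the assumption here: this needs OpenSSL 1.1.1 or newer, and macOS ships LibreSSL, which lacks the -6 flag, so you’d install OpenSSL from Homebrew first:

```shell
# Prompts for the password interactively, keeping it out of shell history,
# and prints a salted sha-512 crypt hash ($6$<salt>$<hash>)
openssl passwd -6
```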