zx23 blog

Using ZFS for Easy Offsite Backups

Continuing our series of short posts and cool tips, here’s another one for your arsenal.

You’re probably familiar with zfs and its ability to send and receive dataset snapshots. Did you also know that it can send incremental snapshots, so you can keep your remote backup up to date with minimal transfer? Or how about an easy way to sync data from an old, about-to-be-retired server to a new 128-core one?

What you will need:

  • source server running zfs
  • destination server running zfs
  • root access on both of these servers
  • a network between the two servers, preferably a high-bandwidth one (add more links for redundancy or aggregation)

Start by making a snapshot of the source dataset.

# zfs snapshot tank/srv@`date +%s`

Prepare the zfs receive process on the remote server. Note that we’re using nc(1) here to send and receive the stream across an encrypted VPN tunnel. If your servers are connected over the public Internet, you should use ssh here instead. The -I option to netcat specifies the size of the TCP receive buffer.

# nc -l -I 65536 128core.server.zx23.net 1337 | zfs receive tank/srv

Back on the source server, start the transfer and pipe it into nc(1). Remember to specify a matching TCP send buffer size and also add an IPv4 TOS value to maximise throughput.

# zfs send -v tank/srv@1440759638 | nc -O 65536 -T throughput 128core.server.zx23.net 1337
send from @ to tank/srv@1440759638 estimated size is 228.00G
total estimated size is 228.00G
TIME        SENT   SNAPSHOT
12:05:13   10.5M   tank/srv@1440759638
12:05:14   25.5M   tank/srv@1440759638
12:05:15   39.3M   tank/srv@1440759638

So that took care of the initial backup. From now on, you can send incremental snapshots only. You first take a new snapshot of your dataset and then pass a -i option to zfs send to indicate what the previous snapshot of this dataset was.

# zfs snapshot tank/srv@`date +%s`
# zfs send -v -i tank/srv@1440759638 tank/srv@1440795521 | nc -O 65536 -T throughput 128core.server.zx23.net 1337

The zfs receive command on the remote server needs no modification. zfs will realise it’s receiving an incremental backup and, provided the destination filesystem exists, will do the right thing.

Note that if the destination filesystem has been modified (which can happen if you have atime enabled on it), you will get the following error: cannot receive new filesystem stream: destination 'tank/srv' exists. must specify -F to overwrite it. Do as it suggests, and don’t worry - it won’t cause the entire filesystem to be transferred from scratch.
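
For example, the receiving side would then look something like this (a sketch, reusing the nc invocation from earlier):

# nc -l -I 65536 128core.server.zx23.net 1337 | zfs receive -F tank/srv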

Using Lagg(4) to Roam Between Wired and Wireless Networks

This is such a useful setup to have on any worthy Ono Sendai that I’m just going to post the config and leave you to enjoy it.

(PS: if you don’t know what this does, read the lagg(4) man page and see if you can figure it out.)
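
For orientation, a minimal failover lagg(4) setup in /etc/rc.conf looks roughly like the sketch below (em0, ath0 and the MAC address are illustrative; failover mode wants the wireless interface’s MAC overridden to match the wired one):

/etc/rc.conf
ifconfig_em0="up"
wlans_ath0="wlan0"
# give the wireless interface the wired NIC's MAC address (illustrative value)
create_args_wlan0="wlanaddr 00:11:22:33:44:55"
ifconfig_wlan0="WPA"
cloned_interfaces="lagg0"
ifconfig_lagg0="up laggproto failover laggport em0 laggport wlan0 DHCP"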

SaltStack (and Python Client API) Is Cool

A little while ago we switched from Puppet to using SaltStack to orchestrate and manage configuration on our fleet of servers and workstations. The main reason for the switch was Salt’s modular design - everything is pluggable. You don’t like the default Jinja templating language? Switch to Genshi, Mako or, our favourite, PyObjects renderer instead. You don’t want to store job data in files on the master? Send it to a Kafka topic, Elasticsearch, HipChat or one of several databases. Or write your own.
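
As a taste of the pyobjects renderer, a state written with it looks roughly like this (a minimal sketch following the renderer’s documented interface; the package and service names are just examples):

#!pyobjects
# install ntp and keep ntpd running, expressed as Python rather than YAML + Jinja
Pkg.installed('ntp')
Service.running('ntpd', require=Pkg('ntp'))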

As our state coverage grew, a need arose to easily see which states apply to a given minion. The show_highstate function from the state module gives us a verbose dump of the highstate in all its glory, and we could, with the help of awk and a pipe or two, get the list of state names alone, but that’s tedious. Here’s a snippet of a show_highstate call, in the default output format for highstate; the name of the state is the value of the __sls__ key:

zygis_authorized_keys:
    ----------
    __env__:
        base
    __sls__:
        users
    file:
        |_
          ----------
          name:
              /home/zygis/.ssh/authorized_keys
        |_
          ----------
          source:
              salt://users/templates/authorized_keys.jinja
        |_
          ----------
          template:
              jinja
        |_
          ----------
          user:
              zygis
        |_
          ----------
          group:
              zygis
        |_
          ----------
          mode:
              0644
        |_
          ----------
          makedirs:
              True

Instead, we achieve what we need with a few lines of Python, using the Python client API:
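
A minimal sketch of what such a script might look like, using salt.client.LocalClient to call state.show_highstate and collect the unique __sls__ values (the minion_states.py filename and exact output formatting are assumptions):

#!/usr/bin/env python
# minion_states.py - list the SLS files that make up a minion's highstate
import sys

import salt.client


def main(minion_id):
    client = salt.client.LocalClient()
    # ask the minion for its compiled highstate and grab the return data
    highstate = client.cmd(minion_id, 'state.show_highstate').get(minion_id, {})
    # every state declaration records the SLS file it came from in __sls__
    print(sorted({decl['__sls__'] for decl in highstate.values() if '__sls__' in decl}))


if __name__ == '__main__':
    main(sys.argv[1])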

Put this script on your master and run it as root, giving it a minion ID as an argument. Here it is in action:

# python ./minion_states.py some_minion
['base.pkg', 'base.service', 'bind', 'duo', 'duo.linux', 'groups',
'ntp', 'ssh', 'sudo', 'syslog', 'users']

Didn’t I say that Salt is cool?

DIY “What Is My IP” Service

Today a need arose for a service that would tell us our public IP address. This is such a common question to ask the Internet that Google and DuckDuckGo provide the answer directly in the search results, saving you an extra click and a TCP connection or two to one of the many websites offering the same information. (Interestingly, neither Yahoo nor Bing has this feature, but that’s a different story.)

Instead of relying on one search engine or a third-party site, we looked at how we could build something of our own to solve this problem. What we came up with was rather obvious - make use of the return directive in the nginx rewrite module, which can return any given status code and text to the client.

An excerpt from the rewrite module documentation concerning the return directive:

Syntax:     return code [text];
            return code URL;
            return URL;
Default:    -
Context:    server, location, if

So, based on that, we arrive at the following location block:

location /ip {
    add_header Content-Type text/plain;
    return 200 '$remote_addr';
}

And here’s the output it produces:

% curl example.net/ip
10.10.10.10

Neat, isn’t it? But hey, what if the consumer of this data is a machine instead of a human? Can we make the output more machine-friendly? Yes we can - this location returns JSON. Oh my! Or, more accurately, 01001111 01101000 00100000 01101101 01111001 00100001, as it’s your computer yelling that.

location /ip.json {
    add_header Content-Type application/json;
    return 200 '{\n"ip": "$remote_addr"\n}\n';
}

Don’t just take our word for it, see it for yourself:

% curl example.net/ip.json
{
    "ip": "10.10.10.10"
}

And then we find the list of nginx variables and we’re on our way to building something resembling httpbin, but without the overhead of a high-level programming language.
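
For instance, pulling in a couple more variables gives a rough, httpbin-style headers endpoint (a sketch - the /headers.json path and the choice of $host and $http_user_agent are just an illustration):

location /headers.json {
    add_header Content-Type application/json;
    return 200 '{\n"host": "$host",\n"user-agent": "$http_user_agent"\n}\n';
}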

Now, nginx is our primary webserver for most websites, but we also have Varnish in front for caching. So, can we return the client’s IP directly from Varnish and save a network hop up to nginx? Yes we can, but with a small caveat - we lose the cool Guru Meditation error page if we want to return the IP address alone.

So here’s the VCL that will do this for you:

sub vcl_recv {
    if (req.url == "/ip") {
        error 200 client.ip;
    }
}

sub vcl_error {
    synthetic obj.response;
    return (deliver);
}

Load that VCL and see it working. Note how the response doesn’t include a trailing newline character, so our shell prompt character, %, ends up on the same line. That’s due to a limitation in VCL: type IP doesn’t support appending with the + operator, and if you try it you’ll get the following error from the VCL compiler: Operator + not possible on type IP.

% curl lab.zx23.net/ip
10.10.10.10%

The exercise of writing a VCL to return the IP as JSON is left to the reader. Happy meditation.

Setting Up HE IPv6 Tunnel on FreeBSD

While this varies by country, most residential ISPs in the UK are far from offering an IPv6 service. Instead, they seem to be dedicating a substantial amount of time to making promises of ‘delivering it in the future’. The latest from BT, one of the largest residential ISPs here, is that ‘BT is thinking to offer IPv6 to their customers by the end of year 2015’. So they’re still thinking about it. And implementing Carrier Grade NAT. Good for them.

Meanwhile, let’s set up an IPv6 tunnel with Hurricane Electric, using their Tunnel Broker service. Yes, it’s free and it’s awesome. The Tunnel Broker service utilises the 6in4 mechanism to encapsulate IPv6 packets in IPv4 packets.

Once you’re signed up and logged in, navigate to the Create Regular Tunnel page, enter your IPv4 address and choose a tunnel server. The closer the tunnel server is to you, the lower the latency of your IPv6 connectivity, so choose carefully: run ping / traceroute against the candidate servers to find the one with the lowest latency.

When done, click Create Tunnel and wait a few seconds while some tests are run. For example, your public IP will need to be pingable; if it isn’t, you’ll be asked to allow it.

On the next page, Tunnel Details, give this tunnel a sensible description. Now we just need to set up our FreeBSD firewall. Let’s use the following IPs in the example (you are the client and the HE tunnel server is the server):

  • Server IPv4: 203.0.113.42
  • Server IPv6: 2001:db8:dead:beef::1
  • Client IPv4: 198.51.100.15
  • Client IPv6: 2001:db8:dead:beef::2

Here we create a gif(4) interface, assign the IPv4 and IPv6 addresses and configure the default IPv6 gateway:

# ifconfig gif0 create
# ifconfig gif0 tunnel 198.51.100.15 203.0.113.42
# ifconfig gif0 inet6 2001:db8:dead:beef::2 2001:db8:dead:beef::1 prefixlen 128
# ifconfig gif0 inet6 -ifdisabled
# route -n add -inet6 default 2001:db8:dead:beef::1

If you are running without a firewall (wild wild west!), you should now have full IPv6 connectivity - give it a go:

% traceroute6 google.com
traceroute6 to google.com (2a00:1450:4009:801::200e) from 2001:db8:dead:beef::2, 64 hops max, 12 byte packets
1  2001:db8:dead:beef::1  6.809 ms  6.793 ms  6.970 ms
2  v116.core1.lon2.he.net  10.978 ms  7.017 ms  17.997 ms
3  2001:7f8:4::3b41:1  7.484 ms  7.108 ms  7.487 ms
4  2001:4860::1:0:3067  7.466 ms  12.622 ms  7.474 ms
5  2001:4860:0:1::23  7.485 ms  7.364 ms  7.473 ms
6  2a00:1450:4009:801::200e  7.258 ms  7.373 ms  7.203 ms

Otherwise, let’s add the following rules to our pf.conf so that the traffic can pass through:

/etc/pf.conf
# create some useful macros
ext_if6="gif0"
v6_gw="203.0.113.42"

# enable logging on tunnel interface; we like to see what goes on
set loginterface $ext_if6

# allow icmp6 on the tunnel interface - this is required for the Neighbor
# Discovery Protocol to work
pass quick on $ext_if6 proto icmp6 all keep state label "icmp6"

# default block all incoming traffic on the tunnel interface, log and
# label for counters
block in log on $ext_if6 all label "block-all-external6"

# pass all outgoing traffic on the tunnel, label for counters
# note the use of 'keep state' rather than 'modulate state' - this is to
# work around a bug in PF IPv6 handling
pass out quick on $ext_if6 all keep state label "pass-all-external6"

# allow incoming and outgoing connections to the tunnel server IPv4
# address for protocol 41 (6in4)
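# ($ext_if is the external IPv4 interface, e.g. the NIC facing your ISP,
# and is assumed to be defined earlier in pf.conf)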
pass in on $ext_if inet proto 41 from $v6_gw to $ext_if modulate state
pass out on $ext_if inet proto 41 from $ext_if to $v6_gw modulate state

Ok, now you’re good to test and make sure it all works as expected:

% ping6 -c 3 eff.org
PING6(56=40+8+8 bytes) 2001:db8:dead:beef::2 --> 2607:f258:102:3::2
16 bytes from 2607:f258:102:3::2, icmp_seq=0 hlim=58 time=150.981 ms
16 bytes from 2607:f258:102:3::2, icmp_seq=1 hlim=58 time=154.884 ms
16 bytes from 2607:f258:102:3::2, icmp_seq=2 hlim=58 time=164.709 ms

--- eff.org ping6 statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 150.981/156.858/164.709/5.776 ms

You can use tcpdump(1) on the external, non-tunnel, interface to see the 6in4 mechanism in action:

# tcpdump -ni em0 host 203.0.113.42
15:47:10.057887 IP 198.51.100.15 > 203.0.113.42: IP6 2001:db8:dead:beef::2 > 2607:f258:102:3::2: ICMP6, echo request, seq 0, length 16
15:47:10.222577 IP 203.0.113.42 > 198.51.100.15: IP6 2607:f258:102:3::2 > 2001:db8:dead:beef::2: ICMP6, echo reply, seq 0, length 16
15:47:11.063681 IP 198.51.100.15 > 203.0.113.42: IP6 2001:db8:dead:beef::2 > 2607:f258:102:3::2: ICMP6, echo request, seq 1, length 16
15:47:11.233365 IP 203.0.113.42 > 198.51.100.15: IP6 2607:f258:102:3::2 > 2001:db8:dead:beef::2: ICMP6, echo reply, seq 1, length 16
15:47:12.063405 IP 198.51.100.15 > 203.0.113.42: IP6 2001:db8:dead:beef::2 > 2607:f258:102:3::2: ICMP6, echo request, seq 2, length 16
15:47:12.220879 IP 203.0.113.42 > 198.51.100.15: IP6 2607:f258:102:3::2 > 2001:db8:dead:beef::2: ICMP6, echo reply, seq 2, length 16

And finally, let’s not forget to add the required lines to /etc/rc.conf so that the changes persist after a reboot - you are going to reboot this server to test your changes, right?

/etc/rc.conf
gif_interfaces="gif0"
gifconfig_gif0="198.51.100.15 203.0.113.42"
ifconfig_gif0_ipv6="inet6 2001:db8:dead:beef::2 2001:db8:dead:beef::1 prefixlen 128"
ipv6_defaultrouter="-iface gif0"

DIY DDNS With Nsupdate and Bind

We have 3-letter location codes that we use to identify remote sites, and some of those sites are on dynamic IP connections. As a result, we need a way to keep the DNS records for all locations up to date.

Yes, many DNS providers offer a free dynamic DNS service, and most higher-end xDSL and cable routers have a dynamic DNS client built in. They may even do a pretty good job once you invest a couple of minutes into getting it all set up, but where’s the fun in that?

Let’s start by generating a TSIG (Transaction SIGnature) key, which we will use to authenticate our updates to the nameserver. dnssec-keygen is part of the bind package on FreeBSD. Note that we’re using the example.net. domain here; you must change it to the domain you want to update. For an explanation of the other options, refer to the dnssec-keygen(8) manual.

% dnssec-keygen -a HMAC-MD5 -b 512 -n HOST example.net.
Kexample.net.+157+32671

This created two files:

% ls
Kexample.net.+157+32671.key
Kexample.net.+157+32671.private

% cat Kexample.net.+157+32671.key
example.net. IN KEY 512 3 157 iczhJ1Fmo24DeaiQ0sgeJjKV6jd2fgsugx8OdDhnEFvxdC2MqEHxeICPtw+DyuOnOmzVF6wOEKbwXdTwg6aXdQ==

% cat Kexample.net.+157+32671.private
Private-key-format: v1.3
Algorithm: 157 (HMAC_MD5)
Key: iczhJ1Fmo24DeaiQ0sgeJjKV6jd2fgsugx8OdDhnEFvxdC2MqEHxeICPtw+DyuOnOmzVF6wOEKbwXdTwg6aXdQ==
Bits: AAA=
Created: 20140815223024
Publish: 20140815223024
Activate: 20140815223024

Now we set up this key on the bind server we’ll be sending the zone updates to. Create /var/named/etc/namedb/example.net.key with the following contents; the secret is the same as the Key value in the files generated by dnssec-keygen previously:

/var/named/etc/namedb/example.net.key
key "example.net." {
    algorithm hmac-md5;
    secret "gdmqtLsXiI954A3RsmmhNedcBxYjl62UBhHux99NyuKICV7Gw5AvPx4lLELhuZYOEXGZBU7m1UgNEW36TM1A9g==";
};

Then include that key file and configure your zone in /var/named/etc/namedb/named.conf to allow updates signed with it:

/var/named/etc/namedb/named.conf
zone "example.net" {
    type master;
    file "/etc/namedb/master/example.net.db";
    allow-update { key "example.net."; };
};

Now reload bind and let’s move on to testing this using the nsupdate tool:

# named-checkconf /var/named/etc/namedb/named.conf
# rndc reload

% nsupdate -k Kexample.net.+157+32671.private
> update delete lhr.example.net A
> update add lhr.example.net 86400 A 10.22.23.101
> send

You’ll notice that by default nsupdate doesn’t give any output unless there was an error. Instead, you can issue an answer command to nsupdate after send to see the server’s response:

# nsupdate -k Kexample.net.+157+32671.private
> update delete lhr.example.net A
> update add lhr.example.net 86400 A 10.1.1.10
> send
> answer
Answer:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:  32101
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;example.net.                      IN      SOA

;; TSIG PSEUDOSECTION:
example.net.               0       ANY     TSIG    hmac-md5.sig-alg.reg.int. 1408203323 300 16 BoDENzUS1d+yT0Fyd6fq6A== 32101 NOERROR 0

Ok, so that’s all working as expected. The last step is to script something that updates DNS automatically whenever the external IP changes. Here’s a basic shell script; notice it doesn’t do any error handling :)

#!/bin/sh

myip=`ifconfig tun0 | awk '/inet/ { print $2 }'`
oldip=`dig +short @ns0.example.net lhr.example.net`
key="/etc/namedb/Kexample.net.+157+32671.private"

updatedns() {
    logger -t 'ddnsupdate' "Updating lhr.example.net: $oldip -> $myip"
    cat <<EOF | nsupdate -k "$key"
update delete lhr.example.net A
update add lhr.example.net 86400 A $myip
send
EOF
}

if [ "$myip" != "$oldip" ]; then
    updatedns
fi
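
Drop the script somewhere sensible and run it from cron every few minutes; for example (the path and interval here are just a suggestion):

/etc/crontab
# check for an external IP change every five minutes
*/5  *  *  *  *  root  /usr/local/sbin/ddnsupdate.sh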

Tuning ZFS Scrub

A zfs scrub is a pain on one of our servers: it consumes all of the disk IO, and any interactive work on the box becomes annoying. Our pool is a mirror of two identical Samsung HD754JJ disks, we’re running 9.2-RELEASE, and the box has 8GB of RAM and default ZFS settings.

Here’s the IO load during scrub as shown by iostat(1):

% iostat -x ada0 ada1 1 5
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0      11.3  31.0   428.6  1155.5   10  17.0  11 
ada1      11.3  31.1   429.5  1154.9    2  11.3   8 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0     120.0   0.0 12985.6     0.0   10 100.2 100 
ada1     103.0   0.0 12067.8     0.0   10  31.9  51 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0     187.8   0.0 21942.1     0.0   10  50.7  98 
ada1     192.8   0.0 21850.7     0.0    7  48.4  99 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0     148.9   0.0 15679.7     0.0   10  56.3 101 
ada1     149.8   0.0 15653.8     0.0    7  39.9  73 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0      86.9   1.0  7765.2     4.0    2 123.3  99 
ada1      65.9   1.0  5191.3     4.0    0  47.7  55

And here’s our current pool status (yes, we also seem to have a performance issue here - the scrub should go much faster):

% zpool status
  pool: rpool
 state: ONLINE
  scan: scrub in progress since Wed Aug 13 04:59:44 2014
        247G scanned out of 569G at 10.1M/s, 9h5m to go
        0 repaired, 43.38% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        rpool                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/6f4e7b58-cdb7-11df-b6d7-xxxxxxxxxxxx  ONLINE       0     0     0
            gptid/6ffa234a-cdb7-11df-b6d7-yyyyyyyyyyyy  ONLINE       0     0     0

errors: No known data errors

Direct tuning can be done by adjusting some sysctls; the relevant ones are below (with their default values shown).

vfs.zfs.no_scrub_prefetch: 0
vfs.zfs.scrub_delay: 4
vfs.zfs.scan_idle: 50
vfs.zfs.vdev.max_pending: 10

After some testing with different settings, we settled on the following configuration. Note that we’re interested in having a responsive server during the scrub here and don’t care if the scrub takes a long time to complete.

vfs.zfs.no_scrub_prefetch: 1
vfs.zfs.scrub_delay: 15
vfs.zfs.scan_idle: 1000
vfs.zfs.vdev.max_pending: 3

In summary, the above disables scrub prefetch; limits scrub to roughly 66 IOPS on each device (1000 ticks per second / 15 ticks of delay ≈ 66); tells ZFS that the pool can be considered idle 1000ms after the last activity; and sets the maximum number of pending IO operations per device to 3.
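
To try these out, set them with sysctl(8) and persist them in /etc/sysctl.conf (a sketch; if any of them turn out to be read-only tunables on your release, set them in /boot/loader.conf instead):

# sysctl vfs.zfs.no_scrub_prefetch=1
# sysctl vfs.zfs.scrub_delay=15
# sysctl vfs.zfs.scan_idle=1000
# sysctl vfs.zfs.vdev.max_pending=3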

You can read excellent descriptions of these (and other ZFS tunables) in this ZFS guide.

Now let’s see what iostat(1) looks like with these changes:

% iostat -x ada0 ada1 1 5
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0      11.4  30.9   436.1  1155.3    0  17.0  11 
ada1      11.3  31.1   437.0  1154.7    0  11.3   8 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0      84.9   0.0  2751.2     0.0    1   6.4  22 
ada1      93.9   0.0  2922.0     0.0    0   3.7  19 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0      73.9  18.0  3377.6  2065.9    0   6.0  27 
ada1      71.9  18.0  3250.2  2065.9    0   6.1  23 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0      65.9  87.9   810.7   579.9    0   5.7  34 
ada1      61.9  88.9   800.2   579.9    0   3.1  19 
                        extended device statistics  
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b  
ada0      93.9  84.9  3690.8  4320.2    3   8.3  55 
ada1     121.9  84.9  4299.7  4320.2    0   5.6  40

The scrub seems to be running just as fast (when the system isn’t doing any other IO):

% zpool status
  pool: rpool
 state: ONLINE
  scan: scrub in progress since Wed Aug 13 04:59:44 2014
        265G scanned out of 569G at 10.3M/s, 8h22m to go
        0 repaired, 46.63% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        rpool                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/6f4e7b58-cdb7-11df-b6d7-xxxxxxxxxxxx  ONLINE       0     0     0
            gptid/6ffa234a-cdb7-11df-b6d7-yyyyyyyyyyyy  ONLINE       0     0     0

errors: No known data errors

And interactively the server is much more responsive, so that’s objective complete.

Why Is LastPass Hitting Our Webserver?

I occasionally leave varnishlog running after testing / debugging our webstack / webapp configuration, and every once in a while I spot an interesting request coming in. Here’s one:

6 SessionOpen  c 38.127.167.46 47322 x.x.x.x:80
6 ReqStart     c 38.127.167.46 47322 319643459
6 RxRequest    c HEAD
6 RxURL        c /
6 RxProtocol   c HTTP/1.0
6 RxHeader     c Host: mail.example.com
6 RxHeader     c Accept: text/html, text/plain, text/css, text/sgml, */*;q=0.01
6 RxHeader     c Accept-Language: en
6 RxHeader     c User-Agent: Lynx/2.8.8dev.9 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/2.12.14

Reverse DNS lookup on the source IP tells us it belongs to LastPass:

% dig -x 38.127.167.46 +short
38.127.167.46.LastPass.com.

So, why are LastPass making HEAD requests to our webserver, with Lynx?

Nothing turned up in the few minutes I spent searching the Internet for similar reports. A post-Heartbleed post on the LastPass blog announces a new feature in the Security Check tool, which LastPass users can run to automatically see if any of their stored sites and services were ‘1) Affected by Heartbleed, and 2) Should update their passwords for those accounts at this time’. That suggests they are checking web server headers and certificate issue dates (either on demand or by crawling around), but it can’t be what we’re seeing here, as the request in question was made over plain HTTP and didn’t follow the 301 redirect to HTTPS the webserver issued.

To be continued?

Happy Late World IPv6 Day!

We’re running two days (and 3 years! :) late, but who cares - from today all services on zx23 infrastructure are dual-stacked and available over both IPv4 & IPv6!

PS: Yeah, the image looks lame on the left, but Octopress refuses to center it no matter what I try!

More PF, IPv6 and TCP Issues

Turns out there’s another issue with PF, IPv6 and TCP - this time concerning the reassemble tcp packet scrubbing option.

Again, don’t turn it on for IPv6, as you’ll have issues making incoming TCP connections over IPv6 (outgoing didn’t seem to be affected in my tests).

You’ll want to instruct PF to apply the reassemble tcp option to IPv4 only:

scrub inet all reassemble tcp

I’m curious to know whether this issue is specific to the FreeBSD version of PF, or whether the latest OpenBSD version is affected as well.