Monday, November 6, 2017

Become a part of the RIPE Atlas community

One of my primary objectives is to give back to the world what I have learned, and in this article I am doing exactly that. Recently a good friend of mine introduced me to the RIPE Atlas community, where you can join as a member and host a probe that contributes to better, real-time worldwide network troubleshooting.

At first I was puzzled about how it works and why I should apply to host a probe. After a demo I was convinced that these User Defined Measurements (UDMs) would help my work: the demo showed reports of network connectivity, from pings to SSL certificate checks, run from probes worldwide.

So I applied, and you can too! You can apply here. After some time I thought my application had been rejected, because I had not received any response from RIPE for about 1-2 weeks after applying. But on 17 October 2017 I got an email from RIPE saying they had shipped the unit! I was excited, but it took a while to reach Malaysia as the parcel travelled from the Netherlands.

On 31 October 2017 the parcel arrived in my mailbox! Take a look below.

It was really easy after that: the probe ID is printed on a sticker on the probe, and once it is registered on the site you are ready to plug the probe into the network. It was hassle-free - the unit is USB powered, and once plugged into a router network interface it took no time to be detected by the RIPE Atlas site.

You can check the probe status here. If your probe is up and serving User Defined Measurements requested by other users, you start to earn credits. These credits can then be used for your own User Defined Measurements! On the second day of hosting I had earned 538k credits, which is really cool.

If you are a system or network admin, I think this will help you troubleshoot whenever you need to measure connectivity from network devices worldwide.
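To give an idea of what spending those credits looks like, here is a sketch of creating your own measurement through the RIPE Atlas REST API (v2). The measurement definition below is an illustrative example - the target, probe count and description are placeholders, and $ATLAS_API_KEY stands for your own API key:

```shell
# Sketch: a one-off ping measurement definition for the RIPE Atlas API v2.
# Target, probe selection and description are placeholder values.
cat > udm.json <<'EOF'
{
  "definitions": [
    {
      "type": "ping",
      "af": 4,
      "target": "example.com",
      "description": "ping example.com from a handful of worldwide probes"
    }
  ],
  "probes": [
    { "requested": 5, "type": "area", "value": "WW" }
  ],
  "is_oneoff": true
}
EOF
# Submitting it costs credits earned by hosting a probe (not run here):
#   curl -H "Authorization: Key $ATLAS_API_KEY" \
#        -H "Content-Type: application/json" \
#        -d @udm.json https://atlas.ripe.net/api/v2/measurements/
grep -c '"type"' udm.json
```

The probe selection asks for 5 probes anywhere in the world ("area": "WW"); you can also pin measurements to specific countries, ASNs or probe IDs.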

Sunday, November 5, 2017

Going IPv6-only - Gamers, don't do this at home!

Recently I attended a talk about Cisco's IPv6-only campus building in San Jose. While their internal network is IPv6-only, they can still talk to IPv4 hosts using NAT64. This motivated me to try it out at home.

Current setup

I'm already running a nicely working dual-stack setup. My ISP assigns me one semi-static IPv4 address (officially it's dynamic, but it never actually changes) and a generous static /48 over DHCPv6-PD. Internally I have a bunch of DHCP/DHCPv6/SLAAC client devices and two servers hosting a few VMs with static IPv4 and IPv6 addresses.


In this experiment, I want to disable IPv4 connectivity for my client devices. For target hosts only accessible over IPv4 I will set up a DNS64 / NAT64 environment. I want to find out how much my usual activities are affected, for example browsing, checking email and gaming.


  • If something breaks horribly, I want to be able to go back easily and quickly.
  • I only want to test the impact on client devices ("end user experience") - my infrastructure hosts should still be able to communicate over IPv4 where needed.

The plan

  • Set up NAT64 in a VM
  • Set up DNS64
  • Disable DHCPv4 and release all IPv4 addresses on my clients. I'm not going to actually disable their IPv4 stack; I don't care if Windows does automatic IPv4 shenanigans on the local network.

NAT64 setup

First, I created a new virtual machine on my KVM host and installed a standard CentOS 7 ("Infrastructure Server"). For the actual NAT64 translation I decided to install Jool. There are alternatives around, but this seemed to be the most current one.

There are no packages for CentOS available, but the installation is still pretty simple:


yum groupinstall "Development Tools"
yum install epel-release
yum install dkms libnl3-devel kernel-devel

Build the kernel module:

dkms install Jool-3.5.4

Build the userspace application:

cd Jool-3.5.4
make install

Then we can start the translation. For this I wrote a simple script:

# enable routing
sysctl -w net.ipv4.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.forwarding=1

# disable offloading - see the Jool FAQ
ethtool --offload eth0 gro off

# assign  64:ff9b::/96
/sbin/ip address add 64:ff9b::/96 dev eth0

# start jool
/sbin/modprobe jool pool6=64:ff9b::/96

# enable logging
jool --logging-bib=true
jool --logging-session=true

Two things to note:
  • I've assigned the standard range 64:ff9b::/96 to the NAT64 box - this is the suggested well-known prefix, and it is required if you plan on using, for example, the Google DNS64 service instead of your own. If you only roll your own DNS64, you could use a different range here.
  • The script above disables offloading in the VM - but it also needs to be done on the VM host. I didn't realise this at first, and it resulted in horrible performance. I should have read the FAQ first…

Finally, once Jool is running, we also need to set up a route to this range. I could probably tinker with radvd on this box to announce the range directly, but it seemed easier to just set up a static route on my gateway (a Ubiquiti EdgeRouter), and this worked fine.

set protocols static route6 64:ff9b::/96 next-hop 2a02:999:1337:23::50

Now we should be able to reach IPv4 targets over IPv6 internally. You can test this simply by concatenating the IPv6 prefix above with the IPv4 address:

ping 64:ff9b::
PING 64:ff9b:: 56 data bytes
64 bytes from 64:ff9b::808:808: icmp_seq=1 ttl=57 time=1.18 ms
64 bytes from 64:ff9b::808:808: icmp_seq=2 ttl=57 time=0.996 ms

DNS64 setup

Now that we have a working NAT64 gateway, we also need to tell the IPv6 clients when to actually use it. The principle is simple: the client asks our DNS64 resolver for the AAAA record of its target, and our resolver passes this query on. If it gets a positive answer, it passes it back to the client - the target is reachable over IPv6 directly and we don't need to involve NAT64. But if the server responds with NODATA, our resolver synthesises AAAA records itself based on the target's A records. The synthesised AAAA records point into the NAT64 range defined earlier.

For example, if the target has AAAA records, these are returned as-is. But an IPv4-only target does not. In that case the resolver takes the target's A record ( and builds the AAAA record 64:ff9b::c228:d932 (which corresponds to the 4-in-6 notation 64:ff9b::
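The synthesis is plain hex concatenation of the well-known prefix and the four IPv4 octets. You can reproduce the mapping for the address above in the shell (194, 40, 217, 50 are the octets corresponding to c228:d932):

```shell
# Concatenate the NAT64 prefix with the IPv4 octets in hex:
# 194 -> c2, 40 -> 28, 217 -> d9, 50 -> 32
printf '64:ff9b::%02x%02x:%02x%02x\n' 194 40 217 50
# → 64:ff9b::c228:d932
```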

Google provides such a public DNS64 service on the addresses 2001:4860:4860::6464 and 2001:4860:4860::64.

Currently my clients are configured to point to a local dnsdist - a very flexible DNS load balancer. While it's admittedly a bit of overkill to have a load balancer in my LAN, dnsdist makes experiments like these super easy, because it allows me to simply switch between standard and DNS64 backends, or between different DNS64 implementations, without reconfiguring any of my clients. They always just see the dnsdist IP as their resolver, which they get via SLAAC (radvd-options "RDNSS 2a02:999:1337:23::88 {};"). dnsdist also provides nice real-time graphs and inspection possibilities.
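For illustration, the dnsdist side of that switching can be as small as the following sketch (the backend addresses, ports and pool names here are placeholders, not my actual config):

```lua
-- plain recursive backend
newServer({address="[::1]:5300", pool="plain"})
-- PowerDNS Recursor instance doing DNS64
newServer({address="[::1]:5353", pool="dns64"})
-- route all queries to the DNS64 pool while the experiment runs
addAction(AllRule(), PoolAction("dns64"))
```

Switching back is just changing the PoolAction target and reloading dnsdist - no client ever notices.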

Behind my dnsdist I have a local PowerDNS Recursor, which we will now configure to do DNS64.

We copy the example lua config from the documentation and adapt it to use our 64:ff9b::/96 range. So our dns64.lua file looks like this:

-- this small script implements dns64 without any specials or customization
prefix = "64:ff9b::"

function nodata ( dq )
 if dq.qtype ~= pdns.AAAA then
   return false
 end  --  only AAAA records

 -- don't fake AAAA records if DNSSEC validation failed
 if dq.validationState == pdns.validationstates.Bogus then
    return false
 end

 dq.followupFunction = "getFakeAAAARecords"
 dq.followupPrefix = prefix
 dq.followupName = dq.qname
 return true
end

-- the zone below is the reverse (ip6.arpa) form of the prefix address above
function preresolve ( dq )
 if dq.qtype == pdns.PTR and dq.qname:isPartOf(newDN("")) then
   dq.followupFunction = "getFakePTRRecords"
   dq.followupPrefix = prefix
   dq.followupName = dq.qname
   return true
 end
 return false
end
We save the script in /etc/pdns-recursor/dns64.lua and then activate it in /etc/pdns-recursor/recursor.conf:

lua-dns-script=/etc/pdns-recursor/dns64.lua
Now we're ready for prime time and should be able to resolve IPv4-only targets. Let's test (from any box in my LAN):

dig aaaa +short


Just to make sure, we also want to test that IPv6-enabled targets still resolve correctly. They should *not* be rewritten to our 64:ff9b::/96 prefix!

dig aaaa +short

all good!

Go live

To force the clients to use IPv6, I simply disable the DHCPv4 server on my gateway and release the IPv4 addresses (on Windows: ipconfig /release, or disable IPv4 in the adapter settings).

I also verify that the clients can't reach anything over IPv4 anymore.

At the same time I open a console on my NAT64 box and tail the logs to see what traffic gets NATed:

Nov 03 14:33:01 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:1 (GMT) - Added session 2a02:999:1337:23::100#2090|64:ff9b::c228:d932#2090|||ICMP
Nov 03 14:33:01 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:1 (GMT) - Mapped 2a02:999:1337:23::100#2090 to (ICMP)
Nov 03 14:33:05 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:5 (GMT) - Forgot session 2a02:999:1337:1337:3c75:4fdc:8b1e:64c#53261|64:ff9b::57ec:c857#443|||TCP
Nov 03 14:33:05 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:5 (GMT) - Forgot 2a02:999:1337:1337:3c75:4fdc:8b1e:64c#53261 to (TCP)
Nov 03 14:33:05 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:5 (GMT) - Forgot session 2a02:999:1337:1337:3c75:4fdc:8b1e:64c#53259|64:ff9b::57ec:c857#443|||TCP
Nov 03 14:33:05 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:5 (GMT) - Forgot 2a02:999:1337:1337:3c75:4fdc:8b1e:64c#53259 to (TCP)
Nov 03 14:33:05 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:5 (GMT) - Forgot session 2a02:999:1337:1337:3c75:4fdc:8b1e:64c#53260|64:ff9b::57ec:c857#443|||TCP
Nov 03 14:33:05 nat64 kernel: NAT64 Jool: 2017/11/3 13:33:5 (GMT) - Forgot 2a02:999:1337:1337:3c75:4fdc:8b1e:64c#53260 to (TCP)

All is fine...

I start browsing, reading mail… and you know what? Everything just works(™). As mentioned earlier, in my first attempt the performance was horrible, but after disabling offloads on my VM host this problem is gone. Browsing is fast, and I don't notice any difference between IPv6 and IPv4-only websites. I test video streaming sites as well - no issues. My roomie tries out her office VPN, Citrix and Skype calls; again, no issues there, even though the traffic gets NATed.

The only thing I notice is that I can't log in to my router's web GUI over IPv6 ("Unable to load router configuration") - but this is an internal problem in my LAN and would be fixable as well.

… until you want to play a game

Oh boy. Before I started the experiment I imagined there might be some issues with games, but it's even worse than I thought. First of all, GeForce Experience tells me that there is a new driver available - but it just can't download it ("Unable to connect to NVIDIA"). Well, no surprise there; this NVIDIA piece of s...oftware hasn't been a shining knight of bug-freeness anyway. At least I can still download the drivers from the website.

Let's start Steam.

So... yeah, that doesn't look so great. Offline mode it is. A quick Google search shows this bug was reported 4 years ago already (DNS people: check by whom ;-)). The report is for Steam on Linux, but Windows has the same issue.

The Ubisoft launcher is no better ("A Ubisoft service is not available at the moment"). Again, I can start Assassin's Creed Origins in offline mode, so there's at least that.

How about Blizzard? The client starts fine but can't update games. Overwatch does not even start ("unable to locate resources"); Hearthstone at least makes it to the main menu, but you can't enter a game.

The Epic Games launcher started fine the first time, and Unreal Tournament can be fired up as well - it doesn't find any online games, though. I quickly re-enabled IPv4 to test whether it finds games then (it does) and disabled IPv4 again. After that, the Epic launcher showed an error. A little later it worked again.

The Origin client sometimes works and sometimes doesn't ("You are offline"). Battlefield 1 can be started, but only the offline campaign is available.

At that point I gave up - IPv6-only and gaming do not mix (yet). Well, at least I can do some backseat gaming on (it works fine in the browser, but the desktop app seems to have problems displaying ads - which would be nice, except it also thinks the ad is showing and mutes the stream for eternity).


If it weren't for my addiction to occasionally harassing pixels, going IPv6-only in my network would be no problem. NAT64 and DNS64 work fine and are pretty easy to set up (assuming there is an existing dual-stack setup).

Dear game developers: you need to act now and start supporting IPv6. Forums are already filling with complaints from people who can't play multiplayer games because they're behind CGNAT, and this will only get worse. This applies both to support on gaming consoles (no IPv6 support in the Switch? Nintendo, are you for real?) and to game service hosting.