Split horizon DNS master/slave with Bind

Split horizon is the ability of a DNS server to give a different answer to a query based on the source of that query. A common use case is serving internal and external clients from the same DNS server. When your DNS server is publicly available, you really don’t want to offer recursion to the outside world, but internally it can be handy. Besides security, there are also cases where resolving a certain name needs to return an internal IP address while that address is useless externally, so it’s better to return something else.

Why split horizon?

One way to accomplish the above scenario would be to set up two DNS servers: one for internal use, another public-facing. This works fine but creates a lot of administrative overhead, and having slave servers would require two extra machines. Split horizon allows you to run a single DNS server, with or without a slave, that replies differently based on some condition (usually the source of the request).

Set up split horizon

To set up split horizon with BIND, we will use ACLs and views. In this example, I’m assuming basic knowledge of BIND, and I will build on the example that was set up in a previous post about master/slave DNS.

What we would like is two different answers for some zones, based on the source IP of a request. If a host with an IP in the internal subnet (let’s call that internal) queries our DNS server, it should receive an internal IP address in the answer. When the same query comes from a machine outside that subnet (let’s call that external), the DNS server should return another IP address. Some zones should return the same information to internal and external clients.

Taking the setup from the previous post, we will use zone blaat.test, which will differ between internal and external, and zone miauw.test, which will be common to both.

As a first step, we will create the split horizon master DNS. For now we will ignore the slave and correct the configuration of the slave later to avoid too much complexity.

Bind configuration of the master

We’ll start by changing our /etc/named.conf drastically. To avoid maintaining duplicate zone information for zones that are identical regardless of where the request came from, we will move the zone configuration to separate files and include them.

/etc/named.conf on the master
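The original listing isn’t reproduced here, so below is a minimal sketch of what the relevant part of /etc/named.conf could look like. The subnet 192.168.0.0/24 and the included file names are assumptions; adjust them to your environment. Note that 127.0.0.1 is deliberately left out of the ACL, so queries to localhost fall through to the external view, which makes testing from the server itself easy.

```
// /etc/named.conf on the master (sketch; subnet and paths are assumptions)
acl internal-acl {
        192.168.0.0/24;         // internal subnet; 127.0.0.1 intentionally omitted
};

view "internal-view" {
        match-clients { internal-acl; };
        recursion yes;          // recursion only for internal clients
        include "/etc/named.internal.zones";
        include "/etc/named.common.zones";
};

view "external-view" {
        match-clients { any; }; // everybody that didn't match internal-view
        recursion no;
        include "/etc/named.external.zones";
        include "/etc/named.common.zones";
};
```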

The basic options and logging remain as they were. The rest of the configuration is changed.

The acl statement defines an ACL named internal-acl. Here you list the hosts or subnets that should match that ACL.
The view called internal-view matches the ACL internal-acl, so hosts in the internal subnet end up in this view.
The view called external-view matches all hosts that weren’t matched before, so hosts that don’t match internal-acl end up here.

Zone configuration

As you can see, the zone configuration is kept out of named.conf so we can reuse the common zone definitions (in /etc/named.common.zones) for both views. A restriction of using views is that, once views are used, every zone must be part of one or more views.

As mentioned earlier, we want the zone blaat.test to be different for the external and internal view so we need to define this zone twice.

The internal zone definitions are made in /etc/named.internal.zones:
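A sketch of what /etc/named.internal.zones could contain (the zone data file name is an assumption):

```
// /etc/named.internal.zones (sketch)
zone "blaat.test" IN {
        type master;
        file "blaat.test.internal";     // zone data with the internal addresses
        allow-update { none; };
};
```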

The external zone definitions are made in /etc/named.external.zones:
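A matching sketch for /etc/named.external.zones, identical except for the file statement:

```
// /etc/named.external.zones (sketch)
zone "blaat.test" IN {
        type master;
        file "blaat.test.external";     // zone data with the external addresses
        allow-update { none; };
};
```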

Finally, the common zone definitions are made in /etc/named.common.zones:
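A sketch for /etc/named.common.zones:

```
// /etc/named.common.zones (sketch)
zone "miauw.test" IN {
        type master;
        file "miauw.test";
        allow-update { none; };
};
```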

When looking at the zones defined in named.internal.zones and named.external.zones, you can see that both files contain the same zone configuration except for the file that contains the zone data:
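Under the file names assumed in the sketches above, the only differing line between the two zone statements would be:

```
file "blaat.test.internal";     // in /etc/named.internal.zones
file "blaat.test.external";     // in /etc/named.external.zones
```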



The common zone miauw.test remains unchanged.

After changing all of the above files, reload the changes:
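Assuming named runs as a system service, reloading could look like this:

```
rndc reload
# or, depending on your distribution:
service named reload
```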

Test whether the server replies differently when the request originates from a source inside the specified subnet or from outside it:
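A possible test with dig, assuming the server’s own address is 192.168.0.10 (an assumption) and a record such as www.blaat.test exists in both zone data files:

```
# source address will be 127.0.0.1 -> does not match internal-acl -> external-view
dig @127.0.0.1 www.blaat.test +short

# source address will be 192.168.0.10 -> matches internal-acl -> internal-view
dig @192.168.0.10 www.blaat.test +short
```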

As you can see in the example, the server gives a different answer for a query sent to localhost (so the source is the loopback address, which does not match internal-acl) than for a query sent to the server’s real IP address (so the source is that address, which does match internal-acl).

As a last test, we can check if the common zone is known from within both views and that the answer is equal:
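With the same assumed addresses, both of these queries should return an identical answer:

```
dig @127.0.0.1 www.miauw.test +short
dig @192.168.0.10 www.miauw.test +short
```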

The next step: add a slave for the split horizon master

Until now, the changes compared with a regular setup are not very complicated. It’s only when a slave comes into the picture that it gets (a little) more complicated.

We need to make sure that the same zone gets transferred to the slave twice, once for each view. Since the zone’s name is the same in both views, this can easily go wrong on the slave because zone transfers are not aware of views. If we didn’t take any measures, the last updated zone, regardless of which view it was updated for, would overwrite the zone data of both views on the slave.

To solve this problem, we will create the same views on the slave and restrict the zone transfer for each of those views. There are multiple ways to do this, but for this example I will use TSIG (Transaction SIGnatures). The key used for the zone transfer will be different for each view, ensuring that each zone+view combination gets transferred to the corresponding view on the slave.

The first step is to generate two keys for TSIG, one for the internal-view transfers and the other for the external-view transfers. For that, you can use dnssec-keygen:
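A possible invocation (the key names are assumptions; HMAC-MD5 was the usual TSIG algorithm at the time, newer BIND versions also accept hmac-sha256):

```
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST internal-key
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST external-key
```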

dnssec-keygen generates two files, a .key file and a .private file. We only need the key material, which is in the .key file:
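With the assumed key names, the generated .key file can be inspected like this (the key material shown is a placeholder):

```
cat Kinternal-key.+157+*.key
# internal-key. IN KEY 512 3 157 <base64-encoded key material>
```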

Now that we have our keys, we can start adjusting our /etc/named.conf on the master to restrict zone-transfers to the slave, depending on the key.
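A sketch of how the relevant part of /etc/named.conf on the master could look with the keys added (secrets, IP addresses and paths are assumptions; the slave is assumed to be 192.168.0.11):

```
// /etc/named.conf on the master (sketch)
key "internal-key" {
        algorithm hmac-md5;
        secret "<key material from the Kinternal-key .key file>";
};

key "external-key" {
        algorithm hmac-md5;
        secret "<key material from the Kexternal-key .key file>";
};

acl internal-acl {
        192.168.0.0/24;
};

view "internal-view" {
        // reject the external key, accept internal clients and the internal key
        match-clients { !key external-key; internal-acl; key internal-key; };
        // sign NOTIFY messages to the slave with the internal key
        server 192.168.0.11 { keys internal-key; };
        allow-transfer { key internal-key; };
        include "/etc/named.internal.zones";
        include "/etc/named.common.zones";
};

view "external-view" {
        match-clients { !key internal-key; any; key external-key; };
        server 192.168.0.11 { keys external-key; };
        allow-transfer { key external-key; };
        include "/etc/named.external.zones";
        include "/etc/named.common.zones";
};
```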

Explanation of the changes:

The key statement named external-key defines the key for the external-view transfers.
The key statement named internal-key defines the key for the internal-view transfers.
In internal-view, match-clients accepts the internal-key (and rejects the external-key), and a server statement ties the slave server to the internal-key.
In external-view, match-clients accepts the external-key (and rejects the internal-key), and a server statement ties the slave server to the external-key.

On the slave, we need to make similar changes as on the master to make it view-aware, plus the changes needed for the correct zone transfers. The changes on the slave are made on top of the configuration that was explained in a previous post about master/slave DNS.

First, we’ll change the /etc/named.conf of the slave:
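A sketch of the slave’s /etc/named.conf (the master is assumed to be 192.168.0.10; the keys must be identical to those on the master):

```
// /etc/named.conf on the slave (sketch)
key "internal-key" {
        algorithm hmac-md5;
        secret "<same secret as on the master>";
};

key "external-key" {
        algorithm hmac-md5;
        secret "<same secret as on the master>";
};

acl internal-acl {
        192.168.0.0/24;
};

view "internal-view" {
        match-clients { !key external-key; internal-acl; key internal-key; };
        // sign transfer requests to the master with the internal key
        server 192.168.0.10 { keys internal-key; };
        include "/etc/named.internal.zones";
        include "/etc/named.common.zones";
};

view "external-view" {
        match-clients { !key internal-key; any; key external-key; };
        server 192.168.0.10 { keys external-key; };
        include "/etc/named.external.zones";
        include "/etc/named.common.zones";
};
```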

The only difference between the slave’s and master’s configuration, besides the standard options, is the IP address in the server statement in both views. The real zone definitions are made in the included files (named.external.zones, named.internal.zones & named.common.zones). Those files need to be created on the slave:
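Sketches of the three zone files on the slave (file names are assumptions; note that each view writes the transferred zone to its own file):

```
// /etc/named.internal.zones on the slave (sketch)
zone "blaat.test" IN {
        type slave;
        masters { 192.168.0.10; };
        file "data/blaat.test.internal";
};

// /etc/named.external.zones on the slave (sketch)
zone "blaat.test" IN {
        type slave;
        masters { 192.168.0.10; };
        file "data/blaat.test.external";
};

// /etc/named.common.zones on the slave (sketch)
// Note: recent BIND versions refuse to share one writable file between two
// views ("already in use" error); in that case use a separate file per view.
zone "miauw.test" IN {
        type slave;
        masters { 192.168.0.10; };
        file "data/miauw.test";
};
```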




After changing the configuration on both the slave and master, we can reload the configuration to make the changes active. To prevent incorrect zone transfers, it’s better to first stop the slave.
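One possible order of operations, assuming named runs as a service on both machines:

```
# on the slave: stop named first to avoid incorrect zone transfers
service named stop

# on the master: activate the new configuration
rndc reload

# on the slave: start named again; it will transfer the zones per view
service named start
```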

After reloading the configuration of the master and restarting the slave, the /var/named/data/ directory, where we chose to store our zone data on the slave, should contain some data transferred from the master:
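For example (the file names depend on what was configured in the slave’s zone statements; the names below are the assumed ones):

```
ls -l /var/named/data/
# should list the transferred zone files, e.g.:
# blaat.test.internal  blaat.test.external  miauw.test
```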

Since we have data here, the transfer between master and slave is working fine. This should also be visible in /var/named/data/named.run. To test whether the split horizon configuration works on the slave too:
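Assuming the slave’s own internal address is 192.168.0.11 (an assumption), the same dig test as on the master applies:

```
# on the slave: source 127.0.0.1 -> does not match internal-acl -> external-view
dig @127.0.0.1 www.blaat.test +short

# on the slave: source 192.168.0.11 -> matches internal-acl -> internal-view
dig @192.168.0.11 www.blaat.test +short
```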

In case the transfer doesn’t start correctly, or the data isn’t correct while you are sure your configuration is, you can force a retransfer of the zones with the following commands:
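rndc retransfer accepts the zone name, class and view, so each zone+view combination can be retransferred individually:

```
rndc retransfer blaat.test IN internal-view
rndc retransfer blaat.test IN external-view
rndc retransfer miauw.test IN internal-view
rndc retransfer miauw.test IN external-view
```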

If the zone transfer doesn’t work as expected, be sure to check the IP addresses of the master and slave in /etc/named.conf on both machines (they differ between the two).

After following this (rather long) example, creating a split horizon DNS with master and slave should be a piece of cake :)

7 thoughts on “Split horizon DNS master/slave with Bind”

  1. Hi !

    While googling for a solution to a split DNS problem in another scenario, I found your post. Very helpful, I used your examples to set up my internal/external DNS…

    Thanks for this great post !

  2. Hi, this is a very well explained split horizon DNS setup and we implement a very similar solution. However, when testing the most recent ISC BIND release 9.10.2-P3 we seem to have hit an issue with zones referenced by multiple views (“common zones”). I’m not entirely happy with the new in-view implementation that ISC suggests, as that leads to many complications for systems that provision zone files to our servers. Just curious to know how you might work around this, as named.common.zones cannot be referenced as above with the latest BIND release. Named will fail to start and will report errors like “writeable file ‘/zonesfilename.txt/’: already in use”, i.e. the same filename cannot be referenced via multiple views. I’d be very interested in your thoughts on this. Thanks.

  3. I think your post is great, however for me having common zones in both internal and external view causes an error that the file already exists.

  4. Same for me, in the slave having the same included file for common zones throws this error: writeable file ‘var/named/data/db.domain’: already in use: /etc/named.common.zones:21

  5. Does anybody know of a free DNS server software with health checks? I.e. one that can dynamically modify zones based on some reachability information.
