How to support both IPv4 and IPv6 connections


Solution 1

The best approach is to create an IPv6 server socket that can also accept IPv4 connections. To do so, create a regular IPv6 socket, turn off the socket option IPV6_V6ONLY, bind it to the "any" address, and start receiving. IPv4 addresses will be presented as IPv6 addresses, in the IPv4-mapped format.

The major difference across systems is whether IPV6_V6ONLY is (a) available, and (b) on or off by default. It is typically off by default on Linux (i.e. allowing dual-stack sockets without any setsockopt call, although distributions can change the default via /proc/sys/net/ipv6/bindv6only), and on by default on most other systems.

In addition, the IPv6 stack on Windows XP doesn't support this option at all. In such cases, you will need to create two separate server sockets and either multiplex them with select or service them from multiple threads.

Solution 2

The socket API is governed by IETF RFCs and should be the same on all platforms, including Windows, with respect to IPv6.

For IPv4/IPv6 applications it's ALL about getaddrinfo() and getnameinfo(). getaddrinfo() is a genius: it looks at DNS, service/port names, and the capabilities of the client to resolve the eternal question of “can I use IPv4, IPv6, or both to reach a particular destination?” And if you're going the dual-stack route and want it to return IPv4-mapped IPv6 addresses, it will do that too.

It hands you ready-made sockaddr * structures that can be plugged straight into bind(), recvfrom(), and sendto(), along with the address family for socket(). In many cases this means no messy sockaddr_in(6) structures to fill out and deal with by hand.
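A short sketch of that pattern (the helper name `bind_any` is my own; with AF_UNSPEC plus AI_PASSIVE, getaddrinfo() returns sockaddrs you pass straight through without filling any structure yourself):

```c
#include <assert.h>
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a UDP server socket, letting getaddrinfo() choose family and address.
 * Returns the bound fd, or -1 on error. */
int bind_any(const char *service)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6, whatever works */
    hints.ai_socktype = SOCK_DGRAM;
    hints.ai_flags    = AI_PASSIVE;   /* wildcard address for a server */

    if (getaddrinfo(NULL, service, &hints, &res) != 0)
        return -1;

    int fd = -1;
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (bind(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                    /* no sockaddr_in(6) filled out by hand */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    int fd = bind_any("0");           /* "0" = any free port */
    assert(fd >= 0);
    close(fd);
    return 0;
}
```

The same loop works for a client: pass the hostname instead of NULL, drop AI_PASSIVE, and call connect() instead of bind().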

For UDP implementations I would be careful about dual-stack sockets or, more generally, binding to all interfaces (INADDR_ANY). The classic issue: when a socket is not locked down (via bind()) to a specific interface address and the machine has multiple addresses, responses may leave from a different source address than the one the request arrived on, based on the whims of the OS routing table. That confuses application protocols, especially any with authentication requirements.

For UDP implementations where this is not a problem, or for TCP, dual-stack sockets can save a lot of time when IPv*-enabling your system. Still, be careful not to rely entirely on dual-stack where it's not absolutely necessary: there is no shortage of reasonable platforms (old Linux, BSD, Windows 2003) deployed with IPv6 stacks that are not capable of dual-stack sockets.

Solution 3

I've been playing with this under Windows, and it actually does appear to be a security issue there: if you bind to the loopback address, the IPv6 socket is correctly bound to [::1], but the mapped IPv4 socket is bound to INADDR_ANY, so your (supposedly) safely local-only app is actually exposed to the world.

Solution 4

The RFCs don't really require the IPV6_V6ONLY socket option to exist, but, where it is absent, they are pretty clear that the implementation should behave as though the option is FALSE.

Where the option is present, I would argue that it should default to FALSE, but, for reasons passing understanding, BSD and Windows implementations default to TRUE. There is a bizarre claim that this is a security concern: an unknowing IPv6 programmer could bind to IN6ADDR_ANY thinking they were binding for IPv6 only, and accidentally accept an IPv4 connection, causing a security problem. I think this is both far-fetched and absurd, in addition to being a surprise to anyone expecting an RFC-compliant implementation.

In the case of Windows, non-compliance won't usually come as a surprise. In the case of BSD, this is unfortunate at best.

Author: Charles

Mobile technology entrepreneur - http://www.demeulenaer.com.

Updated on October 22, 2020

Comments

  • Charles
    Charles over 3 years

    I'm currently working on a UDP socket application and I need to build in support so that IPV4 and IPV6 connections can send packets to a server.

    I was hoping that someone could help me out and point me in the right direction; the majority of the documentation that I found was not complete. It'd also be helpful if you could point out any differences between Winsock and BSD sockets.

    Thanks in advance!

  • bortzmeyer
    bortzmeyer over 13 years
    Saying that IPV6_V6ONLY is off by default on Linux is wrong: it depends on the operating system, not just on the kernel. For instance, on Debian GNU/Linux, it recently switched to on by default.
  • bortzmeyer
    bortzmeyer over 13 years
    The standard on IPv6 API, RFC 3493, describes IPV6_V6ONLY in its section 5.3 if you want to read all the details.
  • Per Johansson
    Per Johansson over 12 years
    OS X also has it off by default, but the best thing is to always set it explicitly. The local sysadmin might've changed it after all.
  • tjd
    tjd over 11 years
    The default on Windows is enabled (just implemented this on Win7).
  • Andrius Bentkus
    Andrius Bentkus over 11 years
    if IPV6_V6ONLY is not available, does it imply that the OS doesn't support dual stacking?
  • Martin v. Löwis
    Martin v. Löwis over 11 years
    @Andrius Bentkus: as Windows XP demonstrates, it is well possible to have a system where you can simultaneously use IPv4 and IPv6, yet IPV6_V6ONLY is not available. Whether or not this is "dual stacking" depends on your definition of that term.
  • patryk.beza
    patryk.beza almost 8 years
    @bortzmeyer I have Debian Stretch and IPV6_V6ONLY is off by default (I verified that). Even man ipv6 specifies that: The default value for this flag is defined by the contents of the file /proc/sys/net/ipv6/bindv6only. The default value for that file is 0 (false). I think your post probably referred to one of the old Debian releases.
  • Brijesh Valera
    Brijesh Valera over 6 years
    Can you point me to the example source code which supports both IPv4 and IPv6 connections (which parses IPv4-mapped format too)?
  • plugwash
    plugwash about 6 years
    @patryk.beza Debian testing/unstable briefly set the default to 1 in version 4.40 of the netbase package; the change was reverted a few months later in version 4.42 of netbase. The change did not make it into any stable release. I guess bortzmeyer saw the news about the original change but missed the news of the revert.
  • HRH Sven Olaf of CyberBunker
    HRH Sven Olaf of CyberBunker over 2 years
    problem is... the documentation in the manpages about said functions is not clear at all about it 'just' being for IPv4 and IPv6, and it reads like it will give a struct sockaddr for ANY protocol family that has a hostname resolving to it. also that whole 'hints' thing... exactly how much of a 'hint' is it? it wasn't a 'hint', it was a 'command' as far as i'm concerned. :P also there is quite a bit of operational risk in it returning a whole lot of IPv6 sockaddrs, when IPv6 is enabled on a box but doesn't actually route anywhere, that one will have to cycle through with connect() timeouts.