Why do browsers not use SRV records?

Solution 1

Why do browsers not use SRV records?

Because SRV records did not exist when HTTP was conceived, and because HTTP was not assumed to be a service.

SRV records have been around for years...

Hahaha. Do you remember the time when HTTP started? When the first browsers were written? THAT was a long time ago.

SRV records first appeared in RFC 2782 (February 2000). HTTP goes back to RFC 1945 (May 1996) for 1.0. Guess which was first.

Solution 2

SRV records offer three things:

  1. Multiple hostnames - can be done without
  2. Alternate ports - bad idea - see below
  3. A fix for the CNAME at zone apex problem

Re: alternate ports - SRV records could be used as a way of running web servers on alternate ports without having to advertise that fact in the URL. This is a bad thing. Corporate firewall policies very commonly prohibit access to "unusual" ports, and encouraging the idea of using alternate ports would be poor for site accessibility.

The only tangible benefit I see is for #3 - it would allow example.com to get redirected to webhost.example.net without requiring a CNAME (which isn't permitted in a zone apex) or an A record (which is bad for zone maintenance).
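The multiple-hostnames point (#1) is what SRV's priority and weight fields encode explicitly. As a minimal sketch of how a client would pick a target, here is the RFC 2782 selection algorithm: try the lowest priority group first, then make a weighted-random choice within that group. (The function name and the tuple layout are illustrative, not from any standard library.)

```python
import random

def select_srv_target(records):
    """Pick one (host, port) target from SRV records per the RFC 2782
    client algorithm: lowest priority value wins; within that priority
    group, choose at random in proportion to each record's weight."""
    if not records:
        return None
    # Each record is a (priority, weight, port, host) tuple.
    lowest = min(priority for priority, _, _, _ in records)
    group = [r for r in records if r[0] == lowest]
    total = sum(weight for _, weight, _, _ in group)
    if total == 0:
        # All weights are zero: every target is equally acceptable.
        _, _, port, host = random.choice(group)
        return host, port
    pick = random.randint(0, total)
    running = 0
    for _, weight, port, host in group:
        running += weight
        if running >= pick:
            return host, port
```

A fallback setup then needs no BGP tricks: give the backup a higher priority number, and clients only select it when every lower-priority target has failed.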

Author: fadedbee
Updated on September 18, 2022

Comments

  • fadedbee
    fadedbee almost 2 years

    It seems like a minimal amount of work, and it would make the server-side implementation of reliable websites much simpler. Also SRV records have been around for years...

    Is there something I'm missing here?

    Edit: @DJ Pon3 - what I'm talking about is:

    1. one site served from two datacentres without needing BGP, but still working if either datacentre goes offline. (Can also be achieved by short DNS TTLs.)

    2. multiple HTTPS servers on different ports on one IP address.

    • Rob Moir
      Rob Moir over 12 years
      I'm not clear what problem, precisely, you think this would solve. It's been perfectly possible to create reliable web services without SRV records so far.
    • joeqwerty
      joeqwerty over 12 years
      I think (and maybe only because I'm a simpleton) that it would solve the issue of running a web site on an alternate port without the user needing to know what port the site is running on and having to type the port number in the URL.
    • JdeBP
      JdeBP over 12 years
    • Alnitak
      Alnitak over 12 years
    • Alnitak
      Alnitak over 12 years
      @chrisdew why have you asked the exact same question on both sites?
    • fadedbee
      fadedbee about 12 years
      @Alnitak - apologies, I didn't know which site was appropriate.
  • fadedbee
    fadedbee over 12 years
    HTTP/1.1 was RFC 2616, so it also missed SRV. Is an HTTP/1.2 with SRV support what we need?
  • TomTom
    TomTom over 12 years
    No, because guess what - it is not needed ;)
  • JdeBP
    JdeBP over 12 years
    -1 for the rather silly argument that relative ages constrain interoperability. The world really does have the capability of making two separate inventions work together once they exist, and has done just that many times throughout history. It has even done it twice over for SRV resource records and HTTP.
  • Alnitak
    Alnitak over 12 years
    @JdeBP you well know that if it were that easy it would have been done by now. The problem is transitioning sites to an HTTP+SRV mechanism without providing an inferior experience to the countless millions of users that would be stuck on old browsers.
  • JdeBP
    JdeBP over 12 years
    You either don't or cannot read, Alnitak. In what you are replying to I explicitly pointed out that it has been done, twice over, long since.
  • JdeBP
    JdeBP over 12 years
    -1 for missing the whole point, despite many people making it over the years as they ask for this, and the questioner even alluding to it: namely, explicit load balancing and fallback information for clients.
  • Alnitak
    Alnitak over 12 years
    @JdeBP IMNSHO load balancing and fallback data does not belong in the DNS - that's well into the realms of "Stupid DNS Tricks (TM)". They both belong in the IP routing layer - that's the only place you can provide seamless failover between services.
  • Alnitak
    Alnitak over 12 years
    You mean a couple of times someone wrote an internet draft? That's hardly the same - it's trivial to write one, and then the real world hits you and you find that actually there are shedloads of edge cases and other issues that mean it won't work in practice, and eventually the draft expires and is mostly forgotten. Hell, I've had that happen to quite a few of mine already.
  • FlashFan
    FlashFan over 9 years
    Actually, alternate ports are a good idea, because protocols should not be bound to ports. Imagine a world where the post office always had to be on the second floor of the building - wouldn't that be pointless? That's what we have address books (DNS) for! What's really a bad idea is defining outgoing firewall rules based on a port. It's pointless because attackers can always use the non-blocked ports. Additionally, imagine a world where going to floor 2 was forbidden in every building, just because it could be a post office. Funny, isn't it? ;)
  • Alnitak
    Alnitak over 9 years
    @FlashFan unfortunately, corporates persist in wanting to block internet egress by assuming that all web sites are on port 80 or 443.
  • FlashFan
    FlashFan over 9 years
    Yes, I know. That's why enabling SRV records would be good: it would force the corporates to stop doing those pointless, bad practices. No matter how many outgoing ports you block, as long as there is one port open you can do everything you want, because you can do everything through any port. The fact that you cannot even know whether what goes through the TLS connection on port 443 really is HTTP only underlines this.
  • Cosmic Ossifrage
    Cosmic Ossifrage over 9 years
    @FlashFan blocking egress on non-standard ports is often thought good security practice; you can somewhat constrain access to many non-standard services, particularly if a machine is compromised. It is also often a requirement for compliance (such as PCI-DSS regulations). While malware may itself communicate with C&C on standard web ports, many enterprises take this a step further: internal machines cannot communicate with anything outbound, even ports 80/443, leaving instead such tasks up to a proxy, which may be responsible for external DNS resolution also, and aiding compromise detection.
  • stolsvik
    stolsvik about 9 years
    Woosh.. Did you hear that?!
  • tmsh
    tmsh over 3 years
    Old answer but... large changes like this have been achieved - examples include adding SSL support (HTTPS), then adding SNI to SSL. It's not about edge cases. I'd be impressed if you could name one "edge case" that breaks the idea of SRV with HTTP. The really important point this answer misses is that, yes, we could change it, but changing thousands of different client and server implementations to support it has a cost. For this to succeed, there needs to be real value in implementing it. Unlike implementing SSL, there's no big cost benefit, so nobody bothers.
  • Admin
    Admin about 2 years
    @FlashFan in case you didn't discover this until now: it is certainly not possible to use any protocol on any port - a lot of routers, switches, backbones and the like support protocol filtering - you won't be able to establish non-HTTP/HTTPS connections via ports 80 and 443 if such a filter is enabled, for instance. In fact, it's rather trivial to enable most of them - and it has been done all over the world in all sorts of networks, especially corporate and government ones. This can even be done at layer 2 (but it is usually done at layer 6 or 7).
  • Admin
    Admin about 2 years
    @specializt No, you can't filter out non-HTTP protocols on a TLS connection. That would require you to be able to decrypt the connection, and that is only possible if you force all network users to trust your MITM certificate.